Azure Reference Architecture for External and Intranet-Based Applications.
In the realm of cloud network design, a highly secure and widely adopted architecture for internet and intranet-based applications has become a cornerstone, especially in the public sector. This meticulously crafted design, encompassing five subnets for both internet and intranet Virtual Networks (VNETs), along with a shared Management Zone VNET, provides a fortified framework for safeguarding sensitive data and processes.
The architecture employs a thoughtful approach to network segmentation, differentiating between human and machine traffic, and segregating the Network Subsystems (NS) and Execution Subsystems (ES). This division ensures a clear and robust control mechanism, allowing each subnet to perform essential functions such as authentication, authorization, and payload inspection independently.
A pivotal feature of this design is the strategic routing of traffic. All inbound and outbound traffic, whether generated by humans or by machines, passes through the Inter-Zone VNET. This routing structure acts as a gateway, ensuring that all data exchanges between the internet and intranet undergo stringent controls and inspections.
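The forced routing described above can be sketched as a longest-prefix-match route lookup in which the default route points at the Inter-Zone VNET. The address ranges and next-hop names below are illustrative assumptions, not values from the reference design:

```python
import ipaddress

# Hypothetical user-defined routes for one application VNET: traffic inside
# the VNET stays local; everything else is forced through the Inter-Zone VNET.
ROUTES = [
    ("10.1.0.0/16", "local"),          # this VNET's own address space (assumed)
    ("0.0.0.0/0", "inter-zone-vnet"),  # default route to the inspection zone
]

def next_hop(dst_ip: str) -> str:
    """Return the next hop for dst_ip using longest-prefix match."""
    best_net, best_hop = None, None
    for prefix, hop in ROUTES:
        net = ipaddress.ip_network(prefix)
        if ipaddress.ip_address(dst_ip) in net:
            if best_net is None or net.prefixlen > best_net.prefixlen:
                best_net, best_hop = net, hop
    return best_hop
```

With these routes, a destination inside the VNET resolves to `local`, while any cross-zone destination resolves to the Inter-Zone VNET, mirroring the gateway behaviour described above.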
To fortify the architecture, a multi-layered security strategy is in place. Each subnet is equipped to handle authentication and authorization processes, while payload inspection adds an additional layer of scrutiny. This multi-tiered approach ensures that traffic from untrusted networks to trusted network segments is subject to rigorous controls, bolstering the overall security posture.
Two critical integration tiers, the Gateway Utility Tier for outbound traffic and the Integration Tier for network communication from the internet to the intranet, further enhance the security landscape. Outbound traffic is systematically channeled through the Gateway Utility Tier, providing a centralized point for monitoring and control. Simultaneously, the Integration Tier acts as the gateway for any network communication between the internet and the intranet, consolidating the points of entry and fortifying the perimeter defenses.
The primary objective of this meticulously secured architectural design is to establish proper controls at every layer. Each subnet becomes a fortress of authentication and authorization, and traffic undergoes thorough scrutiny to prevent any compromise of sensitive data. The architecture not only aligns with industry best practices but also addresses the specific needs of the public sector, where data integrity and confidentiality are paramount.
In conclusion, this internet and intranet architecture exemplifies a robust and secure framework, showcasing the fusion of technological sophistication and strategic design. Its application in the public sector underscores its reliability and adaptability, providing a blueprint for organizations aiming to fortify their cloud infrastructure against evolving cyber threats.
OCI Reference Architecture for External and Intranet-Based Applications.
In the ever-evolving landscape of cloud computing, Oracle Cloud Infrastructure (OCI) stands out as a robust platform for hosting external and intranet-based applications. This OCI Reference Architecture has been meticulously crafted to ensure optimal performance, security, and availability.
The architecture is structured around multiple Virtual Cloud Networks (VCNs), strategically designed to cater to both internet and intranet traffic. Each VCN houses dedicated subnets, fortified by security lists and route tables that meticulously orchestrate inbound and outbound traffic. The use of Network Address Translation (NAT) gateways facilitates secure outbound internet traffic from resources within the private network.
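A security list behaves as a default-deny rule set evaluated per packet. The sketch below assumes two illustrative ingress rules for a private database subnet (HTTPS from a web subnet, SQL*Net from an app subnet); the CIDRs and ports are made up for the example:

```python
import ipaddress

# Illustrative ingress rules for a private database subnet (assumed values).
INGRESS_RULES = [
    {"source": "10.0.1.0/24", "protocol": "tcp", "port": 443},   # web tier
    {"source": "10.0.2.0/24", "protocol": "tcp", "port": 1521},  # app tier -> DB
]

def ingress_allowed(src_ip: str, protocol: str, port: int) -> bool:
    """Allow a packet only if some rule matches; deny by default."""
    for rule in INGRESS_RULES:
        if (ipaddress.ip_address(src_ip) in ipaddress.ip_network(rule["source"])
                and protocol == rule["protocol"]
                and port == rule["port"]):
            return True
    return False
```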
A key feature of this reference architecture is the deployment of OCI cloud resources across different fault domains. This deliberate distribution enhances availability and redundancy, minimizing the impact of potential failures and ensuring continuous service delivery.
The separation of LB (Load Balancer) and Web components into distinct VCNs and subnets, apart from the App and DB components, is a strategic security measure. Placing LB and Web components in public subnets ensures controlled exposure to the internet, while the App and DB components remain tucked away in private subnets, adding an extra layer of defense against unauthorized access.
To enable seamless connectivity between OCI and on-premises data centers, a site-to-site VPN has been established. This ensures secure access and data exchange, creating a cohesive environment that bridges the gap between on-premises infrastructure and the cloud.
While the application and database components serve both internet and intranet traffic, the LB and Web components operate in separate VCNs and subnets. This distinct deployment contributes to a robust security posture, segregating the public-facing elements from the core application and database layers.
In adherence to security best practices, only the firewall and Load Balancer components are strategically placed in public subnets. This deliberate positioning allows for controlled and monitored exposure to the external environment, ensuring that critical elements remain shielded within private subnets.
Hub and Spoke Networking Model.

The Hub and Spoke networking model is a centralized architecture that simplifies network management and enhances security by routing traffic through a core hub before reaching its destination. This design is widely utilized in various cloud environments and is especially beneficial for connecting multiple Cloud Service Providers (CSPs), on-premises data centers, and remote sites.
In this model, the hub acts as the principal point of connectivity and control, linking to various spokes. These spokes represent the network endpoints, such as branch offices, data centers, or separate cloud environments. Traffic between these endpoints is mediated by the hub, which can enforce security policies, perform routing decisions, and provide services like network address translation (NAT) and firewalls.
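The hub's mediation role can be sketched as a path function: every spoke-to-spoke flow transits the hub, which may drop it by policy. The endpoint names and the blocked pair below are hypothetical examples, not part of any real topology:

```python
SPOKES = {"branch-a", "branch-b", "on-prem-dc"}
BLOCKED = {("branch-a", "on-prem-dc")}  # example policy enforced at the hub

def path(src: str, dst: str):
    """Return the hop sequence for a flow, or None if the hub drops it."""
    if src not in SPOKES or dst not in SPOKES:
        raise ValueError("unknown endpoint")
    if (src, dst) in BLOCKED:
        return None  # dropped by the hub firewall
    return [src, "hub", dst]  # spokes never communicate directly
```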
The model is also designed to handle both East-West and North-South traffic flows efficiently:
East-West traffic refers to communications that occur within the data center, between different servers or applications. In a cloud environment, this might mean traffic moving between virtual machines or containers.
North-South traffic is external traffic that moves in and out of the data center, typically between the data center and the wider internet or other remote locations.
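The two flow types can be distinguished simply by whether both endpoints sit inside the data center's address space; the internal range used below is an assumption for the example:

```python
import ipaddress

INTERNAL = ipaddress.ip_network("10.0.0.0/8")  # assumed data-center range

def direction(src_ip: str, dst_ip: str) -> str:
    """Classify a flow as east-west (internal) or north-south (crossing the edge)."""
    inside = [ipaddress.ip_address(ip) in INTERNAL for ip in (src_ip, dst_ip)]
    return "east-west" if all(inside) else "north-south"
```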
The model offers several advantages:

Centralized Security: By channeling all traffic through the hub, security measures such as firewalls, intrusion detection/prevention systems, and data loss prevention can be centralized, making it easier to manage and monitor security across the network.
Simplified Management: The centralized nature of the hub and spoke model simplifies network configuration and management, as changes can be made centrally and propagated to all spokes.
Cost-Effectiveness: This model can reduce the cost of connectivity, as spokes can share the hub's resources and internet connectivity, rather than requiring individual connections.
Scalability: It is easy to add new spokes to the network without significantly reconfiguring the existing architecture.
Traffic Control: The hub can effectively manage and prioritize traffic, ensuring critical data flows smoothly and reducing potential bottlenecks.
The model also carries trade-offs:

Single Point of Failure: If the hub goes down, all connectivity can be lost. This risk can be mitigated through redundant hub configurations and backup systems.
Potential Bottlenecks: As all traffic must pass through the hub, there is a chance of creating a bottleneck if the hub's capacity is exceeded. Proper sizing and scaling strategies are essential.
Latency: For spoke-to-spoke communications, routing through the hub can introduce additional latency, as data must travel to the hub and then out to the destination spoke.
Complexity in Large Networks: While the model simplifies management, it can become complex when dealing with a large number of spokes or when integrating with multiple CSPs, requiring advanced routing and policy management.
Costs at Scale: While initially cost-effective, as the network scales and the amount of traffic increases, the costs associated with maintaining a high-capacity hub may escalate.
In conclusion, the Hub and Spoke networking model is a robust framework for organizing a network's traffic flows, offering a balance between centralized control and management efficiency. However, it is crucial to consider both the advantages and the potential challenges when designing and implementing this architecture, especially in complex or highly distributed environments.
Multi-Cloud Microservices Architecture on AWS and Azure.

In today's dynamic digital landscape, organizations often leverage multiple cloud providers to harness the benefits of diverse services and resources. This article explores a robust and secure network design for microservices deployment in a multi-cloud environment, specifically focusing on AWS and Azure.
To ensure stringent security controls, the architecture employs three Virtual Private Clouds (VPCs) each for internet and intranet applications. This isolation is achieved through distinct subnets, Network Access Control Lists (NACLs), and Network Security Groups (NSGs). This segmentation minimizes the attack surface and enhances control over traffic flow.
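One practical difference worth noting between the two controls: NACLs are stateless (return traffic needs its own rule), while NSGs are stateful (return traffic rides an existing connection). Below is a minimal sketch of the stateful side, with made-up rule values:

```python
class StatefulNSG:
    """Toy stateful security group: tracks outbound flows it has allowed."""

    def __init__(self, outbound_rules):
        self.outbound_rules = outbound_rules  # set of (destination, port)
        self.connections = set()

    def outbound(self, dst: str, port: int) -> bool:
        if (dst, port) in self.outbound_rules:
            self.connections.add((dst, port))  # remember the flow
            return True
        return False

    def inbound_return(self, src: str, port: int) -> bool:
        # No inbound rule needed: return traffic matches the tracked flow.
        return (src, port) in self.connections
```

A stateless NACL, by contrast, would also require an explicit inbound rule covering the ephemeral return ports.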
A dedicated management VPC serves as the nerve center for common solutions such as security, patching, monitoring, logging, administration, and directory services. This shared resource optimizes efficiency and consistency across both internet and intranet components.
Critical to the architecture is the Integration VPC housing the API Gateway. This gateway oversees authentication, authorization, and payload inspection for all inbound and outbound APIs. All traffic between internet and intranet transits through these integration components, ensuring a centralized and controlled communication channel.
The architecture enforces strict controls, allowing only specific communication flows. For instance, web services communicate exclusively with application services, and application services, in turn, communicate solely with databases and integration components. This meticulous control is extended across various VPCs and subnets.
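The flow rules above amount to a tier-to-tier allowlist with default deny. A sketch, using the tier names from the text (everything else is assumed):

```python
# Permitted source-tier -> destination-tier flows; any pair not listed is denied.
ALLOWED_FLOWS = {
    "web": {"app"},                 # web services talk only to application services
    "app": {"db", "integration"},   # app services talk only to DB and integration
}

def flow_permitted(src_tier: str, dst_tier: str) -> bool:
    """Default deny: a flow is allowed only if explicitly listed."""
    return dst_tier in ALLOWED_FLOWS.get(src_tier, set())
```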
Human traffic is meticulously directed through Internet and Intranet Web services, ensuring a structured entry point for users. System-level inbound and outbound traffic adheres strictly to the designated gateway services for both internet and intranet scenarios.
To fortify the network against DDoS attacks, a Content Delivery Network (CDN) is strategically integrated. Additionally, a Web Application Firewall (WAF) is deployed to mitigate the OWASP Top 10 attacks, enhancing the security posture for internet-facing services.
Microservices are deployed within application subnets, encapsulated in Docker containers and orchestrated using Amazon Elastic Kubernetes Service (EKS). Network Load Balancers (NLBs) and Application Load Balancers (ALBs) are thoughtfully positioned across different VPCs and subnets, ensuring optimal load balancing and failover capabilities.
The architecture seamlessly incorporates a variety of AWS native services, spanning Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). This ensures a comprehensive and well-integrated solution stack.
Both internet and intranet VPCs are equipped to communicate seamlessly with real-time and SFTP services, facilitating diverse functionalities across the ecosystem.
In summary, this multi-cloud architecture is characterized by its stringent controls, well-defined traffic directions, and judicious use of firewall rules, NACLs, and NSGs. By combining the strengths of AWS and Azure, organizations can confidently deploy microservices in a secure, scalable, and efficient manner, laying the foundation for a resilient digital infrastructure.
Finastra Banking Solution Deployment Architecture in OCI.

In this technical blog, we will explore the deployment architecture of Finastra's banking solution in Oracle Cloud Infrastructure (OCI). Finastra offers a range of banking products and solutions, including Core Banking, Payments and Transaction Processing, Digital Banking, Risk and Compliance Management, and Analytics and Business Intelligence. We will discuss the key components and design considerations that contribute to a highly secure and scalable deployment in OCI.
The deployment architecture of Finastra's banking solution in OCI is designed to ensure both internet-facing and intranet-facing access. Retail customers can access the product from the open internet, while maintaining strict security measures. Let's dive into the architecture details:
Multiple VCNs and public/private subnets are provisioned in OCI to segregate and control traffic flow. The architecture places load balancers and bastion hosts in the public subnet to handle internet traffic. Finastra's technical components, such as Apache RPS, Cash Web, VA API, Mobile, Integrator, the report server, and the database, are placed in private subnets to enhance security.
To ensure high availability within a single Availability Domain, the architecture leverages multiple fault domains. This design choice minimizes the impact of failures and provides a resilient solution.
A site-to-site VPN tunnel provides a private and secure communication channel between OCI and on-premises services, enabling seamless integration between the Finastra banking solution running in OCI and existing infrastructure.
Separate load balancers are deployed to handle internet and intranet traffic. This configuration ensures load distribution, high availability, and eliminates single points of failure.
The entire deployment architecture is designed with security in mind. Internet traffic is routed through a Web Application Firewall (WAF) to mitigate the OWASP Top 10 attacks. Different subnets are created to control traffic flow between resources, and access to the database is restricted to necessary subnets and services. Additionally, OCI's native security features, such as Network Security Groups, are utilized to enforce fine-grained security policies.
Traffic between VCNs is routed through OCI Dynamic Routing Gateways (DRGs), ensuring secure and efficient communication. Private resources reach the internet through an OCI NAT gateway, enabling outbound connectivity while maintaining a secure environment. The OCI Service Gateway allows OCI resources to communicate with other OCI services, such as Object Storage, over a private network, avoiding exposure to the public internet.
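Each of the three gateway types above serves a distinct destination class. A minimal dispatch sketch, with illustrative names rather than real OCI identifiers:

```python
# Destination class -> OCI gateway, as described above (names are illustrative).
NEXT_HOP = {
    "peer-vcn": "dynamic-routing-gateway",  # VCN-to-VCN traffic via the DRG
    "oci-service": "service-gateway",       # e.g. Object Storage, kept private
    "internet": "nat-gateway",              # outbound-only for private subnets
}

def gateway_for(destination_class: str) -> str:
    """Pick the gateway that should carry traffic for a destination class."""
    return NEXT_HOP[destination_class]
```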
The deployment architecture of Finastra's banking solution in OCI provides a highly secure and scalable environment. By leveraging OCI's capabilities, such as fault domains, load balancing, VPN tunnels, and native security features, financial institutions can confidently deploy Finastra's banking products while ensuring the confidentiality, integrity, and availability of their banking services.