Cloud architecture refers to how technologies and components are built in a cloud environment. A cloud environment comprises a network of servers that are located in various places globally, and each serves a specific purpose. With the growth of cloud computing and cloud-native development, modern development practices are constantly changing to adapt to this rapid evolution. This Zone offers the latest information on cloud architecture, covering topics such as builds and deployments to cloud-native environments, Kubernetes practices, cloud databases, hybrid and multi-cloud environments, cloud computing, and more!
In the ever-evolving landscape of software engineering, the database stands as a cornerstone for storing and managing an organization's critical data. From ancient caves and temples that symbolize the earliest forms of information storage to today's distributed databases, the need to persistently store and retrieve data has been a constant in human history. In modern applications, the significance of a well-managed database is indispensable, especially as we navigate the complexities of cloud-native architectures and application modernization.

Why a Database?

1. State Management in Microservices and Stateless Applications
In the era of microservices and stateless applications, the database plays a pivotal role in housing the state and is crucial for user information and stock management. Despite the move towards stateless designs, certain aspects of an application still require a persistent state, making the database an integral component.

2. Seizing Current Opportunities
The database is not just a storage facility; it encapsulates the current opportunities vital for an organization's success. Whether it's customer data, transaction details, or real-time analytics, the database houses the pulse of the organization's present, providing insights and supporting decision-making processes.

3. Future-Proofing for Opportunities Ahead
As organizations embrace technologies like Artificial Intelligence (AI) and Machine Learning (ML), the database becomes the bedrock for unlocking new opportunities. Future-proofing involves not only storing current data efficiently but also structuring the database to facilitate seamless integration with emerging technologies.

The Challenges of Database Management
Handling a database is not without its challenges. The complexity arises from various factors, including modeling, migration, and the constant evolution of products.

1. Modeling Complexity
The initial modeling phase is crucial, often conducted when a product is in its infancy, or the organization lacks the maturity to perform optimally. The challenge lies in foreseeing the data requirements and relationships accurately.

2. Migration Complexity
Unlike code refactoring on the application side, database migration introduces complexity that surpasses application migration. The need for structural changes, data transformations, and ensuring data integrity makes database migration a challenging endeavor.

3. Product Evolution
Products evolve, and so do their data requirements. The challenge is to manage the evolutionary data effectively, ensuring that the database structure remains aligned with the changing needs of the application and the organization.

Polyglot Persistence: Exploring Database Options
In the contemporary software landscape, the concept of polyglot persistence comes into play, allowing organizations to choose databases that best suit their specific scenarios. This approach involves exploring relational databases, NoSQL databases, and NewSQL databases based on the application's unique needs.

Integrating Database and Application: Bridging Paradigms
One of the critical challenges in mastering Java Persistence lies in integrating the database with the application. This integration becomes complex due to the mismatch between programming paradigms in Java and database systems.

Patterns for Integration
Several design patterns aid in smoothing the integration process.
Patterns like Driver, Active Record, Data Mapper, Repository, DAO (Data Access Object), and DTO (Data Transfer Object) provide blueprints for bridging the gap between the Java application and the database.

Data-Oriented vs. Object-Oriented Programming
While Java embraces object-oriented programming principles like inheritance, polymorphism, encapsulation, and types, the database world revolves around normalization, denormalization, and structural considerations. Bridging these paradigms requires a thoughtful approach.

Principles of Data-Oriented Programming
Separating code (behavior) from data: Encourage a clean separation between business logic and data manipulation.
Representing data with generic data structures: Use generic structures to represent data, ensuring flexibility and adaptability.
Treating data as immutable: Embrace immutability to enhance data consistency and reliability.
Separating data schema from data representation: Decouple the database schema from the application's representation of data to facilitate changes without affecting the entire system.

Principles of Object-Oriented Programming
Expose behavior and hide data: Maintain a clear distinction between the functionality of objects and their underlying data.
Abstraction: Utilize abstraction to simplify complex systems and focus on essential features.
Polymorphism: Leverage polymorphism to create flexible and reusable code.

Conclusion
Mastering Java Persistence requires a holistic understanding of these principles, patterns, and paradigms. The journey involves selecting the proper database technologies and integrating them seamlessly with Java applications while ensuring adaptability to future changes. In this dynamic landscape, success stories, documentation, and a maturity model serve as guiding beacons, aiding developers and organizations in their pursuit of efficient and robust database management for cloud-native applications and modernization initiatives.

Video and Slide Presentation
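To make the Repository pattern and the data-oriented principles above a little more concrete, here is a minimal sketch. It is written in TypeScript rather than Java purely for consistency with the other examples on this page, and the entity and repository names are illustrative, not taken from the talk.

TypeScript
// Illustrative sketch only: Product and ProductRepository are hypothetical names.

// Data is represented as an immutable, behavior-free structure.
interface Product {
  readonly id: string;
  readonly name: string;
  readonly priceInCents: number;
}

// Behavior (persistence logic) lives behind a Repository abstraction,
// keeping business code decoupled from the storage technology.
interface ProductRepository {
  findById(id: string): Promise<Product | undefined>;
  save(product: Product): Promise<void>;
}

// A simple in-memory implementation; a JPA/JDBC- or NoSQL-backed one
// could be swapped in without touching the callers.
class InMemoryProductRepository implements ProductRepository {
  private readonly store = new Map<string, Product>();

  async findById(id: string): Promise<Product | undefined> {
    return this.store.get(id);
  }

  async save(product: Product): Promise<void> {
    this.store.set(product.id, product);
  }
}

// Business logic depends only on the abstraction and treats data as immutable:
// it produces a new Product value instead of mutating the existing one.
async function applyDiscount(repo: ProductRepository, id: string, percent: number): Promise<void> {
  const product = await repo.findById(id);
  if (!product) return;
  const discounted: Product = {
    ...product,
    priceInCents: Math.round(product.priceInCents * (1 - percent / 100)),
  };
  await repo.save(discounted);
}

The point is that applyDiscount never sees the storage details; swapping the in-memory repository for a relational or NoSQL-backed implementation requires no change to the business logic.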
Amazon Elastic Compute Cloud (EC2) stands as a cornerstone of AWS's suite of cloud services, providing a versatile platform for computing on demand. Yet, the true power of EC2 lies in its diverse array of instance types, each meticulously crafted to cater to distinct computational requirements, underpinned by a variety of specialized hardware architectures. This article goes into detail, exploring the intricacies of these instance types and dissecting the hardware that drives them. Through this foundational approach, we aim to furnish a more profound comprehension of EC2's ecosystem, equipping you with the insights necessary to make the right decisions when selecting the most apt instance for your specific use case. Why Understand the Hardware Beneath the Instances? When diving into cloud computing, it's tempting to view resources like EC2 instances as abstracted boxes, merely serving our applications without much thought to their inner workings. However, having a fundamental understanding of the underlying hardware of your chosen EC2 instance is crucial. This knowledge not only empowers you to make more informed decisions, optimizing both performance and costs, but also ensures your applications run smoothly, minimizing unexpected disruptions. Just as a chef selects the right tools for a dish or a mechanic chooses the correct parts for a repair, knowing the hardware components of your EC2 instances can be the key to unlocking their full potential. In this article, we'll demystify the hardware behind the EC2 curtains, helping you bridge the gap between abstract cloud resources and tangible hardware performance. Major Hardware Providers and Their Backgrounds Intel For years, Intel has been the cornerstone of cloud computing, with its Xeon processors powering a vast majority of EC2 instances. Renowned for their robust general-purpose computing capabilities, Intel's chips excel in a wide array of tasks, from data processing to web hosting. Their Hyper-Threading technology allows for higher multi-tasking, making them versatile for varied workloads. However, premium performance often comes at a premium cost. AMD AMD instances, particularly those sporting the EPYC series of processors, have started gaining traction in the cloud space. They are often pitched as cost-effective alternatives to Intel without compromising much on performance. AMD's strength lies in providing a high number of cores, making them suitable for tasks that benefit from parallel processing. They can offer a balance between price and performance, particularly for businesses operating on tighter budgets. ARM (Graviton) ARM's Graviton and Graviton2 processors represent a departure from traditional cloud computing hardware. These chips are known for their energy efficiency, derived from ARM's heritage in mobile computing. As a result, Graviton-powered instances can deliver a superior price-performance ratio, especially for scale-out workloads that can distribute tasks across multiple servers. They're steadily becoming the go-to choice for businesses prioritizing efficiency and cost savings. NVIDIA When it comes to GPU-intensive tasks, NVIDIA stands uncontested. Their Tesla and A100 GPUs, commonly found in EC2's GPU instances, are designed for workloads that demand heavy computational power. Whether machine learning training, 3D rendering, or high-performance computing, NVIDIA-powered instances offer accelerated performance. 
However, the specialized nature of these instances means they might not be the best choice for general computing tasks and can be more expensive. In essence, while EC2 instance families provide a high-level categorization, the real differentiation in performance, cost, and suitability comes from these underlying hardware providers. By understanding the strengths and limitations of each, businesses can tailor their cloud deployments to achieve the desired balance of performance and cost. 1. General Purpose Instances Notable types: T3/T4g (Intel/ARM), M7i/M7g (Intel/ARM), etc. Primary use: Balancing compute, memory, and networking Practical application: Web servers: A standard web application or website that requires balanced resources can run seamlessly on general-purpose instances Developer environments: The burstable performance of t2 and t3 makes them ideal for development and testing environments where resource demand fluctuates. 2. Compute Optimized Instances Notable Types: C7i/C7g (Intel/ARM), etc. Primary Use: High computational tasks Practical application: High-performance web servers: Websites with massive traffic or services that require quick response times Scientific modeling: Simulating climate patterns, genomic research, or quantum physics calculations 3. Memory Optimized Instances Notable Types: R7i/R7g (Intel/ARM), X1/X1e (Intel), etc. Primary Use: Memory-intensive tasks Practical Application: Large-scale databases: Running applications like MySQL, PostgreSQL, or big databases like SAP HANA Real-time Big Data analytics: Analyzing massive data sets in real-time, such as stock market trends or social media sentiment analysis 4. Storage Optimized Instances Notable types: I3/I3en (Intel), D3/D3en (Intel), H1 (Intel), etc. Primary use: High random I/O access Practical Application: NoSQL databases: Deploying high-transaction databases like Cassandra or MongoDB Data warehousing: Handling and analyzing vast amounts of data, such as user data for large enterprises 5. Accelerated Computing Instances Notable types: P5 (NVIDIA/AMD), Inf1 (Intel), G5 (NVIDIA), etc. Primary use: GPU-intensive tasks Practical application: Machine Learning: Training complex models or neural networks Video rendering: Creating high-quality animation or special effects for movies 6. High-Performance Computing (HPC) Instances Notable types: Hpc7g, Hpc7a Primary use: Tasks requiring extremely high frequencies or hardware acceleration Practical Application: Electronic Design Automation (EDA): Designing and testing electronic circuits Financial simulations: Predicting stock market movements or calculating complex investment scenarios 7. Bare Metal Instances Notable types: m5.metal, r5.metal (Intel Xeon) Primary use: Full access to underlying server resources Practical application: High-performance databases: When databases like Oracle or SQL Server require direct access to server resources Sensitive workloads: Tasks that must comply with strict regulatory or security requirements Each EC2 instance family is tailored for specific workload requirements, and the underlying hardware providers further influence their performance. Users can achieve optimal performance and cost efficiency by aligning the workload with the appropriate instance family and hardware.
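Once you have a shortlist of instance families, it can help to compare candidates programmatically instead of browsing spec sheets. The following sketch is my own addition rather than part of the article: it uses the AWS SDK for JavaScript v3 to pull vCPU, memory, and architecture details, and the candidate types and region are placeholders.

TypeScript
import { EC2Client, DescribeInstanceTypesCommand } from "@aws-sdk/client-ec2";

async function compareInstanceTypes(): Promise<void> {
  // Region is an assumption; use whichever region you deploy to.
  const client = new EC2Client({ region: "us-east-1" });

  // Candidate types are examples; swap in the ones you are evaluating.
  const { InstanceTypes } = await client.send(
    new DescribeInstanceTypesCommand({
      InstanceTypes: ["m7g.large", "c7i.large", "r7g.large"],
    })
  );

  for (const it of InstanceTypes ?? []) {
    console.log(
      `${it.InstanceType}: ${it.VCpuInfo?.DefaultVCpus} vCPUs, ` +
        `${(it.MemoryInfo?.SizeInMiB ?? 0) / 1024} GiB RAM, ` +
        `arch: ${it.ProcessorInfo?.SupportedArchitectures?.join(", ")}`
    );
  }
}

compareInstanceTypes().catch(console.error);

Pairing output like this with pricing data makes it easier to document why a particular family (Intel, AMD, or Graviton) was chosen for a workload.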
Uploading massive datasets to Amazon S3 can be daunting, especially when dealing with gigabytes of information. However, a solution is within reach: we can streamline the process by harnessing the streaming capabilities of a Node.js TypeScript application. Streaming enables us to transfer substantial amounts of data to AWS S3 efficiently while conserving memory and ensuring scalability. In this article, we will develop a Node.js TypeScript application that uploads gigabytes of data to AWS S3 using streaming.

Setting up the Node.js Application
Let's start by setting up a new Node.js project:

Shell
mkdir aws-s3-upload
cd aws-s3-upload
npm init -y

Next, install the necessary dependencies:

Shell
npm install aws-sdk express multer multer-s3 uuid
npm install --save-dev typescript ts-node @types/express @types/multer @types/uuid

Configuring AWS SDK and Multer
In this section, we'll configure the AWS SDK to enable communication with Amazon S3. Ensure you have your AWS credentials ready.

JavaScript
import express from 'express';
import { S3 } from 'aws-sdk';
import multer from 'multer';
import multerS3 from 'multer-s3';
import { v4 as uuidv4 } from 'uuid';

const app = express();
const port = 3000;

const s3 = new S3({
  accessKeyId: 'YOUR_AWS_ACCESS_KEY_ID',
  secretAccessKey: 'YOUR_AWS_SECRET_ACCESS_KEY',
  region: 'YOUR_AWS_REGION',
});

We'll also set up Multer to handle file uploads directly to S3. Define the storage configuration and create an upload middleware instance.

JavaScript
const upload = multer({
  storage: multerS3({
    s3,
    bucket: 'YOUR_S3_BUCKET_NAME',
    contentType: multerS3.AUTO_CONTENT_TYPE,
    acl: 'public-read',
    key: (req, file, cb) => {
      cb(null, `uploads/${uuidv4()}_${file.originalname}`);
    },
  }),
});

Creating the File Upload Endpoint
Now, let's create a POST endpoint for handling file uploads and start the server:

JavaScript
app.post('/upload', upload.single('file'), (req, res) => {
  if (!req.file) {
    return res.status(400).json({ message: 'No file uploaded' });
  }
  const uploadedFile = req.file;
  console.log('File uploaded successfully. S3 URL:', uploadedFile.location);
  res.json({
    message: 'File uploaded successfully',
    url: uploadedFile.location,
  });
});

app.listen(port, () => {
  console.log(`Server listening on port ${port}`);
});

Testing the Application
To test the application, you can use tools like Postman or cURL. Ensure you set the Content-Type header to multipart/form-data and include a file in the request body with the field name 'file'.

Choosing Between Database Storage and Cloud Storage
Whether to store files in a database or an S3 bucket depends on your specific use case and requirements. Here's a brief overview:

Database Storage
Data integrity: Ideal for ensuring data integrity and consistency between structured data and associated files, thanks to ACID transactions.
Security: Provides fine-grained access control mechanisms, including role-based access control.
File size: Suitable for small to medium-sized files in terms of performance and storage cost.
Transactional workflows: Useful for applications with complex transactions involving both structured data and files.
Backup and recovery: Facilitates inclusion of files in database backup and recovery processes.

S3 Bucket Storage
Scalability: Perfect for large files and efficient file storage, scaling to gigabytes, terabytes, or petabytes of data.
Performance: Optimized for fast file storage and retrieval, especially for large media files or binary data.
Cost-efficiency: Cost-effective for large volumes of data compared to databases, with competitive pricing.
Simplicity: Offers straightforward file management, versioning, and easy sharing via public or signed URLs.
Use cases: Commonly used for storing static assets, for content delivery, and as a scalable backend for web and mobile file uploads.
Durability and availability: Ensures high data durability and availability, suitable for critical data storage.
Hybrid approach: In some cases, metadata and references to files are stored in a database, while the actual files are stored in an S3 bucket, combining the strengths of both approaches.

The choice should align with your application's needs, considering factors like file size, volume, performance requirements, data integrity, access control, and budget constraints.

Multer vs. Formidable — Choosing the Right File Upload Middleware
When building Node.js applications with Express, choosing a suitable file upload middleware is essential. Let's compare two popular options: Multer and Formidable.

Multer With Express
Express integration: Seamlessly integrates with Express for easy setup and usage.
Abstraction layer: Provides a higher-level abstraction for handling file uploads, reducing boilerplate code.
Middleware chain: Easily fits into Express middleware chains, enabling selective usage on specific routes or endpoints.
File validation: Supports built-in file validation, enhancing security and control over uploaded content.
Multiple file uploads: Handles multiple file uploads within a single request efficiently.
Documentation and community: Benefits from extensive documentation and an active community.
File renaming and storage control: Allows customization of file naming conventions and storage location.

Formidable With Express
Versatility: Works across various HTTP server environments, not limited to Express, offering flexibility.
Streaming: Capable of processing incoming data streams, ideal for handling huge files efficiently.
Customization: Provides granular control over the parsing process, supporting custom logic.
Minimal dependencies: Keeps your project lightweight with minimal external dependencies.
Widely adopted: A well-established library in the Node.js community.

Choose between Multer and Formidable based on your project's requirements and your familiarity with each library. Multer is excellent for seamless integration with Express, built-in validation, and a straightforward approach. Formidable is preferred when you need more customization, versatility, or streaming capabilities for large files.

Conclusion
This article has demonstrated how to develop a Node.js TypeScript application for efficiently uploading large data sets to Amazon S3 using streaming. Streaming is a memory-efficient and scalable approach, especially when dealing with gigabytes of data. Following the steps outlined in this guide, you can enhance your data upload capabilities and build more robust applications.
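The Multer-based endpoint above streams each incoming request through multer-s3, but for very large objects you may prefer an explicit multipart upload. The sketch below is a hedged alternative using the AWS SDK for JavaScript v3 and its @aws-sdk/lib-storage helper (a different package from the aws-sdk v2 used in this article); the bucket, key, and file path are placeholders.

TypeScript
import { createReadStream } from "fs";
import { S3Client } from "@aws-sdk/client-s3";
import { Upload } from "@aws-sdk/lib-storage";

// Placeholders: replace with your own bucket, key, and file path.
const BUCKET = "YOUR_S3_BUCKET_NAME";
const KEY = "uploads/huge-dataset.csv";
const FILE_PATH = "./huge-dataset.csv";

async function streamToS3(): Promise<void> {
  const client = new S3Client({ region: "YOUR_AWS_REGION" });

  // Upload performs a managed multipart upload, reading the stream in chunks
  // so the whole file never has to fit in memory.
  const upload = new Upload({
    client,
    params: { Bucket: BUCKET, Key: KEY, Body: createReadStream(FILE_PATH) },
    queueSize: 4, // number of parts uploaded concurrently
    partSize: 10 * 1024 * 1024, // 10 MB per part
  });

  upload.on("httpUploadProgress", (progress) => {
    console.log(`Uploaded ${progress.loaded} of ${progress.total ?? "unknown"} bytes`);
  });

  await upload.done();
  console.log("Upload complete");
}

streamToS3().catch(console.error);

Because the body is a stream and the transfer is split into parts, memory usage stays roughly bounded by queueSize × partSize no matter how large the file is.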
You’ve probably heard that AWS is no longer allowing its customers to resell Reserved Instances starting January 15, 2024. If you’ve been reselling unused RI capacity directly on the Marketplace or via a third-party provider, this is no longer an option. Keep reading to learn more about the ban and find a way out.

Quick Summary of AWS’s Ban on RI Resale
AWS will prohibit the resale of Reserved Instances (RIs) acquired at a discount on the Amazon EC2 Reserved Instance Marketplace as of January 15, 2024. This is due to Section 5.5 of the AWS Service Terms, which prohibits the sale of discounted RIs. However, as a courtesy, customers can still list discounted RIs for sale on the Marketplace until January 15, 2024 – but only if they were obtained before October 1, 2023.

Full Letter From AWS About the RI Resale Ban
AWS does not permit the resale of RIs obtained through a discount program (per AWS Service Terms 5.5). […] We are extending a compliance period to give customers time to move their RIs to come into compliance with AWS Service Terms. During this time, customers may list any RIs (even if the RIs received a discount) purchased before 1-Oct-2023, on the Amazon EC2 Reserved Instance Marketplace for sale through 15-Jan-2024. However, the compliance window will close, and after 15-Jan-2024, customers may no longer have any listings and/or sales of RIs purchased via a discount program on the Amazon EC2 Reserved Instance Marketplace.

How To Overcome the RI Resale Ban: Alternatives To Reserved Instances

Solution 1: AWS Savings Plans
One way to deal with the ban when planning your future purchases is to go for AWS Savings Plans instead. A Savings Plan is a pricing model that offers a discount compared to On-Demand rates in exchange for committing to a consistent amount of compute usage, measured in dollars per hour, for one or three years. Usage up to that hourly commitment is billed at the reduced Savings Plans rate; anything beyond it is charged at the normal On-Demand price.

EC2 Savings Plan vs. Compute Savings Plan
An EC2 Instance Savings Plan offers up to 72% off EC2 instances; the commitment is tied to an instance family in a region, while you retain flexibility across instance size, OS, and tenancy. AWS also provides Compute Savings Plans, which offer a slightly lower maximum discount (66% versus 72%) but add flexibility across instance family, region, operating system, tenancy, and even individual compute services.

Solution 2: Look Beyond Commitment for Cost Savings
Savings Plans can help you save money on AWS, but you’re still in charge of infrastructure optimization. This is why picking the right size and type of compute instances is such an important task. If you manage a large cloud environment, you’ll need a system that automates cost optimization activities like rightsizing, autoscaling, instance type selection, and others. It takes time to figure out which resources are running, which instance families they belong to, and which teams own them. Trying to make sense of all 500+ EC2 instance types offered by AWS is no walk in the park. It can take many days or weeks to assess your inventory and use it to determine which instances to keep and which ones to get rid of.

This AWS policy change means commitment plans now come with a 100% consumption lock-in. Using them for any workloads that don’t have a steady demand is risky. For applications with changing demand, use an automation solution that matches resources to real-time demand.
What you need is a unified platform that combines all the cost optimization tactics, including: Workload and node rightsizing to achieve optimal setup even without RIs, Spot instance automation for up to 90% of cost savings, Automated rebalancing to quickly achieve an optimized state, Automated bin packing for optimal resource utilization. CAST AI is a fully automated cloud cost optimization platform that generates cost savings of 60% and more on average without any sort of vendor lock-in. Book a call with one of our solution engineers to find out what alternative cost-cutting measures you can take to score cost savings without making any commitments to AWS. In case you’re not sure what we’re talking about, here’s a primer on AWS Reserved Instances. Recap: What Are Reserved Instances? Companies pick Reserved Instances because they offer significant savings over pay-as-you-go On-Demand pricing – 72%, to be exact. All you need is to make a commitment to a specific cloud capacity for a set length of time. AWS gives you two options: a one-year or three-year commitment. In certain situations, you will also be guaranteed that specific resources will be available to you at a particular hosting location. Choose an instance type, size, platform, and area, then click Finish. It’s like receiving a coupon that you can use to earn a discount at any moment throughout your selected reservation period, which can be shared across teams. And the greater your initial payment, the greater the savings. However, there is a catch. A Reserved Instance has a “use it or lose it” policy. Every hour your instance is idle is an hour lost (along with any financial rewards you could get). To make the most of your Reserved Instance, you must anticipate exactly what your team will require. Types of Reserved Instances Standard Reserved Instances A Standard Reserved Instance offers greater savings than a Convertible Reserved Instance, but it cannot be exchanged. You could, however, sell them via the Reserved Instance Marketplace (with certain limitations on discounted resources). This is the type of Reserved Instance the ban addresses. Convertible Reserved Instances Convertible Reserved Instances, on the other hand, can be exchanged during the term for a new Convertible Reserved Instance with additional properties such as instance family, instance type, platform, scope, or tenancy. You can’t resell it on the Reserved Instance Marketplace. Scheduled Reserved Instances Purchasing Reserved Instances on a recurring schedule lets you pay for compute power by the hour and reserve capacity ahead of time for only the periods when you’ll need it. Amazon EC2 sets the pricing, and it may fluctuate depending on supply and demand for Scheduled Reserved Instance capacity, as well as the time characteristics of your schedule. However, once you get a Scheduled Reserved Instance, the price you were quoted for is the one you’ll pay. Key Factors Influencing Reserved Instance Pricing Commitment period (1 year vs. three years), Payment option (Full up-front, Partial up-front, No up-front), Region and availability zones, Instance type and family you choose. Reserved Instance Optimization Reserved Instance Optimization is the process of consistently increasing the value you get from using Reserved Instances. The idea is to maximize your RI consumption and associated charges as your AWS setup and computing demands vary over the course of your subscription. 
Here are a few best practices for RI optimization: Continuously monitor infrastructure usage: Utilization is a key metric for those looking to make the most of reserved capacity, so make sure that you have a viable way to measure it (this includes real-time monitoring). Make sure that instances you launch match your discount: AWS will try to match your deployed instances to your current RI discounts, but what if some teams believe that they’re launching instances fulfilling all of the criteria, but in reality, they’re not? By not ensuring this, you risk that your contracts will get underutilized. Adjust RI purchases based on workload changes: Knowing what lies ahead is hard in the cloud world, but it’s still worth it to forecast your predicted usage to have a rough idea of how your workload changes may affect your RI buying plan. Use tools and platforms for optimization: Tools like the AWS Trusted Advisor come in handy for managing RIs. AWS Trusted Advisor examines your EC2 consumption history and generates an ideal number of Partial Upfront Reserved Instances to help you maximize RI utilization. The AWS Reserved Instance Marketplace The AWS Reserved Instance Marketplace is a virtual marketplace where AWS users can sell or buy Reserved Instances from AWS or other third parties. The idea behind it was to provide teams with greater flexibility and savings because estimating workload demands in advance is so hard. To buy RIs on the Marketplace, you can use the EC2 interface and click the “Purchase Reserved Instances” button on the Reserve Instance screen. From here, you can select the OS, instance type, tenancy, RI duration, and payment method. This is the same interface that consumers use to purchase conventional or convertible RIs, but it now lets them select any period from one month to 36 months. To sell your RIs as a third party on the Marketplace, you must first register as a seller. The root user of your AWS account needs to sign up. And let’s not forget about the limitations on reselling RIs. As of January 15, 2024, AWS will restrict the resale of Reserved Instances (RIs) purchased at a discount on the Amazon EC2 Reserved Instance Marketplace.
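To make the "continuously monitor infrastructure usage" advice above actionable, here is a hedged sketch that pulls RI utilization from the Cost Explorer API with the AWS SDK for JavaScript v3. The date range is a placeholder, and Cost Explorer must be enabled on the account for the call to succeed.

TypeScript
import {
  CostExplorerClient,
  GetReservationUtilizationCommand,
} from "@aws-sdk/client-cost-explorer";

async function reportRIUtilization(): Promise<void> {
  // Cost Explorer is served from us-east-1 regardless of where your RIs live.
  const client = new CostExplorerClient({ region: "us-east-1" });

  const { Total, UtilizationsByTime } = await client.send(
    new GetReservationUtilizationCommand({
      TimePeriod: { Start: "2023-12-01", End: "2024-01-01" }, // placeholder dates
      Granularity: "MONTHLY",
    })
  );

  console.log(`Overall RI utilization: ${Total?.UtilizationPercentage}%`);
  console.log(`Unused hours: ${Total?.UnusedHours}`);

  for (const period of UtilizationsByTime ?? []) {
    console.log(
      `${period.TimePeriod?.Start} to ${period.TimePeriod?.End}: ` +
        `${period.Total?.UtilizationPercentage}% utilized`
    );
  }
}

reportRIUtilization().catch(console.error);

Running a report like this on a schedule (for example, from a Lambda function) gives you an early warning when unused hours start to creep up.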
Docker Extensions was announced as a beta at DockerCon 2022 and became generally available in January 2023. Developing a performance-tooling extension had been on my to-do list for a long time, but due to my master's degree, I couldn't spend time learning the Docker Extensions SDK. I expected someone would have created such an extension by now, considering it's almost 2024, but as far as I know, none had been developed. But no more. Introducing the Apache JMeter Docker Extension. Now, you can run Apache JMeter tests in Docker Desktop without installing JMeter locally. In this blog post, we will explore how to get started with this extension and understand its functionality. We will also cover generating HTML reports and other related topics.

About Docker Extensions
Docker Extensions enable third parties to extend the functionality of Docker by integrating their tools. Think of it like a mobile app store, but for Docker. I frequently use the official Docker Disk Usage extension to analyze disk usage and free up unused space. Extensions enhance the productivity and workflow of developers. Check out the Docker Extension marketplace for some truly amazing extensions. Go see it for yourself!

Prerequisite for Docker Extensions
The only prerequisite is Docker Desktop 4.8.0 or later installed locally.

Apache JMeter Docker Extension
The Apache JMeter Docker Extension is an open-source, lightweight extension and, as of this writing, the only JMeter extension available. It helps you run JMeter tests on Docker without installing JMeter locally. This extension simplifies the process of setting up and executing JMeter tests within Docker containers, streamlining your performance testing workflow. Whether you're a seasoned JMeter pro or just getting started, this tool can help you save time and resources.

Features
Includes the base image qainsights/jmeter:latest by default
Lightweight and secure container
Supports JMeter plugins
Mount volume for easy management
Supports property files
Supports proxy configuration
Generates logs and results
Intuitive HTML report
Displays runtime console logs
Timely notifications

How To Install Apache JMeter Docker Extension
Installation is a breeze. There are two ways you can install the extension.

Command Line
Run docker extension install qainsights/jmeter-docker-extension:0.0.2 in your terminal and follow the prompts. IMPORTANT: Before you install, make sure you are using the latest version tag. You can check the latest tags in Docker Hub.

Shell
$> docker extension install qainsights/jmeter-docker-extension:0.0.1
Extensions can install binaries, invoke commands, access files on your machine and connect to remote URLs. Are you sure you want to continue? [y/N] y
Image not available locally, pulling qainsights/jmeter-docker-extension:0.0.1...
Extracting metadata and files for the extension "qainsights/jmeter-docker-extension:0.0.1"
Installing service in Desktop VM...
Setting additional compose attributes
Installing Desktop extension UI for tab "JMeter"...
Extension UI tab "JMeter" added.
Starting service in Desktop VM......
Service in Desktop VM started
Extension "JMeter" installed successfully

Web
Here is the direct link to install the JMeter extension. Follow the prompts to get it installed. Install JMeter Docker Extension
Click on Install anyway to install the extension.
How To Get Started With JMeter Docker Extension After installing the JMeter Docker extension, navigate to the left sidebar as shown below, then click on JMeter. Now, it is time to execute our first tests on Docker using the JMeter extension. The following are the prerequisites to execute the JMeter tests. valid JMeter test plan optional proxy credentials optional JMeter properties file The user interface is pretty simple, intuitive, and self-explanatory. All it has is text fields, buttons, and the output console log. The extension has the following sections: Image and Volume This extension works well with the qainsights/jmeter:latest image Other images might not work; I have not tested it. Mapping the volume from the host to the Docker container is crucial to sharing the test plan, CSV test data, other dependencies, property files, results, and other files. Test Plan A valid test plan must be kept inside the shared volume. Property Files This section helps you to pass the runtime parameters to the JMeter test plan. Logs and Results This section helps you to configure the logs and results. After each successful test, logs and an HTML report will be generated and saved in a shared volume. Proxy and its credentials Optionally, you can send a proxy and its credentials. This is helpful when you are on the corporate network so that the container can access the application being tested. Below is the example test where the local volume /Users/naveenkumar/Tools/apache-jmeter-5.6.2/bin/jmeter-tests is mapped to the container volume jmeter-tests. Here is the content in /Users/naveenkumar/Tools/apache-jmeter-5.6.2/bin/jmeter-tests folder in my local. The above artifacts will be shared with the Docker container once it is up and running. In the above example, /jmeter-tests/CSVSample.jmx will be executed inside the container. It will use the below loadtest.properties. Once all the values are configured, hit the Run JMeter Test button. During the test, you can pay attention to a couple of sections. One is console logs. For each test, the runtime logs will be streamed from the Docker container, as shown below. In case there are any errors, you can check them under the Notifications section. Once the test is done, Notifications will display the status and the location of the HTML report (your mapped volume). Here is the auto-generated HTML report. How JMeter Docker Extension Works and Its Architecture On a high level, this extension is simple, as shown in the below diagram. Once you click on the Run button, the extension first validates all the input and the required fields. If the validation check passes, then the extension will look up the artifacts from the mapped volume. Then, it passes all respective JMeter arguments to the image qainsights/jmeter:latest. If the image is not present, it will get pulled from the Docker container registry. Then, the container will be created by Docker and perform the test execution. During the test execution, container logs will be streamed to the output console logs. To stop the test, click the Terminate button to nuke the container. This action is irreversible and will not generate any test results. Once the test is done, the HTML report and the logs will be shared with the mapped volume. How To Uninstall the Extension There are two ways to uninstall the extension. Using the CLI, issue docker extension uninstall qainsights/jmeter-docker-extension:0.0.1 or from the Docker Desktop. 
Navigate to Docker Desktop > Extensions > JMeter, then click on the menu to uninstall, as shown below.

Known Issues
There are a couple of known issues (or more, if you find them): you can start the test as many times as you want, which generates additional load on the target under test, and only frequently used JMeter arguments are supported. If you would like to add more arguments, please raise an issue in the GitHub repo.

Upcoming Features
There are a couple of features I am planning to implement based on the reception:
Add a dashboard to track the tests
Display graphs/charts at runtime
A way to add JMeter plugins on the fly
If you have any other exciting ideas, please let me know.

JMeter Docker Extension GitHub Repo

Conclusion
The introduction of the Apache JMeter Docker Extension is a significant step forward for developers and testers looking to streamline their performance testing workflow. With this open-source and lightweight extension, you can run JMeter tests in Docker without the need to install JMeter locally, saving you time and resources. Despite a few known issues and limitations, such as supporting only frequently used JMeter arguments, the extension holds promise for the future. In summary, the Apache JMeter Docker Extension provides a valuable tool for developers and testers, enabling them to perform JMeter tests efficiently within Docker containers, and it's a welcome addition to the Docker Extension ecosystem. It's worth exploring for anyone involved in performance testing and looking to simplify their workflow.
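For readers curious how an extension UI can drive a container the way the architecture section above describes, here is a hedged sketch using the Docker extension client API. It illustrates the general approach only; it is not the extension's actual source code, and the volume name, file paths, and JMeter arguments are placeholders.

TypeScript
import { createDockerDesktopClient } from "@docker/extension-api-client";

const ddClient = createDockerDesktopClient();

// Runs a JMeter test in a disposable container, conceptually similar to what
// happens when you click Run JMeter Test. Paths refer to files inside the mapped volume.
export async function runJMeterTest(): Promise<void> {
  const result = await ddClient.docker.cli.exec("run", [
    "--rm",
    "-v", "jmeter-tests:/jmeter-tests",        // volume holding the test assets
    "qainsights/jmeter:latest",
    "-n",                                      // non-GUI mode
    "-t", "/jmeter-tests/CSVSample.jmx",       // test plan
    "-q", "/jmeter-tests/loadtest.properties", // property file
    "-l", "/jmeter-tests/results.jtl",         // results file
    "-e", "-o", "/jmeter-tests/html-report",   // generate the HTML report
  ]);

  // Surface the container's console output in the extension UI.
  console.log(result.stdout);
}

The real extension layers input validation, streamed console logs, and notifications on top of this basic flow.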
Tools and platforms form the backbone of seamless software delivery in the ever-evolving world of Continuous Integration and Continuous Deployment (CI/CD). For years, Jenkins has been the stalwart, powering countless deployment pipelines and standing as the go-to solution for many DevOps professionals. But as the tech landscape shifts towards cloud-native solutions, AWS CodePipeline emerges as a formidable contender. Offering deep integration with the expansive AWS ecosystem and the agility of a cloud-based platform, CodePipeline is redefining the standards of modern deployment processes. This article dives into the transformative power of AWS CodePipeline, exploring its advantages over Jenkins and showing why many are switching to this cloud-native tool. Brief Background About CodePipeline and Jenkins At its core, AWS CodePipeline is Amazon Web Services' cloud-native continuous integration and continuous delivery service, allowing users to automate the build, test, and deployment phases of their release process. Tailored to the vast AWS ecosystem, CodePipeline leverages other AWS services, making it a seamless choice for teams already integrated with AWS cloud infrastructure. It promises scalability, maintenance ease, and enhanced security, characteristics inherent to many managed AWS services. On the other side of the spectrum is Jenkins – an open-source automation server with a storied history. Known for its flexibility, Jenkins has garnered immense popularity thanks to its extensive plugin system. It's a tool that has grown with the CI/CD movement, evolving from a humble continuous integration tool to a comprehensive automation platform that can handle everything from build to deployment and more. Together, these two tools represent two distinct eras and philosophies in the CI/CD domain. Advantages of AWS CodePipeline Over Jenkins 1. Integration with AWS Services AWS CodePipeline: Offers a native, out-of-the-box integration with a plethora of AWS services, such as Lambda, EC2, S3, and CloudFormation. This facilitates smooth, cohesive workflows, especially for organizations already using AWS infrastructure. Jenkins: While integration with cloud services is possible, it usually requires third-party plugins and additional setup, potentially introducing more points of failure or compatibility issues. 2. Scalability AWS CodePipeline: Being a part of the AWS suite, it natively scales according to the demands of the deployment pipeline. There's no need for manual intervention, ensuring consistent performance even during peak loads. Jenkins: Scaling requires manual adjustments, such as adding agent nodes or reallocating resources, which can be both time-consuming and resource-intensive. 3. Maintenance AWS CodePipeline: As a managed service, AWS handles all updates, patches, and backups. This ensures that the latest features and security patches are always in place without user intervention. Jenkins: Requires periodic manual updates, backups, and patching. Additionally, plugins can introduce compatibility issues or security vulnerabilities, demanding regular monitoring and adjustments. 4. Security AWS CodePipeline: One of the key benefits of AWS's comprehensive security model. Features like IAM roles, secret management with AWS Secrets Manager, and fine-grained access controls ensure robust security standards. Jenkins: Achieving a similar security level necessitates additional configurations, plugins, and tools, which can sometimes introduce more vulnerabilities or complexities. 5. 
Pricing and Long-Term Value AWS CodePipeline: Operates on a pay-as-you-go model, ensuring you only pay for what you use. This can be cost-effective, especially for variable workloads. Jenkins: While the software itself is open-source, maintaining a Jenkins infrastructure (servers, electricity, backups, etc.) incurs steady costs, which can add up in the long run, especially for larger setups. When Might Jenkins Be a Better Choice? Extensive Customization Needs With its rich plugin ecosystem, Jenkins provides a wide variety of customization options. For unique CI/CD workflows or specialized integration needs, Jenkins' vast array of plugins can be invaluable, including integration with non-AWS services. On-Premise Solutions Organizations with stringent data residency or regulatory requirements might prefer on-premise solutions. Jenkins offers the flexibility to be hosted on local servers, providing complete control over data and processes. Existing Infrastructure and Expertise Organizations with an established Jenkins infrastructure and a team well-versed in its intricacies might find transitioning to another tool costly and time-consuming. The learning curve associated with a new platform and migration efforts can be daunting. The team needs to weigh in on the transition along with other items in their roadmap. Final Takeaways In the ever-evolving world of CI/CD, selecting the right tool can be the difference between seamless deployments and daunting processes. Both AWS CodePipeline and Jenkins have carved out their specific roles in this space, yet as the industry shifts more towards cloud-native solutions, AWS CodePipeline indeed emerges at the forefront. With its seamless integration within the AWS ecosystem, innate scalability, and reduced maintenance overhead, it represents the future-facing approach to CI/CD. While Jenkins has served many organizations admirably and offers vast customization, the modern tech landscape is ushering in a preference for streamlined, cloud-centric solutions like AWS CodePipeline. The path from development to production is critical, and while the choice of tools will vary based on organizational needs, AWS CodePipeline's advantages are undeniably compelling for those looking toward a cloud-first future. As we navigate the challenges and opportunities of modern software delivery, AWS CodePipeline offers a promising solution that is more efficient, scalable, secure, and worth considering.
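To illustrate how little wiring a cloud-native pipeline needs compared to maintaining a Jenkins controller and agents, here is a hedged sketch of a CDK Pipelines definition in TypeScript. The repository, branch, and build commands are placeholders, and the GitHub connection setup (for example, the token in Secrets Manager) is omitted.

TypeScript
import { Stack, StackProps } from "aws-cdk-lib";
import { CodePipeline, CodePipelineSource, ShellStep } from "aws-cdk-lib/pipelines";
import { Construct } from "constructs";

// Placeholder repository and branch; adjust commands to match your build.
export class DeliveryPipelineStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    const pipeline = new CodePipeline(this, "Pipeline", {
      pipelineName: "app-delivery-pipeline",
      synth: new ShellStep("Synth", {
        input: CodePipelineSource.gitHub("my-org/my-app", "main"),
        commands: ["npm ci", "npm run build", "npx cdk synth"],
      }),
    });

    // Deployment stages (test, prod) would be added here with pipeline.addStage(...).
  }
}

Because the pipeline itself is defined as code, it is created with a single cdk deploy and then keeps itself up to date on every push, with AWS managing the underlying build and deployment infrastructure.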
Whether it's crafting personalized content or tailoring images to user preferences, the ability to generate visual assets based on a description is quite powerful. But text-to-image conversion typically involves deploying an end-to-end machine learning solution, which is quite resource-intensive. What if this capability was an API call away, thereby making the process simpler and more accessible for developers? This tutorial will walk you through how to use AWS CDK to deploy a Serverless image generation application implemented using AWS Lambda and Amazon Bedrock, which is a fully managed service that makes base models from Amazon and third-party model providers (such as Anthropic, Cohere, and more) accessible through an API. Developers can leverage leading foundation models through a single API while maintaining the flexibility to adopt new models in the future. The solution is deployed as a static website hosted on Amazon S3 accessible via an Amazon CloudFront domain. Users can enter the image description which will be passed on to a Lambda function (via Amazon API Gateway) which in turn will invoke the Stable Diffusion model on Amazon Bedrock to generate the image. The entire solution is built using Go - this includes the Lambda function (using the aws-lambda-go library) as well as the complete solution deployment using AWS CDK. The code is available on GitHub. Prerequisites Before starting this tutorial, you will need the following: An AWS Account (if you don't yet have one, you can create one and set up your environment here) Go (v1.19 or higher) AWS CDK AWS CLI Git Docker Clone this GitHub repository and change it to the right directory: git clone https://github.com/build-on-aws/amazon-bedrock-lambda-image-generation-golang cd amazon-bedrock-lambda-image-generation-golang Deploy the Solution Using AWS CDK To start the deployment, simply invoke cdk deploy. cd cdk export DOCKER_DEFAULT_PLATFORM=linux/amd64 cdk deploy You will see a list of resources that will be created and will need to provide your confirmation to proceed (output shortened for brevity). Bundling asset BedrockLambdaImgeGenWebsiteStack/bedrock-imagegen-s3/Code/Stage... ✨ Synthesis time: 7.84s //.... omitted This deployment will make potentially sensitive changes according to your current security approval level (--require-approval broadening). Please confirm you intend to make the following modifications: //.... omitted Do you wish to deploy these changes (y/n)? y This will start creating the AWS resources required for the application. If you want to see the AWS CloudFormation template which will be used behind the scenes, run cdk synth and check the cdk.out folder. You can keep track of the progress in the terminal or navigate to the AWS console: CloudFormation > Stacks > BedrockLambdaImgeGenWebsiteStack. Once all the resources are created, you can try out the application. You should have: The image generation Lambda function and API Gateway An S3 bucket to host the website's HTML page CloudFront distribution And a few other components (like IAM roles, permissions, S3 Bucket policy, etc.) The deployment can take a bit of time since creating the CloudFront distribution is a time-consuming process. Once complete, you should get a confirmation along with the values for the S3 bucket name, API Gateway URL, and the CloudFront domain name. Update the HTML Page and Copy It to the S3 Bucket Open the index.html file in the GitHub repo, and locate the following text: ENTER_API_GATEWAY_URL. 
Replace this with the API Gateway URL that you received as the CDK deployment output above. To copy the file to S3, I used the AWS CLI:

aws s3 cp index.html s3://<name of the S3 bucket from CDK output>

Verify that the file was uploaded:

aws s3 ls s3://<name of the S3 bucket from CDK output>

Now you are ready to access the website!

Verify the Solution
Enter the CloudFront domain name in your web browser to navigate to the website. You should see the website with a pre-populated description that can be used as a prompt. Click Generate Image to start the process. After a few seconds, you should see the generated image.

Modify the Model Parameters
The Stable Diffusion model allows us to refine the generation parameters as per our requirements. The Stability.ai Diffusion models support the following controls:
Prompt strength (cfg_scale) controls the image's fidelity to the prompt, with lower values increasing randomness.
Generation steps (steps) determines the accuracy of the result, with more steps producing more precise images.
Seed (seed) sets the initial noise level, allowing for reproducible results when using the same seed and settings.
Click Show Configuration to edit these. Max values for cfg_scale and steps are 30 and 150, respectively.

Don’t Forget To Clean Up
Once you're done, to delete all the services, simply use:

cdk destroy

#output prompt (choose 'y' to continue)
Are you sure you want to delete: BedrockLambdaImgeGenWebsiteStack (y/n)?

You were able to set up and try the complete solution. Before we wrap up, let's quickly walk through some of the important parts of the code to get a better understanding of what's going on behind the scenes.

Code Walkthrough
Since we will only focus on the important bits, a lot of the code (print statements, error handling, etc.) has been omitted for brevity.

CDK
You can refer to the CDK code here. We start by creating the API Gateway and the S3 bucket.

apigw := awscdkapigatewayv2alpha.NewHttpApi(stack, jsii.String("image-gen-http-api"), nil)

bucket := awss3.NewBucket(stack, jsii.String("website-s3-bucket"), &awss3.BucketProps{
    BlockPublicAccess: awss3.BlockPublicAccess_BLOCK_ALL(),
    RemovalPolicy:     awscdk.RemovalPolicy_DESTROY,
    AutoDeleteObjects: jsii.Bool(true),
})

Then we create the CloudFront Origin Access Identity and grant S3 bucket read permissions to the CloudFront Origin Access Identity principal. Then we create the CloudFront Distribution, specifying the S3 bucket as the origin and the Origin Access Identity that we created before.

oai := awscloudfront.NewOriginAccessIdentity(stack, jsii.String("OAI"), nil)
bucket.GrantRead(oai.GrantPrincipal(), "*")

distribution := awscloudfront.NewDistribution(stack, jsii.String("MyDistribution"), &awscloudfront.DistributionProps{
    DefaultBehavior: &awscloudfront.BehaviorOptions{
        Origin: awscloudfrontorigins.NewS3Origin(bucket, &awscloudfrontorigins.S3OriginProps{
            OriginAccessIdentity: oai,
        }),
    },
    DefaultRootObject: jsii.String("index.html"), //name of the file in S3
})

Then, we create the image generation Lambda function along with IAM permissions (on the function execution IAM role) to allow it to invoke Bedrock operations.
function := awscdklambdagoalpha.NewGoFunction(stack, jsii.String("bedrock-imagegen-s3"), &awscdklambdagoalpha.GoFunctionProps{ Runtime: awslambda.Runtime_GO_1_X(), Entry: jsii.String(functionDir), Timeout: awscdk.Duration_Seconds(jsii.Number(30)), }) function.AddToRolePolicy(awsiam.NewPolicyStatement(&awsiam.PolicyStatementProps{ Actions: jsii.Strings("bedrock:*"), Effect: awsiam.Effect_ALLOW, Resources: jsii.Strings("*"), })) Finally, we configure Lambda function integration with API Gateway, add the HTTP routes, and specify the API Gateway endpoint, S3 bucket name, and CloudFront domain name as CloudFormation outputs. functionIntg := awscdkapigatewayv2integrationsalpha.NewHttpLambdaIntegration(jsii.String("function-integration"), function, nil) apigw.AddRoutes(&awscdkapigatewayv2alpha.AddRoutesOptions{ Path: jsii.String("/"), Methods: &[]awscdkapigatewayv2alpha.HttpMethod{awscdkapigatewayv2alpha.HttpMethod_POST}, Integration: functionIntg}) awscdk.NewCfnOutput(stack, jsii.String("apigw URL"), &awscdk.CfnOutputProps{Value: apigw.Url(), Description: jsii.String("API Gateway endpoint")}) awscdk.NewCfnOutput(stack, jsii.String("cloud front domain name"), &awscdk.CfnOutputProps{Value: distribution.DomainName(), Description: jsii.String("cloud front domain name")}) awscdk.NewCfnOutput(stack, jsii.String("s3 bucket name"), &awscdk.CfnOutputProps{Value: bucket.BucketName(), Description: jsii.String("s3 bucket name")}) Lambda Function You can refer to the Lambda Function code here. In the function handler, we extract the prompt from the HTTP request body and the configuration from the query parameters. Then it's used to call the model using bedrockruntime.InvokeModel function. Note the JSON payload sent to Amazon Bedrock is represented by an instance of the Request struct. The output body returned from the Amazon Bedrock Stability Diffusion model is a JSON payload that is converted into a Response struct that contains the generated image as a base64 string. This is returned as an events.APIGatewayV2HTTPResponse object along with CORS headers. 
func handler(ctx context.Context, req events.APIGatewayV2HTTPRequest) (events.APIGatewayV2HTTPResponse, error) {
    prompt := req.Body

    cfgScaleF, _ := strconv.ParseFloat(req.QueryStringParameters["cfg_scale"], 64)
    seed, _ := strconv.Atoi(req.QueryStringParameters["seed"])
    steps, _ := strconv.Atoi(req.QueryStringParameters["steps"])

    payload := Request{
        TextPrompts: []TextPrompt{{Text: prompt}},
        CfgScale:    cfgScaleF,
        Steps:       steps,
    }

    if seed > 0 {
        payload.Seed = seed
    }

    payloadBytes, err := json.Marshal(payload)

    output, err := brc.InvokeModel(context.Background(), &bedrockruntime.InvokeModelInput{
        Body:        payloadBytes,
        ModelId:     aws.String(stableDiffusionXLModelID),
        ContentType: aws.String("application/json"),
    })

    var resp Response
    err = json.Unmarshal(output.Body, &resp)

    image := resp.Artifacts[0].Base64

    return events.APIGatewayV2HTTPResponse{
        StatusCode:      http.StatusOK,
        Body:            image,
        IsBase64Encoded: false,
        Headers: map[string]string{
            "Access-Control-Allow-Origin":  "*",
            "Access-Control-Allow-Methods": "POST,OPTIONS",
        },
    }, nil
}

//request/response model
type Request struct {
    TextPrompts []TextPrompt `json:"text_prompts"`
    CfgScale    float64      `json:"cfg_scale"`
    Steps       int          `json:"steps"`
    Seed        int          `json:"seed"`
}

type TextPrompt struct {
    Text string `json:"text"`
}

type Response struct {
    Result    string     `json:"result"`
    Artifacts []Artifact `json:"artifacts"`
}

type Artifact struct {
    Base64       string `json:"base64"`
    FinishReason string `json:"finishReason"`
}

Conclusion
In this tutorial, you used AWS CDK to deploy a serverless image generation solution that was implemented using Amazon Bedrock and AWS Lambda and was accessed using a static website on S3 via a CloudFront domain. If you are interested in an introductory guide to using the AWS Go SDK and Amazon Bedrock Foundation Models (FMs), check out this blog post. Happy building!
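To round out the walkthrough, this is roughly what the static page's call to the endpoint looks like from the browser. It is a hedged TypeScript sketch, not the repository's actual index.html code: the API Gateway URL and element ID are placeholders, and it simply mirrors how the Lambda handler above reads the prompt from the request body and the generation parameters from the query string.

TypeScript
// Placeholder endpoint: use the API Gateway URL from the CDK output.
const API_GATEWAY_URL = "https://YOUR_API_ID.execute-api.YOUR_REGION.amazonaws.com/";

async function generateImage(prompt: string): Promise<string> {
  // Generation parameters travel as query strings, the prompt as the raw body,
  // matching what the handler expects.
  const params = new URLSearchParams({ cfg_scale: "10", steps: "50", seed: "0" });

  const response = await fetch(`${API_GATEWAY_URL}?${params.toString()}`, {
    method: "POST",
    body: prompt,
  });
  if (!response.ok) {
    throw new Error(`Image generation failed with status ${response.status}`);
  }

  // The handler returns the image as a base64 string, usable directly in a data URL.
  const base64Image = await response.text();
  return `data:image/png;base64,${base64Image}`;
}

// Example usage: set the src of an <img> element on the page.
generateImage("A sunset over a mountain lake, oil painting style")
  .then((dataUrl) => {
    const img = document.getElementById("generated-image") as HTMLImageElement | null;
    if (img) img.src = dataUrl;
  })
  .catch(console.error);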
Kubernetes can be intricate to manage, and companies want to leverage its power while avoiding its complexity. A recent survey found that 84% of companies don’t see value in owning Kubernetes themselves. To address this complexity, Cloud Foundry introduced open-source Korifi, which preserves the classic Cloud Foundry experience of being able to deploy apps written in any language or framework with a single cf push command. But the big difference is that this time, apps are pushed to Kubernetes. In this tutorial, we’ll explore how to use Korifi to deploy web applications written in different languages: Ruby, Node.js, ASP.NET, and PHP. I will also provide insights into Korifi’s functioning and basic configuration knowledge, helping you kick-start your multi-cloud, multitenant, and polyglot journey. Ruby For all the examples in this tutorial, I will use sample web applications that you can download from this GitHub repository, but feel free to use your own. You can also find instructions on installing Korifi in this article, which guides you through the easiest way to achieve that by running two Bash scripts that will set everything up for you. Once you have Korifi installed and have cloned a Ruby sample application, go into the root folder and type the following command: Shell cf push my-ruby-app That’s it! That is all you need to deploy a Ruby application to Kubernetes. Keep in mind that while the first iteration of cf push will take some time as Korifi needs to download a number of elements (I will explain this in the next paragraph); all subsequent runs will be much faster. At any point, if you want to check the status of a Korifi app, you can use the cf app command, which, in the case of our Ruby app, would be: Shell cf app my-ruby-app Node.js Before deploying a Node.js application to Kubernetes using Korifi, let me explain how it works under the hood. One of the key components at play here is Cloud Native Buildpacks. The concept was initially introduced in 2011 and adopted by PaaS providers like Google App Engine, GitLab, Deis, and Dokku. This project became a part of the CNCF in 2018. Buildpacks are primarily designed to convert an application’s source code into an OCI image, such as a Docker image. This process unfolds in two steps: first, it scans the application to identify its dependencies and configures them for seamless operation across diverse clouds. Then, it assembles an image using a Builder, a structured amalgamation of Buildpacks, a foundational build image, a lifecycle, and a reference to a runtime image. Although you have the option to construct your own build images and Buildpacks, you can also leverage those provided by established entities such as Google, Heroku, and Paketo Buildpacks. In this tutorial, I will exclusively use ones provided by Paketo — an open-source project that delivers production-ready Buildpacks for popular programming languages. Let’s briefly demonstrate what Korifi does by manually creating a Buildpack from a Node.js application. You can follow the installation instructions here to install the pack CLI. Then, get into the root folder of your application and run the following command: Shell pack build my-nodejs-app --builder paketobuildpacks/builder:base Your Node.js OCI image is available; you can check this by running the command: Shell docker images Once the Docker image is ready, Korifi utilizes Kubernetes RBAC and CRDs to mimic the robust Cloud Foundry paradigm of orgs and spaces. 
But the beauty of Korifi is that you don't have to manage any of that. You only need one command to push a Node.js application to Kubernetes:

Shell
cf push my-nodejs-app

That's it!

ASP.NET

Now, let's push an ASP.NET application. If you run cf push my-aspnet-app, the build will fail, and you will get the following error message:

Shell
BuildFail: Check build log output
FAILED
2023-08-11T19:12:58.11+0000 [STG/] OUT ERROR: No buildpack groups passed detection.
2023-08-11T19:12:58.11+0000 [STG/] OUT ERROR: failed to detect: buildpack(s) failed with err

These logs tell us that Korifi does not know of a valid Buildpack for packaging an ASP.NET application. We can verify that by running the following command:

Shell
cf buildpacks

You should get the following output, which shows that there are no .NET-related Buildpacks:

Shell
position   name                         stack                        enabled   locked   filename
1          paketo-buildpacks/java       io.buildpacks.stacks.jammy   true      false    paketo-buildpacks/java@9.18.0
2          paketo-buildpacks/go         io.buildpacks.stacks.jammy   true      false    paketo-buildpacks/go@4.4.5
3          paketo-buildpacks/nodejs     io.buildpacks.stacks.jammy   true      false    paketo-buildpacks/nodejs@1.8.0
4          paketo-buildpacks/ruby       io.buildpacks.stacks.jammy   true      false    paketo-buildpacks/ruby@0.39.0
5          paketo-buildpacks/procfile   io.buildpacks.stacks.jammy   true      false    paketo-buildpacks/procfile@5.6.4

To fix that, we first need to tell Korifi which Buildpack to use for an ASP.NET application by editing the ClusterStore:

Shell
kubectl edit clusterstore cf-default-buildpacks -n tutorial-space

Make sure to replace tutorial-space with the value you used during your Korifi cluster configuration. Add the line - image: gcr.io/paketo-buildpacks/dotnet-core; your file should look like this:

YAML
spec:
  sources:
  - image: gcr.io/paketo-buildpacks/java
  - image: gcr.io/paketo-buildpacks/nodejs
  - image: gcr.io/paketo-buildpacks/ruby
  - image: gcr.io/paketo-buildpacks/procfile
  - image: gcr.io/paketo-buildpacks/go
  - image: gcr.io/paketo-buildpacks/dotnet-core

Then we need to tell Korifi in which order to try Buildpacks by editing our ClusterBuilder:

Shell
kubectl edit clusterbuilder cf-kpack-cluster-builder -n tutorial-space

Add the entry - id: paketo-buildpacks/dotnet-core at the top of the spec order list; the beginning of your file should look like this (the remaining entries stay unchanged):

YAML
spec:
  order:
  - group:
    - id: paketo-buildpacks/dotnet-core
  - group:
    - id: paketo-buildpacks/java

If everything was done right, you should see the .NET Core Paketo Buildpack in the list output by the cf buildpacks command. Finally, you can simply run cf push my-aspnet-app to push your ASP.NET application to Kubernetes.

PHP

We need to follow the same process for PHP: the paketo-buildpacks/php Buildpack has to be added to the ClusterStore and ClusterBuilder. For anyone using Korifi version 0.9.0, released a few days before this writing, the issue I am about to discuss has been fixed. But if you are using an older version, running cf push my-php-app will fail and return the following error message:

Shell
[APP/] OUT php: error while loading shared libraries: libxml2.so.2: cannot open shared object file: No such file or directory

The OCI image is missing the libxml2 library, which PHP requires; this is most likely because the builder does not support PHP.
To check that, let's look at which builder Korifi is using by running this command:

Shell
kubectl describe clusterbuilder cf-kpack-cluster-builder | grep 'Run Image'

Which will output the following:

Shell
Run Image: index.docker.io/paketobuildpacks/run-jammy-base@sha256:4cf369b562808105d3297296efea68449a2ae17d8bb15508f573cc78aa3b3772a

As you can see, Korifi currently uses Paketo Jammy Base, which, according to its GitHub repo description, does not support PHP. You can also check that by looking at the builder's builder.toml file or by running the command pack builder suggest, which returns the following output:

Shell
Suggested builders:
[...]
Paketo Buildpacks: paketobuildpacks/builder-jammy-base
Ubuntu 22.04 Jammy Jellyfish base image with buildpacks for Java, Go, .NET Core, Node.js, Python, Apache HTTPD, NGINX and Procfile

Paketo Buildpacks: paketobuildpacks/builder-jammy-buildpackless-static
Static base image (Ubuntu Jammy Jellyfish build image, distroless-like run image) with no buildpacks included. To use, specify buildpack at build time.

Paketo Buildpacks: paketobuildpacks/builder-jammy-full
Ubuntu 22.04 Jammy Jellyfish full image with buildpacks for Apache HTTPD, Go, Java, Java Native Image, .NET, NGINX, Node.js, PHP, Procfile, Python, and Ruby
[...]

While Jammy Base does not support PHP, the Jammy Full builder does. There are multiple ways to get Korifi to use another builder; I will cover just one in this tutorial, and it assumes that you installed Korifi the easy way, with the deploy-on-kind.sh script. Go to the Korifi source code and edit the file scripts/assets/values.yaml so that the clusterStackBuildImage and clusterStackRunImage fields point to the Jammy Full images, which you can do by running this command:

Shell
sed -i 's/base/full/g' scripts/assets/values.yaml

Then, run the scripts/deploy-on-kind.sh script again. That's it! Korifi will now use the Jammy Full builder and will be able to deploy your PHP application with a cf push my-php-app command.

Summary

Hopefully, you have now experienced just how easy it is to use Korifi to deploy applications written in Ruby, Node.js, ASP.NET, and PHP to Kubernetes. You can stay up to date with the Korifi project by following the Cloud Foundry X account and joining the Slack workspace.
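If you want a quick sanity check of everything deployed throughout this tutorial, the cf CLI can list it from one place. This is a hedged sketch, assuming you pushed the sample apps under the names used above:

Shell
# list all apps in the currently targeted org and space
cf apps

# inspect a single app, for example the PHP one
cf app my-php-app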
Choosing the right database solution is an essential factor that could significantly influence your application's overall performance. This article aims to provide a comprehensive comparison between AWS RDS MySQL and Aurora MySQL, two powerful database solutions offered by Amazon Web Services (AWS). I will delve into the specifics of their architecture, performance, data replication capabilities, security measures, cost efficiency, ease of use, integration capabilities, and support resources. By the end of this guide, you will be equipped with all the necessary information to make an informed decision about the most suitable database solution for your specific needs.

AWS RDS MySQL and Aurora MySQL are both managed database services offered by Amazon Web Services. AWS RDS MySQL is a relational database service that provides cost-efficient and resizable capacity while automating time-consuming administration tasks. On the other hand, Aurora MySQL is a MySQL-compatible relational database engine that offers superior performance akin to high-end commercial databases at a fraction of the cost.

The right database solution not only ensures efficient data management but also supports your applications' performance and scalability requirements. It can help you avoid potential downtime, enhance application responsiveness, and ensure data security and compliance. Thus, understanding the nuances of AWS RDS MySQL and Aurora MySQL becomes crucial in determining the best fit for your particular scenario.

Architecture and Performance

AWS RDS MySQL uses a traditional monolithic architecture where the database exists on a single server or multiple servers working as one unit. This setup allows it to deliver a very fast, multi-threaded, and robust SQL database server, making it an ideal choice for mission-critical and heavy-load production systems. However, its architecture might have limitations when dealing with extremely high workloads.

Unlike RDS MySQL, Aurora MySQL employs a distributed, fault-tolerant, self-healing storage system that auto-scales up to 64TB per database instance. This architecture enables Aurora MySQL to offer up to five times better performance than MySQL, making it a top choice for demanding applications that require high throughput and low latency.

When we compare AWS RDS MySQL and Aurora MySQL in terms of performance and scalability, Aurora tends to outshine RDS MySQL. While RDS MySQL offers robust performance for a wide range of applications, Aurora MySQL's distributed architecture allows it to handle higher workloads, offering superior performance and scalability. However, the choice between the two will heavily depend on your specific workload and performance requirements.

Data Replication and Availability

AWS RDS MySQL supports data replication through its Read Replicas feature, allowing you to create up to five copies of your database. This process aids in enhancing the database's availability and durability. However, compared to Aurora MySQL, RDS MySQL's replication process is relatively slower.

Aurora MySQL takes data replication a notch higher by allowing you to provision up to 15 replicas, and it performs replication in milliseconds. This quick replication process, coupled with automatic failover, mitigates data loss risks and ensures higher data availability.

In terms of data availability and replication speed, Aurora MySQL has the upper hand over RDS MySQL.
The ability to provision up to 15 replicas and its lightning-fast replication process make Aurora MySQL more resilient and reliable, especially for applications that demand high data availability.

Security and Compliance

AWS RDS MySQL offers robust security features, including network isolation using Amazon VPC, encryption at rest and in transit, IAM integration for access control, and automated patches and updates. It also complies with several key industry standards, providing a secure environment for your data.

Just like RDS MySQL, Aurora MySQL also provides robust security features, including encryption at rest and in transit, network isolation using Amazon VPC, and IAM integration. Additionally, Aurora MySQL includes advanced features like database activity streams for real-time monitoring of the database, further enhancing its security posture.

Both RDS MySQL and Aurora MySQL offer strong security features, ensuring that your data is protected against potential threats. However, Aurora MySQL's additional capabilities, like real-time database activity streams, give it a slight edge over RDS MySQL when it comes to security.

Cost Efficiency

AWS RDS MySQL follows a pay-as-you-go pricing model. The costs are based on the resources consumed, such as compute instances, storage, and data transfer. This flexible pricing structure can be cost-effective, especially for small to medium-sized workloads.

Just like RDS MySQL, Aurora MySQL also follows a pay-as-you-go pricing model, with charges based on the resources used. However, considering its superior performance and scalability features, Aurora MySQL delivers similar performance to high-end commercial databases at almost one-tenth the cost.

While both RDS MySQL and Aurora MySQL offer cost-effective solutions, the choice between the two should center around your specific requirements. If you require a database for small to medium-sized workloads, RDS MySQL could be your cost-effective choice. However, if you're dealing with high-volume workloads and need superior performance and scalability, Aurora MySQL's high-end features might justify its higher costs.

Ease of Use and Management

AWS RDS MySQL offers automated backups, software patching, automatic failover, and recovery mechanisms, which significantly reduce the administrative burden. It also allows easy scaling of compute resources and storage capacity to meet the demands of your application.

Aurora MySQL also provides a fully managed service that automates time-consuming tasks such as hardware provisioning, database setup, patching, and backups. Furthermore, it allows on-the-fly modifications to the instance type or storage, providing flexibility in managing your database operations.

Both RDS MySQL and Aurora MySQL provide a fully managed experience, simplifying database management. However, Aurora MySQL's ability to make on-the-fly adjustments to instance types and storage adds an extra layer of flexibility, making it slightly more user-friendly in terms of management.

Integration Capabilities

RDS MySQL integrates well with other AWS services like Lambda, CloudWatch, and IAM. It also supports integration with third-party applications, providing flexibility in building diverse applications.

Aurora MySQL not only integrates seamlessly with other AWS services but also supports native integration with Lambda, enabling serverless computing. It also supports cross-region replication with RDS for MySQL, increasing its extensibility.
While both RDS MySQL and Aurora MySQL provide efficient integration capabilities, Aurora MySQL's native integration with Lambda and support for cross-region replication with RDS MySQL give it a slight edge when it comes to integration efficiency.

Conclusion

To summarize, while both AWS RDS MySQL and Aurora MySQL offer robust performance, security, and ease of use, there are key differences. Aurora MySQL stands out with its superior performance, faster data replication, more flexible management, and enhanced integration capabilities. However, RDS MySQL might still be the optimal choice for small to medium-sized workloads, given its cost-efficiency and robust feature set.

The decision between AWS RDS MySQL and Aurora MySQL should be made based on your specific needs. If your priority is superior performance, high scalability, and advanced integration capabilities, Aurora MySQL might be the best fit. However, if you're looking for a cost-effective solution for moderate workloads, RDS MySQL might be your go-to option.

Ultimately, the choice between RDS MySQL and Aurora MySQL depends on your unique situation. It's important to assess your requirements, workload size, budget, and future growth plans before making a decision. Remember, what works best for one organization may not necessarily work best for another. It's all about aligning your choice with your specific needs and goals.
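If you want to experiment hands-on with both engines before deciding, the sketch below shows how each might be provisioned with the AWS CLI. The identifiers, instance classes, and credentials are illustrative placeholders, not recommendations:

Shell
# RDS for MySQL: a single DB instance with its own allocated storage (values are examples)
aws rds create-db-instance \
  --db-instance-identifier demo-rds-mysql \
  --engine mysql \
  --db-instance-class db.t3.medium \
  --allocated-storage 20 \
  --master-username admin \
  --master-user-password '<choose-a-password>'

# Aurora MySQL: a cluster backed by shared, auto-scaling storage, plus at least one instance
aws rds create-db-cluster \
  --db-cluster-identifier demo-aurora-mysql \
  --engine aurora-mysql \
  --master-username admin \
  --master-user-password '<choose-a-password>'

aws rds create-db-instance \
  --db-instance-identifier demo-aurora-mysql-instance-1 \
  --db-cluster-identifier demo-aurora-mysql \
  --engine aurora-mysql \
  --db-instance-class db.r5.large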
Learn how to launch an Apache Kafka cluster with the Apache Kafka Raft (KRaft) consensus protocol and SSL encryption. This article is a continuation of my previous article, Running Kafka in Kubernetes with KRaft mode.

Prerequisites

An understanding of Apache Kafka, Kubernetes, and Minikube.

The following steps were initially taken on a MacBook Pro with 32GB memory running MacOS Ventura v13.4. Make sure to have the following applications installed:

Docker v23.0.5
Minikube v1.29.0 (running K8s v1.26.1 internally)

It's possible the steps below will work with different versions of the above tools, but if you run into unexpected issues, you'll want to ensure you have identical versions. Minikube was chosen for this exercise due to its focus on local development.

Deployment Components

Server Keys and Certificates

The first step to enable SSL encryption is to create a public/private key pair for every server.

⚠️ The commands in this section were executed in a Docker container running the image openjdk:11.0.10-jre because it's the same Java version (Java 11) that Confluent runs. With this approach, any possible Java version-related issue is prevented.

The next commands were executed following the Confluent Security Tutorial:

Shell
docker run -it --rm \
  --name openjdk \
  --mount source=kafka-certs,target=/app \
  openjdk:11.0.10-jre

Once in the Docker container:

Shell
keytool -keystore kafka-0.server.keystore.jks -alias kafka-0 -keyalg RSA -genkey

Output:

Enter keystore password:
Re-enter new password:
What is your first and last name?
  [Unknown]:  kafka-0.kafka-headless.kafka.svc.cluster.local
What is the name of your organizational unit?
  [Unknown]:  test
What is the name of your organization?
  [Unknown]:  test
What is the name of your City or Locality?
  [Unknown]:  Liverpool
What is the name of your State or Province?
  [Unknown]:  Merseyside
What is the two-letter country code for this unit?
  [Unknown]:  UK
Is CN=kafka-0.kafka-headless.kafka.svc.cluster.local, OU=test, O=test, L=Liverpool, ST=Merseyside, C=UK correct?
  [no]:  yes

Repeat the command for each broker:

Shell
keytool -keystore kafka-1.server.keystore.jks -alias kafka-1 -keyalg RSA -genkey

Shell
keytool -keystore kafka-2.server.keystore.jks -alias kafka-2 -keyalg RSA -genkey

Create Your Own Certificate Authority (CA)

Generate a CA, which is simply a public/private key pair and a certificate intended to sign other certificates:

Shell
openssl req -new -x509 -keyout ca-key -out ca-cert -days 90

Output:

Generating a RSA private key
...+++++
........+++++
writing new private key to 'ca-key'
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:UK
State or Province Name (full name) [Some-State]:Merseyside
Locality Name (eg, city) []:Liverpool
Organization Name (eg, company) [Internet Widgits Pty Ltd]:test
Organizational Unit Name (eg, section) []:test
Common Name (e.g. server FQDN or YOUR name) []:*.kafka-headless.kafka.svc.cluster.local
Email Address []:

Add the generated CA to the clients' truststore so that the clients can trust this CA:

Shell
keytool -keystore kafka.client.truststore.jks -alias CARoot -importcert -file ca-cert

Add the generated CA to the brokers' truststore so that the brokers can trust this CA:

Shell
keytool -keystore kafka-0.server.truststore.jks -alias CARoot -importcert -file ca-cert
keytool -keystore kafka-1.server.truststore.jks -alias CARoot -importcert -file ca-cert
keytool -keystore kafka-2.server.truststore.jks -alias CARoot -importcert -file ca-cert

Sign the Certificate

To sign all certificates in the keystore with the CA that you generated:

Export the certificate from the keystore:

Shell
keytool -keystore kafka-0.server.keystore.jks -alias kafka-0 -certreq -file cert-file-kafka-0
keytool -keystore kafka-1.server.keystore.jks -alias kafka-1 -certreq -file cert-file-kafka-1
keytool -keystore kafka-2.server.keystore.jks -alias kafka-2 -certreq -file cert-file-kafka-2

Sign it with the CA:

Shell
openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file-kafka-0 -out cert-signed-kafka-0 -days 90 -CAcreateserial -passin pass:${ca-password}
openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file-kafka-1 -out cert-signed-kafka-1 -days 90 -CAcreateserial -passin pass:${ca-password}
openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file-kafka-2 -out cert-signed-kafka-2 -days 90 -CAcreateserial -passin pass:${ca-password}

⚠️ Don't forget to substitute ${ca-password}.

Import both the certificate of the CA and the signed certificate into the broker keystore:

Shell
keytool -keystore kafka-0.server.keystore.jks -alias CARoot -importcert -file ca-cert
keytool -keystore kafka-0.server.keystore.jks -alias kafka-0 -importcert -file cert-signed-kafka-0
keytool -keystore kafka-1.server.keystore.jks -alias CARoot -importcert -file ca-cert
keytool -keystore kafka-1.server.keystore.jks -alias kafka-1 -importcert -file cert-signed-kafka-1
keytool -keystore kafka-2.server.keystore.jks -alias CARoot -importcert -file ca-cert
keytool -keystore kafka-2.server.keystore.jks -alias kafka-2 -importcert -file cert-signed-kafka-2

⚠️ The keystore and truststore files will be used to create the ConfigMaps for our deployment.

ConfigMaps

Create two ConfigMaps: one for the Kafka brokers and another one for our Kafka client.

Kafka Broker

Create a local folder kafka-ssl and copy the keystore and truststore files into the folder. In addition, create a file broker_creds containing the ${ca-password}. Your folder should look similar to this:

Shell
ls kafka-ssl
broker_creds kafka-0.server.truststore.jks kafka-1.server.truststore.jks kafka-2.server.truststore.jks
kafka-0.server.keystore.jks kafka-1.server.keystore.jks kafka-2.server.keystore.jks

Create the ConfigMap:

Shell
kubectl create configmap kafka-ssl --from-file kafka-ssl -n kafka
kubectl describe configmaps -n kafka kafka-ssl

Output:

Shell
Name:         kafka-ssl
Namespace:    kafka
Labels:       <none>
Annotations:  <none>

Data
====
broker_creds:
----
<redacted>

BinaryData
====
kafka-0.server.keystore.jks: 5001 bytes
kafka-0.server.truststore.jks: 1306 bytes
kafka-1.server.keystore.jks: 5001 bytes
kafka-1.server.truststore.jks: 1306 bytes
kafka-2.server.keystore.jks: 5001 bytes
kafka-2.server.truststore.jks: 1306 bytes

Events:  <none>

Kafka Client

Create a local folder kafka-client and copy the kafka.client.truststore.jks file into the folder.
In addition, create a file broker_creds with the ${ca-password} and a file client_security.properties:

Shell
#client_security.properties
security.protocol=SSL
ssl.truststore.location=/etc/kafka/secrets/kafka.client.truststore.jks
ssl.truststore.password=<redacted>

Your folder should look similar to this:

Shell
ls kafka-client
broker_creds client_security.properties kafka.client.truststore.jks

Create the ConfigMap:

Shell
kubectl create configmap kafka-client --from-file kafka-client -n kafka
kubectl describe configmaps -n kafka kafka-client

Output:

Shell
Name:         kafka-client
Namespace:    kafka
Labels:       <none>
Annotations:  <none>

Data
====
broker_creds:
----
<redacted>

client_security.properties:
----
security.protocol=SSL
ssl.truststore.location=/etc/kafka/secrets/kafka.client.truststore.jks
ssl.truststore.password=test1234
ssl.endpoint.identification.algorithm=

BinaryData
====
kafka.client.truststore.jks: 1306 bytes

Events:  <none>

Confluent Kafka

This YAML file deploys a Kafka cluster within a Kubernetes namespace named kafka. It defines the various Kubernetes resources required for setting up Kafka in a distributed manner.

YAML
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kafka
  namespace: kafka
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: kafka
  name: kafka-headless
  namespace: kafka
spec:
  clusterIP: None
  clusterIPs:
  - None
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: tcp-kafka-int
    port: 9092
    protocol: TCP
    targetPort: tcp-kafka-int
  - name: tcp-kafka-ssl
    port: 9093
    protocol: TCP
    targetPort: tcp-kafka-ssl
  selector:
    app: kafka
  sessionAffinity: None
  type: ClusterIP
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: kafka
  name: kafka
  namespace: kafka
spec:
  podManagementPolicy: Parallel
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: kafka
  serviceName: kafka-headless
  template:
    metadata:
      labels:
        app: kafka
    spec:
      serviceAccountName: kafka
      containers:
      - command:
        - sh
        - -exc
        - |
          export KAFKA_NODE_ID=${HOSTNAME##*-} && \
          export KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://${POD_NAME}.kafka-headless.kafka.svc.cluster.local:9092,SSL://${POD_NAME}.kafka-headless.kafka.svc.cluster.local:9093
          export KAFKA_SSL_TRUSTSTORE_FILENAME=${POD_NAME}.server.truststore.jks
          export KAFKA_SSL_KEYSTORE_FILENAME=${POD_NAME}.server.keystore.jks
          export KAFKA_OPTS="-Djavax.net.debug=all"
          exec /etc/confluent/docker/run
        env:
        - name: KAFKA_SSL_KEY_CREDENTIALS
          value: "broker_creds"
        - name: KAFKA_SSL_KEYSTORE_CREDENTIALS
          value: "broker_creds"
        - name: KAFKA_SSL_TRUSTSTORE_CREDENTIALS
          value: "broker_creds"
        - name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
          value: "CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,SSL:SSL"
        - name: CLUSTER_ID
          value: "6PMpHYL9QkeyXRj9Nrp4KA"
        - name: KAFKA_CONTROLLER_QUORUM_VOTERS
          value: "0@kafka-0.kafka-headless.kafka.svc.cluster.local:29093,1@kafka-1.kafka-headless.kafka.svc.cluster.local:29093,2@kafka-2.kafka-headless.kafka.svc.cluster.local:29093"
        - name: KAFKA_PROCESS_ROLES
          value: "broker,controller"
        - name: KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR
          value: "3"
        - name: KAFKA_NUM_PARTITIONS
          value: "3"
        - name: KAFKA_DEFAULT_REPLICATION_FACTOR
          value: "3"
        - name: KAFKA_MIN_INSYNC_REPLICAS
          value: "2"
        - name: KAFKA_CONTROLLER_LISTENER_NAMES
          value: "CONTROLLER"
        - name: KAFKA_LISTENERS
          value: PLAINTEXT://0.0.0.0:9092,CONTROLLER://0.0.0.0:29093,SSL://0.0.0.0:9093
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        name: kafka
        image: docker.io/confluentinc/cp-kafka:7.5.0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 6
          initialDelaySeconds: 60
          periodSeconds: 60
          successThreshold: 1
          tcpSocket:
            port: tcp-kafka-int
          timeoutSeconds: 5
        ports:
        - containerPort: 9092
          name: tcp-kafka-int
          protocol: TCP
        - containerPort: 29093
          name: tcp-kafka-ctrl
          protocol: TCP
        - containerPort: 9093
          name: tcp-kafka-ssl
          protocol: TCP
        resources:
          limits:
            cpu: "1"
            memory: 1400Mi
          requests:
            cpu: 250m
            memory: 512Mi
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
          runAsGroup: 1000
          runAsUser: 1000
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /etc/kafka/secrets/
          name: kafka-ssl
        - mountPath: /etc/kafka
          name: config
        - mountPath: /var/lib/kafka/data
          name: data
        - mountPath: /var/log
          name: logs
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext:
        fsGroup: 1000
      terminationGracePeriodSeconds: 30
      volumes:
      - emptyDir: {}
        name: config
      - emptyDir: {}
        name: logs
      - name: kafka-ssl
        configMap:
          name: kafka-ssl
  updateStrategy:
    type: RollingUpdate
  volumeClaimTemplates:
  - apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: data
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
      storageClassName: standard
      volumeMode: Filesystem
    status:
      phase: Pending

The deployment we will create has the following components:

Namespace: kafka. This is the namespace within which all components will be scoped.
Service Account: kafka. Service accounts are used to control permissions and access to resources within the cluster.
Headless Service: kafka-headless. It exposes ports 9092 (for PLAINTEXT communication) and 9093 (for SSL traffic).
StatefulSet: kafka. It manages the Kafka pods and ensures they have stable hostnames and storage.

The source code for this deployment can be found in this GitHub repository. Specifically for the SSL configuration, the following parameters were set in the StatefulSet:

Configure the truststore, keystore, and password:
KAFKA_SSL_KEY_CREDENTIALS
KAFKA_SSL_KEYSTORE_CREDENTIALS
KAFKA_SSL_TRUSTSTORE_CREDENTIALS

Configure the ports on which the Kafka brokers listen for SSL:
KAFKA_ADVERTISED_LISTENERS
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
KAFKA_LISTENERS

Creating the Deployment

Clone the repo:

Shell
git clone https://github.com/rafaelmnatali/kafka-k8s.git
cd ssl

Deploy Kafka using the following commands:

Shell
kubectl apply -f 00-namespace.yaml
kubectl apply -f 01-kafka-local.yaml

Verify Communication Across Brokers

There should now be three Kafka brokers, each running on a separate pod within your cluster. Name resolution for the headless service and the three pods within the StatefulSet is automatically configured by Kubernetes as they are created, allowing for communication across brokers. See the related documentation for more details on this feature.

You can check the first pod's logs with the following command:

Shell
kubectl logs kafka-0

The name resolution of the three pods can take more time to work than it takes the pods to start, so you may see UnknownHostException warnings in the pod logs initially:

Shell
WARN [RaftManager nodeId=2] Error connecting to node kafka-1.kafka-headless.kafka.svc.cluster.local:29093 (id: 1 rack: null) (org.apache.kafka.clients.NetworkClient) java.net.UnknownHostException: kafka-1.kafka-headless.kafka.svc.cluster.local
...
But eventually, each pod will successfully resolve the pod hostnames and end with a message stating that the broker has been unfenced:

Shell
INFO [Controller 0] Unfenced broker: UnfenceBrokerRecord(id=1, epoch=176) (org.apache.kafka.controller.ClusterControlManager)

Create a Topic Using the SSL Endpoint

The Kafka StatefulSet should now be up and running successfully. Now we can create a topic using the SSL endpoint.

Deploy the Kafka client using the following command:

Shell
kubectl apply -f 02-kafka-client.yaml

Check that the pod is running:

Shell
kubectl get pods

Output:

Shell
NAME        READY   STATUS    RESTARTS   AGE
kafka-cli   1/1     Running   0          12m

Connect to the kafka-cli pod:

Shell
kubectl exec -it kafka-cli -- bash

Create a topic named test-ssl with three partitions and a replication factor of 3:

Shell
kafka-topics --create --topic test-ssl --partitions 3 --replication-factor 3 --bootstrap-server ${BOOTSTRAP_SERVER} --command-config /etc/kafka/secrets/client_security.properties
Created topic test-ssl.

The BOOTSTRAP_SERVER environment variable contains the list of brokers, which saves us from typing them out. List all the topics in Kafka:

Shell
kafka-topics --bootstrap-server kafka-0.kafka-headless.kafka.svc.cluster.local:9093 --list --command-config /etc/kafka/secrets/client_security.properties
test
test-ssl
test-test

Summary and Next Steps

This tutorial showed you how to get Kafka running in KRaft mode on a Kubernetes cluster with SSL encryption. This is a key step toward securing communication between clients and brokers. I invite you to keep studying and investigating how to improve security in your environment.
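As an optional verification step not covered above, you can produce and consume a few messages over the SSL listener from the same kafka-cli pod. This is a hedged sketch using the standard Confluent console clients and the client_security.properties file created earlier:

Shell
# produce a couple of test messages over the SSL listener (port 9093)
kafka-console-producer --bootstrap-server kafka-0.kafka-headless.kafka.svc.cluster.local:9093 \
  --topic test-ssl \
  --producer.config /etc/kafka/secrets/client_security.properties

# in another session, consume them from the beginning of the topic
kafka-console-consumer --bootstrap-server kafka-0.kafka-headless.kafka.svc.cluster.local:9093 \
  --topic test-ssl --from-beginning \
  --consumer.config /etc/kafka/secrets/client_security.properties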