PHP 8.2.12, released on 26 October 2023, is the latest bug-fix update in the PHP 8.2 series. It introduces no new features but addresses issues found in previous versions, ranging from core PHP functionality to specific extensions. Because 8.2 is an actively supported release line, these fixes matter to a large share of production deployments. This article provides a comprehensive look at the issues and how this update addresses them.

Core

Fixed bug GH-12207 (Memory leak when using class and trait with doc block): Developers reported a memory leak when a class and a trait declared the same static property with a documentation block. A memory leak occurs when a program fails to release memory it no longer uses, leading to inefficient performance and potential slowdowns. The 8.2.12 update fixes the leak so memory is no longer wasted in this scenario.

Fixed bug GH-12215 (Overwriting of module entry causes type errors in ext/dom): A bug occurred when dumping the documentElement property of a newly created DOMDocument object, producing errors that disrupted scripts combining PHP and XML operations. PHP 8.2.12 improves property handling in these cases, preventing unexpected errors when working with DOM elements.

Fixed bug GH-12273 (__builtin_cpu_init check): This issue emerged in the configure-time check for __builtin_cpu_init on FreeBSD. __builtin_cpu_init is part of PHP's internal machinery for optimizing performance based on CPU capabilities. The 8.2.12 update fixes the check, letting PHP use CPU capabilities for performance optimization effectively.

Fixed bug #80092 (ZTS + preload results in segfault on shutdown): A segfault, or segmentation fault, is a major crash that occurs when a program tries to access an invalid memory location.
A segfault occurred while shutting down PHP when class preloading was used in Zend Thread Safety (ZTS) mode. PHP 8.2.12 resolves the issue, ensuring stable shutdowns even when ZTS is combined with class preloading and preserving the stability of multi-threaded PHP applications.

CLI (Command Line Interface)

The CLI fix in PHP 8.2.12 ensures that exactly one Date header is present. The Date header is a standard HTTP header indicating the date and time a message originated. The change resolves an error in HTTP response headers when running PHP scripts from the command line: a single Date header keeps responses compliant with the HTTP standard and prevents the confusion and errors that duplicate Date headers can cause. It also ensures headers remain properly structured and adhere to standard practice.

CType

PHP 8.2.12 addresses a significant performance regression (bug GH-11997) in the ctype_alnum function. ctype_alnum checks whether every character in a string is alphanumeric. In a test case comparing ctype_alnum against an equivalent preg_match regular expression, the function ran slower on PHP 8.1 and later than on earlier versions. The update fixes the regression, restoring ctype_alnum's performance and improving the efficiency of scripts that rely on it for string validation. Applications that validate strings heavily will benefit directly in execution speed and resource utilization.

DOM (Document Object Model)

Restored the old namespace reconciliation behavior: Namespaces differentiate elements with the same name in XML documents.
This DOM change appears to revert PHP's DOM extension to its previous namespace-handling behavior, improving compatibility with XML standards and with XML document handling in PHP applications.

Fixed bug GH-8996 (Serialization of DOMNode on PHP ^8.1): Developers hit an issue when serializing DOMNode objects on PHP 8.1 and above. Serialization converts object structures into a format that can be stored, transmitted, and reconstructed. In a test case, a custom class extending DOMDocument implemented the __sleep and __wakeup serialization methods; since the necessary methods were in place, serialization and deserialization should have worked without errors, but instead serialization aborted with a fatal error. PHP 8.2.12 fixes the bug, so objects extending DOMNode serialize and deserialize without issues, helping developers save and restore complex XML structures efficiently in applications that need it.

Fileinfo

Update 8.2.12 fixes bug GH-11891, in which the fileinfo extension incorrectly returned text/xml instead of image/svg+xml when determining the MIME type of SVG files. Because MIME types determine how a file is handled, the bug caused improper handling and management of SVG files in PHP applications. PHP 8.2.12 resolves the issue, so SVG files are correctly identified as image/svg+xml.

Filter

PHP 8.2.12 fixes an issue when FILTER_REQUIRE_SCALAR is combined with FILTER_CALLBACK. In PHP, FILTER_REQUIRE_SCALAR ensures only scalar values pass the filter, while FILTER_CALLBACK applies a custom function to the value. The update makes the two filters work correctly together, yielding accurate, flexible sanitization and validation and improving the reliability of custom filtering logic in PHP.
Hash

PHP 8.2.12 fixes bug GH-12186, a segmentation fault that occurred when copying or cloning a finalized HashContext. A HashContext represents a hashing operation used to build hash digests from data; a finalized HashContext is one whose hashing has completed. The fix prevents these segmentation faults and ensures the hash functions behave stably.

Intl (Internationalization)

Fixed bug GH-12243 (Segmentation fault on IntlDateFormatter::__construct): PHP segfaulted when an IntlDateFormatter object was constructed with a negative datetype parameter outside the expected range. The update fixes this, so IntlDateFormatter objects can be created stably even with unconventional parameter values.

Fixed bug GH-12282 (IntlDateFormatter::__construct does not throw an exception on an invalid locale): When given an invalid locale value, the constructor silently accepted it rather than throwing an exception. The update ensures IntlDateFormatter raises an error when an invalid locale is used.

MySQLnd

PHP 8.2.12 fixes GH-12297, a startup warning related to the mysqlnd.so library when compiling the MySQL extension from source: "PHP Startup: Invalid library (maybe not a PHP library) 'mysqlnd.so'". The message indicated a problem with how PHP recognized and integrated mysqlnd.so (the MySQL Native Driver library). The update ensures the library is recognized correctly and loads without warnings at PHP startup.

Opcache

Fixed opcache_invalidate() on deleted file: opcache_invalidate() invalidates a cached script so that it is reloaded and recompiled the next time it is needed. However, the function failed in the scenario where the script's file had already been deleted from disk. PHP 8.2.12 resolves this, so OPcache handles such situations properly and the caching mechanism remains efficient.
Fixed bug GH-12380 (JIT incorrectly resolving a private array property of a child class to the parent class): When a developer accessed a private array property inside a closure defined in a child class, PHP's Just-In-Time (JIT) compilation produced an error. A closure is an anonymous function assigned to a variable; a closure bound to a class may access that class's private properties. Due to the bug, the closure incorrectly resolved the private property from the parent class: in a demonstration with a class hierarchy, a closure reading a private array property of the child class was actually referring to the parent's property. The update ensures that closures within a class access that class's private properties correctly, even under JIT compilation, preserving the integrity of object-oriented encapsulation.

PCRE (Perl Compatible Regular Expressions)

The PHP 8.2.12 update also addresses bug GH-11956. In this issue, the regular expression pattern preg_match('/<(\w+)[\s\w\-]+ id="S44_i89ew">/', '<br><div id="S44_i89ew">', $matches) gave different results depending on whether PCRE JIT was enabled or disabled. The issue was traced to the bundled PCRE library and resolved by updating it to a newer version.

SimpleXML

Fixed bug GH-12170 (Failure to use XPath with comments in SimpleXML): When XPath was used in SimpleXML to find comment nodes, it returned an empty array instead of the comments. The update ensures XPath accurately locates and returns comment nodes in XML documents, keeping XML parsing and manipulation efficient.

Fixed bug GH-12223 (Elements with an entity reference result in an infinite loop): Calling var_dump or print_r on SimpleXML elements containing an entity reference sent PHP into an infinite loop, which could hang or crash scripts.
The PHP update ensures stable, reliable output when debugging and printing SimpleXML elements that contain entity references.

Fixed bug GH-12167 (Failure to get processing instruction contents in SimpleXML): When retrieving the content of processing instructions from a SimpleXML object, PHP returned an empty string instead of the instruction content. The update makes processing-instruction content retrievable from SimpleXML objects, improving specialized XML data handling and processing.

Fixed bug GH-12169 (Failure to get comment contents in SimpleXML): When SimpleXML read the content of comment nodes, this bug caused an empty string to be returned instead of the comment text. PHP 8.2.12 lets developers retrieve the correct comment content from SimpleXML objects, so comments in XML documents can be parsed and processed effectively.

Streams

Bug GH-12190 produced an "Invalid IP Address: 0" error when the stream_context_create function was used with the 'bindto' option set to '0:0'. The bindto option lets the system choose the local IP address and port used for network requests, but it did not work as expected with this value. PHP 8.2.12 resolves the issue and allows '0:0' for the 'bindto' option in stream contexts, restoring flexible, correct behavior when specifying network settings, which benefits PHP projects involving network operations.

XML

Fixed the return type of the xml_parse_into_struct() stub: xml_parse_into_struct() parses XML data into an associative array. The fix ensures the function's declared return type matches the value it actually returns, keeping behavior and documentation consistent and making XML data handling clear.

Fixed a memory leak when calling xml_parse_into_struct() twice: Previously, calling xml_parse_into_struct() more than once leaked memory, causing the process to consume more memory than necessary.
The latest update ensures memory is properly managed and released, keeping PHP applications reliable and performance-optimized.

XSL

When using the XSLTProcessor::transformToDoc method with SimpleXML objects, developers did not get back the expected type of value. transformToDoc transforms XML documents using XSL stylesheets. The PHP 8.2.12 update resolves the issue and ensures the method returns the appropriate type, resulting in smooth, error-free XML transformations.

Conclusion

The PHP 8.2.12 release addresses numerous issues across core, CLI, CType, and other areas, improving the language's efficiency and reliability. PHP diligently addresses emerging issues so that developers worldwide can build robust, performance-optimized applications. In the coming months, PHP will publish further updates for newly discovered issues, maintaining a reliable environment to work in. So stay tuned to learn about new PHP updates.
After laying the groundwork in our previous article on the basics of Unity's coroutines, we're now ready to delve deeper into the mechanics that drive coroutine execution. This article explores two key aspects that make coroutines a powerful tool in Unity: the concept of yielding and the coroutine's relationship with Unity's main game loop.

Yielding is a cornerstone of coroutine functionality, allowing a coroutine to pause its execution and yield control to other routines. This feature enables you to write asynchronous code that can wait for specific conditions to be met, such as time delays or external data, before resuming its execution. We'll explore the different types of yield statements available in Unity, like yield return null and yield return new WaitForSeconds(), and discuss their implications for coroutine behavior.

Moreover, understanding how coroutines fit into Unity's main game loop is crucial for leveraging their full potential. Unlike standard methods that execute all their code at once, coroutines can pause and resume, interleaving their execution with the main game loop. This allows for more flexible and efficient code, especially in scenarios like animations, AI behaviors, and timed events.

To illustrate these concepts, we'll provide Unity C# code examples that demonstrate how yielding works and how coroutines are executed in relation to the main game loop. By the end of this article, you'll have a deeper understanding of coroutine mechanics, setting the stage for our discussion of practical use cases and advanced coroutine patterns in Unity. So, let's dive in and unravel the intricacies of coroutine execution in Unity.

Yielding Execution

One of the most powerful features of coroutines in Unity is the ability to yield execution. This means that a coroutine can pause its operation, allowing other functions or coroutines to run, and then resume from where it left off.
This is particularly useful for breaking up tasks that would otherwise block the main thread and make your game unresponsive.

The concept of yielding is central to how coroutines function. When a coroutine yields, it effectively says, "I have reached a point where I can pause, so go ahead and run other tasks." This is done with the yield keyword in C#, followed by a return statement that specifies the condition under which the coroutine should resume. Here's a simple example that uses yield return null, which means the coroutine will resume on the next frame:

```csharp
using System.Collections;
using UnityEngine;

public class SimpleYieldExample : MonoBehaviour
{
    IEnumerator Start()
    {
        Debug.Log("Coroutine started: " + Time.time);
        yield return null;
        Debug.Log("Coroutine resumed: " + Time.time);
    }
}
```

In this example, the coroutine starts and logs the current time. It then yields, allowing other functions and coroutines to execute. On the next frame, it resumes and logs the time again, showing that it paused for approximately one frame.

Different Types of Yield Statements

Unity provides several types of yield statements, each with its own use case:

- yield return null: Pauses the coroutine until the next frame
- yield return new WaitForSeconds(float seconds): Pauses the coroutine for a specified number of seconds
- yield return new WaitForEndOfFrame(): Pauses the coroutine until the end of the frame, after all graphical rendering is done
- yield return new WaitForFixedUpdate(): Pauses the coroutine until the next fixed frame rate update function

Each of these yield statements serves a different purpose and can be crucial for tasks like animations, loading, or other time-sensitive operations. Understanding the concept of yielding and the different types of yield statements available can significantly enhance your ability to write efficient and effective coroutines in Unity.
In the next section, we'll explore how these coroutines fit into Unity's main game loop, providing a more holistic understanding of coroutine execution.

Coroutine Execution Flow

Understanding how coroutines operate within Unity's main game loop is crucial for mastering their behavior and capabilities. While it's easy to think of coroutines as separate threads running in parallel, they are actually executed within Unity's main game loop. However, their ability to pause and resume sets them apart and allows for more complex and flexible behavior.

How Coroutines Run in Conjunction With Unity's Main Game Loop

Coroutines in Unity are not separate threads; they are managed by Unity's main game loop. When a coroutine yields, it essentially steps out of the game loop temporarily, allowing other game processes to take place. It then re-enters the loop either on the next frame or after a specified condition is met. Here's a simplified example to demonstrate this:

```csharp
using System.Collections;
using UnityEngine;

public class CoroutineFlowExample : MonoBehaviour
{
    void Start()
    {
        StartCoroutine(MyCoroutine());
    }

    IEnumerator MyCoroutine()
    {
        Debug.Log("Coroutine started at frame: " + Time.frameCount);
        yield return null;
        Debug.Log("Coroutine resumed at frame: " + Time.frameCount);
    }
}
```

In this example, the coroutine starts and logs the current frame count. It then yields, stepping out of the game loop. On the next frame, it resumes and logs the frame count again. You'll notice that the frame count has incremented, indicating that the game loop continued while the coroutine was paused.

An Example Showing the Flow of Execution in Coroutines

To further illustrate how a coroutine's execution is interleaved with the main game loop, consider the following pseudo-code representing a simplified Unity game loop:

```
Game Loop:
1. Update Physics
2. Run Coroutines
3. Render Frame
4. Repeat
```

Now, let's say we have a coroutine that performs some logic, waits for 2 seconds, and then continues:

```csharp
IEnumerator MyWaitingCoroutine()
{
    Debug.Log("Logic Part 1: Frame " + Time.frameCount);
    yield return new WaitForSeconds(2);
    Debug.Log("Logic Part 2: Frame " + Time.frameCount);
}
```

In this scenario, "Logic Part 1" executes during the "Run Coroutines" step of the game loop. The coroutine then yields, waiting for 2 seconds. During this time, the game loop continues to cycle through its steps, updating physics and rendering frames. After approximately 2 seconds, the coroutine resumes, executing "Logic Part 2" during the "Run Coroutines" step.

Understanding this interleaved execution is key to mastering coroutines in Unity. It allows you to write code that is both efficient and easy to manage, as you can break up tasks into smaller parts without blocking the main game loop. In the next section, we'll explore some practical use cases where this capability is particularly beneficial.

Use Cases for Coroutines

Coroutines are a versatile tool in Unity, capable of handling a wide range of scenarios that require asynchronous or time-dependent behavior. Their ability to pause and resume makes them particularly useful for tasks that are too complex or time-consuming to execute in a single frame. In this section, we'll explore some common use cases where coroutines shine and provide practical Unity C# examples to demonstrate their utility.

Timed Events

Coroutines are excellent for managing events that need to happen after a certain amount of time has passed. For example, you might want to delay a game character's action or trigger an event after a countdown.

```csharp
IEnumerator TriggerTimedEvent()
{
    yield return new WaitForSeconds(5);
    Debug.Log("Timed event triggered!");
}
```

In this example, the message "Timed event triggered!" will be logged after a 5-second delay.
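Unity implements coroutines on top of C# iterator methods, and the same pause-and-resume mechanic exists in other languages too. As an illustrative aside, here is a minimal sketch in Python of a hypothetical scheduler, not Unity's actual code, in which each generator plays the role of a coroutine and each loop iteration plays the role of a frame; the wait_for_seconds helper is a stand-in for WaitForSeconds:

```python
import time

def wait_for_seconds(duration):
    """Yielded by a coroutine; returns a condition the loop polls each frame."""
    deadline = time.monotonic() + duration
    return lambda: time.monotonic() >= deadline

def my_coroutine(log):
    log.append("Logic Part 1")
    yield wait_for_seconds(0.05)   # analogue of: yield return new WaitForSeconds(...)
    log.append("Logic Part 2")

def run_game_loop(coroutines, log, max_frames=100):
    # Each entry is (generator, resume-condition); None means "run immediately".
    pending = [(c, None) for c in coroutines]
    for frame in range(max_frames):
        log.append(f"frame {frame}")
        still_running = []
        for coro, condition in pending:
            if condition is not None and not condition():
                still_running.append((coro, condition))  # still waiting
                continue
            try:
                condition = next(coro)                   # run until the next yield
                still_running.append((coro, condition))
            except StopIteration:
                pass                                     # coroutine finished
        pending = still_running
        if not pending:
            break
        time.sleep(0.01)  # stand-in for the physics and render steps

log = []
run_game_loop([my_coroutine(log)], log)
```

Running this, "Logic Part 1" is logged during the first frame, several frames pass while the condition is unmet, and "Logic Part 2" runs in a later frame, mirroring how Unity interleaves coroutine steps with its game loop.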
Animations

Coroutines can also be used to control animations, especially those that require precise timing or sequencing.

```csharp
IEnumerator AnimateObject(Vector3 targetPosition)
{
    Vector3 startPosition = transform.position;
    float journeyLength = Vector3.Distance(startPosition, targetPosition);
    float startTime = Time.time;
    float speed = 1.0f;
    float distanceCovered = (Time.time - startTime) * speed;
    float fractionOfJourney = distanceCovered / journeyLength;

    while (fractionOfJourney < 1)
    {
        distanceCovered = (Time.time - startTime) * speed;
        fractionOfJourney = distanceCovered / journeyLength;
        transform.position = Vector3.Lerp(startPosition, targetPosition, fractionOfJourney);
        yield return null;
    }
}
```

Here, the object moves from its current position to a target position, interpolating its position over time.

AI Behaviors

Coroutines can be used to manage complex AI behaviors, such as decision-making processes that unfold over multiple frames.

```csharp
IEnumerator AIDecisionMaking()
{
    Debug.Log("AI thinking...");
    yield return new WaitForSeconds(2);
    Debug.Log("AI made a decision!");
}
```

In this example, the AI "thinks" for 2 seconds before making a decision, represented by the log statements.

Practical Examples in Unity

Consider a game where a player's health regenerates over time. A coroutine can manage this efficiently:

```csharp
IEnumerator RegenerateHealth()
{
    while (true)
    {
        if (playerHealth < 100)
        {
            playerHealth++;
            Debug.Log("Health: " + playerHealth);
        }
        yield return new WaitForSeconds(1);
    }
}
```

In this example, the player's health increases by 1 every second until it reaches 100, at which point the coroutine keeps running but no longer increases the health.

Understanding these practical applications of coroutines can significantly improve the way you approach problem-solving in Unity. Whether it's managing time-dependent events, controlling animations, or implementing complex AI behaviors, coroutines offer a flexible and efficient way to achieve your goals.
In the next article, we'll delve deeper into best practices, performance considerations, and more advanced coroutine patterns.

Conclusion

As we've explored in this article, understanding the mechanics of coroutine execution is not just an academic exercise; it's a practical skill that can significantly enhance your Unity projects. Coroutines offer a robust and flexible way to manage asynchronous and time-dependent tasks, from simple timed events and animations to more complex AI behaviors. For instance, we've seen how coroutines can manage health regeneration in a game:

```csharp
IEnumerator RegenerateHealth()
{
    while (true)
    {
        if (playerHealth < 100)
        {
            playerHealth++;
            Debug.Log("Health: " + playerHealth);
        }
        yield return new WaitForSeconds(1);
    }
}
```

This example demonstrates that coroutines can be an effective way to handle game mechanics that depend on time or other asynchronous events. The yield return new WaitForSeconds(1); line is a powerful yet straightforward way to introduce a delay, allowing other game processes to continue running smoothly.

But this is just scratching the surface. As you become more comfortable with coroutines, you'll find that they can be used for much more than simple delays and animations. They can manage complex state machines for AI, handle user input in a non-blocking manner, and even handle resource-intensive tasks by spreading the workload over multiple frames.

In the next article, we'll delve deeper into the world of coroutines, exploring best practices to optimize your usage of this feature. We'll look at performance considerations, such as how to avoid common pitfalls that can lead to frame rate drops. We'll also explore advanced coroutine patterns, like nested coroutines, and how to manage multiple coroutines efficiently. By mastering coroutines, you're adding a powerful tool to your Unity development toolkit.
Whether you're developing a simple mobile game or a complex virtual reality experience, coroutines can help you create more efficient and responsive games. So stay tuned for our next piece, where we'll take your coroutine skills to the next level.
Kubernetes can be intricate to manage, and companies want to leverage its power while avoiding its complexity. A recent survey found that 84% of companies don't see value in owning Kubernetes themselves. To address this complexity, Cloud Foundry introduced the open-source Korifi, which preserves the classic Cloud Foundry experience of deploying apps written in any language or framework with a single cf push command. The big difference is that this time, apps are pushed to Kubernetes.

In this tutorial, we'll explore how to use Korifi to deploy web applications written in different languages: Ruby, Node.js, ASP.NET, and PHP. I will also provide insights into how Korifi works and basic configuration knowledge, helping you kick-start your multi-cloud, multitenant, and polyglot journey.

Ruby

For all the examples in this tutorial, I will use sample web applications that you can download from this GitHub repository, but feel free to use your own. You can also find instructions on installing Korifi in this article, which guides you through the easiest way to achieve that: running two Bash scripts that set everything up for you. Once you have Korifi installed and have cloned a Ruby sample application, go into the root folder and type the following command:

```shell
cf push my-ruby-app
```

That's it! That is all you need to deploy a Ruby application to Kubernetes. Keep in mind that while the first iteration of cf push will take some time, as Korifi needs to download a number of elements (I will explain this in the next section), all subsequent runs will be much faster.

At any point, if you want to check the status of a Korifi app, you can use the cf app command, which, in the case of our Ruby app, would be:

```shell
cf app my-ruby-app
```

Node.js

Before deploying a Node.js application to Kubernetes using Korifi, let me explain how it works under the hood. One of the key components at play here is Cloud Native Buildpacks.
The concept was initially introduced in 2011 and adopted by PaaS providers like Google App Engine, GitLab, Deis, and Dokku. This project became a part of the CNCF in 2018. Buildpacks are primarily designed to convert an application’s source code into an OCI image, such as a Docker image. This process unfolds in two steps: first, it scans the application to identify its dependencies and configures them for seamless operation across diverse clouds. Then, it assembles an image using a Builder, a structured amalgamation of Buildpacks, a foundational build image, a lifecycle, and a reference to a runtime image. Although you have the option to construct your own build images and Buildpacks, you can also leverage those provided by established entities such as Google, Heroku, and Paketo Buildpacks. In this tutorial, I will exclusively use ones provided by Paketo — an open-source project that delivers production-ready Buildpacks for popular programming languages. Let’s briefly demonstrate what Korifi does by manually creating a Buildpack from a Node.js application. You can follow the installation instructions here to install the pack CLI. Then, get into the root folder of your application and run the following command: Shell pack build my-nodejs-app --builder paketobuildpacks/builder:base Your Node.js OCI image is available; you can check this by running the command: Shell docker images Once the Docker image is ready, Korifi utilizes Kubernetes RBAC and CRDs to mimic the robust Cloud Foundry paradigm of orgs and spaces. But the beauty of Korifi is that you don’t have to manage any of that. You only need one command to push a Node.js application to Kubernetes: Shell cf push my-nodejs-app That’s it! ASP.NET Now, let’s push an ASP.NET application. If you run cf push my-aspnet-app, the build will fail, and you will get the following error message: Shell BuildFail: Check build log output FAILED 2023-08-11T19:12:58.11+0000 [STG/] OUT ERROR: No buildpack groups passed detection. 
2023-08-11T19:12:58.11+0000 [STG/] OUT ERROR: failed to detect: buildpack(s) failed with err These logs tell us that Korifi may not know a valid Buildpack to package an ASP.NET application. We can verify that by running the following command: Shell cf buildpacks You should get the following output, and we can see that there are no .NET-related buildpacks. Shell position name stack enabled locked filename 1 paketo-buildpacks/java io.buildpacks.stacks.jammy true false paketo-buildpacks/java@9.18.0 2 paketo-buildpacks/go io.buildpacks.stacks.jammy true false paketo-buildpacks/go@4.4.5 3 paketo-buildpacks/nodejs io.buildpacks.stacks.jammy true false paketo-buildpacks/nodejs@1.8.0 4 paketo-buildpacks/ruby io.buildpacks.stacks.jammy true false paketo-buildpacks/ruby@0.39.0 5 paketo-buildpacks/procfile io.buildpacks.stacks.jammy true false paketo-buildpacks/procfile@5.6.4 To fix that, first, we need to tell Korifi which Buildpack to use for an ASP.NET application by editing the ClusterStore: Shell kubectl edit clusterstore cf-default-buildpacks -n tutorial-space Make sure to replace tutorial-space with the value you used during your Korifi cluster configuration. Add the line – image: gcr.io/paketo-buildpacks/python; your file should look like this: Shell spec: sources: - image: gcr.io/paketo-buildpacks/java - image: gcr.io/paketo-buildpacks/nodejs - image: gcr.io/paketo-buildpacks/ruby - image: gcr.io/paketo-buildpacks/procfile - image: gcr.io/paketo-buildpacks/go - image: gcr.io/paketo-buildpacks/python Then we need to tell Korifi in which order to use Buildbacks by editing our ClusterBuilder: Shell kubectl edit clusterbuilder cf-kpack-cluster-builder -n tutorial-space Add the line – id: paketo-buildpacks/dotnet-core at the top of the spec order list. 
your file should look like this: Shell spec: sources: - image: gcr.io/paketo-buildpacks/java - image: gcr.io/paketo-buildpacks/nodejs - image: gcr.io/paketo-buildpacks/ruby - image: gcr.io/paketo-buildpacks/procfile - image: gcr.io/paketo-buildpacks/go - image: gcr.io/paketo-buildpacks/python If everything was done right, you should see the .NET Core Paketo Buildpack in the list output by the cf buildpacks command. Finally, you can simply run cf push my-aspnet-app to push your ASP.NET application to Kubernetes. PHP We need to follow the same process for PHP with the Buildpack paketo-buildpacks/php that needs to be added to the ClusterStore and ClusterBuilder. For anyone using Korifi version 0.9.0 released a few days ago, the issue that I am about to discuss has been fixed. But in case you are using an older version, running cf push my-php-app will fail and return the following error message: Shell [APP/] OUT php: error while loading shared libraries: libxml2.so.2: cannot open shared object file: No such file or directory The OCI image is missing the libxml library, which is required by PHP, this is probably due to the builder not supporting PHP. To check that, let’s look what builder Korifi is using by running this command: Shell kubectl describe clusterbuilder cf-kpack-cluster-builder | grep 'Run Image' Which will output the following: Shell Run Image: index.docker.io/paketobuildpacks/run-jammy-base@sha256:4cf369b562808105d3297296efea68449a2ae17d8bb15508f573cc78aa3b3772a As you can see, Korifi currently uses Paketo Jammy Base, which, according to its Github repo description, does not support PHP. You also can check that by looking at the builder’s builder.toml file or by running the command pack builder suggest, which will return the output: Shell Suggested builders: [...] 
Paketo Buildpacks: paketobuildpacks/builder-jammy-base Ubuntu 22.04 Jammy Jellyfish base image with buildpacks for Java, Go, .NET Core, Node.js, Python, Apache HTTPD, NGINX and Procfile
Paketo Buildpacks: paketobuildpacks/builder-jammy-buildpackless-static Static base image (Ubuntu Jammy Jellyfish build image, distroless-like run image) with no buildpacks included. To use, specify buildpack at build time.
Paketo Buildpacks: paketobuildpacks/builder-jammy-full Ubuntu 22.04 Jammy Jellyfish full image with buildpacks for Apache HTTPD, Go, Java, Java Native Image, .NET, NGINX, Node.js, PHP, Procfile, Python, and Ruby
[...]

While Jammy Base does not support PHP, the Jammy Full builder does. There are multiple ways to get Korifi to use another builder; I will cover just one in this tutorial. It assumes that we used the easy way to install Korifi, with the deploy-on-kind.sh script. You need to go to the Korifi source code and edit the file scripts/assets/values.yaml so that the clusterStackBuildImage and clusterStackRunImage fields point to the full variants of the build and run images, which you can do by running this command:

Shell
sed -i 's/base/full/g' scripts/assets/values.yaml

Then, run the scripts/deploy-on-kind.sh script. That's it! Korifi will use the Jammy Full builder and will be able to deploy your PHP application with a cf push my-php-app command.

Summary

Hopefully, you've now experienced just how easy it is to use Korifi to deploy applications written in Ruby, Node.js, ASP.NET, and PHP to Kubernetes. You can stay tuned with the Korifi project by following the Cloud Foundry X account and joining the Slack workspace.
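Before editing the real scripts/assets/values.yaml, you can sanity-check what the sed substitution does on a throwaway copy. The file below is an illustrative stand-in, not Korifi's actual values.yaml; only the two field names come from the tutorial:

```shell
# Illustrative stand-in for Korifi's values.yaml; the field names come from
# the tutorial, the values are assumptions about the base-image defaults.
cat > /tmp/values.yaml <<'EOF'
clusterStackBuildImage: paketobuildpacks/build-jammy-base
clusterStackRunImage: paketobuildpacks/run-jammy-base
EOF

# The same substitution the tutorial applies to scripts/assets/values.yaml.
sed -i 's/base/full/g' /tmp/values.yaml

cat /tmp/values.yaml
```

Because "base" only appears in the image tags, the global substitution swaps both the build and run images to their Jammy Full variants without touching anything else.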
Let's continue our exploration of Python's magic methods in this second part of the series. This part will focus on numbers and containers, i.e., collections. You can read the first part here.

Container-Related Methods

Python provides the usual containers, e.g., lists, sets, and dictionaries. You can use the following methods when you want to implement your own.

Common Methods

Containers have a size. Python defines two methods to return the number of items in a container: object.__len__(self) for the exact size and object.__length_hint__(self) for an approximation. You should implement the latter when getting the exact size is computationally expensive.

Item-Related Methods

Containers contain objects. Some containers offer index-based access, e.g., mylist[1], while others offer key-based access, e.g., mydict['mykey']. In both cases, here are the methods to implement:

Method | Functionality
object.__getitem__(self, key) | Get the object
object.__setitem__(self, key, value) | Set the object
object.__delitem__(self, key) | Remove the object
object.__missing__(self, key) | Called when the key is not found by the default __getitem__() implementation
object.__iter__(self) | Return an iterator over items (or keys) in the container
object.__reversed__(self) | Iterate over the objects in the container in reverse order
object.__contains__(self, item) | Check whether an item is part of the container

Let's create a simple hash-map-like container for illustration purposes:

Python
class Container:

    def __init__(self):
        self.items = {}

    def __len__(self):
        return len(self.items)                       #1

    def __setitem__(self, key, value):
        self.items[key] = value                      #1

    def __getitem__(self, key):
        return self.items[key]                       #1

    def __delitem__(self, key):
        return self.items.pop(key)                   #1

    def __contains__(self, key):
        return key in self.items                     #2

    def __iter__(self):
        return iter(self.items.keys())               #3

    def __reversed__(self):
        return iter(reversed(self.items.keys()))     #4


container = Container()
container['foo'] = 'foo'
container['bar'] = 'bar'

print(len(container))                #5

for x in container:                  #6
    print(f'{x}: {container[x]}')
print('---')

for x in reversed(container):        #7
    print(f'{x}: {container[x]}')
print('---')

del container['foo']

for x in container:                  #8
    print(f'{x}: {container[x]}')
print('---')

print('foo' in container)            #9

1. Delegate to the items dictionary
2. Check if the key belongs to items
3. Get the keys' iterator
4. Get the reversed keys' iterator
5. Print 2, as the container has two items at this point
6. Implicitly calls the __iter__() method
7. Implicitly calls the __reversed__() method
8. Print bar: bar, since the foo key has been deleted
9. Implicitly calls the __contains__() method

Number-Related Methods

Just as we can emulate containers, we can emulate numbers as well.

Arithmetic Methods

Arithmetic methods abound; it's easier to summarize them in a table:

Kind | Method | Operator/function | Comment
All | object.__add__(self, other) | +
object.__sub__(self, other) | -
object.__mul__(self, other) | *
object.__matmul__(self, other) | @ | Matrix multiplication
object.__truediv__(self, other) | / | Regular division
object.__floordiv__(self, other) | // | Division without the remainder
object.__mod__(self, other) | % | Remainder of the division
object.__divmod__(self, other) | divmod()
object.__pow__(self, other[, modulo]) | pow()
object.__lshift__(self, other) | <<
object.__rshift__(self, other) | >>
object.__and__(self, other) | &
object.__xor__(self, other) | ^ | Exclusive OR
object.__or__(self, other) | | | Inclusive OR
Reflected | object.__radd__(self, other) | +
object.__rsub__(self, other) | -
object.__rmul__(self, other) | *
object.__rmatmul__(self, other) | @
object.__rtruediv__(self, other) | /
object.__rfloordiv__(self, other) | //
object.__rmod__(self, other) | %
object.__rdivmod__(self, other) | divmod()
object.__rpow__(self, other[, modulo]) | pow()
object.__rlshift__(self, other) | <<
object.__rrshift__(self, other) | >>
object.__rand__(self, other) | &
object.__rxor__(self, other) | ^
object.__ror__(self, other) | |
Assignment
| object.__iadd__(self, other) | +=
object.__isub__(self, other) | -=
object.__imul__(self, other) | *=
object.__imatmul__(self, other) | @=
object.__itruediv__(self, other) | /=
object.__ifloordiv__(self, other) | //=
object.__imod__(self, other) | %=
object.__ipow__(self, other[, modulo]) | **=
object.__ilshift__(self, other) | <<=
object.__irshift__(self, other) | >>=
object.__iand__(self, other) | &=
object.__ixor__(self, other) | ^=
object.__ior__(self, other) | |=
Unary | object.__neg__(self) | -
object.__pos__(self) | +
object.__abs__(self) | abs() | Absolute value
object.__invert__(self) | ~ | Bitwise NOT

Imagine an e-commerce site with products, and stocks of them dispatched across warehouses. We need to subtract from stock levels when someone orders and add to stock levels when the stock is replenished. Let's implement the latter with some of the methods we've seen so far:

Python
class Warehouse:                                                           #1

    def __init__(self, id):
        self.id = id

    def __eq__(self, other):                                               #2
        if not isinstance(other, Warehouse):
            return False
        return self.id == other.id

    def __repr__(self):                                                    #3
        return f'Warehouse(id={self.id})'


class Product:                                                             #1

    def __init__(self, id):
        self.id = id

    def __eq__(self, other):                                               #2
        if not isinstance(other, Product):
            return False
        return self.id == other.id

    def __repr__(self):                                                    #3
        return f'Product(id={self.id})'


class StockLevel:

    def __init__(self, product, warehouse, quantity):
        self.product = product
        self.warehouse = warehouse
        self.quantity = quantity

    def __add__(self, other):                                              #4
        if not isinstance(other, StockLevel):
            raise Exception(f'{other} is not a StockLevel')
        if self.warehouse != other.warehouse:
            raise Exception(f'Warehouses are not the same: {other.warehouse}')
        if self.product != other.product:
            raise Exception(f'Products are not the same: {other.product}')
        return StockLevel(self.product, self.warehouse,
                          self.quantity + other.quantity)                  #5

    def __repr__(self):
        return (f'StockLevel(warehouse={self.warehouse},'
                f'product={self.product},quantity={self.quantity})')


warehouse1 = Warehouse(1)
warehouse2 = Warehouse(2)
product = Product(1)                                  #6
product1 = Product(1)                                 #6
stocklevel111 = StockLevel(product, warehouse1, 1)    #7
stocklevel112 = StockLevel(product, warehouse1, 2)    #7
stocklevel121 = StockLevel(product1, warehouse2, 1)   #7

print(stocklevel111 + stocklevel112)                  #8
stocklevel111 + stocklevel121                         #9

1. Define the necessary classes
2. Override equality to compare ids
3. Override the representation
4. Implement addition. If the warehouse or the product doesn't match, raise an exception.
5. Create a new StockLevel with the same product and warehouse, and the quantity as the sum of both quantities
6. Define two products that point to the same id; it's the same product for equality purposes
7. Create new stock-level objects
8. Print StockLevel(warehouse=Warehouse(id=1),product=Product(id=1),quantity=3)
9. Raise an exception, as the warehouses are different, though the products are the same

Conversion Methods

Conversion methods allow changing an instance to a numeric type, i.e., int, float, or complex.

Method | Built-in function
object.__complex__(self) | complex()
object.__int__(self) | int()
object.__float__(self) | float()

If no such method is implemented, Python falls back to object.__index__(self), for example, when using the instance as an index. The following sample, however contrived, highlights the above:

Python
class Foo:

    def __init__(self, id):
        self.id = id

    def __index__(self):                              #1
        return self.id


foo = Foo(1)
array = ['a', 'b', 'c']
what = array[foo]                                     #2
print(what)                                           #3

1. Define the fallback method
2. Coerce foo into an int. Since we didn't implement any conversion method, Python falls back to __index__()
3. Print b

Other Methods

Finally, Python delegates to a magic method when your code calls a specific number-related function.

Method | Built-in function
object.__round__(self[, ndigits]) | round()
object.__trunc__(self) | math.trunc()
object.__floor__(self) | math.floor()
object.__ceil__(self) | math.ceil()

Context Managers' Methods

Python's context managers allow fine-grained control over resources that must be acquired and released. They work with the with keyword.
For example, here's how you open a file to write to:

Python
with open('file', 'w') as f:     #1
    f.write('Hello world!')      #2

1. Open the file
2. At this point, Python has closed the file

A context manager is syntactic sugar. The following code is equivalent to the one from above:

Python
f = open('file', 'w')
try:
    f.write('Hello world!')
finally:
    f.close()

Writing your own context manager requires implementing two methods: one for opening the context and one for closing it, respectively object.__enter__(self) and object.__exit__(self, exc_type, exc_value, traceback). Let's write a context manager to manage a pseudo-connection.

Python
import traceback

class Connection:

    def __enter__(self):
        self.connection = Connection()
        return self.connection

    def __exit__(self, exc_type, exc_value, exc_traceback):
        self.connection = None
        if exc_type is not None:
            print('An exception happened')
            print(traceback.format_exception(exc_type, exc_value, exc_traceback))
        return True

    def do_something(self):
        pass


with Connection() as connection:
    connection.do_something()

Callable Objects

I was first exposed to callable objects in Kotlin. A callable object looks like a function but is an object:

Python
hello = Hello()
hello('world')

The method to implement to make the above code run is object.__call__(self[, args...]).

Python
class Hello:

    def __call__(self, who):
        print(f'Hello {who}!')

Conclusion

This post concludes our two-part series on Python "magic" methods. I didn't mention all of them, as there are so many; however, the two posts cover the majority. Happy Python!

To Go Further

Special method names
PEP 560 – Core support for typing module and generic types
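One method from the item-related table that the Container example never exercises is object.__missing__(self, key). A minimal sketch, assuming a dict subclass (the hook is only invoked by the built-in dict.__getitem__ when the key is absent); the class name is illustrative:

```python
# __missing__ is called by dict.__getitem__ when a key is not found.
# DefaultingDict is an illustrative name, not a class from the article.
class DefaultingDict(dict):

    def __missing__(self, key):
        # Return a placeholder instead of raising KeyError
        return f'<no value for {key}>'


d = DefaultingDict()
d['foo'] = 'bar'
print(d['foo'])   # bar
print(d['baz'])   # <no value for baz>
```

Note that __missing__ only fires for the indexing syntax d[key]; dict.get() still returns its own default and never calls the hook.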
One of our earlier blog posts discussed the initial steps for diving into Amazon Bedrock by leveraging the AWS Go SDK. Subsequently, our second blog post expanded upon this foundation, showcasing a Serverless Go application designed for image generation with Amazon Bedrock and AWS Lambda ("Generative AI Apps With Amazon Bedrock: Getting Started for Go Developers"). Amazon Bedrock is a fully managed service that makes foundation models from Amazon and third-party model providers (such as Anthropic, Cohere, and more) accessible through an API. The applications demonstrated in those blog posts accessed the Amazon Bedrock APIs directly, thereby avoiding any additional layers of abstraction or frameworks/libraries. This approach is particularly effective for learning and crafting straightforward solutions. However, developing generative AI applications goes beyond simply using large language models (LLMs) via an API. You need to think about other parts of the solution, which include intelligent search (also known as semantic search, which often requires specialized data stores), orchestrating sequential workflows (e.g., invoking another LLM based on the previous LLM's response), loading data sources (text, PDF, links, etc.) to provide additional context for LLMs, maintaining historical context (for conversational/chatbot/QA solutions), and much more. Implementing these features from scratch can be difficult and time-consuming. Enter LangChain, a framework that provides off-the-shelf components to make it easier to build applications with language models. It is supported in multiple programming languages: this obviously includes Python, but also JavaScript, Java, and Go. langchaingo is the LangChain implementation for the Go programming language. This blog post covers how to extend langchaingo to use foundation models from Amazon Bedrock. The code is available in this GitHub repository.
LangChain Modules

One of LangChain's strengths is its extensible architecture; the same applies to the langchaingo library as well. It supports components/modules, each with interface(s) and multiple implementations. Some of these include:

Models: These are the building blocks that allow LangChain apps to work with multiple language models (such as ones from Amazon Bedrock, OpenAI, etc.).
Chains: These can be used to create a sequence of calls that combine multiple models and prompts.
Vector databases: These can store unstructured data in the form of vector embeddings. At query time, the unstructured query is embedded, and semantic/vector search is performed to retrieve the embedding vectors that are "most similar" to the embedded query.
Memory: This module allows you to persist state between chain or agent calls. By default, chains are stateless, meaning they process each incoming request independently (the same goes for LLMs).

This provides ease of use, choice, and flexibility while building LangChain-powered Go applications. For example, you can change the underlying vector database by swapping the implementation with minimal code changes. Since langchaingo provides many large language model implementations, the same applies there as well.

langchaingo Implementation for Amazon Bedrock

As mentioned before, Amazon Bedrock provides access to multiple models, including Cohere, Anthropic, etc. We will cover how to extend langchaingo to build a plugin for the Anthropic Claude (v2) model on Amazon Bedrock, but the guidelines apply to other models as well. Let's walk through the implementation at a high level. Any custom model (LLM) implementation has to satisfy the langchaingo LLM and LanguageModel interfaces, so it implements the Call, Generate, GeneratePrompt, and GetNumTokens functions. The key part of the implementation is in the Generate function. Here is a breakdown of how it works. 1. The first step is to prepare the JSON payload to be sent to Amazon Bedrock.
This contains the prompt/input along with other configuration parameters.

//...
payload := Request{
    MaxTokensToSample: opts.MaxTokens,
    Temperature:       opts.Temperature,
    TopK:              opts.TopK,
    TopP:              opts.TopP,
    StopSequences:     opts.StopWords,
}

if o.useHumanAssistantPrompt {
    payload.Prompt = fmt.Sprintf(claudePromptFormat, prompts[0])
} else {
    payload.Prompt = prompts[0]
}

payloadBytes, err := json.Marshal(payload)
if err != nil {
    return nil, err
}

It is represented by the Request struct, which is marshalled into JSON before being sent to Amazon Bedrock.

type Request struct {
    Prompt            string   `json:"prompt"`
    MaxTokensToSample int      `json:"max_tokens_to_sample"`
    Temperature       float64  `json:"temperature,omitempty"`
    TopP              float64  `json:"top_p,omitempty"`
    TopK              int      `json:"top_k,omitempty"`
    StopSequences     []string `json:"stop_sequences,omitempty"`
}

2. Next, Amazon Bedrock is invoked with the payload and config parameters. Both synchronous and streaming invocation modes are supported. The streaming/async mode is demonstrated in an example below:

//...
if opts.StreamingFunc != nil {
    resp, err = o.invokeAsyncAndGetResponse(payloadBytes, opts.StreamingFunc)
    if err != nil {
        return nil, err
    }
} else {
    resp, err = o.invokeAndGetResponse(payloadBytes)
    if err != nil {
        return nil, err
    }
}

This is how the asynchronous invocation path is handled: the first part involves using the InvokeModelWithResponseStream function and then handling the InvokeModelWithResponseStreamOutput response in the ProcessStreamingOutput function. You can refer to the details in the Using the Streaming API section in "Generative AI Apps With Amazon Bedrock: Getting Started for Go Developers," linked in the introduction of this article.

//...
func (o *LLM) invokeAsyncAndGetResponse(payloadBytes []byte, handler func(ctx context.Context, chunk []byte) error) (Response, error) {

    output, err := o.brc.InvokeModelWithResponseStream(context.Background(),
        &bedrockruntime.InvokeModelWithResponseStreamInput{
            Body:        payloadBytes,
            ModelId:     aws.String(o.modelID),
            ContentType: aws.String("application/json"),
        })

    if err != nil {
        return Response{}, err
    }

    var resp Response

    resp, err = ProcessStreamingOutput(output, handler)
    if err != nil {
        return Response{}, err
    }

    return resp, nil
}

func ProcessStreamingOutput(output *bedrockruntime.InvokeModelWithResponseStreamOutput, handler func(ctx context.Context, chunk []byte) error) (Response, error) {

    var combinedResult string
    resp := Response{}

    for event := range output.GetStream().Events() {
        switch v := event.(type) {
        case *types.ResponseStreamMemberChunk:
            var resp Response
            err := json.NewDecoder(bytes.NewReader(v.Value.Bytes)).Decode(&resp)
            if err != nil {
                return resp, err
            }
            handler(context.Background(), []byte(resp.Completion))
            combinedResult += resp.Completion
        case *types.UnknownUnionMember:
            fmt.Println("unknown tag:", v.Tag)
        default:
            fmt.Println("union is nil or unknown type")
        }
    }

    resp.Completion = combinedResult

    return resp, nil
}

3. Once the request is processed successfully, the JSON response from Amazon Bedrock is converted (unmarshalled) back into a Response struct and a slice of Generation instances, as required by the Generate function signature.

//...
generations := []*llms.Generation{
    {Text: resp.Completion},
}

Code Samples: Use the Amazon Bedrock Plugin in LangChain Apps

Once the Amazon Bedrock LLM plugin for langchaingo has been implemented, using it is as easy as creating a new instance with claude.New(<supported AWS region>) and using the Call (or Generate) function.
Here is an example:

package main

import (
    "context"
    "fmt"
    "log"

    "github.com/build-on-aws/langchaingo-amazon-bedrock-llm/claude"
    "github.com/tmc/langchaingo/llms"
)

func main() {

    llm, err := claude.New("us-east-1")
    if err != nil {
        log.Fatal(err)
    }

    input := "Write a program to compute factorial in Go:"
    opt := llms.WithMaxTokens(2048)

    output, err := llm.Call(context.Background(), input, opt)
    //....

Prerequisites

Before executing the sample code, clone the GitHub repository and change to the right directory:

git clone github.com/build-on-aws/langchaingo-amazon-bedrock-llm
cd langchaingo-amazon-bedrock-llm/examples

Refer to the Before You Begin section in "Generative AI Apps With Amazon Bedrock: Getting Started for Go Developers" to complete the prerequisites for running the examples. This includes installing Go, configuring Amazon Bedrock access, and providing the necessary IAM permissions.

Run Basic Examples

This example demonstrates tasks such as code generation, information extraction, and question-answering. You can refer to the code here.

go run main.go

Run Streaming Output Example

In this example, we pass in the WithStreamingFunc option to the LLM invocation. This switches to the streaming invocation mode for Amazon Bedrock. You can refer to the code here.

//...
_, err = llm.Call(context.Background(), input,
    llms.WithMaxTokens(2048),
    llms.WithTemperature(0.5),
    llms.WithTopK(250),
    llms.WithStreamingFunc(func(ctx context.Context, chunk []byte) error {
        fmt.Print(string(chunk))
        return nil
    }))

To run the program:

go run streaming/main.go

Conclusion

LangChain is a powerful and extensible library that allows us to plug in external components as per requirements. This blog demonstrated how to extend langchaingo to make it work with the Anthropic Claude model available in Amazon Bedrock. You can use the same approach to implement support for other Amazon Bedrock models, such as Amazon Titan. The examples showed how to build simple LangChain apps using the Call function.
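The Request struct from step 1 can be exercised without calling Amazon Bedrock at all. This standalone sketch (the prompt and token values are made up; only the struct and its JSON tags come from the article) shows how the omitempty tags keep unset sampling parameters out of the payload body:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Request mirrors the payload struct from the article; the JSON tags are
// the article's, the values used below are illustrative.
type Request struct {
	Prompt            string   `json:"prompt"`
	MaxTokensToSample int      `json:"max_tokens_to_sample"`
	Temperature       float64  `json:"temperature,omitempty"`
	TopP              float64  `json:"top_p,omitempty"`
	TopK              int      `json:"top_k,omitempty"`
	StopSequences     []string `json:"stop_sequences,omitempty"`
}

// buildPayload marshals a Request the way the Generate function does,
// without invoking the Bedrock API.
func buildPayload(prompt string, maxTokens int) ([]byte, error) {
	return json.Marshal(Request{
		Prompt:            prompt,
		MaxTokensToSample: maxTokens,
	})
}

func main() {
	payloadBytes, err := buildPayload("Human: Hello Assistant:", 256)
	if err != nil {
		panic(err)
	}
	// omitempty drops the zero-valued sampling parameters from the body.
	fmt.Println(string(payloadBytes))
}
```

Only prompt and max_tokens_to_sample appear in the resulting JSON; temperature, top_p, top_k, and stop_sequences are omitted because they are left at their zero values.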
In future blog posts, I will cover how to use them as part of chains for implementing functionality like a chatbot or QA assistant. Until then, happy building!
The general appearance of a piece of writing, a picture, a piece of text, or another medium is created to appeal to the viewer and aid in understanding what they are looking at. For instance, Computer Hope has a distinctive layout that is identifiable to our visitors, making it easier for them to move around the website.

What Is an HTML Layout?

An HTML layout is a template for organizing web pages in a specific way. It is straightforward to use, understand, and adjust web design elements using HTML tags. A proper HTML layout is essential for any website and will significantly enhance its visual appeal. Pages will also be appropriately formatted on mobile devices because HTML layouts are often responsive by default. A page layout determines how a website looks. An HTML layout is a structure that makes it simple for users to move between web pages. It is a method for creating web pages with straightforward HTML tags. The layout of the web pages is the most crucial aspect to consider while developing a website so that it can look fantastic and appear professional. For building layouts for responsive and dynamic websites, you can also employ JavaScript- and CSS-based frameworks. There are many HTML interview questions that can be asked about HTML layout. The above image shows a typical layout of an HTML page.

Elements in an HTML Layout

A web page's structure is defined by a variety of HTML elements. Some of them are given below:

Header

The webpage's logo or symbol, the heading element, the introduction, and the author information are all found in the header. Web pages' header sections are made using the <header> element.

<header>: This tag is used to define a section or header of an HTML document.

Example

HTML
<header>
  <h1> This is an Html Layout Example !! </h1>
</header>

Navbar

The primary block of navigational links is contained within the navbar.
It may have links to sections of that page or to different pages. To create a navbar in a webpage, the <nav> tag is used.

<nav>: Establishes a group of navigation links.

Example

The following is an example of how the <nav> tag is used along with some other HTML tags to create the navbar of a website.

HTML
<nav>
  <ul>
    <li><a href="index.html">Home</a></li>
    <li><a href="about.html">About</a></li>
    <li><a href="contact.html">Contact</a></li>
    <li><a href="#">Other Link</a></li>
    <li><a href="#">Link 2</a></li>
  </ul>
</nav>

Main Section

The main section of the webpage can be divided into multiple sections using elements like <article> and <section>.

<article>: The HTML element known as <article> denotes a self-contained composition that is meant to be independently distributable or reusable within a document, page, application, or website. An interactive widget or gadget, a blog entry, a product card, a user-submitted comment, a forum post, a magazine or newspaper story, or any other independent piece of content are examples.

Example

HTML
<article>
  <h2> This is the article section </h2>
  <p> Write your content here </p>
</article>

<section>: The HTML <section> element designates a distinct portion of a website that has similar items grouped together. With very few exceptions, sections should always have a heading. It might have text, pictures, tables, videos, etc.

Example

HTML
<section>
  <h2> Introduction to HTML section Element... </h2>
  <p>
    Lorem ipsum, dolor perspiciatis voluptas deserunt sit amet consectetur
    adipisicing elit. Illum modi eos eveniet facere delectus sint autem
    perspiciatis voluptas deserunt velit labore, in fugit mollitia culpa
    quas, alias similique ratione adipisci!
  </p>
</section>

SideNav

This section of the webpage contains a side navbar that can be used to define other links that are present on the website, or to define the indexes of the current page. We can create a side navbar in the webpage using the <aside> HTML tag.
<aside>: The HTML element known as <aside> designates a section of a page whose content is only loosely connected to the document's primary text. Frequently, sidebars or call-out boxes are used to present asides.

Example

HTML
<aside>
  <h2>Side Bar Section</h2>
  <p>Lorem ipsum dolor sit amet consectetur adipisicing elit. Quisquam, quae.</p>
  <ul>
    <li><a href="#introduction">Introduction</a></li>
    <li><a href="#Our-team">Our Team</a></li>
    <li><a href="#">Other Link</a></li>
    <li><a href="">Link 2</a></li>
  </ul>
</aside>

Footer

The footer of an HTML document is specified using the <footer> tag. The information located in this section includes author information, copyright information, careers, etc. The footer tag is used within the body tag. The <footer> tag is new in HTML5. Both a start tag and an end tag are necessary for the footer element.

Example

HTML
<footer>
  <p> This is an example of what the footer section of the page would look like..... </p>
  <p> © 2022 abcd </p>
  <p> Author: xyz</p>
  <a href="#navbar"> Back to top </a>
</footer>

HTML Layout Techniques

There are numerous frameworks and ways of generating layouts; however, in this article, we'll focus on basic methods. Multicolumn layouts can be made using the techniques listed below:

CSS float property
CSS flexbox
CSS grid
CSS framework

Besides these, there are also some other methods to create a layout, for example, table-based layouts and using only div tags, but using tables to create a layout is not recommended.

1. CSS Float Property

The CSS float property is frequently used to create complete web layouts. Learning float is simple; all you need to do is keep in mind how the float and clear properties operate.
Example

HTML
<!DOCTYPE html>
<html>
<head>
  <meta charset="UTF-8" />
  <meta http-equiv="X-UA-Compatible" content="IE=edge" />
  <meta name="viewport" content="width=device-width, initial-scale=1.0" />
  <title>Html Layout based on CSS float property</title>
  <style>
    div.container {
      width: 100%;
      border: 1px solid gray;
    }
    header,
    footer {
      padding: 1em;
      color: rgb(255, 255, 255);
      background-color: #b4607c;
      clear: left;
      text-align: center;
    }
    nav {
      float: left;
      max-width: 160px;
      margin: 0;
      padding: 1em;
    }
    nav ul {
      list-style-type: none;
      padding: 0;
    }
    nav ul a {
      text-decoration: none;
    }
    article {
      margin-left: 170px;
      border-left: 1px solid gray;
      padding: 1em;
      overflow: hidden;
    }
  </style>
</head>
<body>
  <div class="container">
    <header>
      <h1>Html Layout based on CSS float property</h1>
    </header>
    <nav>
      <ul>
        <li><a href="#">Link1</a></li>
        <li><a href="#">Link 2</a></li>
        <li><a href="#">Link 3</a></li>
      </ul>
    </nav>
    <article>
      <h1> Layout </h1>
      <p>
        Molestias veniam expedita aliquid alias unde ipsam porro sequi vel,
        dolor rem esse soluta Lorem ipsum dolor sit amet consectetur
        adipisicing elit. voluptas eligendi nostrum voluptatem sapiente
        consectetur adipisicing elit. error aliquid alias unde ipsam fugit
        eveniet!
      </p>
      <p>
        Molestias veniam expedita aliquid alias unde ipsam porro sequi vel,
        dolor rem esse soluta Lorem ipsum dolor sit amet consectetur
        adipisicing elit.
      </p>
    </article>
    <footer>Copyright © xyz</footer>
  </div>
</body>
</html>

Output

2. CSS Flexbox

When the page layout must handle various screen sizes and display devices, the use of flexbox guarantees that elements behave consistently.
Example

HTML
<!DOCTYPE html>
<html>
<head>
  <meta charset="UTF-8" />
  <meta http-equiv="X-UA-Compatible" content="IE=edge" />
  <meta name="viewport" content="width=device-width, initial-scale=1.0" />
  <title>Html Layout based on CSS flexbox property</title>
  <style>
    .flex-container {
      display: -webkit-flex;
      display: flex;
      -webkit-flex-flow: row wrap;
      flex-flow: row wrap;
      text-align: center;
    }
    .flex-container > * {
      padding: 15px;
      -webkit-flex: 1 100%;
      flex: 1 100%;
    }
    .article {
      text-align: left;
    }
    header {
      background: #b4607c;
      color: white;
    }
    footer {
      background: #b4607c;
      color: white;
    }
    .nav {
      background: #eee;
    }
    .nav ul {
      list-style-type: none;
      padding: 0;
    }
    .nav ul a {
      text-decoration: none;
    }
    @media all and (min-width: 768px) {
      .nav {
        text-align: left;
        -webkit-flex: 1 auto;
        flex: 1 auto;
        -webkit-order: 1;
        order: 1;
      }
      .article {
        -webkit-flex: 5 0px;
        flex: 5 0px;
        -webkit-order: 2;
        order: 2;
      }
      footer {
        -webkit-order: 3;
        order: 3;
      }
    }
  </style>
</head>
<body>
  <div class="flex-container">
    <header>
      <h1>Html Layout based on CSS flexbox property</h1>
    </header>
    <nav class="nav">
      <ul>
        <li><a href="#">link1</a></li>
        <li><a href="#">Link2</a></li>
        <li><a href="#">Link3</a></li>
      </ul>
    </nav>
    <article class="article">
      <h1>Flexbox</h1>
      <p>
        Molestias veniam expedita aliquid alias unde ipsam porro sequi vel,
        dolor rem esse soluta Lorem ipsum dolor sit amet consectetur
        adipisicing elit. voluptas eligendi nostrum voluptatem sapiente
        consectetur adipisicing elit. error aliquid alias unde ipsam fugit
        eveniet!
      </p>
      <p>
        ipsum dolor sit amet consectetur adipisicing elit. voluptas eligendi
        nostrum voluptatem sapiente consectetur adipisicing elit.
      </p>
      <p><strong>Resize this page and see what happens!</strong></p>
    </article>
    <footer>Copyright © xyz</footer>
  </div>
</body>
</html>

Output

3. CSS Grid

It is simpler to design web pages without the use of floats and positioning, thanks to the CSS Grid Layout Module, which provides a grid-based layout system with rows and columns.

4.
CSS Framework

Thanks to CSS frameworks, websites can easily be made to work across different browsers and browser versions. This lessens the possibility that errors will emerge during cross-browser testing. Utilizing these frameworks enables quicker and more efficient web development because they come with ready-to-use stylesheets.

Conclusion

Designing the layout of a webpage is the most crucial part of building it, because this is the first thing a user will see on your website. There are several ways to design the layout of a page. We can use CSS-based frameworks like Bootstrap, Material, and Tailwind, and there are also many JavaScript-based frameworks available.
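The CSS Grid section above stops short of an example, unlike the float and flexbox sections. Here is a minimal sketch in the same style; the class names, grid areas, and colors are illustrative choices, not taken from the article:

```html
<!DOCTYPE html>
<html>
<head>
  <meta charset="UTF-8" />
  <title>Html Layout based on CSS grid property</title>
  <style>
    /* Named grid areas give each layout region an explicit slot. */
    .grid-container {
      display: grid;
      grid-template-areas:
        'header header'
        'nav article'
        'footer footer';
      grid-template-columns: 160px auto;
      gap: 5px;
    }
    header { grid-area: header; background: #b4607c; color: white; padding: 1em; text-align: center; }
    nav { grid-area: nav; background: #eee; padding: 1em; }
    article { grid-area: article; padding: 1em; }
    footer { grid-area: footer; background: #b4607c; color: white; padding: 1em; text-align: center; }
  </style>
</head>
<body>
  <div class="grid-container">
    <header><h1>Html Layout based on CSS grid property</h1></header>
    <nav>
      <ul>
        <li><a href="#">Link1</a></li>
        <li><a href="#">Link2</a></li>
        <li><a href="#">Link3</a></li>
      </ul>
    </nav>
    <article>
      <h1>Grid</h1>
      <p>Lorem ipsum dolor sit amet consectetur adipisicing elit.</p>
    </article>
    <footer>Copyright © xyz</footer>
  </div>
</body>
</html>
```

Compared with the float version, no clearing or margin arithmetic is needed: the grid-template-areas declaration alone places the header, nav, article, and footer.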
At AINIRO.IO we've just created a new release of Magic, where the most important feature is the ability to dynamically compile C# code and load the resulting IL code into the AppDomain. This almost turns C# into an "interpreted language," with an execution model closer to PHP and JavaScript than to traditionally compiled languages. This has a lot of benefits, especially for Business Process Workflows, since it allows you to use C# in a dynamic runtime, where you've got dynamic actions that are executed from Hyperlambda, which acts as a high-level execution orchestration runtime. Below is some example code illustrating the idea:

C#
using System;
using magic.node;
using magic.node.extensions;
using magic.signals.contracts;

[Slot(Name = "foo")]
public class Foo : ISlot
{
    public void Signal(ISignaler signaler, Node input)
    {
        input.Value = $"Hello {input.GetEx<string>()}, najs to meet you";
    }
}

The point of the above code, of course, is that it implements the ISlot interface, which allows me to interact with it from Hyperlambda, as illustrated below.

Hyperlambda
foo:Thomas Hansen

The above Hyperlambda will invoke my C# slot, passing in "Thomas Hansen," and my C# slot will do some simple string concatenation, returning the result to the caller. If you save the above C# code as "/etc/csharp/foo.cs", you can execute the following Hyperlambda code to dynamically compile the file and execute the slot.

Hyperlambda
// Loading file.
io.file.load:/etc/csharp/foo.cs

// Compiling file into an assembly.
system.compile
   references
      .:netstandard
      .:System.Runtime
      .:System.ComponentModel
      .:System.Private.CoreLib
      .:magic.node
      .:magic.node.extensions
      .:magic.signals.contracts
   code:x:@io.file.load
   assembly-name:foo.dll

// Loading assembly as plugin now that we've created it.
system.plugin.load:x:@system.compile

// Invoking dynamically created C# slot.
.name:John Doe
foo:x:@.name

// Unloading plugin.
system.plugin.unload:foo.dll Notice that the above [system.compile] never saves the assembly but returns it as a byte[]. To save the compiled code, you can use, for instance, [io.file.save.binary]. In the video below, I demonstrate some features related to this and show how you can treat C# almost as if it were a 100% dynamic scripting language. This has a lot of advantages, especially for BPW, or Business Process Workflows, where you've got tons of smaller building blocks or composables that you need to orchestrate together dynamically, without going through an entire deployment process and other rigid procedures. This allows you to dynamically orchestrate C# snippets together, where Hyperlambda becomes the orchestration tool, loosely coupling building blocks of C# code that together perform a larger task. Because Hyperlambda itself is dynamic, allowing you to build anything from scheduled tasks to HTTP endpoints, this has a lot of really interesting advantages for more complex domains, where the end state of your system is in constant flux, possibly because it integrates with hundreds of different parts, where each part is a separate application, often changing over time, making statically compiled code sub-optimal. Statically compiled code is amazing, and you should, of course, prefer it when you can. However, there are problem domains it is fundamentally incompatible with, workflows being one example. Now, with the ability to compile C# code on the fly in Hyperlambda, this is no longer a problem, and you can use statically compiled C# as much as you wish for such problems, as long as you implement the Hyperlambda interface, ISlot, which allows Hyperlambda to orchestrate your code.
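The compile-load-invoke-unload cycle described above maps naturally onto natively dynamic languages. As a rough analogy only (this is not part of Magic; the module name, slot name, and helper functions below are all made up for illustration), here is how Python can write a "slot" to disk at runtime, load it as a plugin, invoke it, and discard it:

```python
import importlib.util
import os
import sys
import tempfile

# Source code for a "slot," written to disk at runtime -- analogous to foo.cs.
SLOT_SOURCE = '''
def foo(value):
    """A dynamically loaded 'slot' that greets the caller."""
    return f"Hello {value}, nice to meet you"
'''

def load_plugin(path, module_name):
    """Compile and load a Python file at runtime, like [system.plugin.load]."""
    spec = importlib.util.spec_from_file_location(module_name, path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)  # compiles to bytecode and executes
    sys.modules[module_name] = module
    return module

def unload_plugin(module_name):
    """Drop the module reference, like [system.plugin.unload]."""
    sys.modules.pop(module_name, None)

# Write, load, invoke, unload -- with no deployment step in between.
with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "foo_slot.py")
    with open(path, "w") as f:
        f.write(SLOT_SOURCE)
    plugin = load_plugin(path, "foo_slot")
    result = plugin.foo("John Doe")
    unload_plugin("foo_slot")
```

The difference, of course, is that Python is interpreted to begin with; the point of the Magic release is making statically compiled C# participate in the same kind of workflow.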
Teaching kids to program is not just about technicalities or computers. It is about unlocking their potential and instilling some of the most crucial life skills that help them thrive in the ever-evolving technological world. In this article, we will examine the importance of teaching coding to kids and list some of the best programming languages that can be introduced at an early age. Why Teach Kids to Code? Coding has become one of the most crucial skills to have in the 21st century. It is no longer considered a skill reserved for professionals or experts. Here are some of the major advantages that teaching coding to kids can have: Logical thinking: Coding not only involves solving a problem or developing an idea but also encourages questioning assumptions and laying out step-by-step plans of action. It promotes logical thought processes for handling any situation or challenge. Creativity: Coding offers an opportunity for kids to explore and express their imaginations and ideas via a digital platform. Kids have the freedom to choose from endless functionalities and possibilities while addressing an issue or building games. Problem-solving: Coding often requires kids to break down complex issues or projects, analyze them, find possible solutions, and implement them. This makes kids systematic thinkers who look for the best solutions. Future-proofing: Coding skills undoubtedly open the door to a number of exciting career opportunities. From developer to data scientist, analyst to cybersecurity, machine learning to AI, the job market is vast. Best Programming Languages for Kids There are hundreds of programming languages available today, and choosing the best one can be a challenging task. Here is a list of four top programming languages in 2024 that are highly recommended for kids to learn. 1. JavaScript (Scripting Programming Language) JavaScript is one of the most used programming languages. 
Interactivity: JavaScript makes the interactivity of web pages and applications possible. It allows users to take real-time actions and enhances functionality, creating a user-friendly experience via animations, buttons, and more. Real-world applications: JavaScript has wide real-world application, and most websites and applications use it, including Facebook, Google Maps, and Gmail. Easy to start: JavaScript doesn't have hard-to-grasp syntax or require a dedicated platform to master. Kids can start with nothing more than a simple text editor and relatively few lines of code to develop exciting projects. 2. Python Programming Language Python is among the most popular programming languages today. It is a high-level and versatile language that is used for web development, data analysis, machine learning, automation, and more. Clear and readable: The major advantage of using Python is that it doesn't have a complex syntax to adopt. The syntax is easy to read and write, which reduces coding errors drastically and makes Python an ideal first programming language for beginners. Educational resources: Python is an open-source language and has a community of its own. It also has a huge library ecosystem and plenty of resources, whether tutorials, guides, or sample projects, which allow new learners to create, clear doubts, and get ideas to start with. Problem-solving: Just like any other programming language, Python makes it possible to solve problems or develop a wide range of exciting projects with ease. 3. CSS (Cascading Style Sheets) CSS stands for Cascading Style Sheets. It is a stylesheet language that describes the layout, colors, fonts, and visual presentation of a webpage: CSS is responsible for making a webpage attractive and appealing to users. It allows the designer to try and experiment with different aesthetics of a web page. 
CSS offers an excellent opportunity for kids to explore their creativity by developing color schemes, animations, and more engaging user interfaces. CSS is also responsible for making web content adapt to different screen sizes and devices without compromising quality or user experience. 4. HTML (HyperText Markup Language) HTML stands for HyperText Markup Language. It is a lightweight coding language that is very easy to learn and use. Structure: It is responsible for structuring and organizing the various content elements on a web page, e.g., title, headings, paragraphs, body, tables, etc. Semantics: HTML uses tags to identify or mark the contents of web pages. These tags help search engines understand the content and make it accessible to the user. For example: HTML <header>, <nav>, <article>, <footer> Integration: HTML allows various features or components to be easily incorporated into a webpage, serving as the backbone of the page and tying together pieces such as CSS, widgets, and CMS content. The top programming languages mentioned above have been carefully chosen, and mastering them will give kids exposure to a broad field of programming, because the list includes languages that help in developing fully working websites or applications, including design, presentation, functionality, and accessibility. Additionally, these languages are beginner-friendly, making them a great option for newcomers.
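To make the "clear and readable" claim about Python concrete, here is the kind of first program a child might write (the name is hard-coded purely so the example is self-contained; an interactive version would use input() instead):

```python
# A typical first project: greet the user by name.
name = "Ada"  # interactive version: name = input("What is your name? ")
greeting = f"Hello, {name}! Welcome to coding."
print(greeting)
```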
The world of Android app development is constantly evolving, and so are the tools and languages used to build these apps. Gradle, a popular build system, has been an integral part of Android development for years. In the past, Gradle build scripts for Android projects were written in Groovy, but with the introduction of Kotlin, developers now have the option to write their build scripts in a more modern and concise language. In this article, we'll explore the transition from Groovy to Kotlin for Gradle Android projects and discuss the benefits and steps involved in making this shift. Why Transition to Kotlin for Gradle? Modern language: Kotlin is a modern, statically typed language that offers features not present in Groovy, making build scripts more concise and expressive. It is designed to be fully interoperable with Java, which is crucial for Android development. Type safety: Kotlin is known for its strong type safety, reducing the likelihood of runtime errors. With Groovy, you might encounter runtime issues due to dynamic typing. Improved tooling support: The Android Studio IDE has excellent support for Kotlin, making it easier to write, read, and maintain Gradle scripts. Code completion, refactoring, and error checking are some of the benefits you'll experience when using Kotlin. Conciseness: Kotlin's concise syntax can lead to shorter, more readable code. This is particularly beneficial in build scripts, which often involve complex logic. Transitioning to Kotlin Step-By-Step Here's a step-by-step guide on how to transition from Groovy to Kotlin for Gradle Android projects: 1. Check Kotlin Version Ensure that you have a recent version of Kotlin installed. You can do this by adding the Kotlin DSL plugin to your project. You can find the latest version on the Kotlin website or the Gradle Plugin Portal. 
Kotlin plugins { kotlin("jvm") version "latest_version_here" } Replace "latest_version_here" with the actual version number you obtained from the Kotlin website or the Gradle Plugin Portal. This ensures that you're using an up-to-date version of the Kotlin plugin for your Gradle Android project. 2. Convert Gradle Files Start by converting your project's build.gradle files to Kotlin DSL files (.kts). You can do this by renaming the files or by selecting the "Convert to Kotlin DSL" option in Android Studio. Groovy (build.gradle) Groovy android { compileSdkVersion 30 defaultConfig { applicationId "com.example.myapp" minSdkVersion 21 targetSdkVersion 30 } } Kotlin (build.gradle.kts) Kotlin android { compileSdkVersion(30) defaultConfig { applicationId = "com.example.myapp" minSdkVersion(21) targetSdkVersion(30) } } 3. Update Build Script Modify your build.gradle.kts script to use Kotlin syntax. You'll notice that variable declarations, function definitions, and other aspects of the script differ from Groovy. Be prepared for a bit of a learning curve if you're new to Kotlin. 4. Dependencies and Plugins Ensure that any third-party dependencies and Gradle plugins used in your project are compatible with the Kotlin DSL. Most widely used libraries and plugins already support Kotlin, but it's essential to verify this. Note that the Kotlin DSL uses function-call syntax with double-quoted strings for dependencies, unlike Groovy's bare single-quoted strings. Groovy (build.gradle): Groovy apply plugin: 'kotlin-android' apply plugin: 'kotlin-android-extensions' dependencies { implementation 'com.android.support:appcompat-v7:28.0.0' implementation 'com.google.code.gson:gson:2.8.6' } Kotlin (build.gradle.kts): Kotlin plugins { kotlin("android") kotlin("android.extensions") } dependencies { implementation("com.android.support:appcompat-v7:28.0.0") implementation("com.google.code.gson:gson:2.8.6") } 5. Using Kotlin's Extension Functions Kotlin allows you to define extension functions to make your Gradle build scripts more concise and expressive. 
Here's an example of defining an extension function that centralizes the ProGuard rule-file naming convention: Kotlin // A small helper extension so the ".pro" suffix is written only once. fun String.asProguardFile() = "$this.pro" android { buildTypes { getByName("release") { proguardFiles(getDefaultProguardFile("proguard-android.txt"), "proguard-rules".asProguardFile()) } } } This extension function simplifies the code by encapsulating the logic of naming ProGuard rule files. 6. Test and Debug After converting your build scripts, thoroughly test your build process. Be on the lookout for errors or unexpected behavior, as syntax differences can lead to issues. 7. Migration in Stages It's often a good idea to transition gradually. Start with a small, less critical module or subproject before migrating your entire project. This allows you to get comfortable with the new syntax and identify potential issues. 8. Leverage Kotlin Features As you migrate, take advantage of Kotlin's features. For example, you can use Kotlin's powerful extension functions to make your build scripts more concise and expressive. 9. Continuous Learning Kotlin is a rich language with many features. Continue to learn and explore how you can improve your Gradle scripts by leveraging Kotlin's capabilities. Benefits and Future-Proofing Transitioning to Kotlin for your Gradle Android projects may require some effort, but it's a worthwhile investment. The benefits of improved tooling, type safety, and conciseness can significantly enhance your development process. Furthermore, as Kotlin continues to gain traction in the Android development community, transitioning your Gradle scripts to Kotlin is a step toward future-proofing your projects. In conclusion, the transition from Groovy to Kotlin for Gradle Android projects can lead to more robust and maintainable build scripts. Embracing Kotlin's modern features and improved tooling support can make your development process more efficient and less error-prone. 
It's a step forward in keeping your Android projects up-to-date with the latest technologies and best practices.
In most financial firms, online transaction processing (OLTP) often relies on static or infrequently updated data, also called reference data. Reference data sources don’t always require ACID transaction capabilities; rather, they need support for fast read queries, often based on simple data access patterns, and an event-driven architecture to ensure the target systems remain up-to-date. NoSQL databases emerge as ideal candidates to meet these requirements, and cloud platforms such as AWS offer managed and highly resilient data ecosystems. In this article, I am not going to determine which AWS NoSQL database is better: the concept of a better database only exists within a specific purposeful context. I will share a coding lab to measure the performance of AWS-managed NoSQL databases such as DynamoDB, Cassandra, Redis, and MongoDB. Performance Testing I will start by defining the performance test case, which will concurrently insert a JSON payload 200 times and then read it 200 times. JSON Payload The base/parent class in base_db.py implements the test case logic of executing 10 concurrent threads to create and read 200 records. Python #imports ..... class BaseDB: def __init__(self, file_name='instrument.json', threads=10, records=20): ................................... def execute(self): create_threads = [] for i in range(self.num_threads): thread = threading.Thread( target=self.create_records, args=(i,)) create_threads.append(thread) thread.start() for thread in create_threads: thread.join() read_threads = [] for i in range(self.num_threads): thread = threading.Thread(target=self.read_records, args=(i,)) read_threads.append(thread) thread.start() for thread in read_threads: thread.join() self.print_stats() Each thread executes the write/read routine in create_records and read_records, respectively. Notice that these functions do not include any database-specific logic; rather, they measure the performance of each read-and-write execution. 
Python def create_records(self, thread_id): for i in range(1, self.num_records + 1): key = int(thread_id * 100 + i) start_time = time.time() self.create_record(key) end_time = time.time() execution_time = end_time - start_time self.performance_data[key] = {'Create Time': execution_time} def read_records(self, thread_id): for key in self.performance_data.keys(): start_time = time.time() self.read_record(key) end_time = time.time() execution_time = end_time - start_time self.performance_data[key]['Read Time'] = execution_time Once the test case is executed, the print_stats function prints the execution metrics, such as the read/write mean and the standard deviation (stdev) values, which indicate database read/write performance and consistency (a smaller stdev implies more consistent execution performance). Python def print_stats(self): if len(self.performance_data) > 0: # Create a Pandas DataFrame from performance data df = pd.DataFrame.from_dict(self.performance_data, orient='index') if not df.empty: df.sort_index(inplace=True) # Calculate mean and standard deviation for each column create_mean = statistics.mean(df['Create Time']) read_mean = statistics.mean(df['Read Time']) create_stdev = statistics.stdev(df['Create Time']) read_stdev = statistics.stdev(df['Read Time']) print("Performance Data:") print(df) print(f"Create Time mean: {create_mean}, stdev: {create_stdev}") print(f"Read Time mean: {read_mean}, stdev: {read_stdev}") NoSQL Code Unlike relational databases that support standard SQL, each NoSQL database has its own SDK. The child test case classes for each NoSQL database only need to implement a constructor and the create_record/read_record functions, which use the proprietary database SDK to instantiate a database connection and to create/read records in a few lines of code. 
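To see the shape of the pattern without any AWS dependency, here is a condensed, dependency-free sketch of the same idea: a minimal base class that times create/read calls across threads, and a "child" class that plugs in a plain in-memory dict as the database. The class names and the dict store are mine, not from the article's repo; the real base_db.py carries more logic (JSON loading, pandas stats).

```python
import statistics
import threading
import time

class MiniBaseDB:
    """Bare-bones stand-in for the article's BaseDB: times create/read calls."""
    def __init__(self, threads=10, records=20):
        self.num_threads = threads
        self.num_records = records
        self.performance_data = {}
        self._lock = threading.Lock()

    def create_records(self, thread_id):
        # Same key scheme as the article: thread_id * 100 + i.
        for i in range(1, self.num_records + 1):
            key = int(thread_id * 100 + i)
            start = time.time()
            self.create_record(key)
            elapsed = time.time() - start
            with self._lock:
                self.performance_data[key] = {'Create Time': elapsed}

    def read_records(self, thread_id):
        for key in list(self.performance_data):
            start = time.time()
            self.read_record(key)
            self.performance_data[key]['Read Time'] = time.time() - start

    def execute(self):
        """Run all create threads, join, then all read threads, join."""
        for target in (self.create_records, self.read_records):
            threads = [threading.Thread(target=target, args=(i,))
                       for i in range(self.num_threads)]
            for t in threads:
                t.start()
            for t in threads:
                t.join()
        create_times = [v['Create Time'] for v in self.performance_data.values()]
        return statistics.mean(create_times)

class InMemoryDB(MiniBaseDB):
    """A 'child class' only supplies a constructor and create/read hooks."""
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.store = {}

    def create_record(self, key):
        self.store[key] = {'key': key, 'data': 'payload'}

    def read_record(self, key):
        return self.store.get(key)
```

Swapping InMemoryDB for DynamoDB, Cassandra, Redis, or MongoDB is then just a matter of replacing the two hook methods with SDK calls, which is exactly what the child classes below do.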
DynamoDB Test Case Python import boto3 from base_db import BaseDB class DynamoDB(BaseDB): def __init__(self, file_name='instrument.json', threads=10, records=20): super().__init__(file_name, threads, records) dynamodb = boto3.resource('dynamodb', region_name='us-east-1') table_name = 'Instruments' self.table = dynamodb.Table(table_name) def create_record(self, key): item = { 'key': key, 'data': self.json_data } self.table.put_item(Item=item) def read_record(self, key): self.table.get_item(Key={'key': key}) if __name__ == "__main__": DynamoDB().execute() AWS Setup To execute these performance test cases in an AWS account, you should follow these steps: Create an EC2 IAM role with privileges to access the required AWS data services. Launch an EC2 instance and assign the newly created IAM role. Create each NoSQL database instance. IAM Role DynamoDB Table Cassandra Keyspace/Table Please note the DB host and credentials were hardcoded and removed in the mongo_db.py and redis_db.py modules and will need to be updated with the corresponding database connection settings for your AWS account. To connect to DynamoDB and Cassandra, I opted to use the Boto3 session credentials temporarily assigned to the db_performnace_iam_role IAM Role. This code will run in any AWS account in the us-east-1 region without any modification. 
Python class CassandraDB(BaseDB): def __init__(self, file_name='instrument.json', threads=10, records=20): super().__init__(file_name=file_name, threads=threads, records=records) self.json_data = json.dumps( self.json_data, cls=DecimalEncoder).encode() # Cassandra Keyspaces configuration contact_points = ['cassandra.us-east-1.amazonaws.com'] keyspace_name = 'db_performance' ssl_context = SSLContext(PROTOCOL_TLSv1_2) ssl_context.load_verify_locations('sf-class2-root.crt') ssl_context.verify_mode = CERT_REQUIRED boto_session = boto3.Session(region_name="us-east-1") auth_provider = SigV4AuthProvider(session=boto_session) cluster = Cluster(contact_points, ssl_context=ssl_context, auth_provider=auth_provider, port=9142) self.session = cluster.connect(keyspace=keyspace_name) Connect to the EC2 instance (I used the Session Manager), and run the following Shell script to perform these tasks: Install Git. Install Python3. Clone the GitHub db_performance repository. Create and activate the Python3 virtual environment. Install third-party libraries/dependencies. Execute each test case. Shell sudo yum install git sudo yum install python3 git clone https://github.com/dshilman/db_performance.git sudo git pull cd db_performance python3 -m venv venv source ./venv/bin/activate sudo python3 -m pip install -r requirements.txt cd code sudo python3 -m dynamo_db sudo python3 -m cassandra_db sudo python3 -m redis_db sudo python3 -m mongo_db You should see the following output for the first two test cases:

(venv) sh-5.2$ sudo python3 -m dynamo_db
Performance Data:
     Create Time  Read Time
1       0.336909   0.031491
2       0.056884   0.053334
3       0.085881   0.031385
4       0.084940   0.050059
5       0.169012   0.050044
..           ...        ...
916     0.047431   0.041877
917     0.043795   0.024649
918     0.075325   0.035251
919     0.101007   0.068767
920     0.103432   0.037742

[200 rows x 2 columns]
Create Time mean: 0.0858926808834076, stdev: 0.07714510154026173
Read Time mean: 0.04880355834960937, stdev: 0.028805479258627295
Execution time: 11.499964714050293

(venv) sh-5.2$ sudo python3 -m cassandra_db
Performance Data:
     Create Time  Read Time
1       0.024815   0.005986
2       0.008256   0.006927
3       0.008996   0.009810
4       0.005362   0.005892
5       0.010117   0.010308
..           ...        ...
916     0.006234   0.008147
917     0.011564   0.004347
918     0.007857   0.008329
919     0.007260   0.007370
920     0.004654   0.006049

[200 rows x 2 columns]
Create Time mean: 0.009145524501800537, stdev: 0.005201661271831082
Read Time mean: 0.007248317003250122, stdev: 0.003557610695674452
Execution time: 1.6279327869415283

Test Results

              DynamoDB             Cassandra            MongoDB              Redis
Create        mean:  0.0859        mean:  0.0091        mean:  0.0292        mean:  0.0028
              stdev: 0.0771        stdev: 0.0052        stdev: 0.0764        stdev: 0.0049
Read          mean:  0.0488        mean:  0.0072        mean:  0.0509        mean:  0.0012
              stdev: 0.0288        stdev: 0.0036        stdev: 0.0027        stdev: 0.0016
Exec Time     11.45 sec            1.6279 sec           10.2608 sec          0.3465 sec

My Observations I was blown away by Cassandra’s fast performance. Cassandra’s support for CQL, an SQL-like query language, allows rich access-pattern queries, and AWS Keyspaces offers cross-region replication. I find DynamoDB's performance disappointing despite the AWS hype about it. You should try to avoid the cross-partition table scan and thus must use an index for each data access pattern. DynamoDB global tables enable cross-region data replication. MongoDB has a very simple SDK, is fun to use, and has the best support for the JSON data type. You can create indexes and run complex queries on nested JSON attributes. As new binary data formats emerge, MongoDB may lose its appeal. Redis performance is amazingly fast; however, at the end of the day, it’s a key/value cache, even if it supports complex data types. 
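As one illustration of how the Redis test's 200 writes could be batched into a single round-trip, here is a hedged redis-py-style sketch. The endpoint, key names, and helper functions are mine, not from the article's repo; the live connection is kept under the __main__ guard so the pure batching logic stands on its own.

```python
import time

def timed(fn, *args):
    """Return (result, elapsed seconds) for a single call."""
    start = time.time()
    result = fn(*args)
    return result, time.time() - start

def write_pipelined(client, items):
    """Queue all SETs into one pipeline round-trip instead of len(items) trips."""
    pipe = client.pipeline()
    for key, value in items.items():
        pipe.set(key, value)
    return pipe.execute()

if __name__ == "__main__":
    import redis  # third-party: pip install redis
    # Hypothetical endpoint -- replace with your ElastiCache/MemoryDB host.
    client = redis.Redis(host="localhost", port=6379)
    results, secs = timed(write_pipelined, client,
                          {f"instrument:{i}": "payload" for i in range(200)})
    print(f"200 pipelined writes in {secs:.4f}s")
```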
Redis offers powerful features such as pipelining and scripting to further improve query performance by passing code to Redis to execute on the server side. Conclusion In conclusion, choosing an AWS-managed NoSQL database for your enterprise reference data platform depends on your specific priorities. If performance and cross-region replication are your primary concerns, AWS Keyspaces (for Apache Cassandra) stands out as a clear winner. DynamoDB integrates well with other AWS services, such as Lambda and Kinesis, and is therefore a great option for AWS-native or serverless architectures. For applications requiring robust support for JSON data types, MongoDB takes the lead. However, if your focus is fast lookup or session management for high availability, Redis proves to be an excellent option. Ultimately, the decision should align with your organization's unique requirements. As always, you can find the code in the GitHub repo linked earlier in this article (see Shell script task #3 above). Feel free to contact me if you need help running this code or with the AWS setup.