HAProxy is one of the cornerstones in complex distributed systems, essential for achieving efficient load balancing and high availability. This open-source software, lauded for its reliability and high performance, is a vital tool in the arsenal of network administrators, adept at managing web traffic across diverse server environments. At its core, HAProxy excels in evenly distributing the workload among servers, thereby preventing any single server from becoming a bottleneck. This functionality enhances web applications' overall performance and responsiveness and ensures a seamless user experience. More importantly, HAProxy is critical in upholding high availability — a fundamental requirement in today's digital landscape where downtime can have significant implications. Its ability to intelligently direct traffic and handle failovers makes it indispensable in maintaining uninterrupted service, a key to thriving in the competitive realm of online services. As we delve deeper into HAProxy's functionalities, we understand how its nuanced approach to load balancing and steadfast commitment to high availability make it an irreplaceable component in modern distributed systems. This article will mainly focus on implementing a safe and optimized health check configuration to ensure a robust way to remove unhealthy servers and add healthy servers back to the rotation.

Dynamic Server Management in HAProxy

One of the standout features of HAProxy is its ability to dynamically manage servers, meaning it can add or remove servers from the network as needed. This flexibility is a game-changer for many businesses. When traffic to a website or application increases, HAProxy can seamlessly bring more servers online to handle the load. Conversely, during quieter periods, it can reduce the number of servers, ensuring resources aren't wasted.

This dynamic server management is crucial for two main reasons: scalability and fault tolerance. Scalability refers to the ability of a system to handle increased load without sacrificing performance. With HAProxy, this is done effortlessly. HAProxy scales up the system's capacity as demand grows by adding more servers, ensuring that a sudden spike in users doesn't crash the system. This scalability is vital for businesses that experience fluctuating traffic levels or are growing quickly. Fault tolerance is another critical benefit. In any system, servers can fail for various reasons. HAProxy's dynamic server management means it can quickly remove problematic servers from the rotation and reroute traffic to healthy ones. This ability to immediately respond to server issues minimizes downtime and keeps the application running smoothly, which is crucial for maintaining a reliable online presence. In short, HAProxy's dynamic server management offers a flexible and efficient way to handle varying traffic loads and unexpected server failures, making it an indispensable tool for modern web infrastructure.

[Image: Sample architecture depicting HAProxy routing requests]

The above image shows a typical architecture style of request and response servers. HAProxy is installed and configured in this particular setup on all the servers sending requests. HAProxy is configured here so all the response servers are in rotation and actively respond to the requests. HAProxy handles routing and load-balancing requests to a healthy response server.
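As a concrete illustration of taking servers in and out of rotation at runtime, here is a minimal sketch of a backend definition plus HAProxy Runtime API commands issued over the admin socket. This sketch is not from the article; the backend and server names, addresses, and socket path are assumptions for illustration.

```
# haproxy.cfg (illustrative sketch; names, addresses, and paths are assumptions)
global
    # Expose the Runtime API on a local admin socket
    stats socket /var/run/haproxy.sock mode 660 level admin

backend app_backend
    balance roundrobin
    server app1 10.0.0.11:8080 check
    server app2 10.0.0.12:8080 check

# From a shell: take app1 out of rotation for maintenance, then bring it back
#   echo "set server app_backend/app1 state maint" | socat stdio /var/run/haproxy.sock
#   echo "set server app_backend/app1 state ready" | socat stdio /var/run/haproxy.sock
```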
Practical Scenarios and Use Cases

HAProxy's dynamic server management proves its worth in various real-world scenarios, demonstrating its versatility and necessity in modern web infrastructures. Let's explore some critical instances where this feature becomes crucial:

Handling Traffic Spikes

Imagine an online retail website during a Black Friday sale. The traffic can surge unexpectedly, demanding more resources to handle the influx of users. With HAProxy, the website can automatically scale up by adding more servers to the rotation. This ensures that the website remains responsive and can handle the increased load without crashing, providing a seamless shopping experience for customers.

Scheduled Maintenance Periods

HAProxy offers a smooth solution for websites requiring regular maintenance. During these periods, servers can be taken down for updates or repairs. HAProxy can reroute traffic to other operational servers, ensuring that the website remains live and users are unaffected by the maintenance activities.

Unexpected Server Failures

In scenarios where a server unexpectedly fails, HAProxy's health check mechanisms quickly detect the issue and remove the faulty server from the pool. Traffic is then redistributed among the remaining servers, preventing potential service disruptions and maintaining uptime.

Media Streaming Services During Major Events

Viewer numbers can skyrocket unexpectedly for services streaming live events like sports or concerts. HAProxy helps these services by scaling their server capacity in real-time, ensuring uninterrupted streaming even under heavy load.

Optimizing Health Checks for Effective Server Rotation

This section will explore implementing a safe and optimized health check configuration to guard against the unexpected server failures described above. Unexpected server failures are inevitable in network systems, but with HAProxy, the impact of such failures can be significantly mitigated by implementing and optimizing health checks. Health checks are automated tests that HAProxy continually performs to evaluate the status of the servers in its pool. When a server fails or becomes unresponsive, these checks quickly identify the issue, allowing HAProxy to instantly remove the problematic server from the rotation and reroute traffic to healthy ones. This process is essential for maintaining uninterrupted service and high availability.

The code snippet below shows one approach to implementing robust health checks. For more details about syntax and keywords in the HAProxy.cfg file, please refer to the manual page. The key parameters are:

inter - the time interval between health checks
fall - the number of consecutive failed checks before removing the server from rotation
rise - the number of consecutive passing checks before adding the server back to rotation

With inter 2s fall 2 rise 10, we are configuring HAProxy to perform health checks every 2 seconds on the provided URI path. If HAProxy encounters two (fall 2) consecutive failing checks on a server, it will be removed from rotation and won't take any traffic. Here, we take an aggressive approach by keeping the threshold for failure very low. Similarly, rise 10 ensures that we take a conservative approach in putting a server back in the rotation by waiting for ten consecutive health checks to pass before adding it back to the rotation. This approach provides the right balance when dealing with unexpected server failures.
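A reconstructed sketch of such a configuration, based on the inter 2s fall 2 rise 10 parameters described above, could look like this. The backend name, server addresses, and health check URI are assumptions for illustration:

```
# Illustrative health check configuration (names, addresses, and URI are assumptions)
backend response_servers
    balance roundrobin
    option httpchk GET /health
    # Check every 2s; mark a server DOWN after 2 failed checks,
    # and only bring it back after 10 consecutive passing checks
    default-server inter 2s fall 2 rise 10
    server app1 10.0.0.11:8080 check
    server app2 10.0.0.12:8080 check
```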
Conclusion

In conclusion, HAProxy's dynamic server management, along with its sophisticated health check mechanisms, plays a vital role in the modern distributed systems infrastructure stack. By enabling real-time responsiveness to traffic demands and unexpected server issues, HAProxy ensures high availability, seamless user experience, and operational efficiency. The detailed exploration of real-world scenarios and the emphasis on optimizing health checks for server rotation underscore the adaptability and resilience of HAProxy in various challenging environments. This capability not only enhances system reliability but also empowers businesses to maintain continuous service quality, a critical factor in today's digital landscape. Ultimately, HAProxy emerges not just as a tool for load balancing but as a comprehensive solution for robust, resilient systems, pivotal for any organization striving for excellence in online service delivery.
As React Native applications evolve, the need for efficient state management becomes increasingly evident. While Async Storage serves its purpose for local data persistence, transitioning to the Context API with TypeScript brings forth a more organized and scalable approach. This comprehensive guide will walk you through the migration process step by step, leveraging the power of TypeScript.

Understanding Async Storage and Context API

Async Storage in React Native offers asynchronous, persistent storage for key-value data on the device. As the application scales, managing the state solely through Async Storage might become cumbersome. The Context API, in conjunction with TypeScript, provides a structured means of sharing state across components without prop drilling. It ensures type safety and enhances development efficiency.

Why Replace Async Storage With Context API in TypeScript?

Type safety: TypeScript's strong typing system ensures better code integrity and reduces potential runtime errors.
Scalability and maintainability: Context API simplifies state management and promotes scalability by facilitating a more organized codebase.
Enhanced development experience: TypeScript's static typing aids in catching errors during development, leading to more robust and maintainable code.

Step-By-Step Replacement Process

1. Identify Async Storage Usage

Review the codebase to locate sections using Async Storage for reading or writing data.

2. Create a Context With TypeScript

```tsx
import React, { createContext, useContext, useReducer, Dispatch, ReactNode } from 'react';

interface AppState {
  // Define your application state interface here
  exampleData: string;
}

interface AppAction {
  // Define action types and payload structure here
  type: string;
  payload?: any;
}

const initialState: AppState = {
  exampleData: '',
};

const AppContext = createContext<{
  state: AppState;
  dispatch: Dispatch<AppAction>;
}>({
  state: initialState,
  dispatch: () => null,
});

const appReducer = (state: AppState, action: AppAction): AppState => {
  // Implement your reducer logic here based on action types
  switch (action.type) {
    case 'UPDATE_DATA':
      return {
        ...state,
        exampleData: action.payload,
      };
    // Add other cases as needed
    default:
      return state;
  }
};

const AppProvider: React.FC<{ children: ReactNode }> = ({ children }) => {
  const [state, dispatch] = useReducer(appReducer, initialState);

  return (
    <AppContext.Provider value={{ state, dispatch }}>
      {children}
    </AppContext.Provider>
  );
};

const useAppContext = () => {
  return useContext(AppContext);
};

export { AppProvider, useAppContext };
```

3. Refactor Components To Use Context

Update components to consume data from the newly created context:

```tsx
import React from 'react';
import { View, Text, Button } from 'react-native';
import { useAppContext } from './AppContext';

const ExampleComponent: React.FC = () => {
  const { state, dispatch } = useAppContext();

  const updateData = () => {
    const newData = 'Updated Data';
    dispatch({ type: 'UPDATE_DATA', payload: newData });
  };

  return (
    <View>
      <Text>{state.exampleData}</Text>
      <Button title="Update Data" onPress={updateData} />
    </View>
  );
};

export default ExampleComponent;
```

4. Implement Context Provider

Wrap your application's root component with the AppProvider:

```tsx
import React from 'react';
import { AppProvider } from './AppContext';
import ExampleComponent from './ExampleComponent';

const App: React.FC = () => {
  return (
    <AppProvider>
      <ExampleComponent />
      {/* Other components using the context */}
    </AppProvider>
  );
};

export default App;
```
5. Test and Debug

Thoroughly test the application to ensure proper functionality and handle any encountered issues during the migration process.
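One practical detail of the migration is persistence: unlike Async Storage, context state lives only in memory and is lost when the app restarts. A common approach, sketched below using the AppProvider/useAppContext from step 2, is to hydrate the context from Async Storage on mount and write changes back when the state updates. The component name, storage key, and wiring are illustrative assumptions rather than part of the original guide.

```tsx
import React, { useEffect } from 'react';
import AsyncStorage from '@react-native-async-storage/async-storage';
import { useAppContext } from './AppContext';

// Hypothetical storage key, used only for this illustration
const STORAGE_KEY = 'exampleData';

const PersistedDataLoader: React.FC<{ children: React.ReactNode }> = ({ children }) => {
  const { state, dispatch } = useAppContext();

  // On mount, hydrate the context from Async Storage (if a value was persisted earlier)
  useEffect(() => {
    AsyncStorage.getItem(STORAGE_KEY).then((stored) => {
      if (stored !== null) {
        dispatch({ type: 'UPDATE_DATA', payload: stored });
      }
    });
  }, [dispatch]);

  // Whenever the context value changes, write it back so it survives app restarts
  useEffect(() => {
    AsyncStorage.setItem(STORAGE_KEY, state.exampleData);
  }, [state.exampleData]);

  return <>{children}</>;
};

export default PersistedDataLoader;
```

Such a component would be mounted inside the provider, for example as <AppProvider><PersistedDataLoader><ExampleComponent /></PersistedDataLoader></AppProvider>.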
As the year 2023 winds down, there is time for reflection and looking back. I've done that every year on this blog with year-in-review articles. This year I thought I might take all the new learnings around cloud native observability, add in some insights from all the events I attended related to observability, and try to predict what the biggest changes might be for 2024.

In this article, I plan to lay out three top predictions based on my experiences over 2023 in the cloud native domain, with a big focus on the observability arena. This has been my first complete year focused on these topics, and the change in technologies I've been focusing on up to now meant I could approach this with zero bias. I just researched, then went hands-on with open source projects mostly found in the Cloud Native Computing Foundation (CNCF) domain, and went on the road to events to put an ear to the ground. While many predictions you find out in the wild tend to be about the next big technology breakthrough or about the expanded use of an emerging technology in larger organizations, this time around I've found myself thinking about this in a slightly different way. Let's take a look at my top three predictions and the thoughts behind them for 2024.

Inside, I think everyone cringes a bit when asked to produce their thoughts and predictions on the new year. You just can't win with these things, and committing to them ensures you will be told long into the future how wrong you were! Now on to my top three predictions for 2024.

1. Focus on Cloud Native Burn-Out

The number one topic of conversation in the cloud native observability domain in 2023 has been, without a doubt, burnout. This has come up in every role: Site Reliability Engineers (SREs), DevOps, engineers, developers, and those managing any part of the cloud-native engineering experience within an organization. They all resonated with this being the number one theme out there.

Where does this come from, you might ask? According to research in a 2023 Cloud Native Observability Report, over 500 engineers and developers were surveyed, and here are a few of the results:

They are spending 10 hours on average, per week, trying to triage and understand incidents - that's a quarter of their 40-hour workweek.
88% reported that the amount of time spent on issues negatively impacts them and their careers.
39% admit they are frequently stressed out.
22% said they want to quit.

It looks like the issues surrounding the use of cloud-native solutions and the work of managing and maintaining that infrastructure will continue to add to the stress, pressure, and resulting impact on cloud-native practitioners across the globe and in all kinds of organizations. My prediction is that the attention this topic got in 2023, which was primarily focused on the on-call roles, will expand and deepen into all areas where organizations are trying to grow their cloud-native footprints. In 2024 we will hear more about burn-out-related stress, hear more ideas on how to solve it, and see it become one of the biggest topics of conversation at events, online, and at the coffee machine.

2. More Career Movement

As mentioned above, the points of contention in the first prediction make this next prediction less surprising. A quick look at IT roles and retention rates across organizations shows that a rather high number of developers, engineers, DevOps, SREs, and others are changing employers every year. This is not to say that in 2024 there will be massive layoffs.
It's more about the levels of stress, burnout, and pressures that come with cloud-native organizations. According to research posted by Sterling in late 2022, the tech turnover rate was at 13.2%, exceeding all other industries, which had an average of 10.5%. LinkedIn research found other sources that pushed the turnover rate in tech to 18.3%. Whichever number you choose, this is about cloud-native technical staff whose roles are filled with days of frustration, stress, and problems. They will hit that final wall in 2024 and determine that there is no way to be happy and engaged in their current organizations. My prediction is a surge of career movement: over 25% of those in current tech roles will take the plunge and try to find fulfillment in new roles and new organizations and take on new opportunities.

3. Cloud Native Cost Focus

From its initial kickoff in early 2019, to its entry into the Linux Foundation in 2020, and into the future, the FinOps Foundation has become vital to all cloud-native and cloud-using organizations. All through 2022 and 2023, we've seen organizations beginning to realize that they need to get value for every dollar spent on cloud-native services. Along these lines, the FinOps Foundation has become the central gathering place for practitioners in the FinOps role across all kinds of organizations. It supports them with training and certifications and is close to releasing the FinOps Cost and Usage Specification (FOCUS) v1.0, which aligns with the open-source community approach to this fairly new space.

My prediction is that the continued growth seen in the field of FinOps in 2023 in cloud-native organizations will evolve in 2024 into a permanent value-add for more and more organizations. CIOs, CFOs, and CTOs are going to lean more in 2024 on FinOps roles, processes, and education to manage their cloud-native spend and ensure the value per dollar spent continues to have an impact on their cloud-native investments.

On to 2024

There you have my thoughts, or predictions, for what I feel are the impressions that 2023 left on me during my travels, conversations, and research into the cloud native and observability domains. Now it's time to roll on into the new year and see what 2024 brings for you and your organizations.
The advancement of technologies such as artificial intelligence (AI) has enabled modern chatbots to respond to user requests using text, audio, and video, eliminating the need for manual research. Chatbots and assistants are now applicable in a wide range of use-case scenarios, from ordering a pizza to navigating complex B2B sales processes. As a result, chatbots have become an essential part of almost every industry today. This article delves into the world of chatbots and AI assistants, as well as a step-by-step guide to creating a chatbot with Natural Language Processing (NLP) and chatbot frameworks. Understanding the Power of Chatbots and AI Assistants The first ever chatbot was created by MIT professor Joseph Weizenbaum in 1966. It was called ELIZA, and it simulated a conversation by using pattern matching and substitution methodologies. The bot searched for keywords in the user input, then used scripts to apply values to the keywords and transform them into an output. Weizenbaum did not expect ELIZA to amuse users as much as it did, with many people attributing human-like feelings to the program and experts predicting that conversational technologies will take over the world in the future. In the decades that followed, the chatbots continued to evolve, with new ones like Jabberwacky, ALICE, and SmarterChild employing increasingly sophisticated technologies like heuristic pattern matching. The emerging generations of chatbots were gradually gaining the ability to support more conversation modes and provide access to additional services such as weather updates, news alerts, and even simple games. The next revolution occurred in 2011 when Apple introduced Siri, a voice-activated AI assistant integrated into the iPhone 4S. It used advanced machine learning techniques to answer questions, make recommendations, and perform actions by delegating requests to a set of Internet services, becoming one of the first mainstream AI assistants. Later, all major tech companies debuted their own AI assistants, such as Google Now (2012), Microsoft's Cortana (2014), Amazon's Alexa (2014), and Google Assistant (2016). The Nuts and Bolts of AI Assistants So, today's AI Assistant is a sophisticated chatbot with AI capabilities that frequently employs machine learning to improve over time. AI assistants are more advanced than traditional chatbots in naturally understanding and responding to human language. They can learn from interactions and perform a wide variety of tasks rather than being restricted to predefined scripts. Capabilities of AI Assistants Voice recognition Natural language processing Task automation (e.g., setting reminders, playing music) Personalized recommendations Ability to integrate with various apps and IoT devices Business Adoption of AI Customer support Routine inquiries Sales Marketing Data analysis The Magic Behind NLP: Unraveling the Basics Natural language processing (NLP) is a subset of artificial intelligence that includes technologies that enable computers to understand, interpret, and respond to human language. Beyond chatbots, it's used in sentiment analysis to gauge public opinion and language translation to bridge communication gaps. To better understand NLP, it is necessary to investigate its fundamental concepts: Tokenization: the process of dividing the text into smaller parts, such as individual words or phrases, known as tokens, to assist machines in analyzing human speech. 
Part-of-speech tagging: the process of identifying each word's grammatical role in the phrase, which improves a chatbot's understanding of sentence structure. Named entity recognition: The process of detecting names of people, places, and things, which is essential for chatbots to understand context. These ideas are critical for making chatbots smarter and more responsive. Choosing the Right Chatbot Framework In today's modern educational technology landscape, even non-programmers can create a chatbot. The market is brimming with tools and frameworks to make this truly simple. Among the most popular frameworks are: Dialogflow Dialogflow integrates with Google services and has an easy-to-use interface as well as strong NLP capabilities. However, it can be expensive to use on a large scale. Rasa Rasa is open-source and highly customizable. It is suitable for complex bots. This tool has two main components, RASA NLU and RASA Core, which aid in the development of bots capable of handling complex user inquiries. More technical knowledge is required. Microsoft Bot Framework Microsoft Bot Framework is a platform for developing, connecting, publishing and managing intelligent and interactive chatbots. It works well with Microsoft products and has a robust set of features. The learning curve can be quite steep. Consider the following factors when choosing the best platform for your needs: Complexity: Simple tasks may require basic platforms such as Dialogflow, whereas Rasa caters to complex, customizable requirements. Scalability: Make your decision based on expected user volume. Dialogflow and the Microsoft Bot Framework are both scalable. Integration capabilities: Match with existing tech stack. Case Study T-Mobile, the second largest wireless carrier in the United States with 100 million customers, used RASA to create an effective AI assistant that assisted the company with customer support during the COVID-19 pandemic. It reduced wait times and improved customer experience at a time when queues for expert communication could reach over 20,000 people calling at the same time. T-Mobile's virtual assistant reached 10% of messaging customers within months of its launch. Building Your Chatbot: Step-By-Step Guide Step 1: Preparing the Groundwork Set up a development environment, select a framework (such as Dialogflow or Rasa), and understand the needs and language patterns of the target audience. Gather relevant datasets for training the chatbot, making sure they are representative of actual user interactions. Step 2: Crafting Conversational Design Create natural, engaging dialogues that are in line with user expectations. Plan out user flows to cover various conversation paths. To ensure smooth conversations, use simple, clear language and anticipate user queries. Step 3: Developing the Brain Construct the chatbot using the chosen framework. Setting up intents, entities, and responses is part of this. Provide snippets for basic functions such as greeting users and answering frequently asked questions. Emphasise best practices such as modular coding and keeping a clean codebase. Step 4. Testing and Iteration Conduct extensive testing, including user testing, to ensure the chatbot works as expected in various scenarios. Use feedback to iteratively develop the chatbot, constantly refining it based on user interactions and new data. 
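To make Step 3 ("Developing the Brain") a bit more concrete, here is a minimal, illustrative sketch of how a greeting intent and its response might be declared in a Rasa 3.x project. The intent name, example phrases, and response text are assumptions for illustration, and the exact file layout should be checked against the Rasa documentation:

```yaml
# Illustrative Rasa 3.x sketch; in a real project these sections typically live in
# separate files (nlu.yml, rules.yml, domain.yml). Names and texts are assumptions.
version: "3.1"

nlu:
  - intent: greet
    examples: |
      - hi
      - hello there
      - good morning

rules:
  - rule: Greet the user
    steps:
      - intent: greet
      - action: utter_greet

# The domain would declare the intent and the canned response, e.g.:
# intents:
#   - greet
# responses:
#   utter_greet:
#     - text: "Hi! How can I help you today?"
```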
Enhancing Your Chatbot With Advanced NLP Techniques Beyond basic responses, NLP can provide a set of advanced features that allow chatbots to respond more appropriately, handle complex queries, and provide personalized experiences. They are as follows: Sentiment analysis to gauge user emotions; Intent recognition to accurately understand user requests; Entity extraction to identify and use key information from user inputs. NLP techniques can also provide a chatbot with multilingual and multimodal support. Benefits and Drawbacks Adding multilingual support necessitates understanding nuances in various languages, which can be difficult but broadens user reach. Multimodal support (such as voice, text, and images) improves user interaction but necessitates the sophisticated integration of multiple AI technologies. Implementation Guidance For language support, use robust NLP libraries and APIs. Integrate technologies such as speech recognition and image processing for multimodal capabilities and test extensively across languages and modes. Deploying and Scaling Your AI Assistant Deployment options: Chatbots can be integrated into websites, embedded in messaging platforms such as Facebook Messenger, or integrated into mobile apps. In terms of reach and user engagement, each platform has distinct advantages. Security concerns: It is critical to ensure data privacy and security during and after deployment, especially for bots that handle sensitive information. Use encryption and secure authentication methods. Scaling strategies: Improve chatbot performance to handle increased traffic by increasing server capacity and refining AI algorithms for efficiency. Update the bot on a regular basis with new data and features. Ethical Considerations and Future Trends Ethical AI and Privacy The critical issues that arise during AI chatbot development include ensuring unbiased AI algorithms, transparent data usage, and respecting user privacy. A growing emphasis is being placed on preventing AI from perpetuating stereotypes or prejudices. You will need the following to ensure responsible development: Apply ethical AI principles (transparency in data collection, personal information security, obtaining user consent, and providing clear data usage policies). Conduct bias and accuracy audits on a regular basis. Future Trends in Conversational AI The rapid advancement of AI technologies determines the course of technological development. Some of the trends are as follows: Voice-activated AI: The rise of voice-activated AI, such as smart home devices, indicates a trend towards more natural, conversational interactions with technology. AI-human collaboration: Future trends indicate that AI will augment rather than replace human capabilities, resulting in improved customer service and more personalized user experiences through AI-human collaboration. Conclusion In this article, we examined how AI drives the development of chatbots, which gain new capabilities to improve our daily lives on many levels and reshape businesses for the better. We also looked at how to easily create a powerful chatbot using the best tools and platforms the market has to offer.
When building a large production-ready stateless microservices architecture, we always come across a common challenge of preserving request context across services and threads, including context propagation to the child threads.

What Is Context Propagation?

Context propagation means passing contextual information or state across different components or services in a distributed system, where applications are often composed of multiple services running on different machines or containers. These services need to communicate and collaborate to fulfill a user request or perform a business process. Context propagation becomes crucial in such distributed systems to ensure that relevant information about a particular transaction or operation is carried along as it traverses different services. This context may include data such as:

User authentication details
Request identifiers
Distributed tracing information
Other metadata (that helps in understanding the state and origin of a request)

Key aspects of context propagation include:

Request Context: When a user initiates a request, it often triggers a chain of interactions across multiple services. The context of the initial request, including relevant information like user identity, request timestamp, and unique identifiers, needs to be propagated to ensure consistent behavior and tracking.
Distributed Tracing and Logging: Context propagation is closely tied to distributed tracing and logging mechanisms. By propagating context information, it becomes easier to trace the flow of a request through various services, aiding in debugging, performance analysis, and monitoring.
Consistency: Maintaining a consistent context across services is essential for ensuring that each service involved in handling a request has the necessary information to perform its tasks correctly. This helps avoid inconsistencies and ensures coherent behavior across the distributed system.
Middleware and Framework Support: Many middleware and frameworks provide built-in support for context propagation. For example, in microservices architectures, frameworks like Spring Cloud, Istio, or Zipkin offer tools for managing and propagating context seamlessly.
Statelessness: Context propagation is especially important in stateless architectures where each service should operate independently without relying on a shared state. The context helps in providing the necessary information for a service to process a request without needing to store a persistent state.

Effective context propagation contributes to the overall reliability, observability, and maintainability of distributed systems by providing a unified view of the state of a transaction as it moves through different services. It also helps in reducing boilerplate code.

The Use Case

Let's say you are building Spring Boot WebFlux-based microservices/applications, and you need to ensure that the state of the user (Session Identifier, Request Identifier, Logged-In Status, etc.) and client (Device Type, Client IP, etc.) passed in the originating request should be passed between the services.

The Challenges

Service-to-service calls: For internal service-to-service calls, context propagation does not happen automatically.
Propagating context within classes: To refer to the context within service and/or helper classes, you need to explicitly pass it via the method arguments. This can be handled by creating a class with a static method that stores the context in a ThreadLocal object.
Java Stream Operations: Since Java stream functions run in separate executor threads, context propagation via ThreadLocal to child threads needs to be done explicitly.
Webflux: Similar to Java stream functions, context propagation in WebFlux needs to be handled via Reactor hooks.

The idea here is to ensure that context propagation happens automatically in the child threads and to the internally called service using a reactive web client. A similar pattern can be implemented for non-reactive code as well.

Solution

Core Java provides two classes, ThreadLocal and InheritableThreadLocal, to store thread-scoped values. ThreadLocal allows the creation of variables that are local to a thread, ensuring each thread has its own copy of the variable. A limitation of ThreadLocal is that if a new thread is spawned within the scope of another thread, the child thread does not inherit the values of ThreadLocal variables from its parent.

```java
public class ExampleThreadLocal {

    private static ThreadLocal<String> threadLocal = new ThreadLocal<>();

    public static void main(String[] args) {
        threadLocal.set("Main Thread Value");

        new Thread(() -> {
            System.out.println("Child Thread: " + threadLocal.get()); // Outputs: Child Thread: null
        }).start();

        System.out.println("Main Thread: " + threadLocal.get()); // Outputs: Main Thread: Main Thread Value
    }
}
```

On the other hand, InheritableThreadLocal extends ThreadLocal and provides the ability for child threads to inherit values from their parent threads.

```java
public class ExampleInheritableThreadLocal {

    private static InheritableThreadLocal<String> inheritableThreadLocal = new InheritableThreadLocal<>();

    public static void main(String[] args) {
        inheritableThreadLocal.set("Main Thread Value");

        new Thread(() -> {
            System.out.println("Child Thread: " + inheritableThreadLocal.get()); // Outputs: Child Thread: Main Thread Value
        }).start();

        System.out.println("Main Thread: " + inheritableThreadLocal.get()); // Outputs: Main Thread: Main Thread Value
    }
}
```

Hence, in the scenarios where we need to ensure that context must be propagated between parent and child threads, we can use application-scoped static InheritableThreadLocal variables to hold the context and fetch it wherever needed.

```java
@Getter
@ToString
@Builder
public class RequestContext {
    private String sessionId;
    private String correlationId;
    private String userStatus;
    private String channel;
}
```

```java
public class ContextAdapter {

    final ThreadLocal<RequestContext> threadLocal = new InheritableThreadLocal<>();

    public RequestContext getCurrentContext() {
        return threadLocal.get();
    }

    public void setContext(RequestContext requestContext) {
        threadLocal.set(requestContext);
    }

    public void clear() {
        threadLocal.remove();
    }
}
```

```java
public final class Context {

    static ContextAdapter contextAdapter;

    private Context() {}

    static {
        contextAdapter = new ContextAdapter();
    }

    public static void clear() {
        if (contextAdapter == null) {
            throw new IllegalStateException();
        }
        contextAdapter.clear();
    }

    public static RequestContext getContext() {
        if (contextAdapter == null) {
            throw new IllegalStateException();
        }
        return contextAdapter.getCurrentContext();
    }

    public static void setContext(RequestContext requestContext) {
        if (contextAdapter == null) {
            throw new IllegalStateException();
        }
        contextAdapter.setContext(requestContext);
    }

    public static ContextAdapter getContextAdapter() {
        return contextAdapter;
    }
}
```

We can then refer to the context by calling the static method wherever required in the code.
```java
Context.getContext()
```

This solves for:

Propagating context within classes
Java stream operations
WebFlux

In order to ensure that context is propagated to external calls via WebClient automatically, we can create a custom ExchangeFilterFunction to read the context from Context.getContext() and then add the context to the headers or query params as required.

```java
public class HeaderExchange implements ExchangeFilterFunction {

    @Override
    public Mono<ClientResponse> filter(ClientRequest clientRequest, ExchangeFunction exchangeFunction) {
        return Mono.deferContextual(Mono::just)
            .flatMap(context -> {
                RequestContext currentContext = Context.getContext();
                ClientRequest newRequest = ClientRequest.from(clientRequest)
                    .headers(httpHeaders -> {
                        httpHeaders.add("context-session-id", currentContext.getSessionId());
                        httpHeaders.add("context-correlation-id", currentContext.getCorrelationId());
                    })
                    .build();
                return exchangeFunction.exchange(newRequest);
            });
    }
}
```

Initializing the context as part of a WebFilter:

```java
@Slf4j
@Component
public class RequestContextFilter implements WebFilter {

    @Override
    public Mono<Void> filter(ServerWebExchange exchange, WebFilterChain chain) {
        String sessionId = exchange.getRequest().getHeaders().getFirst("context-session-id");
        String correlationId = exchange.getRequest().getHeaders().getFirst("context-correlation-id");

        RequestContext requestContext =
            RequestContext.builder().sessionId(sessionId).correlationId(correlationId).build();
        Context.setContext(requestContext);

        return chain.filter(exchange);
    }
}
```
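For completeness, the HeaderExchange filter still has to be registered on the WebClient that makes the outbound calls. A minimal sketch of that wiring is shown below; the configuration class, bean name, and base URL are illustrative assumptions rather than code from the article.

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.reactive.function.client.WebClient;

@Configuration
public class WebClientConfig {

    // Registering HeaderExchange ensures every outbound call carries the
    // context-session-id and context-correlation-id headers automatically.
    @Bean
    public WebClient downstreamWebClient(WebClient.Builder builder) {
        return builder
            .baseUrl("http://downstream-service") // illustrative base URL
            .filter(new HeaderExchange())
            .build();
    }
}
```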
With the rapid adoption of passkeys (and the underlying WebAuthn protocol), authentication has become more secure and user-friendly for many users. One of the standout advancements of passkeys has been the integration of Conditional UI, often referred to as "passkey autofill" or Conditional Mediation (in the following, we stay with the term Conditional UI). Despite its recent introduction and ongoing adoption by browsers, there’s a noticeable gap in technical documentation and implementation advice for Conditional UI. This article aims to bridge that gap by explaining what Conditional UI is, how it works, and how to tackle common challenges during its implementation. What Is Conditional UI? Conditional UI represents a new mode for passkeys/WebAuthn login processes. It selectively displays passkeys in a user interface (UI) only when a user has a discoverable credential (resident key), which is a type of passkey registered with the relying party (the online service) stored in their authenticator of a device (e.g., laptop, smartphone). The passkeys are displayed in a selection dropdown that is mixed up with auto-filled passwords, providing a seamless transition between traditional password systems and advanced passkey authentication, as users see both in the same context. This intelligent approach ensures that users aren't overwhelmed with unnecessary options and can navigate the login process more seamlessly. The foundation of Conditional UI is built on three main pillars: Respect user privacy: Ensuring user privacy by preventing disclosure of available credentials or lack of user consent to reveal these credentials. Great user experience even if no passkey exists: Empowering relying parties to implement WebAuthn opportunistically, ensuring user experience remains good even if passkeys are not available. Smooth transition from passwords to passkeys: Combining passkeys with password-based authentication to smooth the transition towards passwordless authentication methods, capitalizing on users' familiar UX paradigms. Conditional UI Benefits and Drawbacks Benefits Streamlined authentication: The process is more streamlined and efficient, removing the complexities often associated with multiple authentication methods. Reduction in user errors: By presenting only relevant options, users are less likely to make mistakes during the authentication process. Enhanced user satisfaction: Removing unnecessary steps means users can log in faster and more effortlessly, leading to improved user satisfaction. Simple frontend integration: One of the standout features of Conditional UI is its ease of integration. Developers can seamlessly incorporate it into the front end with a few lines of code (see below). Passwordless and usernameless login: One of the huge benefits is that Conditional UI promotes not only passwordless authentication but also a usernameless or accountless experience. Users are spared the mental load of recalling their specific email address or user handle from sign-up. Instead, they can rely on the browser’s suggestions, which include the email address/user handle paired with the appropriate passkey in the autofill menu. Solving the bootstrapping dilemma: Transitioning from traditional username-password systems to passkeys can be daunting. Conditional UI addresses this transition challenge. Websites can initiate a passkey / WebAuthn call alongside a conventional password prompt without fretting over potential modal dialog errors if a device lacks the needed credentials. 
Drawbacks Learning curve for developers: Conditional UI introduces a new paradigm, which means there's a learning curve involved for developers unfamiliar with its intricacies. Device/browser dependency: Conditional UI’s success hinges on the user's device or browser compatibility. Given that not all browsers or devices support it currently, this can limit its application. No conditional passkey register: There’s no support for using Conditional UI in the account/passkey creation process. That means you need to create passkeys the regular way, either at the account creation stage by providing some dedicated passkey creation page or in the account settings. However, there's an ongoing discourse about the potential inclusion of Conditional UI for sign-ups as well. Password manager disable autocomplete: Some modern password managers and their browser extensions modify the website’s DOM and disable or overwrite the autocomplete tag in input fields in favor of their own autocomplete features. This can lead to an inconsistent and unsatisfying user experience. As standards for Conditional UI are relatively new, we hope that things improve so that, e.g., not two autofill menus are overlaid or the desired one is not shown at all. How Does Conditional UI Work? In the following, we provide a step-by-step breakdown of the single steps of an entire Conditional UI flow: In general, the Conditional UI process flow can be partitioned into two phases. During the page load phase, conditional UI logic happens in the background, while in the user operation phase, the user has to do something actively. Conditional UI availability checks: The client (browser) calls the isConditionalMediationAvailable() function to detect if the current browser/device combination supports Conditional UI.Only if the response is true does the process continue; otherwise, the Conditional UI process is aborted. Call the conditional UI endpoint: Next, the client calls the server Conditional UI endpoint in order to retrieve the PublicKeyCredentialRequestOptions. Receive PublicKeyCredentialRequestOptions: The server returns the PublicKeyCredentialRequestOptions which contain the challenge and more WebAuthn server options (e.g., allowCredentials, extensions, userVerification). Start the local authentication: By calling credentials.get() with the received PublicKeyCredentialOptions and the mediation property is set to be “conditional, the process for the local authentication on the device starts. Show autofill selection: The autofill menu for passkeys pops up. The specific styling is dependent on the browser and device (e.g., some require the user to place the cursor in the input field, and some automatically display the menu on page load; see below). Local user authentication: The user selects the passkey from the autofill menu that they want to use and authenticate via the authentication dialog of their device (e.g., via Face ID, Touch ID, Windows Hello). Send authenticator response to server: If the local user authentication was successful, the authenticator response is sent back to the server. User is logged in and redirected: Once the server receives the authenticator response, it validates the signature against the corresponding user account’s public key in the database. If the verification is successful, the user is granted access, logged in, and redirected to the logged-in page. By following this process flow, Conditional UI offers a seamless and user-friendly authentication experience. 
Technical Requirements for Conditional UI General To get Conditional UI working, some general aspects need to be considered: Credential specifications: Conditional UI is specifically designed to operate only with resident keys/discoverable credentials. The reason behind this is that authenticators do not store user-specific data (e.g., name, display name) for non-resident keys/non-discoverable credentials. As a result, using the latter for passkey autofill is not possible. Credential filtering: The allowCredentials feature remains supported, facilitating websites that are already aware of the user's identity (for instance, if a username was sent in the initial mediation call because it might be stored in the browser’s LocalStorage) to refine the list of credentials showcased to users during autofill. Client-Side To get Conditional UI working on the client side, the following requirements must be fulfilled: Compatible browser: Ensure that the user uses a modern browser that supports Conditional UI (see here for the latest browser coverage). Enabled JavaScript: JavaScript must be enabled to facilitate Conditional UI operations. Test conditional UI availability: The relying party (the server side) should have the certainty that Conditional UI is available on the client side when it receives the WebAuthn mediation call to avoid triggering any user-visible errors in scenarios where Conditional UI isn't supported. To address this, it’s recommended to use the isConditionalMediationAvailable() method and check for the technical availability of Conditional UI. HTML input field required: For Conditional UI to work, you need to have an HTML input field on your web page. If you do not have one, you need to provide support for the regular passkey/WebAuthn login process that is triggered with a user interaction, like a button click. Remove timeout protocols: Timeout parameters (e.g., the user is taking a very long time to decide on a passkey in the autofill menu) should be disregarded in this setup. Server-Side To get Conditional UI working, some requirements on the server side must be fulfilled as well: Running WebAuthn server: As we are still in the context of passkeys / WebAuthn, it’s required to have a WebAuthn server running that manages the authentication procedures. Provide mediation start endpoint: Compared to regular WebAuthn login endpoints, it’s useful to provide another endpoint that has similar functionality but can deal with an optional user handle (e.g., email address, phone number, username). Practical Coding Tips Since the official rollout of Conditional UI in late 2022 and earlier beta versions, we’ve been testing and working extensively with it. In the following, we want to share practical tips that helped during the implementation of Conditional UI with you. Full Conditional UI Example A full, minimalistic code example for a Conditional UI method would look like this: HTML <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>Conditional UI</title> </head> <body> <input type="text" id="username" autoComplete="username webauthn" /> <button onclick="passkeyLogin()">Login via passkey</button> </body> </html> 6.2 Browser Compatibility Check Implement Conditional UI detection that ensures that Conditional UI is only employed when the current device/browser combination supports it. This should work without presenting user-visible errors in the absence of Conditional UI support. 
Incorporating the isConditionalMediationAvailable() method within the user interface addresses this concern. If Conditional UI support is given, the Conditional UI login process can be started. JavaScript // source: https://developer.mozilla.org/en-US/docs/Web/API/PublicKeyCredential/isConditionalMediationAvailable#examples // Availability of `window.PublicKeyCredential` means WebAuthn is usable. if ( window.PublicKeyCredential && PublicKeyCredential.isConditionalMediationAvailable ) { // Check if conditional mediation is available. const isCMA = await PublicKeyCredential.isConditionalMediationAvailable(); if (isCMA) { // Call WebAuthn authentication start endpoint let options = await WebAuthnClient.getPublicKeyRequestOptions(); const credential = await navigator.credentials.get({ publicKey: options.publicKeyCredentialRequestOptions, mediation: "conditional", }); /* ... */ } } WebAuthn Autocomplete Token in Input Fields The input field should receive a “webauthn” HTML autofill token. This signals the client to populate passkeys to the ongoing request. Besides passkeys, other autofill values might be showcased as well. These autofill tokens can be paired with other existing tokens, e.g.: autocomplete="username webauthn": Besides displaying passkeys, this also suggests username autofill. autocomplete="current-password webauthn": Besides displaying passkeys, this further prompts for password autofill. HTML <label for="name">Username:</label> <input type="text" name="name" autocomplete="username webauthn"> <label for="password">Password:</label> <input type="password" name="password" autocomplete="current-password webauthn"> Mediation Property in WebAuthn API Get Call To retrieve available passkeys after having received the PublicKeyCredentialRequestOptions object, the navigator.credentials.get() function should be called (which serves both passkeys and passwords). The PublicKeyCredentialRequestOptions object needs to have the mediation parameter set to “conditional” to activate Conditional UI on the client. JavaScript const credential = await navigator.credentials.get({ publicKey: options.publicKeyCredentialRequestOptions, mediation: "conditional" }); Cancellation of Conditional UI Flow If there's no available passkey, or the user neglects the suggested passkeys and enters their email, the Conditional UI flow is stopped. This underscores the importance of always supporting the standard passkey / WebAuthn login via a modal as well. A critical point to emphasize here is the potential need to halt an ongoing Conditional UI request. Contrary to modal experiences, autofill dropdowns lack a cancellation button. As per WebAuthn's design, only a single active credential request should be in progress at any given moment. The WebAuthn standard suggests utilizing an AbortController to cancel a WebAuthn process, applicable to both regular and Conditional UI login processes (see WebAuthn's Docs for details). The Conditional UI login process gets activated as soon as a user lands on the page. The initial task should be to create a globally-scoped AbortController object. This will act as a signal for your client to terminate the autofill request, especially if the user decides to do the regular passkey login process. Reassure that the AbortController can be invoked by other functions and is reset if the Conditional UI process has to restart. Employ the signal property within the navigator.credentials.get() call, incorporating your AbortController signal as its value. 
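Putting these pieces together, a minimal sketch of such a cancellable Conditional UI flow could look like the following. The helper functions getLoginOptions() and sendAssertionToServer(), as well as the overall wiring, are assumptions for illustration and not code from this article:

```javascript
// Globally-scoped controller so other functions (e.g., the modal login) can abort the autofill request.
let conditionalAbortController = new AbortController();

async function startConditionalUI() {
  // Only proceed if the browser supports Conditional UI.
  if (!window.PublicKeyCredential || !PublicKeyCredential.isConditionalMediationAvailable) return;
  if (!(await PublicKeyCredential.isConditionalMediationAvailable())) return;

  // Use a fresh AbortController for every Conditional UI attempt.
  conditionalAbortController = new AbortController();

  try {
    const options = await getLoginOptions(); // assumed call to the WebAuthn server's mediation endpoint
    const credential = await navigator.credentials.get({
      publicKey: options.publicKeyCredentialRequestOptions,
      mediation: "conditional",
      signal: conditionalAbortController.signal,
    });
    await sendAssertionToServer(credential); // assumed call that verifies the assertion server-side
  } catch (err) {
    // An aborted request is expected when the user switches to the regular (modal) login.
    if (err.name !== "AbortError") console.error(err);
  }
}

// Triggered by the "Login via passkey" button: cancel the pending autofill request first.
function passkeyLogin() {
  conditionalAbortController.abort();
  // ... start the regular modal passkey / WebAuthn login here ...
}
```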
This signals to the passkey / WebAuthn function that the request must be halted if the signal gets aborted. Remember to set up a fresh AbortController each time you trigger Conditional UI. Using an already-aborted AbortController will lead to an instant cancellation of the passkey / WebAuthn function. The remaining steps align with a regular passkey login process; the sketch above illustrates the mentioned steps.

In the absence of Conditional UI support, direct users towards the regular passkey login process. Offering this path is important for users relying on hardware security keys (e.g., YubiKeys) or those compelled to use non-resident keys / non-discoverable credentials due to authenticator constraints.

Conditional UI in Native Apps

When you develop a native app, e.g., for iOS or Android, Conditional UI works as well. It doesn't matter if you implement it natively in Flutter, Kotlin, or Swift or if you decide to go with Chrome Custom Tabs (CCT), SFSafariViewController, or SFAuthenticationSession / ASWebAuthenticationSession. Both approaches support Conditional UI.

Conditional UI Examples in Different Devices/Browsers

To illustrate what Conditional UI looks like for the end user, we added several screenshots of a Conditional UI autofill menu using https://passkeys.eu:

Conditional UI in Windows 11 (22H2) + Chrome 118
Conditional UI in macOS Ventura (13.5.1) + Chrome 118
Conditional UI in macOS Ventura (13.5.1) + Safari 16.6
Conditional UI in Android 13 + Chrome 118
Conditional UI in iOS 17.1 + Safari 17.1

Conclusion

Passkeys, with their Conditional UI / passkey autofill capability, are the new way to authenticate online. As we transition to an era where passwords are more and more replaced by passkeys, the need for robust and user-friendly transition mechanisms is undeniable. We hope this article has helped you to understand how to correctly implement Conditional UI, handle the transition process, and which aspects to pay special attention to.
In the ever-evolving landscape of software development, the key to a successful project lies in the elegance of its code design. Striking the right balance between simplicity and flexibility is not just a lofty goal but a strategic imperative. This code design proposal charts a course toward a sophisticated yet adaptable architecture grounded in simplicity, evolution, and iterative refinement. The primary goal of this proposal is to champion simplicity as the cornerstone of our software development philosophy. Simplicity is not about sacrificing sophistication but about achieving it through a thoughtful and streamlined approach. Our focus is crafting a flexible design that adapts effortlessly to the evolving demands of any project. Starting with the bare essentials, we seek to create a codebase that grows organically, expanding its capabilities only when necessary. This proposal introduces guiding principles to shape our code design journey. From handling inputs and implementing interfaces to exploring design patterns and architectural evolution, these principles provide a compass for developers navigating the intricate landscape of software design. We advocate for a shift towards an evolutionary architecture, where simplicity is the foundation and complexity is embraced strategically. Now, let's delve into the fundamental principles that define this approach.

Embracing Guiding Principles

In the intricate landscape of code design, where decisions shape the foundation of software architecture, guiding principles serve as the North Star for developers. This section presents the key guiding principles that form the backbone of our core design philosophy. From the disciplined handling of inputs to the strategic exploration of design patterns, we delve into principles crafted to foster simplicity and flexibility. Each principle carries a unique insight, providing a compass for developers to navigate the complexity of software development. Join us on a journey through input handling, interface implementation, pattern exploration, evolutionary architecture, service usage, and the cultivation of a culture centered around refactoring. These principles form the blueprint for a robust, adaptable, and sophisticated codebase.

Interface implementation: A discerning approach is vital when it comes to interfaces. If only a single implementation of an interface exists, consider removing the interface. However, introducing interfaces becomes pivotal in scenarios involving multiple types or diverse circumstances. This nuanced decision-making ensures that interfaces serve a purpose aligned with the system's complexity.

Pattern exploration: Design patterns are powerful tools, but their judicious use is paramount. For instance, favor the Strategy pattern over chains of conditional statements (ifs) when confronted with diverse scenarios (a short, illustrative sketch appears below). This promotes a cleaner, more modular codebase, enhancing both readability and maintainability.

Evolutionary architecture: The evolution of architecture is a fundamental principle. Commence with a simple Model-View-Controller (MVC) structure for basic CRUD operations. As the system's complexity grows or specific requirements emerge, embark on a strategic refactoring journey toward the Ports and Adapters pattern. Acknowledge its roots in the four-layer architecture evolution, ensuring the system's adaptability and scalability.

Service usage: In handling services, simplicity is again underscored. Consider using a service for single operations involving a lone parameter and a database call. However, if the cyclomatic complexity exceeds ten, evaluating a transition toward use cases becomes imperative. This principle ensures that services remain streamlined and focused on specific, well-defined tasks.

Refactoring as a culture: Embrace refactoring as a fundamental aspect of the development process. Tests serve as the guiding force during refactoring, ensuring that code integrity is maintained. This proactive approach enhances the codebase's robustness and fosters a continuous improvement mindset among the development team.

As we conclude our exploration of guiding principles in code design, it's crucial to underscore their pivotal role in shaping a cohesive and adaptable architecture. The principles discussed, from input handling to refactoring as a culture, converge to form a comprehensive design philosophy. This philosophy aligns with our overarching design approach, emphasizing the balance between simplicity and flexibility. Our design approach advocates for an evolutionary path in tandem with these principles. Starting with the simplicity of the Model-View-Controller (MVC) design for basic CRUD operations, we progress to more sophisticated architectures like the Ports and Adapters pattern as complexity demands. The introduction of interfaces, strategic use of design patterns, and a nuanced approach to service usage echo the commitment to adaptability and refinement. This synthesis of guiding principles and design approach creates a robust foundation for our codebase and instills a proactive mindset. By fostering a culture of continuous refinement and leveraging testing as a guide, we ensure that our software design remains agile and resilient in the face of evolving requirements. The journey from simplicity to sophistication is not just a path; it's a dynamic, iterative process that guarantees the longevity and adaptability of our codebase.

Design Approach

In our pursuit of an optimal code design, the chosen approach is the roadmap that guides the development journey. This section outlines a systematic design approach, emphasizing adaptability and scalability as the cornerstones of our architectural decisions.

Initial design (MVC): Embarking on our code design journey, the initial step advocates for simplicity. The Model-View-Controller (MVC) design pattern, a stalwart in software architecture, is recommended for handling basic CRUD operations. This foundational approach provides a clear separation of concerns, offering a structured framework well-suited for straightforward scenarios.

Interface and pattern introduction: As our projects evolve and requirements diversify, the design approach calls for introducing interfaces and exploring relevant design patterns. This adaptive strategy enables us to respond effectively to changing needs. For instance, incorporating the Strategy pattern becomes pertinent when dealing with multiple scenarios, while the Ports and Adapters pattern proves invaluable for navigating the intricacies of more complex architectures.

Service and use cases: Simplicity remains a guiding principle even as our projects become complex. The design approach recommends the use of services for single, straightforward operations. However, a strategic shift towards use cases is encouraged when the cyclomatic complexity exceeds ten.
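To make the Strategy-over-conditionals guidance above concrete, here is a minimal, illustrative Java sketch; the DiscountPolicy interface and the customer types are hypothetical examples rather than part of the original proposal:

Java
import java.util.Map;

// Each pricing rule becomes its own strategy instead of a branch in an if/else chain.
interface DiscountPolicy {
    double apply(double amount);
}

class RegularDiscount implements DiscountPolicy {
    public double apply(double amount) { return amount; }        // no discount
}

class PremiumDiscount implements DiscountPolicy {
    public double apply(double amount) { return amount * 0.90; } // 10% off
}

class PartnerDiscount implements DiscountPolicy {
    public double apply(double amount) { return amount * 0.80; } // 20% off
}

class Checkout {
    // Strategies are looked up by customer type; adding a new type means adding a
    // new class and one map entry, not editing a growing conditional block.
    private static final Map<String, DiscountPolicy> POLICIES = Map.of(
            "REGULAR", new RegularDiscount(),
            "PREMIUM", new PremiumDiscount(),
            "PARTNER", new PartnerDiscount());

    double total(String customerType, double amount) {
        return POLICIES.getOrDefault(customerType, new RegularDiscount()).apply(amount);
    }
}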
This shift from services toward use cases ensures better organization, maintainability, and a focus on well-defined tasks.

Testing as a guide: A robust testing culture is integral to our design approach. Tests serve as a safety net during refactoring, ensuring that changes do not compromise existing functionality. By using tests as a guide, we guarantee the reliability and stability of our codebase, promoting confidence in the face of ongoing development.

Continuous refinement: Embracing an iterative development process is the crux of our design approach. Regular codebase assessments lead to continuous refinement in response to emerging needs. This may involve restructuring classes, introducing new patterns, or optimizing existing code. The iterative nature of this process ensures that our codebase remains agile, resilient, and aligned with the evolving landscape of software development.

Benefits

In code design, the pursuit of excellence extends beyond the immediate development phase. This section highlights the benefits embedded in our chosen design philosophy, emphasizing flexibility, maintainability, and the adoption of a Test-Driven Development (TDD) approach.

Flexibility: At the core of our design philosophy lies flexibility. The chosen architecture facilitates seamless adaptation to changing requirements. Our codebase becomes inherently flexible by adhering to principles that prioritize simplicity and gradually introducing complexity as needed. This adaptability is not just a feature but a strategic advantage, positioning our projects to navigate the ever-shifting landscape of technological advancements and evolving user needs.

Maintainability: The longevity of a software project hinges on its maintainability. Our design approach, rooted in simplicity and continuous code refinement, ensures that the codebase remains manageable and sustainable over time. By starting with a minimalistic approach and embracing a culture of refactoring, we pave the way for a codebase that is easy to understand, modify, and extend. This commitment to maintainability is an investment in the future, safeguarding against the pitfalls of technical debt and promoting the overall health of our software projects.

Test-Driven Development (TDD): A cornerstone of our design philosophy is the adoption of Test-Driven Development (TDD). Tests are more than just validators of functionality; they act as a safety net during refactoring. By writing tests before implementing functionality, developers gain a clear understanding of the expected behavior, fostering a robust and reliable codebase. TDD not only ensures the correctness of the code but also accelerates the development process by providing rapid feedback on changes. The synergy between TDD and our design principles reinforces the integrity of the codebase, making it resilient to the iterative nature of development and enhancing overall software quality.

These benefits are not mere byproducts but intentional outcomes of a design philosophy that prioritizes adaptability, sustainability, and a rigorous commitment to quality assurance.

Book References

In the ever-evolving landscape of software architecture, staying abreast of the latest insights and methodologies is paramount. Here, we spotlight three invaluable books that contribute to the intellectual depth of software architects and offer practical wisdom for navigating the complex challenges of building and evolving robust systems.
Building Evolutionary Architectures: Automated Software Governance (Authors: Neal Ford, Rebecca Parsons, Patrick Kua): In Building Evolutionary Architectures, the authors provide a comprehensive guide to navigating the dynamic terrain of software development. The book addresses the continual evolution of the software ecosystem, offering insights into tools, frameworks, techniques, and paradigms. Central to its premise is the exploration of core engineering practices that lay the groundwork for rethinking architecture over time. By emphasizing the protection of crucial architectural characteristics during evolution, the book offers practical strategies for architects to adapt and thrive in an ever-changing software landscape.

A Philosophy of Software Design (Author: John Ousterhout): John Ousterhout's A Philosophy of Software Design delves into the fundamental challenge of managing complexity in software design. The book tackles the philosophical aspects of the design process, presenting a collection of principles to guide developers in decomposing complex systems into manageable modules. Ousterhout introduces red flags that signal potential design problems and provides practical strategies for minimizing complexity in large software systems. By offering insights into the philosophical underpinnings of software design, this book equips developers with tools to write software more efficiently.

Just Enough Software Architecture (Author: George Fairbanks): In Just Enough Software Architecture, George Fairbanks takes a unique approach, presenting a practical guide tailored for software developers. The book introduces risk-driven architecting, emphasizing that the level of design effort should align with the risks at hand. It advocates for democratizing architecture, making it relevant to all developers by understanding how constraints guide outcomes. The book cultivates declarative knowledge, helping developers comprehend the reasons behind their actions. With an emphasis on engineering, Fairbanks provides practical advice, focusing on technical aspects and demonstrating how to make informed design tradeoffs.

Collectively, these books offer a rich repository of knowledge, combining theoretical foundations with practical insights. Whether you're a seasoned architect or a developer aspiring to understand the nuances of software design, these references serve as invaluable companions on your journey to mastering the art and science of software architecture.

Conclusion

In concluding our code design proposal, the underlying theme resonates with the harmonious balance between simplicity and flexibility. The intentional choice to commence with a minimalistic approach and selectively introduce complexity aligns with a broader vision of creating software that seamlessly adapts to the dynamic demands of the digital landscape while maintaining an inherent sophistication. This proposal champions the idea that software design is not a static endeavor but a dynamic, iterative process. Regular refactoring, deeply ingrained in the development culture, serves as a compass for navigating the evolving intricacies of code. This commitment to refinement is not merely a maintenance task; it is a proactive measure ensuring the longevity and adaptability of our projects. By cultivating a robust testing culture and viewing tests as validators and guides, we fortify our codebase's pillars of reliability and resilience.
This synergy between simplicity, flexibility, and a testing-driven ethos forms the bedrock of a design philosophy that transcends immediate project goals, paving the way for sustained success and enduring software excellence. In essence, this code design proposal is a testament to our commitment to craftsmanship in software development. As we embark on the implementation phase, let us carry forward the principles of adaptability, sophistication, and a relentless pursuit of codebase refinement. We forge a path towards a future-proof, resilient, and high-performing software ecosystem.
Our Excel spreadsheets hold a lot of valuable data in their dozens, hundreds, or even thousands of cells and rows. With that much clean, formatted digital data at our disposal, it's up to us to find programmatic methods for extracting and sharing that data among other important documents in our file ecosystem. Thankfully, Microsoft made that extremely easy to do when they switched their file representation standard over to OpenXML more than 15 years ago. This open-source, XML-based approach drastically improved the accessibility of all Office document contents by basing their structure on well-known technologies, namely Zip and XML, which most software developers intimately understand. Before that, Excel (XLS) files were stored in a binary file format known as BIFF (Binary Interchange File Format), and other proprietary binary formats were used to represent additional Office files like Word (DOC). This change to an open document standard made it possible for developers to build applications that could interact directly with Office documents in meaningful ways. To get information about the structure of a particular Excel workbook, for example, a developer could write code to access xl/workbook.xml in the XLSX file structure and get all the workbook metadata they need. Similarly, to get specific sheet data, they could access xl/worksheets/(sheetname).xml, knowing that each cell and value within that sheet will be represented by simple <c> and <v> elements with all their relevant data nested within. This is a bit of an oversimplification, but it serves to point out the ease of navigating a series of zipped XML file paths. Given the global popularity of Excel files, building (or simply expanding) applications to load, manipulate, and extract content from XLSX was a no-brainer. There are dozens of examples of modern applications that can seamlessly load and manipulate XLSX files, and many even provide the option to export files in XLSX format. When we set out to build our applications to interact with Excel documents, we have several options at our disposal. We can write our own code to sift through the OpenXML document structure, download a specialized programming library, or call a purpose-built web API to take care of a specific document interaction on our behalf. The first two options can help us keep our code localized, but they'll chew up a good amount of keyboard time and prove a little more costly to run. With the third option, we can offload our coding and processing overhead to an external service, reaping all the benefits with a fraction of the hassle. Perhaps most beneficially, we can use APIs to save time and rapidly get our application prototypes off the ground.

Demonstration

In the remainder of this article, I'll quickly demonstrate two free-to-use web APIs that allow us to retrieve content from specific cells in our XLSX spreadsheets in slightly different ways. Ready-to-run Java code is available below to make structuring our calls straightforward. Both APIs will return information about our target cell, including the cell path, cell text value, cell identifier, cell style index, and the formula (if any) used within that cell. With the information in our response object, we can subsequently ask our applications to share data between spreadsheets and other open-standard files for myriad purposes. Conveniently, both API requests can be authorized with the same free API key.
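As a quick aside before we get to the API calls: if you're curious what that direct OpenXML navigation looks like in practice, the JDK's built-in zip support is enough to peek inside a workbook. The sketch below is illustrative only; the file name and worksheet path are placeholders, and keep in mind that string-typed cells usually reference xl/sharedStrings.xml rather than storing their text inline.

Java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;

public class XlsxPeek {
    public static void main(String[] args) throws Exception {
        // "report.xlsx" is a placeholder file name.
        try (ZipFile xlsx = new ZipFile("report.xlsx")) {
            // Workbook-level metadata (sheet names, defined names, etc.).
            printEntry(xlsx, "xl/workbook.xml");
            // Cells (<c>) and values (<v>) of the first worksheet.
            printEntry(xlsx, "xl/worksheets/sheet1.xml");
        }
    }

    private static void printEntry(ZipFile xlsx, String path) throws Exception {
        ZipEntry entry = xlsx.getEntry(path);
        if (entry == null) {
            System.out.println(path + " not found in this workbook");
            return;
        }
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(xlsx.getInputStream(entry), StandardCharsets.UTF_8))) {
            // Print the raw XML; a real application would parse the <c> and <v> elements instead.
            reader.lines().forEach(System.out::println);
        }
    }
}

With that aside out of the way, let's get back to the two API solutions.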
It's also important to note that both APIs process file data in memory and release that data upon completion of the request. This makes both requests fast and extremely secure. The first of these two API solutions will locate the data we want using the row index and cell index in our request. The second solution will instead use the cell identifier (e.g., A1, B1, C1) for the same purpose. While the cell index and cell identifier are often regarded as interchangeable (both locate a specific cell in a specific location within an Excel worksheet), using the cell index can make it easier for our application to adapt dynamically to any changes within our document, while the cell identifier will always remain static. To use these APIs, we'll start by installing the SDK with Maven. We can first add a reference to the repository in pom.xml:

XML
<repositories>
    <repository>
        <id>jitpack.io</id>
        <url>https://jitpack.io</url>
    </repository>
</repositories>

We can then add a reference to the dependency in pom.xml:

XML
<dependencies>
    <dependency>
        <groupId>com.github.Cloudmersive</groupId>
        <artifactId>Cloudmersive.APIClient.Java</artifactId>
        <version>v4.25</version>
    </dependency>
</dependencies>

With installation out of the way, we can structure our request parameters and use ready-to-run Java code examples to make our API calls. To retrieve cell data using the row index and cell index, we can format our request parameters like the application/json example below:

JSON
{
  "InputFileBytes": "string",
  "InputFileUrl": "string",
  "WorksheetToQuery": {
    "Path": "string",
    "WorksheetName": "string"
  },
  "RowIndex": 0,
  "CellIndex": 0
}

And we can use the below code to call the API once our parameters are set:

Java
// Import classes:
//import com.cloudmersive.client.invoker.ApiClient;
//import com.cloudmersive.client.invoker.ApiException;
//import com.cloudmersive.client.invoker.Configuration;
//import com.cloudmersive.client.invoker.auth.*;
//import com.cloudmersive.client.EditDocumentApi;

ApiClient defaultClient = Configuration.getDefaultApiClient();

// Configure API key authorization: Apikey
ApiKeyAuth Apikey = (ApiKeyAuth) defaultClient.getAuthentication("Apikey");
Apikey.setApiKey("YOUR API KEY");
// Uncomment the following line to set a prefix for the API key, e.g. "Token" (defaults to null)
//Apikey.setApiKeyPrefix("Token");

EditDocumentApi apiInstance = new EditDocumentApi();
GetXlsxCellRequest input = new GetXlsxCellRequest(); // GetXlsxCellRequest | Document input request
try {
    GetXlsxCellResponse result = apiInstance.editDocumentXlsxGetCellByIndex(input);
    System.out.println(result);
} catch (ApiException e) {
    System.err.println("Exception when calling EditDocumentApi#editDocumentXlsxGetCellByIndex");
    e.printStackTrace();
}

To retrieve cell data using the cell identifier, we can format our request parameters like the application/json example below:

JSON
{
  "InputFileBytes": "string",
  "InputFileUrl": "string",
  "WorksheetToQuery": {
    "Path": "string",
    "WorksheetName": "string"
  },
  "CellIdentifier": "string"
}

We can use the final code example below to structure our API call once our parameters are set:

Java
// Import classes:
//import com.cloudmersive.client.invoker.ApiClient;
//import com.cloudmersive.client.invoker.ApiException;
//import com.cloudmersive.client.invoker.Configuration;
//import com.cloudmersive.client.invoker.auth.*;
//import com.cloudmersive.client.EditDocumentApi;

ApiClient defaultClient = Configuration.getDefaultApiClient();

// Configure API key authorization: Apikey
ApiKeyAuth Apikey = (ApiKeyAuth) defaultClient.getAuthentication("Apikey");
Apikey.setApiKey("YOUR API KEY");
// Uncomment the following line to set a prefix for the API key, e.g. "Token" (defaults to null)
//Apikey.setApiKeyPrefix("Token");

EditDocumentApi apiInstance = new EditDocumentApi();
GetXlsxCellByIdentifierRequest input = new GetXlsxCellByIdentifierRequest(); // GetXlsxCellByIdentifierRequest | Document input request
try {
    GetXlsxCellByIdentifierResponse result = apiInstance.editDocumentXlsxGetCellByIdentifier(input);
    System.out.println(result);
} catch (ApiException e) {
    System.err.println("Exception when calling EditDocumentApi#editDocumentXlsxGetCellByIdentifier");
    e.printStackTrace();
}

That's all the code we'll need. With utility APIs at our disposal, we'll have our projects up and running in no time.
Hello DZone Community! Recently, you might have seen our announcement about the updates to the Core program. We've received a lot of great feedback about the new program, and we're very excited to continue growing and expanding it to more members! But that was just the beginning. We've been working hard on improvements across the entire DZone community, and today, we are thrilled to announce some big improvements to your DZone profiles! There's a lot to unpack with these new profiles, but the overall gist of it is that it gives them a fresh new look (ooh shiny!!) and adds some new features for you. Among other things, we've added:

A section for your education, training, and credentials earned
Sections for any Trend Reports and Refcards you've contributed to
A section for any DZone events you've been a part of

While all members will receive the above updates to their profiles, we've built some additional features for our Core members. They truly go above and beyond for the DZone community by being highly engaged and regularly contributing expert content to the site. These additional changes will help continue to elevate them as thought leaders both within the DZone community and across the industry at large. Core member profiles will now have:

Optimized profile
A place to add open-source projects they're working on or support
A section recognizing when they're highlighted as a Featured Expert on DZone
A new, exclusive banner showcasing their Core membership

We could not be more excited to roll out these new profiles to you all. Every single one of our contributors is essential to what we do at DZone, and these new profiles will help highlight to our community and the rest of our audience just how knowledgeable and important you are to DZone. We literally would not be here without you! If you haven't already and would like to begin your contributor journey, you can start by creating your own article! Our team of editors is here to help along the way. You can reach out to editors@dzone.com with any of your content questions. Please spend some time poking around your new profile, and let us know what you think. We're always open to feedback and new ideas! Drop us a line at community@dzone.com with your thoughts. We are so incredibly grateful for all you do for DZone!

Sincerely,
The DZone Team
The world of artificial intelligence is seeing rapid advancements, with language models at the forefront of this technological renaissance. These models have revolutionized the way we interact with machines, turning sci-fi dreams into everyday reality. As we step into an era where conversational AI becomes increasingly sophisticated, a new contender has emerged in the AI arena: Llama 2. Developed by Meta AI, Llama 2 is setting the stage for the next wave of innovation in generative AI. Let's dive into the details of this groundbreaking model.

What Is LLaMA?

LLaMA (Large Language Model Meta AI) is a collection of foundation language models ranging from 7B to 65B parameters, which are smaller in size than other state-of-the-art models like GPT-3 (175B parameters) and PaLM (540B parameters). Despite their smaller size, LLaMA models deliver exceptional performance on a variety of benchmarks, including reasoning, coding, proficiency, and knowledge tests. LLaMA models are also more efficient in terms of computational power and resources, which makes them more accessible to researchers and developers who do not have access to large amounts of infrastructure. Let's take a step back and talk a little bit about the background story of LLaMA. Amid all the hype around AI tools, Meta released its own model in February 2023 and named it LLaMA.

Image credits: Mark's post on Facebook

Interestingly, unlike other AI giants, Meta initially wanted to keep the model private and share it only with selected researchers to optimize it further. Yet somehow, the model leaked to the public, and the AI community started experimenting with it, optimizing it so well that within a matter of weeks, they managed to get LLaMA running on a phone. People were training LLaMA variants like Vicuna that rival Google's Bard for just a few hundred dollars.

Image credits: lmsys.org

What Is Llama 2 and How Does It Work?

Llama 2 is a state-of-the-art language model developed by Meta. It is the successor to the original LLaMA, offering enhancements in terms of scale, efficiency, and performance. Llama 2 models range from 7B to 70B parameters, catering to diverse computing capabilities and applications. Tailored for chatbot integration, Llama 2 shines in dialogue use cases, offering nuanced and coherent responses that push the boundaries of what conversational AI can achieve.

Image credits: Meta

Llama 2 is pre-trained using publicly available online data. This involves exposing the model to a large corpus of text data like books, articles, and other sources of written content. The goal of this pre-training is to help the model learn general language patterns and acquire a broad understanding of language structure. Training also involves supervised fine-tuning and reinforcement learning from human feedback (RLHF). One component of RLHF is rejection sampling, which involves selecting a response from the model and either accepting or rejecting it based on human feedback. Another component of RLHF is proximal policy optimization (PPO), which involves updating the model's policy directly based on human feedback. Finally, iterative refinement ensures the model reaches the desired level of performance through supervised iterations and corrections.

Llama 2 Benefits

Here are some notable benefits of Llama 2 — further demonstrating why it's a good choice for organizations building generative AI-powered applications.

Open: The model and its weights are available for download under a community license.
This allows businesses to integrate the model with their internal data and fine-tune it for specific use cases while preserving privacy.

Free: Businesses can use the model to build their own chatbots and other use cases without high initial costs or having to pay licensing fees to Meta — making it an economical option for companies looking to incorporate AI without a significant financial burden.

Versatile: The model offers a range of sizes to fit different use cases and platforms, indicating flexibility and adaptability to various requirements.

Safety: Llama 2 has been tested both internally and externally to identify issues, including toxicity and bias, which are important considerations in AI deployment. The Responsible Use Guide that comes with it provides developers with best practices for safe and responsible AI development and evaluation.

Llama 2 Training and Dataset

Llama 2 is grounded in the transformer architecture, which is renowned for its effectiveness in processing sequential data. It incorporates several innovative elements, including RMSNorm pre-normalization, SwiGLU activation, and rotary embeddings. These contribute to its ability to maintain context over longer stretches of conversation and offer more precise attention to relevant details in dialogue. It is pre-trained on a vast corpus of data, ensuring a broad understanding of language nuances before being fine-tuned through supervised learning and reinforcement learning with human feedback.

Image credits: Meta

Llama 2 has been trained with a reinforcement learning approach to generate non-toxic, family-friendly output for users. The aim is to align the model with human choices and preferences. Llama 2 has also been trained on a massive dataset. The Llama 2 model suite, with its variants of 7B, 13B, and 70B parameters, offers a range of capabilities suited to different needs and computational resources. These sizes represent the number of parameters in each model, with parameters being the aspects of the model that are learned from the training data. In the context of language models, more parameters typically mean a greater ability to understand and generate human-like text because the model has a larger capacity to learn from a wider variety of data.

Advantages and Use Cases of Llama 2

One of the key advantages of Llama 2 is its open-source nature, which fosters a collaborative environment for developers and researchers worldwide. Moreover, its flexible architecture allows for customization, making it a versatile tool for a range of applications. Llama 2 also touts a high safety standard, having undergone rigorous testing against adversarial prompts to minimize harmful outputs. Its training methodology — focusing on up-sampling factual sources — is a stride towards reducing hallucinations, where AI generates misleading information. Llama 2 has a good grip on the output it generates and is much more accurate and contextual than other similar models on the market.

Image credits: Meta

Llama 2's capabilities extend beyond chatbot applications. It can be fine-tuned for specific tasks, including summarization, translation, and content generation, making it an invaluable asset across sectors. In coding, 'Code Llama' is fine-tuned to assist with programming tasks, potentially revolutionizing how developers write and review code.

Llama 2 vs. OpenAI's ChatGPT

While OpenAI's ChatGPT has captured more public attention, Llama 2 brings formidable competition.
Llama 2's models are specifically optimized for dialogue, potentially giving them an edge in conversational contexts. Additionally, Llama 2's open-source license and customizable nature offer an alternative for those seeking to develop on a platform that supports modification and redistribution. While ChatGPT has the advantage of being part of the larger GPT-3.5 and GPT-4 ecosystems known for their impressive generative capabilities, Llama 2's transparency in model training may appeal to those in the academic and research communities seeking to push the limits of what AI can learn and create. In my opinion, Llama 2 represents not just a step forward in AI but a leap into a future where the collaboration between human and machine intelligence becomes more integrated and seamless. Its introduction is a testament to the dynamic nature of the AI field and its unwavering push toward innovation, safety, and the democratization of technology. As we continue to explore the vast potential of generative AI, Llama 2 is a beacon of what's possible and a preview of the exciting advancements still to come.

SingleStoreDB With Llama 2

Integrating Llama 2 with SingleStoreDB offers a synergistic blend of advanced AI capabilities and robust data management. SingleStoreDB's prowess in handling large-scale datasets complements Llama 2's varied model sizes, ranging from 7B to 70B parameters, ensuring efficient data access and processing. This combination enhances scalability, making it ideal for dynamic AI applications. The setup promises improved real-time AI performance, with SingleStoreDB's rapid querying complementing Llama 2's need for quick data retrieval and analysis. This integration paves the way for innovative AI solutions, especially in scenarios requiring quick decision-making and sophisticated data interpretation.

Conclusion

As the AI landscape continues to evolve at an unprecedented pace, the launch of Llama 2 and Meta's partnership with Microsoft represent a significant turning point for the industry. This strategic move marks a transition toward increased transparency and collaborative development, paving the way for more accessible and advanced AI solutions. Llama 2 stands out for its balance between performance and accessibility. It is designed to be as safe as or safer than other models on the market, a critical factor given the potential impact of AI outputs.