Next.js, an open-source JavaScript framework, was created by Vercel specifically for leveraging React to build user-friendly web applications and static websites. It offers an integrated environment that makes server-side rendering simpler. Next.js 14 was released on October 26, 2023, and during Next.js Conf, Guillermo Rauch, Vercel's CEO, talked about the new features. One of them, termed “Partial Prerendering,” was introduced in preview to provide both a fast initial load and dynamic visuals without sacrificing the developer experience. Next.js 14 also brings notable improvements to Turbopack, the engine responsible for efficient compilation, which is now faster. Furthermore, the stabilization of Server Actions and the preview of Partial Prerendering result in a better development experience and faster websites.

Next.js 14 Features

Partial Prerendering and SSR

Pre-rendering is the practice of creating a web page's HTML before a user requests it, either at build or deployment time. Next.js offers two pre-rendering options for optimal speed: Static Generation and Server-Side Rendering (SSR). Static Generation works with data that is already available at build time, which gives it better performance than server-side rendering, where data fetching and rendering happen at request time. Still, server-side rendering is preferable to client-rendered apps, and Next.js gives us server rendering by default. Another significant feature, Partial Prerendering, was announced for Next.js 14. It differs from classic pre-rendering in that it creates parts of the page dynamically at runtime in response to user interactions. By serving the static parts of the page as prebuilt HTML and updating just the dynamic parts when needed, it is intended to deliver both fast initial page loads and dynamic visuals.
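To make the idea concrete, here is a toy model in plain JavaScript (the names and HTML are invented for illustration; this is not how Next.js is implemented internally) of how a partially prerendered response can combine a cached static shell with dynamic "holes" filled in at request time:

```javascript
// Toy model of partial prerendering: the static shell is built once
// (as at build time), while marked "holes" are filled per request.

// Built ahead of time, like static generation.
const staticShell = "<header>Shop</header><!--hole:cart--><footer>© Shop</footer>";

// Resolved at request time, like the dynamic parts of a page.
function renderDynamic(name, request) {
  if (name === "cart") return `<span>${request.cartItems} items</span>`;
  return "";
}

// Serve a request: reuse the shell, re-render only the holes.
function render(shell, request) {
  return shell.replace(/<!--hole:(\w+)-->/g, (_, name) => renderDynamic(name, request));
}

console.log(render(staticShell, { cartItems: 3 }));
// The static parts are identical for every user; only the hole changes.
```

The point of the sketch is that the expensive, user-independent markup is computed once, while only the small dynamic fragments cost anything per request.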
Since the generated HTML is static until the next build, pre-rendering works well for pages with static or rarely changing content. Partial prerendering can help with faster initial page loads when a choice between static and dynamic rendering would otherwise be required. Although it is still experimental and optional, it is an important step toward integrating dynamic rendering with static generation.

Turbopack

The primary objective of Next.js 14 is to enhance speed and performance. The team focused on their Rust-based compiler, Turbopack, and made a remarkable improvement: local server startup is now 53.3% faster, and code updates with Fast Refresh are up to 94.7% quicker. We should note that Turbopack is not yet fully finalized.

Server Actions

In addition to providing stability to Server Actions, Next.js 14 introduces mechanisms to improve the performance of web applications. This integration facilitates smooth interaction between the client and server, empowering developers to incorporate essential functionality such as error handling, caching, revalidating, and redirection, all within the context of the App Router model. Furthermore, for those using TypeScript, this update ensures better type safety between the client and server components, contributing to a more robust and maintainable codebase. The FormData Web API offers a familiar paradigm for developers accustomed to server-centric frameworks.

Image Optimization and the Image Component

Next.js 14 introduces an enhanced and flexible image optimization feature, streamlining the process of optimizing images automatically. Here's a brief overview of how Next.js facilitates image optimization:

Prioritizing Image Loading

Next.js intelligently prioritizes image loading. Images within the viewport are loaded first, providing a faster initial page load, and images below the fold are loaded asynchronously as the user scrolls down.
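The prioritization idea can be sketched in a few lines of plain JavaScript (the page layout and function names are invented for illustration, not Next.js APIs): images whose position falls within the initial viewport load eagerly, and the rest are deferred.

```javascript
// Toy model of priority-based image loading: images inside the initial
// viewport are fetched eagerly; the rest wait until the user scrolls.
const viewportHeight = 800;
const images = [
  { src: "/hero.webp", top: 0 },
  { src: "/feature.webp", top: 600 },
  { src: "/footer-banner.webp", top: 2400 },
];

function loadingPlan(images, viewportHeight) {
  return images.map(img => ({
    src: img.src,
    // eager: visible on first paint; lazy: deferred until scrolled near
    loading: img.top < viewportHeight ? "eager" : "lazy",
  }));
}

console.log(loadingPlan(images, viewportHeight));
// hero and feature load with the initial page; footer-banner is deferred
```

This mirrors what the Image component decides for you automatically based on layout position.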
Dynamic Sizing and Format Selection

The Image component in Next.js dynamically selects the right size and format based on the user's bandwidth and device resources. This ensures that users receive appropriately sized and optimized images.

Responsive Image Resizing

Next.js simplifies responsive image handling by automatically resizing images as needed. This responsive design approach ensures that images adapt to various screen sizes, further enhancing the overall user experience.

Support for Next-Gen Formats (WebP)

Image optimization in Next.js extends to next-generation image formats like WebP. This format, known for its superior compression and quality, is automatically used by the Image component when applicable.

Preventing Cumulative Layout Shifts

To enhance visual stability and prevent layout shifts, Next.js incorporates placeholders for images. These placeholders serve as temporary elements until the actual images are fully loaded, avoiding disruptions in the layout. Additionally, Next.js 14 enhances performance by efficiently handling font downloads; the framework optimizes the download process for fonts from sources such as next/font/google.

Automatic Code Splitting

Automatic code splitting in Next.js 14 is a powerful technique that significantly contributes to optimizing web performance. Code splitting breaks JS bundles into smaller, more manageable chunks, so users only download what's necessary, leading to more efficient use of bandwidth. With less JS to process, performance on slower devices sees a notable improvement.

Route-based splitting: By default, Next.js splits JavaScript into manageable chunks for each route. As users interact with different UI elements, the associated code chunks are sent, reducing the amount of code to be parsed and compiled at once.

Component-based splitting: Developers can optimize even further at the component level.
Large components can be split into separate chunks, allowing non-critical components, or those rendered only on specific UI interactions, to be lazily loaded as needed. These approaches collectively contribute to a more efficient, faster, and user-friendly web application experience, aligning with the continuous efforts to enhance performance in Next.js 14.

Conclusion

In conclusion, Next.js 14 introduces several notable features that contribute to speed and performance for web applications. The introduction of “Partial Prerendering” stands out among them; it aims to deliver both quick initial loads and dynamic visuals without compromising the developer experience. The Rust-based Turbopack compiler brings remarkable improvements, making local server startup 53.3% faster and code updates with Fast Refresh up to 94.7% quicker. Server Actions have been stabilized in this version, presenting an integration that allows developers to define server-side functions directly within React components. Image optimization is another highlight of Next.js 14, offering enhanced features like prioritized loading, dynamic sizing, and format selection. Automatic code splitting emerges as a powerful technique for optimizing web performance by breaking JS bundles into smaller, manageable chunks. Developers can leverage these features to create faster, more efficient, and user-friendly web applications, solidifying Next.js as a leading framework in the web development landscape.
Taking ERC20 tokens cross-chain is broken. Today, bridges are often slow and expensive, have security vulnerabilities (as evidenced most recently by the Multichain hack), and fragment liquidity (and communities) when each bridge creates its own version of the bridged token. Bridging in its current state worked as a necessary stopgap solution, but for web3 to move forward, we need a better way. L2s are booming. Independent chains are still being built. The future is decidedly cross-chain. Recently I set out to find exactly what solutions are being built to fix the problems related to taking tokens cross-chain. In this article, I'll look at two of the better proposals, OFT and xERC20, compare their merits, and shed light on why xERC20 emerged as my top pick. I'll also walk you through a few code samples to show how it works. First, let's look at the problem: why the current way we move tokens across chains isn't the right long-term solution.

Web3 Bridges Explained

What Is a Web3 Bridge?

A bridge is a “gateway” between blockchains that allows you to transfer tokens and data between those chains. Bridges don't actually move the token between chains; rather, they use various mechanisms to burn/wrap/mint comparable tokens on each chain. You've probably used a bridge to transfer ERC20 tokens (such as WETH, DAI, USDT, etc.) between Ethereum, Polygon, BNB, and others. Protocols such as Across, Connext, and Stargate are some of the largest bridges by volume. But as we've said, the current way we create tokens that can move across chains is a highly flawed process.

Why the Current Method Is Flawed

When you decide to create a cross-chain token, you currently have three choices for your strategy:

1. Build your own bridges
2. Use the target chain's canonical bridge
3. Use third-party bridges

Unfortunately, all three choices are fundamentally flawed.
Building your own bridges is perhaps an idealistic choice, but it's easy to see why it can't really work. Not only would building a custom bridge on each supported chain be technically challenging, incredibly expensive, and time-consuming, but you would be taking on all the security risks yourself. Bridges are already rife with potential vulnerabilities. According to Chainalysis, attacks on bridges accounted for 69% of total crypto funds stolen in 2022. Just as you probably wouldn't build your own custom wallet, you probably don't want to build your own custom bridges. Using the chain's canonical bridge is a choice some teams make. (By canonical, we mean the official bridge created by the chain to connect with Ethereum.) The upside to this choice is that you have a reduced security risk and can more easily identify the official version of the token created on the chain. The downside is this: canonical bridges are slow, expensive, and only work with their specific chain/L2. This means that if a token needs to be moved between L2s, it will first have to go through Ethereum before reaching its final destination, a very expensive and time-consuming process. Bridging tokens back from an L2 to Ethereum can take days and involve multiple gas-fee payments. For transferring an ERC20 token, this would include two extra transactions: one on the source chain to convert ERC20 to the native token and another on the destination chain. It's a terrible and often unusable experience. Using a specific third-party bridge also has benefits but, ultimately, is not a great solution. While bridges can be fast and cheap, they each use unique bridging mechanisms, which are often centralized. This means you suddenly have multiple and varied security risks. And even worse, each bridge creates its own representation of your token. If users move your token via both the canonical bridge and a third-party bridge, you'll end up with different versions. You can quickly end up with half a dozen or more versions of your token.
(You've probably seen the result of this with USDC—a.USDC, USDC.e, etc.) If the adopted version of a token on a chain is the one minted by the canonical bridge, you need high liquidity on third-party bridges to make sure users receive the version of the asset they're interested in—the one they can use in that ecosystem. With most bridges, when tokens are transferred, the tokens on the source chain are locked in the bridge contract, and the bridge mints an equivalent amount of tokens from their representation contract on the destination chain. This not only locks you (as the token creator) into the bridge's model forever but also creates a significant security risk: a bug in the bridge's contracts could give hackers the ability to mint a high (or even unlimited) number of unbacked tokens. Simply put, third-party bridges have a lot of issues.

The Current Best Solution

A common solution is for a team to go with the canonical bridge and then rely on liquidity providers to provide a layer on top that absorbs the risk and enables fast transfers. This works pretty well for token teams that can provide high liquidity. But this approach ultimately fails for long-tail tokens. Most projects just don't have the liquidity and usage of mainstream tokens such as ETH. So what do teams do? They either spend a lot of money to incentivize liquidity (which, if you are a long-tail token, you probably don't have), or they make the most common choice: they give up and go with a proprietary standard such as Multichain. Teams know they shouldn't, but they don't have much choice. And, again, we know how that often turns out. There aren't many good choices for a team that wants to take their token cross-chain. The options are high-risk, expensive, fragmented, and slow. But enough about the problems. What we really need is a solution that offers fungibility, security, and minimal liquidity requirements. Let's look at some possibilities.
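Before moving on, the lock-and-mint mechanism described above can be sketched in a few lines of JavaScript (a deliberately simplified simulation, not real bridge code), which also shows where the unbacked-mint risk comes from:

```javascript
// Toy model of lock-and-mint bridging: source-chain tokens are locked
// in the bridge contract, and the bridge mints an equal number of
// representation tokens on the destination chain.
const bridge = { locked: 0 };   // source-chain bridge contract
let representationSupply = 0;   // destination-chain representation token

function lockAndMint(amount) {
  bridge.locked += amount;        // tokens locked on the source chain
  representationSupply += amount; // representation minted 1:1
}

function burnAndUnlock(amount) {
  if (representationSupply < amount) throw new Error("nothing to burn");
  representationSupply -= amount; // representation burned
  bridge.locked -= amount;        // originals released
}

lockAndMint(500);
burnAndUnlock(200);
// The representation stays fully backed only while every mint goes
// through this path; a bug that mints without locking breaks the 1:1.
console.log(bridge.locked === representationSupply); // true
```

The invariant `locked === representationSupply` is exactly what a bridge exploit violates: minting representations without locking backing leaves holders with unbacked tokens.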
Emerging Solutions for Creating Cross-Chain Tokens

I looked at two newer solutions that I hoped would mitigate these problems: the xERC20 token standard suggested by Connext (as a public EIP) and the LayerZero OFT. Let's look at each and compare. The Omnichain Fungible Token (OFT) standard is a token standard developed and used by the LayerZero team. It allows the token issuer to transfer tokens quickly and cheaply across chains that have been integrated with LayerZero (it works only with LayerZero smart contracts). OFT uses contract-to-contract communication to transfer OFTs directly between chains: the protocol burns an OFT token on the first chain and sends a receipt to the second chain, which then mints an equivalent number. This works well because it maintains a static supply of the token (and only one version of the token) across chains. From their documentation: “It's like the ERC-20 standard, only instead of allowing composability on different apps, it allows composability on 14 blockchains.” The xERC20 standard suggested by Connext is an extension to ERC20 that defines a standard for bridging tokens: “The approach allows tokens to be minted and burned across chains by multiple bridges (canonical or 3rd-party) while giving token issuers the ability to granularly control their security preferences for each bridge using rate limits.” xERC20 ensures that all whitelisted bridges can lock, burn, and mint your token and, most importantly, that they all use the same version of your token (the canonical version). xERC20 also defines a Lockbox that wraps tokens so that existing tokens that don't implement the standard can still adhere to its requirements (similar to how WETH works). xERC20 effectively creates tokens that can be transferred with no slippage and without multiple non-fungible representations. An added benefit of xERC20 is that it also gives you sovereignty over your token.
With the standard, you can define not only which bridges can work with your token but also how many tokens they can burn/mint. We'll look at this in more detail in a minute. How do these two solutions compare?

Comparing xERC20 to OFT

First, both of these solutions are an improvement over the current state of moving tokens across chains. I was excited to see senior teams working on better solutions, and I can't wait to see where these ideas go next. I applaud any team working to solve these incredibly difficult web3 challenges. Both of these solutions solve some of the main issues—they create a way to move tokens across chains with minimized security risk, less dependence on liquidity, and a single version of the token. And they do all this quickly and cheaply. That's a huge win. The first main difference between xERC20 and OFT, and the main reason I prefer xERC20, is that xERC20 is an open standard. It doesn't lock you in with one specific bridge, so you avoid any risk of being trapped forever if something goes wrong with it. The OFT standard is specific to LayerZero. It works really well! But it only works with LayerZero. And unlike xERC20, where the token creator maintains ownership of their token contracts on each chain, OFTs require the token issuer to relinquish ownership of the token contracts (which are deployed and owned by LayerZero). In the end, tokens that implement OFT are still tokens issued by LayerZero—effectively a third-party bridge. So they inherit many of the problems we outlined around third-party bridges above. The second main difference, and one I find very interesting, is how xERC20 gives control of the bridging back to the team behind the token. It allows teams to decide which bridges can mint/burn the token, and it lets teams set rate limits (by bridge) on the number of tokens that can be minted/burned. Token teams can now specify their risk tolerance on a bridge-by-bridge basis.
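Here is a sketch of what per-bridge minting limits look like in practice (plain JavaScript in the spirit of the standard; the function names and data layout are invented for illustration, not the xERC20 ABI):

```javascript
// Sketch of per-bridge minting limits: the token issuer caps how much
// each whitelisted bridge may mint. Names here are illustrative only.
const bridges = new Map();

// The issuer whitelists a bridge and sets its maximum minting limit.
function setMintingLimit(bridge, maxLimit) {
  bridges.set(bridge, { maxLimit, minted: 0 });
}

// A bridge mints through the token contract; the limit caps the damage
// a compromised bridge could do.
function mintViaBridge(bridge, amount) {
  const params = bridges.get(bridge);
  if (!params) throw new Error("bridge not whitelisted");
  if (params.minted + amount > params.maxLimit) {
    throw new Error("minting limit exceeded");
  }
  params.minted += amount;
  return params.maxLimit - params.minted; // remaining allowance
}

setMintingLimit("bridgeA", 1000);
console.log(mintViaBridge("bridgeA", 400)); // 600
```

A real implementation would also track burning limits and replenish allowances over time; the point is that each bridge's risk is bounded by a number the issuer controls.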
Debating the pros and cons of this feature could be an entire article in itself! Giving control to the token issuer might rub some the wrong way. Control, maybe, should stay with the bridge. But maybe giving control to the token team creates better incentives. If the token team can pull the ability to mint/burn the token, then maybe the bridge now needs to work a little harder to provide a minimum level of service and security. Maybe this drives healthy competition between the bridges. Maybe it makes better bridges. It's something to think about.

| | xERC20 | OFT |
|---|---|---|
| Cross-bridge interoperability? | Yes. Since this would be an ERC standard, all bridges could become compatible with it. | No. Works only with LayerZero smart contracts and the 14 chains that LayerZero supports. |
| Control of tokens issued? | Yes. Allows setting bridge-specific maximum limits on the number of tokens issued in a period of time. | No. |

How To Implement xERC20

It's pretty easy to make your ERC20 token compatible with xERC20. The first and easiest option is to upgrade (or create) your contracts to implement the relevant functions (burn/mint/etc.). The steps are:

1. Inherit from ERC20:

```solidity
contract XERC20 is ERC20 {
    // ... code goes here ...
}
```

2. Create and maintain a mapping for limits associated with each bridge:

```solidity
// Bridge can be a struct representing bridges with any additional
// configs that the token issuer may want to maintain
mapping(address => Bridge) public bridges;
```

3. Implement the minting and burning logic:

```solidity
function _mint(address account, uint256 amount) internal virtual {
    require(account != address(0), "ERC20: mint to the zero address");
    _beforeTokenTransfer(address(0), account, amount);
    _totalSupply = _totalSupply.add(amount);
    _balances[account] = _balances[account].add(amount);
    emit Transfer(address(0), account, amount);
}

function _burn(address account, uint256 amount) internal virtual {
    require(account != address(0), "ERC20: burn from the zero address");
    _beforeTokenTransfer(account, address(0), amount);
    _balances[account] = _balances[account].sub(amount, "ERC20: burn amount exceeds balance");
    _totalSupply = _totalSupply.sub(amount);
    emit Transfer(account, address(0), amount);
}
```

4. Implement the bridge limits (these will update the mapping introduced above):

```solidity
function setLimits(address _bridge, uint256 _mintingLimit, uint256 _burningLimit) external onlyOwner {
    _changeMinterLimit(_mintingLimit, _bridge);
    _changeBurnerLimit(_burningLimit, _bridge);
}

function mintingMaxLimitOf(address _bridge) public view returns (uint256 _limit) {
    _limit = bridges[_bridge].minterParams.maxLimit;
}

function burningMaxLimitOf(address _bridge) public view returns (uint256 _limit) {
    _limit = bridges[_bridge].burnerParams.maxLimit;
}
```

If your token contract is not upgradeable (or you don't want to upgrade), you can use the Lockbox contract to wrap your original token into a new token compatible with the xERC20 standard. Since the wrapping logic is reliable and well-tested (it's based on the ETH-WETH pair), this option is considered trusted.

Using xERC20 With Connext

xERC20 is an open standard, and anyone can create an implementation. Connext already offers support for the standard.
(Remember that the xERC20 standard is permissionless and works with canonical bridges, but specific bridges must be approved by the issuer before transfers.) At a high level, you can use Connext by following these steps:

1. Deploy an implementation of the token on a “home” chain.
2. Deploy a mintable and burnable representation of the token on the destination chains (the bridge must have permission to burn and mint this representation).
3. Add the token representation addresses to the mainnet allowlist config file and the Chaindata mappings by submitting a PR.
4. Once the PRs are approved by the Connext team, the tokens can be transferred across chains using the bridge.

Status of xERC20 and OFT

It's early in the life of the xERC20 standard, but progress is being made quickly. The standard has been audited and is already live with a few projects. The EIP to adopt the standard has been created, and implementation has begun. Alchemix recently announced support for the xERC20 standard. And Defi Wonderland has published a suggested implementation on their GitHub. This implementation has an interface for the xERC20 contract with eight core functions that the token issuer must implement. These are functions related to setting the Lockbox contract (setLockbox), issuance limits for bridges (setLimits, mintingMaxLimitOf, burningMaxLimitOf, etc.), and the core mint and burn functions. OFT has been in production for some time and is stable (since it's a closed standard, it doesn't need to go through any adoption or voting process). V1 is available and supports EVM chains only. V2 supports non-EVM chains (such as Aptos).

Conclusion

The Multichain hack woke up a lot of people to the fundamental problems with the current state of bridging. I'm really excited to see teams working on the next evolution of cross-chain tokens. Fixing these issues is obviously necessary for wider adoption of web3.
For too long, we’ve accepted the current faulty system—we need the kinds of solutions offered by OFT and xERC20 now. I recommend you check them both out, participate in discussions, and support these innovative teams. Have a really great day!
In this article, we will learn how to add user authentication with OAuth providers in your Next.js app. To do so, we'll be using NextAuth.js, a user authentication solution that simplifies the whole process and has built-in support for many popular sign-in services.

What's OAuth?

OAuth (Open Authorization) is an open standard for access delegation, commonly used as a way for Internet users to grant websites or applications access to their information on other websites without giving them their passwords. When you use OAuth authentication with providers (like Google, Facebook, Twitter, etc.), you are using a protocol that allows you to authorize applications to use your personal information from the provider without needing to expose your password. This is achieved by sharing tokens instead of credentials, and those tokens can be restricted to specific resources or services. The main advantage is that you don't have to deal with different authentication methods offered by various web services. OAuth is widely accepted and used by many popular applications on the internet.

Set Up NextAuth.js

1. Install NextAuth.js

```shell
npm install next-auth
```

2. Add NEXTAUTH_SECRET to Your .env.local File

After a successful sign-in, NextAuth.js generates a cookie with a JWT token used for user identification, and it's encrypted using the NEXTAUTH_SECRET value. You can generate a random value using openssl rand -base64 32 or generate-secret.vercel.app and add it to your .env.local file.

3. Add a Route Handler to Your Next.js App

To add NextAuth.js to your project, you will need to create a dynamic Route Handler. To do so, create a file called route.js|ts in app/api/auth/[...nextauth]. This file will also contain all of your global NextAuth.js configurations. Route Handlers are the equivalent of API Routes inside the pages directory, meaning you do not need to use API Routes and Route Handlers together.
If you're using an older version of Next.js with the pages Router, here's an example of how to initialize an API route. In my route.ts file below, I managed to set up authentication with four different providers: Google, LinkedIn, GitHub, and Spotify.

```typescript
import NextAuth, { NextAuthOptions } from "next-auth";
import GoogleProvider from "next-auth/providers/google";
import LinkedInProvider from "next-auth/providers/linkedin";
import GitHubProvider from "next-auth/providers/github";
import SpotifyProvider from "next-auth/providers/spotify";

export const authOptions: NextAuthOptions = {
  providers: [
    GoogleProvider({
      clientId: String(process.env.GOOGLE_CLIENT_ID),
      clientSecret: String(process.env.GOOGLE_CLIENT_SECRET),
    }),
    GitHubProvider({
      clientId: String(process.env.GITHUB_CLIENT_ID),
      clientSecret: String(process.env.GITHUB_CLIENT_SECRET),
    }),
    // LinkedIn recently changed their OAuth flow, which is why there is a bit of extra code
    LinkedInProvider({
      clientId: String(process.env.LINKEDIN_CLIENT_ID),
      clientSecret: String(process.env.LINKEDIN_CLIENT_SECRET),
      authorization: {
        params: { scope: "openid profile email" },
      },
      issuer: "https://www.linkedin.com",
      jwks_endpoint: "https://www.linkedin.com/oauth/openid/jwks",
      profile(profile, tokens) {
        const defaultImage = "https://cdn-icons-png.flaticon.com/512/174/174857.png";
        return {
          id: profile.sub,
          name: profile.name,
          email: profile.email,
          image: profile.picture ?? defaultImage,
        };
      },
    }),
    SpotifyProvider({
      clientId: String(process.env.SPOTIFY_CLIENT_ID),
      clientSecret: String(process.env.SPOTIFY_CLIENT_SECRET),
    }),
  ],
};

const handler = NextAuth(authOptions);
export { handler as GET, handler as POST };
```

What Did We Achieve With the Route Handler?

Thanks to our Route Handler, all requests to /api/auth/* (like signIn, callback, signOut, etc.) will now automatically be handled by NextAuth.js. Out of the box, we get a simple sign-in page (/api/auth/signin). After a successful sign-in, you will notice a JWT token in your cookies.
The same JWT token is signed with the NEXTAUTH_SECRET that we created in the previous step.

How Do I Get the clientId and clientSecret?

Usually, the process involves creating an OAuth app on your desired service (i.e., Google) and generating a key and secret that you need to add to your .env.local file.

Step-by-Step Implementation

Imagine we want to implement authentication with X (ex-Twitter). Here are all the steps we need to take to do so:

1. Follow NextAuth.js's provider guide to get the clientId and clientSecret. Remember to carefully read the documentation for the provider you want to use; the process may vary for different services. For Twitter: use the OAuth 2.0 version, select “Native App,” and set the Redirect URL to /api/auth/callback/twitter. Use localhost:3000 for development; you can add your production Callback URI/Redirect URL later.
2. Copy-paste the clientId and clientSecret into your .env.local file.
3. Import the Twitter provider and initialize it in the list of providers in your route.ts:

```typescript
// ...code
import TwitterProvider from "next-auth/providers/twitter";

export const authOptions: NextAuthOptions = {
  // ...code
  providers: [
    // ...other providers
    TwitterProvider({
      clientId: process.env.TWITTER_CLIENT_ID,
      clientSecret: process.env.TWITTER_CLIENT_SECRET,
      version: "2.0", // opt-in to Twitter OAuth 2.0
    }),
  ],
};
// ...code
```

That's it! Go to /api/auth/signin, and you will see your new provider on the sign-in page alongside all the implemented providers.

Most Common Issues I Faced

Even though the process from a coding perspective is straightforward and simple, I encountered a few challenges because each service has a unique way of creating an OAuth app and generating the clientId and clientSecret. Also, make sure you set up the Redirect URI/Callback URL correctly for each service.
It looks like this: /api/auth/callback/<name_of_provider>. For example, for GitHub during development, it's localhost:3000/api/auth/callback/github. Make sure you set the correct Redirect URI/Callback URL in production; you can use localhost:3000 while in development. The last thing is to make sure you set up the correct scope and permissions. Basically, you will need to set which information your OAuth app will ask users for, such as email, name, and profile_photo. In case things get weird while developing, clear the cookies and try again.

Example

I've created a small Next.js (v14) app that you can play with to see how I implemented authentication with all these providers. Bonus: I also managed to create a protected route with content that will be shown only to authenticated users.

App: next-auth-providers-example
GitHub: next-auth-providers-example

Summary

1. Install NextAuth.js (npm install next-auth).
2. Add NEXTAUTH_SECRET to your .env.local file in order for NextAuth.js to work.
3. Add a Route Handler for NextAuth.js (or an API Route for older Next.js versions). You can copy-paste the example provided above.
4. Create an OAuth app for the provider that you want to implement, get the clientId and clientSecret, and add them to your .env.local file.
5. Inside your Route Handler (route.ts file), import the provider and add it to the array of providers, passing the clientId and clientSecret from your .env.local file.
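Since the callback path always follows the same pattern, a tiny helper (invented here purely for illustration, not part of NextAuth.js) can generate the exact value to paste into each provider's OAuth app settings, which makes typos like the ones described above easy to avoid:

```javascript
// Build the NextAuth.js callback URL for a given deployment origin and
// provider id, e.g. to paste into the provider's OAuth app settings.
function callbackUrl(origin, provider) {
  return `${origin}/api/auth/callback/${provider.toLowerCase()}`;
}

console.log(callbackUrl("http://localhost:3000", "github"));
// http://localhost:3000/api/auth/callback/github
console.log(callbackUrl("https://myapp.example.com", "twitter"));
// https://myapp.example.com/api/auth/callback/twitter
```

The provider id is the package name segment from next-auth/providers/<id>, lowercased.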
This methodology allows different services to be developed, deployed, and scaled independently by different teams. The success of this architectural style has inspired a similar approach in the world of frontend development: micro frontends. What Are Micro Frontends? Micro frontends extend the principles of microservices to the frontend. The idea is to decompose a web application’s UI into smaller, semi-independent "micro" applications that work loosely together. Each team owns a specific feature or part of the application, with full control over their domain, from the database to the user interface. This approach brings several benefits: Scalability: Different parts of the frontend can be scaled independently. Autonomy: Teams can develop, deploy, and update their micro frontends independently. Technology Agnostic: Each micro frontend can potentially use different frameworks or libraries. Resilience: Isolation between micro frontends can prevent cascading failures. Challenges With Micro Frontends While micro frontends offer numerous benefits, they also introduce challenges such as: Integration: Ensuring that all the disparate parts of the application work seamlessly together. Consistent UX: Maintaining a consistent user experience across different micro frontends. Performance: Avoiding performance bottlenecks that can occur due to multiple micro frontends loading on the same page. Shared Dependencies: Managing dependencies that are shared across micro frontends without duplication. Next.js and Micro Frontends Next.js, a popular React framework, is known for its simplicity and conventional over-configuration approach. It supports server-side rendering static site generation, and, more importantly, it’s flexible enough to integrate with a micro frontend architecture. Here's a basic example of how you could set up a micro frontend architecture using Next.js: 1. 
Set Up the Main Application Shell

Create a Next.js application that will act as the "shell" or "container" application. This application will handle the integration of the micro frontends.

Shell
npx create-next-app main-shell
cd main-shell
npm run dev

2. Create Micro Frontends

Generate a separate Next.js application for each micro frontend. For example, if you have a product page and a checkout process, each of these could be a separate app.

Shell
npx create-next-app product-page
npx create-next-app checkout-process

3. Serve Micro Frontends as Standalone Applications

Each micro frontend should be able to run independently. You can achieve this by deploying each app to its own domain or subdomain:

product-page.example.com
checkout-process.example.com

4. Integrate Micro Frontends Into the Main Shell

To integrate the micro frontends into the main shell, you can use iframes, server-side includes, or JavaScript techniques such as Web Components or module federation. A simple client-side integration using JavaScript might look like this:

JavaScript
// In the main shell application's components
import { useEffect } from 'react'

const MicroFrontend = ({ name, host }) => {
  useEffect(() => {
    const scriptId = `micro-frontend-script-${name}`

    const renderMicroFrontend = () => {
      window[`render${name}`](`${name}-container`, window.history)
    }

    if (document.getElementById(scriptId)) {
      renderMicroFrontend()
    } else {
      fetch(`${host}/asset-manifest.json`)
        .then((res) => res.json())
        .then((manifest) => {
          const script = document.createElement('script')
          script.id = scriptId
          script.crossOrigin = ''
          script.src = `${host}${manifest['main.js']}`
          script.onload = renderMicroFrontend
          document.head.appendChild(script)
        })
    }

    // The cleanup is registered in both branches so unmounting always works.
    return () => {
      window[`unmount${name}`] && window[`unmount${name}`](`${name}-container`)
    }
  }, [name, host])

  return <div id={`${name}-container`} />
}

const ProductPage = () => (
  <MicroFrontend name="ProductPage" host="http://product-page.example.com" />
)

const CheckoutProcess = () => (
  <MicroFrontend name="CheckoutProcess" host="http://checkout-process.example.com" />
)

// In your main shell's pages, wherever you want to include the micro frontends
export default function Home() {
  return (
    <div>
      <h1>Welcome to the Main Shell</h1>
      <ProductPage />
      <CheckoutProcess />
    </div>
  )
}

This is a simplified example. In a real-world scenario, you'd need to address cross-origin issues, set up a more robust communication channel between the main shell and the micro frontends (perhaps using Custom Events or a state management library), and handle loading and error states more gracefully.

Conclusion

Micro frontends represent a significant shift in frontend development, promising more flexibility and scalability. Integrated with a framework like Next.js, they provide a structured yet flexible path for growing complex web applications. However, it's essential to weigh the complexity and needs of your project, as micro frontends aren't a silver bullet and might not suit every team or application. In practice, successful implementation requires a good understanding of both the benefits and the potential pitfalls, as well as a disciplined approach to design, development, and deployment. With careful planning and execution, micro frontends can help teams build large-scale applications more manageably and sustainably.
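Besides client-side script loading, Next.js itself offers a server-side integration path via rewrites in next.config.js (the basis of the Multi-Zones pattern): the shell proxies path prefixes to the independently deployed apps. A hedged sketch, reusing the illustrative domains from the example above:

```javascript
// next.config.js in the main shell; domains and path prefixes are illustrative
module.exports = {
  async rewrites() {
    return [
      {
        // Requests under /product/* are served by the product-page deployment.
        source: '/product/:path*',
        destination: 'https://product-page.example.com/product/:path*',
      },
      {
        // Requests under /checkout/* are served by the checkout deployment.
        source: '/checkout/:path*',
        destination: 'https://checkout-process.example.com/checkout/:path*',
      },
    ]
  },
}
```

This keeps each micro frontend a plain Next.js app while the shell presents a single origin to the browser, which sidesteps many of the cross-origin concerns mentioned above.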
Websites can use IP geolocation to provide various features to their users: redirecting webpages by country, showing content in the local language, or customizing content based on the user’s geolocation. React is an open-source JavaScript library that makes it easy for web developers to create engaging, component-based user interfaces. Being designed to run on the front end (client-side) means that it can only render the page on the browser side. Meanwhile, Next.js is an open-source React web development framework developed by Vercel. Next.js enables React web applications to perform server-side rendering, code splitting, and much more. The framework is powered by Node.js, which provides the JavaScript runtime environment.

In this tutorial, I’ll cover how to set up a basic website powered by React and Next.js. Then, I'll demonstrate how to obtain IP geolocation data of web visitors by checking their IP addresses. This involves some easy integration with IP2Location.io, the next-generation IP geolocation API brought to you by IP2Location.com, a long-time provider of IP geolocation data. I'll be using a Debian 12 instance for this demonstration.

Prerequisite: an IP2Location.io free API key. Take note of your API key, which will be used in the steps below.

1. Install Node.js

At the time of writing, the minimum version of Node.js required by Next.js is version 18 or higher. Refer to the Next.js setup documentation for the latest required version. Follow the Node.js installation steps for your platform. For Debian, install using the command below:

sudo apt install nodejs npm

After installation, verify the versions of both:

nodejs --version && npm --version

Alright, my version meets the requirements, so let’s move on to the next step.

2. Create the Basic Next.js Application

Use the command below to automatically generate the basic Next.js application from the default template.
Navigate to the folder where you want to create your application, then run the following command:

npx create-next-app@latest

I’ll use the default project name “my-app” and will not use TypeScript for this demo. Select Yes for ESLint, No for Tailwind, No for the src directory, No for the App Router, and No for the import alias. After running the above, you should see that a folder called “my-app” has been created along with the skeleton code from the default template. Let’s navigate into the “my-app” folder and see what files and folders have been created. Take a look at the package.json file, and you’ll see that it has automatically included the dependencies required to work with React and Next.js.

3. Install the IP2Location.io Node.js SDK

Run the command below to install the IP2Location.io Node.js SDK:

npm install ip2location-io-nodejs

After running the above command, you’ll see that package.json has been updated to include the IP2Location.io Node.js SDK.

4. Modify the index.js in the Pages Folder to Add the IP2Location.io Code

Navigate to the pages folder, and let’s take a look at the index.js file using a text editor. I assume that you are familiar with React, as I will not cover that here. Now, let’s replace the whole page with the code below to simplify it for the purpose of the demo. Save the page after pasting the simplified code.
JavaScript
import Head from 'next/head'
import Image from 'next/image'
import { Inter } from 'next/font/google'
import styles from '@/styles/Home.module.css'
import ip2locationio from 'ip2location-io-nodejs'

const inter = Inter({ subsets: ['latin'] })

export async function getServerSideProps(context) {
  let ip
  const { req } = context
  if (req.headers['x-forwarded-for']) {
    ip = req.headers['x-forwarded-for'].split(',')[0]
  } else if (req.headers['x-real-ip']) {
    ip = req.headers['x-real-ip']
  } else {
    ip = req.connection.remoteAddress
  }

  // Configure the IP2Location.io API key
  let mykey = 'YOUR_API_KEY'
  let config = new ip2locationio.Configuration(mykey)
  let ipl = new ip2locationio.IPGeolocation(config)
  // Translation language; only supported in the Plus and Security plans, so use '' if not needed
  let lang = 'fr'

  // Look up the IP address geolocation data
  const ipldata = await ipl.lookup(ip, lang)
  return {
    props: { ipldata: ipldata }
  }
}

export default function Home({ ipldata }) {
  return (
    <>
      <Head>
        <title>Create Next App</title>
        <meta name="description" content="Generated by create next app" />
        <meta name="viewport" content="width=device-width, initial-scale=1" />
        <link rel="icon" href="/favicon.ico" />
      </Head>
      <main className={`${styles.main} ${inter.className}`}>
        <div className={styles.description}>
          IP: {ipldata.ip} <br />
          Country: {ipldata.country_name} ({ipldata.country_code}) <br />
          Country translation: {ipldata.country.translation.value} ({ipldata.country.translation.lang}) <br />
          Region: {ipldata.region_name} <br />
          City: {ipldata.city_name} <br />
          Latitude: {ipldata.latitude} <br />
          Longitude: {ipldata.longitude} <br />
          Continent: {ipldata.continent.name} <br />
          Continent translation: {ipldata.continent.translation.value} ({ipldata.continent.translation.lang}) <br />
        </div>
      </main>
    </>
  )
}

Now, let’s break down the important bits that will allow us to retrieve the IP geolocation for the website visitor’s IP address.
First of all, we need to import the IP2Location.io Node.js SDK so that we can use its functionality:

import ip2locationio from 'ip2location-io-nodejs'

Next, add a function called “getServerSideProps”, the Next.js function that allows us to set dynamic properties on every page request. This is where we retrieve the caller’s IP address and pass it to the IP2Location.io API to get the geolocation data.

JavaScript
export async function getServerSideProps(context) {
  let ip
  const { req } = context
  if (req.headers['x-forwarded-for']) {
    ip = req.headers['x-forwarded-for'].split(',')[0]
  } else if (req.headers['x-real-ip']) {
    ip = req.headers['x-real-ip']
  } else {
    ip = req.connection.remoteAddress
  }

  // Configure the IP2Location.io API key
  let mykey = 'YOUR_API_KEY'
  let config = new ip2locationio.Configuration(mykey)
  let ipl = new ip2locationio.IPGeolocation(config)
  // Translation language; only supported in the Plus and Security plans, so use '' if not needed
  let lang = 'fr'

  // Look up the IP address geolocation data
  const ipldata = await ipl.lookup(ip, lang)
  return {
    props: { ipldata: ipldata }
  }
}

Finally, we can use the properties set above inside our Home function, which renders the index page. We pass ipldata to the function, and then the IP geolocation fields like country_code, country_name, region_name, and others are displayed.
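The header precedence used above (first x-forwarded-for entry, then x-real-ip, then the socket address) can be isolated into a small helper. This function is not part of the article's code, just a sketch of the same logic:

```typescript
// Hypothetical helper mirroring the lookup order described above.
const getClientIp = (
  headers: Record<string, string | undefined>,
  remoteAddress: string
): string => {
  const forwarded = headers['x-forwarded-for']
  if (forwarded) {
    // x-forwarded-for may contain a chain of proxies; the first entry is the client.
    return forwarded.split(',')[0].trim()
  }
  return headers['x-real-ip'] ?? remoteAddress
}
```

Extracting this keeps getServerSideProps focused on the geolocation lookup itself.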
JSX
export default function Home({ ipldata }) {
  return (
    <>
      <Head>
        <title>Create Next App</title>
        <meta name="description" content="Generated by create next app" />
        <meta name="viewport" content="width=device-width, initial-scale=1" />
        <link rel="icon" href="/favicon.ico" />
      </Head>
      <main className={`${styles.main} ${inter.className}`}>
        <div className={styles.description}>
          IP: {ipldata.ip} <br />
          Country: {ipldata.country_name} ({ipldata.country_code}) <br />
          Country translation: {ipldata.country.translation.value} ({ipldata.country.translation.lang}) <br />
          Region: {ipldata.region_name} <br />
          City: {ipldata.city_name} <br />
          Latitude: {ipldata.latitude} <br />
          Longitude: {ipldata.longitude} <br />
          Continent: {ipldata.continent.name} <br />
          Continent translation: {ipldata.continent.translation.value} ({ipldata.continent.translation.lang}) <br />
        </div>
      </main>
    </>
  )
}

5. Run the Next.js Application

Run the command below to start the Next.js application:

npm run dev

It will tell you that you can access the web server at localhost on port 3000.

6. Open the Browser and Test the Page

Since IP geolocation requires public IP addresses, we will access our test server through the Internet from a client machine so that it can read our public IP address. Open the browser on your client machine and navigate to the server at port 3000. See the output of the page below.

NOTE: My example uses the Security plan for the API, so I am able to retrieve more data, such as continent info as well as translations.

Conclusion

I hope this guide helps you easily integrate IP geolocation into your Next.js web application. Modern websites rely on IP geolocation for many of their features, and it is easy to accomplish.
In 2015, ECMAScript 6 was introduced — a significant release of the JavaScript language. This release introduced many new features, such as const/let, arrow functions, classes, etc. Most of these features were aimed at eliminating JavaScript's quirks. For this reason, they were labeled "Harmony." Some sources say that the entirety of ECMAScript 6 is called "ECMAScript Harmony." The "Harmony" label also covers other features expected to become part of the specification soon. Decorators are one such anticipated feature.

Nearly ten years have passed since decorators were first mentioned. The decorators specification has been rewritten several times, almost from scratch, but they have not become part of the specification yet. As JavaScript has long extended beyond just browser-based applications, the specification authors must consider a wide range of platforms where JavaScript can be executed. This is precisely why progressing to stage 3 has taken this proposal so much time.

Something Completely New?

First of all, let's clarify what decorators are in the programming world.

“Decorator is a structural design pattern that lets you attach new behaviors to objects by placing these objects inside special wrapper objects that contain the behaviors.” © Refactoring.Guru

The key point here is that a decorator is a design pattern. This means that, typically, it can be implemented in any programming language. If you have even a basic familiarity with JavaScript, chances are you have already used this pattern without realizing it. Sound interesting? Then try to guess what the most popular decorator in the world is... Meet the most famous decorator in the world, the higher-order function — debounce.

Debounce

Before we delve into the details of the debounce function, let's remind ourselves what higher-order functions are.
Higher-order functions are functions that take one or more functions as arguments or return a function as their result. The debounce function is a prominent example of a higher-order function and, at the same time, the most popular decorator among JS developers. The higher-order function debounce delays the invocation of another function until a certain amount of time has passed since the last invocation, without changing its behavior. The most common use case is to prevent sending multiple requests to the server while a user is typing into a search bar, for example, when loading autocomplete suggestions. Instead, it waits until the user has finished or paused input and only then sends the request to the server. On most resources for learning JavaScript, in the section about timeouts, you will find exercises that involve writing this function. The simplest implementation looks like this:

JavaScript
const debounce = (fn, delay) => {
  let lastTimeout = null
  return (...args) => {
    clearTimeout(lastTimeout)
    lastTimeout = setTimeout(() => fn.call(null, ...args), delay)
  }
}

Using this function may look like the following:

JavaScript
class SearchForm {
  constructor() {
    this.handleUserInput = debounce(this.handleUserInput, 300)
  }

  handleUserInput(evt) {
    console.log(evt.target.value)
  }
}

When using the special syntax for decorators, which we will discuss in the next section, the implementation of the same behavior looks like this:

JavaScript
class SearchForm {
  @debounce(300)
  handleUserInput(evt) {
    console.log(evt.target.value)
  }
}

All the boilerplate code is gone, leaving only the essentials. Looks nice and clean, doesn't it?

Higher-Order Component (HOC)

The next example comes from the React world. Although the use of Higher-Order Components (HOC) is becoming less common in applications built with this library, HOCs still serve as a good and well-known example of decorator usage.
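For completeness, here is a sketch of how debounce could be written against the stage-3 decorator shape discussed later in this article, where a decorator receives the decorated value plus a context object and returns a replacement. The ClassMethodDecorator type name is mine, and the example applies the decorator manually so it runs even without decorator syntax support:

```typescript
// Illustrative stage-3-shaped debounce; the type names are assumptions, not the proposal's.
type ClassMethodDecorator = (
  value: (...args: unknown[]) => void,
  context: { kind: string; name: string | symbol }
) => (...args: unknown[]) => void

const debounce = (delay: number): ClassMethodDecorator =>
  (value, _context) => {
    let lastTimeout: ReturnType<typeof setTimeout> | null = null
    // The returned function replaces the decorated method.
    return function (this: unknown, ...args: unknown[]) {
      if (lastTimeout !== null) clearTimeout(lastTimeout)
      lastTimeout = setTimeout(() => value.apply(this, args), delay)
    }
  }

// Without @ syntax, the decorator can be applied by hand:
const log: string[] = []
const handler = debounce(30)(
  (text) => { log.push(text as string) },
  { kind: 'method', name: 'handleUserInput' }
)
```

With decorator syntax this is exactly the @debounce(300) form shown above, only the engine performs the application for you.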
Let's take a look at an example of the withModal HOC:

JavaScript
const withModal = (Component) => {
  return (props) => {
    const [isOpen, setIsOpen] = useState(false)

    const handleModalVisibilityToggle = () => setIsOpen(!isOpen)

    return (
      <Component
        {...props}
        isOpen={isOpen}
        onModalVisibilityToggle={handleModalVisibilityToggle}
      />
    )
  }
}

And now, let's see how it can be used:

JavaScript
const AuthPopup = ({ onModalVisibilityToggle }) => {
  // Component
}

const WrappedAuthPopup = withModal(AuthPopup)

export { WrappedAuthPopup as AuthPopup }

Here is what using the HOC with the special decorator syntax would look like:

JavaScript
@withModal()
const AuthPopup = ({ onModalVisibilityToggle }) => {
  // Component
}

export { AuthPopup }

Important note: Function decorators are not a part of the current proposal. However, they are on the list of things that could be considered for the future development of the decorators specification.

Once again, all the boilerplate code is gone, leaving only what truly matters. Perhaps some readers did not see anything special in this, since the example above used only one decorator. So let's take a look at this example instead:

JavaScript
const AuthPopup = ({
  onSubmit,
  onFocusTrapInit,
  onModalVisibilityToggle,
}) => {
  // Component
}

const WrappedAuthPopup = withForm(
  withFocusTrap(
    withModal(AuthPopup)
  ),
  {
    mode: 'submit',
  }
)

export { WrappedAuthPopup as AuthPopup }

See that hard-to-read nesting? How much time did it take you to understand what is happening in the code? Now, let's take a look at the same example but with the use of decorator syntax:

JavaScript
@withForm({ mode: 'submit' })
@withFocusTrap()
@withModal()
const AuthPopup = ({
  onSubmit,
  onFocusTrapInit,
  onModalVisibilityToggle,
}) => {
  // Component
}

export { AuthPopup }

Would you not agree that code reading from top to bottom is much more readable than the previous example with nested function calls?
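The nesting problem can also be tamed without decorator syntax by a small compose helper. This is my own generic sketch, not something the HOC example above provides; wrappers listed top-to-bottom apply in the same order the stacked decorators would:

```typescript
// A tiny compose helper: compose(a, b, c)(x) === a(b(c(x))).
const compose = <T>(...wrappers: Array<(value: T) => T>) =>
  (target: T): T => wrappers.reduceRight((acc, wrap) => wrap(acc), target)

// Demo with plain string wrappers so the ordering is visible:
const tag = (label: string) => (value: string) => `${label}(${value})`
const result = compose(tag('withForm'), tag('withFocusTrap'), tag('withModal'))('AuthPopup')
```

With real HOCs this would read compose(withForm, withFocusTrap, withModal)(AuthPopup), keeping the top-to-bottom readability that makes decorator syntax attractive.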
The higher-order function debounce and the higher-order component withModal are just a few examples of how the decorator pattern is applied in everyday life. This pattern can be found in many frameworks and libraries that we use regularly, although many of us may not pay attention to it. Try analyzing the project you are working on and look for places where the decorator pattern is applied. You will likely discover more than one such example.

JavaScript Implementations

Before we delve into the decorator proposal itself and its implementation, I would like us to take a look at this image:

With this image, I would like to remind you of the primary purpose for which the JavaScript language was originally created. I am not one of those people who like to complain, saying, "Oh, JavaScript is only good for highlighting form fields." Typically, I refer to such individuals as “dinosaurs." JavaScript primarily focuses on the end user for whom we write code. This is a crucial point to understand, because every time new things are introduced into the JavaScript language, such as classes with implementations differing from what is found in other programming languages, the same complainers come and start lamenting that things are not done in a user-friendly manner. Quite the opposite: in JavaScript, everything is designed with end users in mind, which is something that no other programming language can boast about. Today, JavaScript is not just a browser language. It can be run in various environments, including on the server. The TC39 committee, responsible for introducing new features to the language, faces the challenging task of meeting the needs of all platforms, frameworks, and libraries. However, the primary focus remains on end users in the browser.

History of Decorators

To delve deeper into the history of this proposal, let's review a list of key events.
2014-04 – Stage 0: Decorators were proposed by Yehuda Katz, initially intended to become part of ECMAScript 7.

TypeScript
type Decorator = (
  target: DecoratedClass,
  propertyKey: string,
  descriptor: PropertyDescriptor
) => PropertyDescriptor | void

function debounce(delay: number): Decorator {
  return (target, propertyKey, descriptor) => {
    let lastTimeout: number
    const method = descriptor.value

    descriptor.value = (...args: unknown[]) => {
      clearTimeout(lastTimeout)
      lastTimeout = setTimeout(() => method.call(null, ...args), delay)
    }

    return descriptor
  }
}

Already at this stage, you can see one of the reasons why the decorator API underwent such significant changes later on. The first argument of the decorator was the entire class, even if you were decorating only one of its members. Moreover, it was assumed that developers could mutate this class. JavaScript engines always strive to optimize as much as possible, and in this case, inviting developers to mutate the entire class undermined a significant number of optimizations provided by the engine. Later, we will see that this was indeed a major reason why the decorator API was rewritten multiple times, almost from scratch.

2015-03 – Stage 1: Without significant changes, the proposal advanced to stage 1. However, an event occurred that significantly influenced the further development of this proposal: TypeScript 1.5 was released with support for decorators. Despite decorators being marked as experimental (--experimentalDecorators), projects like Angular and MobX actively started using them. Furthermore, the overall workflow for these projects assumed the use of decorators exclusively. Due to the popularity of these projects, many developers mistakenly believed that decorators were already a part of the official JS standard.
This created additional challenges for the TC39 committee because they had to consider the expectations and requirements of the developer community as well as optimization issues in language engines.

2016-07 – Stage 2: After the decorators proposal reached stage 2, its API began to undergo significant changes. Furthermore, at one point, the proposal was referred to as "ESnext class features for JavaScript." During its development, there were numerous ideas about how decorators could be structured. To get a comprehensive view of the entire history of changes, I recommend reviewing the commits in the proposal's repository. Here is an example of what the decorators API used to look like:

TypeScript
type Decorator = (args: {
  kind: 'method' | 'property' | 'field',
  key: string | symbol,
  isStatic: boolean,
  descriptor: PropertyDescriptor
}) => {
  kind: 'method' | 'property' | 'field',
  key: string | symbol,
  isStatic: boolean,
  descriptor: PropertyDescriptor,
  extras: unknown[]
}

By the end of stage 2, the decorator API looked as follows:

TypeScript
type Decorator = (
  value: DecoratedValue,
  context: {
    kind: 'class' | 'method' | 'getter' | 'setter' | 'field' | 'accessor',
    name: string | symbol,
    access?: {
      get?: () => unknown,
      set?: (value: unknown) => void
    },
    private?: boolean,
    static?: boolean,
    addInitializer?: (initializer: () => void) => void
  }
) => UpdatedDecoratedValue | void

function debounce(delay: number): Decorator {
  return (value, context) => {
    let lastTimeout = null

    return (...args) => {
      clearTimeout(lastTimeout)
      lastTimeout = setTimeout(() => value.call(null, ...args), delay)
    }
  }
}

The entire stage 2 took six years, during which the decorator API underwent significant changes. However, as we can see from the code above, mutations were excluded. This made the proposal more acceptable to JS engines as well as to various platforms, frameworks, and libraries. But the development history of decorators is not over yet.

2020-09 – Announcing MobX 6.
Bye-bye Decorators: Some libraries that relied exclusively on decorators started to move away from their old implementation because they understood that the way they were working with decorators would no longer be standardized.

“Using decorators is no longer the norm in MobX. This is good news to some of you, but others will hate it. Rightfully so, because I concur that the declarative syntax of decorators is still the best that can be offered. When MobX started, it was a TypeScript only project, so decorators were available. Still experimental, but obviously they were going to be standardized soon. That was my expectation at least (I did mostly Java and C# before). However, that moment still hasn't come yet, and two decorators proposals have been cancelled in the mean time. Although they still can be transpiled.” © Michel Weststrate, author of MobX

2022-03 – Stage 3: After years of changes and refinements, decorators finally reached stage 3. Thanks to the extensive adjustments and refinements during the second stage, the third stage began without significant changes. A particular highlight is the creation of a new proposal called Decorator Metadata.

2022-08 – SpiderMonkey Newsletter: SpiderMonkey, the browser engine used by Firefox, became the first engine to begin working on the implementation of decorators. Implementations like this indicate that the proposal is generally ready to become a full-fledged part of the specification.

2022-09 – Babel 7.19.0. Stage 3 decorators: Adding support in a compiler is a very significant update for any proposal. Most proposals have a similar item in their standardization plan, and the decorators proposal was no exception.

2022-11 – Announcing TypeScript 4.9: ECMAScript decorators were listed in the TS 4.9 Iteration Plan. However, after some time, the TS team decided to move decorators to the 5.0 release.
Here is the authors' comment:

“While decorators have reached stage 3, we saw some behavior in the spec that needed to be discussed with the champions. Between addressing that and reviewing the changes, we expect decorators will be implemented in the next version.”

In general, this decision makes sense, as they did not want to risk incorporating a feature into TS prematurely, especially if it might not end up becoming part of the standard. There is always a chance of such situations happening. However, in this case, it might not be as significant as the first implementation. In TS 4.9, only a small part of the decorators specification was included — Class Auto-Accessors. This addition to the decorators specification served as a correction for the mutations that were prevalent in the first stages of implementation. The reason behind this is that there is often a desire to make properties reactive, meaning that some effects should occur when the property changes, such as UI re-rendering. For example:

JavaScript
class Dashboard extends HTMLElement {
  @reactive tab = DashboardTab.USERS
}

In the old implementation, the reactive decorator had to mutate the target class by adding extra set and get accessors to achieve the desired behavior. With auto-accessors, this behavior now occurs more explicitly, which in turn allows engines to optimize it better.

JavaScript
class Dashboard extends HTMLElement {
  @reactive accessor tab = DashboardTab.USERS
}

Another interesting thing is how the two decorator implementations coexist. Since the TS team could not remove the old implementation that worked under the --experimentalDecorators flag, they decided on the following approach: if the --experimentalDecorators flag is present in the configuration, the old implementation is used; if the flag is not present, the new implementation is used.

2023-03 – Announcing TypeScript 5.0: As promised, the TS team shipped the full decorators specification in TS 5.0.
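To make the auto-accessor idea concrete, here is a sketch of what a reactive-style wrapper can do, written as a plain function over a get/set pair so it runs without decorator syntax support. The Accessor type and the onChange hook are my assumptions; a real stage-3 accessor decorator receives a similar generated get/set pair and may return replacements for them:

```typescript
// Loosely follows the stage-3 accessor-decorator idea: wrap the generated
// get/set pair instead of mutating the class.
type Accessor<T> = { get(): T; set(value: T): void }

const reactive = <T>(onChange: (value: T) => void) =>
  (target: Accessor<T>): Accessor<T> => ({
    get: () => target.get(),
    set: (value: T) => {
      target.set(value)
      onChange(value) // e.g. schedule a UI re-render
    },
  })

// Manual application over a backing field:
let backing = 'users'
const changes: string[] = []
const tab = reactive<string>((v) => { changes.push(v) })({
  get: () => backing,
  set: (v) => { backing = v },
})
```

Every write through tab.set now both updates the backing field and fires the side effect, which is exactly the behavior the @reactive accessor example expresses declaratively.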
2023-03 – Deno 1.32: Although version 1.32 of Deno supported TS 5.0, they decided to postpone the functionality related to decorators.

“Take note that ES decorators are not yet supported, but we will be working towards enabling them by default in a future version.”

2023-05 – Angular v16 is here: Angular 16 also added support for ECMAScript decorators. However, some other frameworks built around decorators (and inspired by Angular?) have stated that they will not move toward ECMAScript decorators for now. For many of them, two important missing pieces are Metadata and Parameter decorators.

“I don't think we'll support JS decorators till the metadata support & parameter decorators are implemented.” © Kamil Mysliwiec, creator of NestJS

2023-08 – Announcing TypeScript 5.2: In TS 5.2, another standard that complements the decorators specification was added — Decorator Metadata. The primary idea behind this proposal is to simplify decorators' access to the metadata of the class in which they are used. The need for a whole separate proposal here is another reason there were so many debates regarding syntax and usage.

Just Syntactic Sugar or Not?

After all the explanations and examples, you might have a question: "So, are decorators in JavaScript just higher-order functions with special syntax, and that is it?" It is not all that simple. In addition to what was mentioned earlier about JavaScript primarily focusing on end users, it is also worth adding that JS engines always try to use new syntax as a reference point to at least attempt to make your JavaScript faster.

JavaScript
import { groupBy } from 'npm:lodash@4.17.21'

const getGroupedOffersByCity = (offers) => {
  return groupBy(offers, (it) => it.city.name)
}

// OR ?
const getGroupedOffersByCity = (offers) => {
  return Object.groupBy(offers, (it) => it.city.name)
}

It may seem like there is no difference, but there are distinctions for the engine.
Only in the second case, when native functions are used, can the engine attempt optimization. Describing how optimizations work in JavaScript engines would require a separate article. Do not hesitate to explore browser source code or search for articles to gain a better understanding of this topic. It is also important to remember that there are many JavaScript engines, and they all perform optimizations differently. However, if you assist the engine by using native syntax, your application code will generally run faster in most cases. Possible Extensions The new syntax in the specification also opens the door for additional features in the future. As an analogy, consider constructor functions and classes. When private fields were introduced in the specification, they were introduced as a feature for classes. For those who staunchly denied the usefulness of classes and claimed that constructor functions were equivalent, private fields became another reason to move away from constructor functions in favor of classes. Such features are likely to continue evolving. While we can currently achieve many of the same effects as decorators using higher-order functions in many cases, they still do not cover all the potential functionality that will be added to the decorators specification in the future. The "possible extensions" file in the decorators specification repository provides insights into how the decorators specification may evolve in the future. Some of the points were listed in the first stages but are not present in the current standard, such as parameter decorators. However, there are also entirely new concepts mentioned, like const/let decorators or block decorators. These potential extensions illustrate the ongoing development and expansion of the decorator functionality in JavaScript. Indeed, numerous proposals and extensions are being considered to enhance the decorators specification further. 
Some of these proposals, like Decorator Metadata, are already under consideration even though the core decorator specification has not yet been standardized. This underscores the idea that decorators have a promising future, and we can hope to see them become part of the standard soon.

Conclusion

The consideration of the decorators proposal over more than ten years may indeed seem like an extended period. It is true that the early adoption of decorators by leading frameworks and libraries played a role in uncovering the shortcomings of the initial implementation. However, this early adoption also served as an invaluable learning experience, highlighting the importance of harmonizing with web platforms and developing a solution that aligns with both platforms and the developer community while preserving the essence of decorators. The time spent refining the proposal has ultimately contributed to making it a more robust and well-considered addition to the JavaScript language. Indeed, decorators will bring significant changes to how we write applications. Perhaps not immediately, as the current specification primarily focuses on classes, but with all the additions and ongoing work, JavaScript code in many applications will soon look different. We are now closer than ever to the moment when we can finally see real decorators in the specification. It is an exciting development that promises to enhance the expressiveness and functionality of JavaScript applications.
At Octomind, we are using Large Language Models (LLMs) to interact with web app UIs and extract test case steps that we want to generate. We use the LangChain library to build interaction chains with LLMs. The LLM receives a task prompt, and we, as developers, provide tools the model can utilize to solve the task. The unpredictable and non-deterministic nature of the LLM output makes ensuring type safety quite a challenge. LangChain's approach to parsing input and handling errors often leads to unexpected and inconsistent outcomes within the type system. I'd like to share what I learned about parsing and error handling in LangChain. I will explain why we went for TypeScript in the first place, the issue with LLM output, how a type error can go unnoticed, and what consequences this can have. All code examples use LangChain TS on the main branch as of September 22nd, 2023 (roughly version 0.0.153). Why LangChain TS Instead of Python? There are two languages supported by LangChain: Python and JS/TypeScript. There were some pros and some cons with TypeScript. On the con side: we have to live with the fact that the TypeScript implementation is somewhat lagging behind the Python version, in code and even more so in documentation. This is a solvable issue if you are willing to trade the documentation for just going through the source code. On the pro side: we don't have to write another service in a different language since we are using TypeScript elsewhere, and we allegedly get guaranteed type safety, of which we are big fans here. We decided to go for the TypeScript version of LangChain to implement parts of our AI-based test discoveries. Full disclosure: I didn't look into how the Python version handles the issues described below. Have you found similar issues in the Python version? Feel free to share them directly in the GitHub issue I created. Find the link at the end of the article.
The Issue With Types in LLMs In LangChain, you can provide a set of tools that may be called by the model if it deems it necessary. For our purposes, a tool is simply a class with a _call function that does something the model can not do on its own, like clicking on a button on a web page. The arguments for that function are provided by the model. When your tool implementation depends on the developer knowing the input format (in contrast to just doing something with text generated by the model), LangChain provides a class called StructuredTool. The StructuredTool adds a zod schema to the tool, which is used to parse whatever the model decides to call the tool with so that we can use this knowledge in our code. Let's build our "click" example under the assumption that we want the model to give us a query selector to click on: Now, when you look at this class, it seems reasonably simple without a lot of potential for things to go wrong. But how does the model actually know what schema to supply? It has no intrinsic functionality for this. It just generates a string response to a prompt. When LangChain informs the model about the tools at its disposal, it will generate format instructions for each tool. These instructions define what JSON is and what specific input schema the model should generate to use a tool. For this, LangChain will generate an addition to your own prompt that looks something like this: You have access to the following tools. You must format your inputs to these tools to match their "JSON schema" definitions below. "JSON Schema" is a declarative language that allows you to annotate and validate JSON documents. For example, the example "JSON Schema" instance {"properties": {"foo": {"description": "a list of test words", "type": "array", "items": {"type": "string"}}}, "required": ["foo"]} would match an object with one required property, "foo".
The "type" property specifies "foo" must be an "array", and the "description" property semantically describes it as "a list of test words". The items within "foo" must be strings. Thus, the object {"foo": ["bar", "baz"]} is a well-formatted instance of this example "JSON Schema". The object {"properties": {"foo": ["bar", "baz"]}} is not well-formatted. Here are the JSON Schema instances for the tools you have access to: click: left click on an element on a web page represented by a query selector, args: {"selector": {"type": "string", "description": "The query selector to click on."}} Don't Trust the LLM Now, we have a best-effort way to make the model call our tool with inputs in the correct schema. Best effort unfortunately does not guarantee anything. It is entirely possible that the model generates input that does not adhere to the schema. So, let's take a look at the implementation of StructuredTool to see how it deals with that issue. StructuredTool.call is the function that eventually calls our _call method from above. It starts like this: The signature of arg is interpreted as follows: If, after parsing the tool's schema, the output can be just a string, this can also be a string or whatever object the schema defines as input. This is the case if you define your schema as schema = z.string(). In our case, our schema can not be parsed to a string, so this simplifies to the type { selector: string }, or ClickSchema. But Is This Actually the Case? According to the implementation, we only check that the input actually adheres to the schema inside of call. The signature reads like we have already made some assumptions about the input. So one might replace the signature with something like: But looking at it further, even this has issues. The only thing we know for certain is that the model will give us a string. This means there are two options: 1. call really should have the following signature: 2.
There is another element to this: something must have already decided that the string returned by the model is valid JSON and have parsed it. In the case that z.output<T> extends string, something somewhere must have already decided that a string is an acceptable input format for the tool, and we do not need to parse JSON. (A string by itself is not valid JSON; JSON.parse("foo") will result in a SyntaxError.) Introducing the OutputParser Class Of course, the second option is what is happening. For this use case, LangChain provides a concept called OutputParser. Let's take a look at the default one (StructuredChatOutputParser) and its parse method in particular. We don't need to understand every detail, but we can see that this is where the string that the model produces is parsed to JSON, and errors are thrown if it is not valid JSON. So, from this, we either get AgentAction or AgentFinish. We don't need to concern ourselves with AgentFinish, since it is just a special case to indicate that the interaction with the model is done. AgentAction is defined as: By now, you might have already seen it: neither AgentAction nor the StructuredChatOutputParserWithRetries is generic, and there is no way to connect the type of toolInput with our ClickSchema. Since we don't know which tool the agent has actually selected, we can not (easily) use generics to represent the actual type, so this is expected. But worse, toolInput is typed as string, even though we just used JSON.parse to get it! Consider the positive case where the model produced output that matches our schema, say the string "{\"selector\": \"myCoolButton\"}" (wrapped in all the extra fluff LangChain requires to correctly parse). Using JSON.parse, this will deserialize to an object { selector: "myCoolButton" } and not a string. But because JSON.parse's return type is any, the TypeScript compiler has no chance of realizing this. Unfortunately for us, this also means that we, as developers, have a hard time realizing this.
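To see the hazard in isolation, here is a self-contained sketch (the names are illustrative, not LangChain's actual code). JSON.parse's any return type lets an object flow into a parameter annotated as string without any compiler complaint:

```typescript
// JSON.parse is declared in TypeScript's standard library as
// (text: string) => any, so the compiler accepts its result anywhere.
function takesToolInput(toolInput: string): string {
  return toolInput;
}

const modelOutput = '{"selector": "myCoolButton"}';
const parsed = JSON.parse(modelOutput); // inferred as any, actually an object

// This type checks, but at runtime toolInput is an object, not a string.
const result = takesToolInput(parsed);

console.log(typeof result); // "object", despite the string annotation
```

The string annotation on toolInput is a promise the compiler never verifies once any enters the picture.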
The Impact on Our Production Code To understand why this is troublesome, we need to look into the execution loop where the AgentActions are used to actually invoke the tool. This happens here in AgentExecutor._call. We don't really need to understand everything that this class does. Think of it as the wrapper that handles the interaction of the model with the tool implementations to actually call them. The _call method is quite long, so here is a reduced version that only contains parts relevant to our problem (these methods are simplified parts of _call and not in the actual code base of LangChain). The first thing that happens in the loop is to look for the next action to execute. This is where the parsing using the OutputParser comes in and where its exceptions are handled. You can see that in the case of an error, the toolInput field will always be a string (if this.handleParsingErrors is a function, the return type is also string). But we have just seen above that in the non-error case toolInput will be parsed JSON! This is inconsistent behavior. We never parse the output of handleParsingErrors to JSON. Let's look at how the loop continues. The next step is to call the selected tool with the given input: We only pass the previously computed output on to the tool in tool.call(action.toolInput)! In case this causes another error, we re-use the same function to handle parsing errors that will return a string that is supposed to be the tool output in the error case. 
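Condensed into a runnable sketch, the two code paths look like this. The names getNextAction, action, and action_input are hypothetical stand-ins loosely modeled on LangChain's structured-chat format, not its actual source:

```typescript
type AgentAction = { tool: string; toolInput: string; log: string };

// Happy path: JSON.parse the model output.
// Error path: fall back to whatever string handleParsingErrors produces.
function getNextAction(
  modelOutput: string,
  handleParsingErrors: (e: Error) => string
): AgentAction {
  try {
    const parsed = JSON.parse(modelOutput); // typed any, actually an object
    // Type checks because any is assignable to string,
    // but toolInput is an object here.
    return { tool: parsed.action, toolInput: parsed.action_input, log: modelOutput };
  } catch (e) {
    // Here toolInput really is a string.
    return { tool: "_Exception", toolInput: handleParsingErrors(e as Error), log: modelOutput };
  }
}

const ok = getNextAction('{"action": "click", "action_input": {"selector": "#btn"}}', () => "could not parse");
const bad = getNextAction("not json at all", () => "could not parse");

console.log(typeof ok.toolInput);  // "object", despite the string annotation
console.log(typeof bad.toolInput); // "string"
```

Both branches satisfy the same AgentAction type, yet they hand the tool two different runtime shapes.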
Let's summarize all the issues: We parse the model's output to JSON and use that parsed result to call a tool. If the parsing succeeds, we call the tool with any valid JSON. If the parsing fails, we call the tool with a string. The tool parses the input with zod, which will only work in the error case if the schema is just a const stringSchema = z.string(). We have not covered this, but using const stringSchema = z.string() as the tool schema will not type check at all, since the generic argument of StructuredTool is T extends z.ZodObject<any, any, any, any>, and typeof stringSchema does not fulfill that constraint. The signature of tool.call allows this to type check since we don't know specifically which tool we have at the moment, so string and any JSON are potentially valid. The actual type check for this happens at runtime inside this function. The developer implementing the tool has no idea about this. Since only StructuredTool._call is abstract, you will always get what the schema indicates, but StructuredTool.call will fail, even if you have supplied a function handleParsingErrors. Whatever the tool gets called with is serialized into AgentAction.toolInput: string, which is not correctly typed. The library user has access to the AgentSteps with wrongly typed AgentActions, since it is possible to request them as a return value of the overall loop using returnIntermediateSteps=true. Whatever the developer does now is definitely not type-safe! How Did We Run Into This Problem? At Octomind, we are using the AgentSteps to extract the test case steps that we want to generate. We noticed that the model often makes the same errors with the tool input format. Recall our ClickSchema, which is just { selector: string }. In our clicking example, the model would either generate input according to the schema, or { element: string }, or just a string that is the value we want, like "myCoolButton". So, we built an auto-fixer for these common error cases.
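In spirit, the fixer is a small coercion function like the following simplified sketch (not our actual production code):

```typescript
type ClickSchema = { selector: string };

// Try to coerce the three shapes we keep seeing from the model
// into the schema the tool expects.
function fixClickInput(input: unknown): ClickSchema | undefined {
  if (typeof input === "string") {
    // The model sent just the value, e.g. "myCoolButton"
    return { selector: input };
  }
  if (typeof input === "object" && input !== null) {
    const obj = input as Record<string, unknown>;
    if (typeof obj.selector === "string") return { selector: obj.selector }; // already correct
    if (typeof obj.element === "string") return { selector: obj.element };   // wrong key
  }
  return undefined; // unfixable, let the normal error handling kick in
}

console.log(fixClickInput("myCoolButton"));   // { selector: "myCoolButton" }
console.log(fixClickInput({ element: "#btn" })); // { selector: "#btn" }
```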
The fixer basically just checks whether it can fix the input using either of the options above. The earliest we can inject this code without overwriting a lot of the planning logic that LangChain provides is in StructuredTool.call. We can not handle it using handleParsingErrors, since that receives only the error as input, and not the original input. Once you are overwriting StructuredTool.call, you are relying on the signature of that function to be correct, which we just saw is not the case. At this point, I was stuck having to figure out all of the above to see why I was getting wrongly typed inputs. The Solution To Type Safety While these hurdles can be frustrating, they also present opportunities to take a deep dive into the library and come up with possible solutions instead of complaining. I have opened two issues at LangChain JS/TS to discuss ideas on how to solve these problems: Issue 1 Issue 2 Feel free to jump in!
As more developers adopt TypeScript, I've curated reasons why you should use TypeScript in your next project. Although it met some resistance early on, it has quickly become a widely used programming language in the last decade. Here is how to use TypeScript and some of its most popular benefits to programmers. But first, let's dive into what TypeScript is and the problems it can solve. What Is TypeScript? TypeScript is an open-source programming language developed by Microsoft in 2012 as a superset of JavaScript. This means it contains all of JavaScript but with more features. Building on JavaScript's functionality and structures, it has additional features, such as static typing and object-oriented programming, and it compiles to plain JavaScript. So, any TypeScript code ultimately runs as JavaScript. Now, what does all this mean for your project? What Can TypeScript Solve? TypeScript's primary purpose is to improve productivity when developing complex applications. One way this happens is by enabling IDEs to provide a richer environment for spotting common errors while you type the code. This adds type safety to your projects. Developers no longer have to manually check for errors whenever changes are made. And since TypeScript technically involves adding static typing to JavaScript, it can help you avoid classic runtime errors like reading a property of undefined. As it catches errors for you, it makes code refactoring easier without breaking things significantly, helped along by features like interfaces, abstract classes, type aliases, tuples, function overloading, and generics. Adopting this programming language in a large JavaScript project could provide more robust software that is still deployable anywhere a JavaScript application would run. Why Is TypeScript Better Than JavaScript? TypeScript's motto is "JavaScript that scales." That's because it brings the future of development to JavaScript. But is it as good as people say?
Here are a few areas where TypeScript is better than JavaScript: Optional Static Typing JavaScript is a dynamically typed language. Although this has its benefits, the freedom of dynamic typing often leads to bugs. Not only do these bugs reduce the programmer's efficiency, but they also slow down development by raising the cost of adding new lines of code. TypeScript's static typing differs from JavaScript's dynamically typed nature. For example, when you're unsure of a type in JavaScript, you'll generally rely on a TypeError at runtime to tell you why the variable type is wrong. TypeScript, on the other hand, adds syntax to JavaScript. Its compiler uses this syntax to identify possible code errors before they happen, and it subsequently produces vanilla JavaScript that browsers understand. A study showed that TypeScript could successfully detect 15% of JavaScript bugs. IDE Support During its early years, TypeScript was only supported in Microsoft's Visual Studio code editor. However, as it gained traction, more code editors and IDEs started to support the programming language natively or through plugins. You can write TypeScript code in nearly every code editor. This extensive IDE support has made it more relevant and popular for software developers. Other IDEs that support it include Eclipse, Atom, WebStorm, and CATS. Object Orientation It supports object-oriented programming concepts like classes, encapsulation, inheritance, abstraction, and interfaces. The OOP paradigm makes creating well-organized, scalable code easier, and as your project grows in size and complexity, this benefit becomes more apparent. Readability Due to the addition of strict types and elements that make the code more expressive, you'll be able to see the design intent of the programmers who wrote the code. This works well for remote teams because self-explanatory code can offset the lack of direct communication among teams.
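To make the static-typing point concrete, here is a small sketch; the commented-out calls are the kind of mistake the compiler flags while you type, long before runtime (the User shape is just an example):

```typescript
interface User {
  name: string;
}

function greet(user: User): string {
  return `Hello, ${user.name.toUpperCase()}`;
}

// Both of these are compile-time errors in TypeScript, caught as you type:
// greet({ nam: "Ada" }); // 'nam' does not exist in type 'User'; 'name' is missing
// greet(undefined);      // 'undefined' is not assignable to parameter of type 'User'

console.log(greet({ name: "Ada" })); // Hello, ADA
```

In plain JavaScript, both mistakes would only surface as a TypeError when the code actually runs.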
Community Support TypeScript is lucky to have a massive group of exceptionally talented people working tirelessly to improve the open-source language. This explains why it has gained traction among developers and software development teams in the last few years. Most JavaScript applications comprise hundreds of thousands of files. One change to an individual file could affect the behavior of other files. Validating the relationships between every element of your project can become time-consuming quickly. As a type-checked language, it does this automatically with immediate feedback during development. While you may not see how big of a deal this is when working with small projects, complex ones with a large codebase can become messy with bugs all over the place. Every dev would like to be more efficient and faster, which can help improve project scalability. In addition, TypeScript’s language features and reference validation make it better than JavaScript. Ultimately, TypeScript improves the developer experience and code maintainability because devs feel more confident in their code. It’ll also save lots of time that would have otherwise gone into validating they haven’t accidentally broken the project. This programming language also provides better collaboration between and within teams. Advantages of TypeScript It offers significant advantages for developers and software development teams. I’ve listed five advantages of TypeScript in your next project: 1. Compile-Time Errors It’s quite clear as day already. I’ve mentioned this earlier because it is the obvious TypeScript benefit. Compile-time errors are why most developers have started using it. They can use the compiler to detect potential errors during compile time rather than runtime. JavaScript’s inability to support types and compile-time error checks means it’s not a good fit for server-side code in complex and large codebases. 
On the other hand, another reason to use TypeScript is that it detects compilation errors during development, making runtime errors unlikely. It incorporates static typing, helping a programmer check type correctness at compile time. 2. Runs Everywhere I already mentioned that TypeScript compiles to pure JavaScript, meaning it can run everywhere. In fact, it compiles to any JavaScript version, including the latest version, ES2022, and others like ES6, ES5, and ES3. You can use it with frameworks like React and Angular on the front end or Node.js on the backend. 3. Tooling Over Documentation If you want a successful project in the long run, documentation is essential. But this can be tricky because it’s easy to overlook documentation, difficult to enforce, and impossible to report if it’s no longer up to date. This makes it essential to prioritize tooling over documentation. TypeScript takes tooling seriously. And this goes beyond errors and completions while typing. It documents the arguments a function is expecting, the shape of objects, and the variables that may be undefined. It’ll also notify you when it needs to be updated and where exactly. Without this programming language, each developer would have to waste a lot of time looking up the shapes of objects, combing through documentation, and hoping they’re up to date. Or you would have to debug the code and hope that your predictions about which fields are required and optional are accurate. 4. Object-Oriented Programming (OOP) As an object-oriented programming language, it is great for large and complex projects that must be actively updated or maintained. Some of the benefits that object-oriented programming provides are: Reuse of code through inheritance: The ability to assign relationships and subclasses between objects enables programmers to reuse a common logic while retaining a unique hierarchy. 
This attribute of OOP speeds up development and provides more accuracy by enabling a more in-depth data analysis. Increased flexibility due to polymorphism: Objects can take on multiple forms depending on the context. The program will identify which meaning or usage is required for each execution of that object, which reduces the need to duplicate code. Reduced data corruption through encapsulation: Each object's implementation and state are held privately within a defined class or boundary. Other objects can't access that state directly, nor do they have the authority to make changes; they can only call a list of methods or public functions. Hence, encapsulation helps you perform data hiding, which increases program security and prevents unintentional data corruption. Effective problem solving: Object-oriented programming takes a complex problem and breaks it into solvable chunks. For each small problem, a developer writes a class that does what they need. Ultimately, using OOP provides improved data structures and reliability while saving time in the long run. 5. Static Typing Besides helping you catch bugs, static typing gives the code more structure and ensures it is self-documenting. This is because the type information makes it easier to understand how classes, functions, and other structures work. It also becomes easier to refactor code or eliminate technical debt. In addition, static typing integrates seamlessly with autocomplete tools, making them more reliable and accurate. That way, devs can write code faster. In most cases, statically typed code is easier for humans or robots to read. Step-By-Step To Install TypeScript By now, you already have an idea of what TypeScript does and how it makes writing code easier. But how do you use it? You need to install it first, so here is a full guide to do it.
Step 1: Download and Install the NodeJS Framework The first step is downloading and installing the NodeJS framework (npm version) on your computer. If you don't already have it installed, you can do so by visiting the Node download page. It's recommended that you use the LTS (long-term support) version because it's the most stable. Step 2: Navigate to the Start Menu and Click the Command Prompt After installing Node and npm, run the command below in the NodeJS command prompt: npm install -g typescript The command will install TypeScript on your local system. Step 3: Verify Installation You can verify that TypeScript has been installed by running the command below: tsc -v tsc is the TypeScript compiler, while the -v flag displays the TS version. See below: Once you've confirmed this, then TypeScript has been successfully installed. You can also install a specific TS version using '@' followed by the version you want. For example: npm install --global typescript@4.9.3 How To Install TypeScript Into a Current Project You can also set it up on a per-project basis. That is, you install TS into your current project. This lets you have multiple projects with different TypeScript versions and ensures each project works consistently without interfering with the others. To install the TypeScript compiler locally into your project, simply use the command below: npm install --save-dev typescript How to Uninstall TypeScript To uninstall it, you can use the same command you used for installation. Simply replace install with uninstall as seen below: npm uninstall --global typescript How To Use TypeScript After installing it, it's time to use it. You'll need a code editor like Visual Studio Code. If you don't have it, you need to download and install VS Code. When you've done this, here's how to use TypeScript: Step 1: Let's create a simple Hello World project. This will help you get an idea of how to use TypeScript.
Step 2: Run the following command after installation to make a project directory: mkdir HelloWorld Then move into the new directory: cd HelloWorld Step 3: Launch Visual Studio Code (or your preferred code editor). We'll use VS Code here. Step 4: Navigate to File Explorer and create a new file named helloworld.ts. The file name isn't essential; you can name it whatever you want. However, it's important that it ends with a .ts extension. Step 5: Next, add the following TypeScript code. let message: string = 'Hello, World!'; console.log(message); You'll notice the keyword let and the string type declaration. Step 6: To compile the TypeScript code, simply open the Integrated Terminal (Ctrl+`) and type: tsc helloworld.ts This compiles and creates a new helloworld.js JavaScript file. When you open helloworld.js, you'll see it doesn't look too different from helloworld.ts. The type information has been removed, and let has been replaced with var. Conclusion Ultimately, whether to use TypeScript will depend on your project and the time and effort required. Your team will need to assess the advantages and disadvantages of implementation. Still, the benefits of using TypeScript become apparent right away, from better code completion to bug prevention, and it will make your team's lives easier when it comes to writing code.
In today's digital landscape, it's not just about building functional systems; it's about creating systems that scale smoothly and efficiently under demanding loads. But as many developers and architects can attest, scalability often comes with its own unique set of challenges. A seemingly minute inefficiency, when multiplied a million times over, can cause systems to grind to a halt. So, how can you ensure your applications stay fast and responsive, regardless of the demand? In this article, we'll delve deep into the world of performance optimization for scalable systems. We'll explore common strategies that you can weave into any codebase, be it front end or back end, regardless of the language you're working with. These aren't just theoretical musings; they've been tried and tested in some of the world's most demanding tech environments. Having been a part of the team at Facebook, I've personally integrated several of these optimization techniques into products I've helped bring to life, including the lightweight ad creation experience in Facebook and the Meta Business Suite. So whether you're building the next big social network, an enterprise-grade software suite, or just looking to optimize your personal projects, the strategies we'll discuss here will be invaluable assets in your toolkit. Let's dive in. Prefetching Prefetching is a performance optimization technique that revolves around the idea of anticipation. Imagine a user interacting with an application. While the user performs one action, the system can anticipate the user's next move and fetch the required data in advance. This results in a seamless experience where data is available almost instantly when needed, making the application feel much faster and responsive. Proactively fetching data before it's needed can significantly enhance the user experience, but if done excessively, it can lead to wasted resources like bandwidth, memory, and even processing power. 
Facebook employs pre-fetching a lot, especially for their ML-intensive operations such as "Friends suggestions." When Should I Prefetch? Prefetching involves the proactive retrieval of data by sending requests to the server even before the user explicitly demands it. While this sounds promising, a developer must ensure the balance is right to avoid inefficiencies. A. Optimizing Server Time (Backend Code Optimizations) Before jumping into prefetching, it's wise to ensure that the server response time is optimized. Optimal server time can be achieved through various backend code optimizations, including: Streamlining database queries to minimize retrieval times. Ensuring concurrent execution of complex operations. Reducing redundant API calls that fetch the same data repeatedly. Stripping away any unnecessary computations that might be slowing down the server response. B. Confirming User Intent The essence of prefetching is predicting the user's next move. However, predictions can sometimes be wrong. If the system fetches data for a page or feature the user never accesses, it results in resource wastage. Developers should employ mechanisms to gauge user intent, such as tracking user behavior patterns or checking active engagements, ensuring that data isn't fetched without a reasonably high probability of being used. How To Prefetch Prefetching can be implemented using any programming language or framework. For the purpose of demonstration, let's look at an example using React. Consider a simple React component. As soon as this component finishes rendering, an AJAX call is triggered to prefetch data. 
When a user clicks a button in this component, a second component uses the prefetched data:

```javascript
import React, { useState, useEffect } from 'react';
import axios from 'axios';

function PrefetchComponent() {
  const [data, setData] = useState(null);
  const [showSecondComponent, setShowSecondComponent] = useState(false);

  // Prefetch data as soon as the component finishes rendering
  useEffect(() => {
    axios.get('https://api.example.com/data-to-prefetch')
      .then(response => {
        setData(response.data);
      });
  }, []);

  return (
    <div>
      <button onClick={() => setShowSecondComponent(true)}>
        Show Next Component
      </button>
      {showSecondComponent && <SecondComponent data={data} />}
    </div>
  );
}

function SecondComponent({ data }) {
  // Use the prefetched data in this component
  return (
    <div>
      {data ? <div>Here is the prefetched data: {data}</div> : <div>Loading...</div>}
    </div>
  );
}

export default PrefetchComponent;
```

In the code above, the PrefetchComponent fetches data as soon as it's rendered. When the user clicks the button, SecondComponent gets displayed, which uses the prefetched data. Memoization In the realm of computer science, "Don't repeat yourself" isn't just a good coding practice; it's also the foundation of one of the most effective performance optimization techniques: memoization. Memoization capitalizes on the idea that re-computing certain operations can be a drain on resources, especially if the results of those operations don't change frequently. So, why redo what's already been done? Memoization optimizes applications by caching computation results. When a particular computation is needed again, the system checks if the result exists in the cache. If it does, the result is directly retrieved from the cache, skipping the actual computation. In essence, memoization involves creating a memory (hence the name) of past results. This is especially useful for functions that are computationally expensive and are called multiple times with the same inputs.
It's akin to a student solving a tough math problem and jotting down the answer in the margin of their book. If the same question appears on a future test, the student can simply reference the margin note rather than work through the problem all over again. When Should I Memoize? Memoization isn't a one-size-fits-all solution. In certain scenarios, memoizing might consume more memory than it's worth. So, it's crucial to recognize when to use this technique: When the data doesn’t change very often: Functions that return consistent results for the same inputs, especially if these functions are compute-intensive, are prime candidates for memoization. This ensures that the effort taken to compute the result isn't wasted on subsequent identical calls. When the data is not too sensitive: Security and privacy concerns are paramount. While it might be tempting to cache everything, it's not always safe. Data like payment information, passwords, and other personal details should never be cached. However, more benign data, like the number of likes and comments on a social media post, can safely be memoized to improve performance. How To Memoize Using React, we can harness the power of hooks like useCallback and useMemo to implement memoization. 
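The React hooks are one incarnation of a more general idea. Stripped of any framework, memoization is just a higher-order function that caches results by argument; a minimal sketch (using JSON.stringify as a naive cache key):

```typescript
// Framework-agnostic memoization: cache results keyed by the arguments.
function memoize<A extends unknown[], R>(fn: (...args: A) => R): (...args: A) => R {
  const cache = new Map<string, R>();
  return (...args: A): R => {
    const key = JSON.stringify(args);
    if (!cache.has(key)) {
      cache.set(key, fn(...args));
    }
    return cache.get(key)!;
  };
}

let calls = 0;
const square = memoize((n: number) => {
  calls++; // count how often the real computation runs
  return n * n;
});

console.log(square(9), square(9)); // 81 81
console.log(calls); // 1
```

Note that JSON.stringify only works as a key for serializable arguments; a production implementation would need a more careful keying strategy.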
Let's explore a simple example:

```javascript
import React, { useState, useCallback, useMemo } from 'react';

function ExpensiveOperationComponent() {
  const [input, setInput] = useState(0);
  const [count, setCount] = useState(0);

  // A hypothetical expensive operation
  const expensiveOperation = useCallback((num) => {
    console.log('Computing...');
    // Simulating a long computation
    for (let i = 0; i < 1000000000; i++) {}
    return num * num;
  }, []);

  const memoizedResult = useMemo(() => expensiveOperation(input), [input, expensiveOperation]);

  return (
    <div>
      <input type="number" value={input} onChange={e => setInput(Number(e.target.value))} />
      <p>Result of Expensive Operation: {memoizedResult}</p>
      <button onClick={() => setCount(count + 1)}>Re-render component</button>
      <p>Component re-render count: {count}</p>
    </div>
  );
}

export default ExpensiveOperationComponent;
```

In the above example, the expensiveOperation function simulates a computationally expensive task. We've used the useCallback hook to ensure that the function doesn't get redefined on each render. The useMemo hook then stores the result of expensiveOperation so that if the input doesn't change, the computation doesn't run again, even if the component re-renders. Concurrent Fetching Concurrent fetching is the practice of fetching multiple sets of data simultaneously rather than one at a time. It's similar to having several clerks working at a grocery store checkout instead of just one: customers get served faster, queues clear more quickly, and overall efficiency improves. In the context of data, since many datasets don't rely on each other, fetching them concurrently can greatly accelerate page load times, especially when dealing with intricate data that requires more time to retrieve. When To Use Concurrent Fetching?
- When each dataset is independent and complex to fetch: If the datasets being fetched have no dependencies on one another and take significant time to retrieve, concurrent fetching can help speed up the process.
- Use mostly in the back end, and carefully in the front end: While concurrent fetching can work wonders in the back end by improving server response times, it must be employed judiciously in the front end. Overloading the client with simultaneous requests might hamper the user experience.
- Prioritize network calls: If data fetching involves several network calls, it's wise to prioritize one major call and handle it in the foreground, concurrently processing the others in the background. This ensures that the most crucial data is retrieved first while secondary datasets load simultaneously.

How To Use Concurrent Fetching

In PHP, concurrent HTTP requests can be issued with the built-in curl_multi API. Here's a basic example:

```php
<?php

// Two hypothetical endpoints we want to fetch at the same time.
$urls = [
    'a' => 'https://api.example.com/data-a',
    'b' => 'https://api.example.com/data-b',
];

$multiHandle = curl_multi_init();
$handles = [];

foreach ($urls as $key => $url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_multi_add_handle($multiHandle, $ch);
    $handles[$key] = $ch;
}

// Drive all transfers concurrently until they complete.
do {
    $status = curl_multi_exec($multiHandle, $active);
    if ($active) {
        curl_multi_select($multiHandle);
    }
} while ($active && $status === CURLM_OK);

$result = [];
foreach ($handles as $key => $ch) {
    $result[$key] = curl_multi_getcontent($ch);
    curl_multi_remove_handle($multiHandle, $ch);
    curl_close($ch);
}
curl_multi_close($multiHandle);

echo $result['a'];
echo $result['b'];
```

In the example, both requests are in flight at the same time, so the total wait is roughly the duration of the slowest request rather than the sum of both.

Lazy Loading

Lazy loading is a design pattern wherein data or resources are deferred until they're explicitly needed. Instead of pre-loading everything up front, you load only what's essential for the initial view and then fetch additional resources as and when they're needed.
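On the JavaScript side, the same idea is expressed with Promise.all. This is a minimal sketch; fetchDataA and fetchDataB are simulated stand-ins for real data-fetching calls:

```javascript
// Two independent, simulated data sources. In a real app these
// would be fetch() calls; here they resolve after a short delay.
const delay = (ms, value) =>
  new Promise((resolve) => setTimeout(() => resolve(value), ms));

const fetchDataA = () => delay(20, 'Data A');
const fetchDataB = () => delay(30, 'Data B');

// Promise.all starts both requests immediately and waits for both,
// so the total time is close to the slowest request, not the sum.
async function fetchAll() {
  const [a, b] = await Promise.all([fetchDataA(), fetchDataB()]);
  return { a, b };
}
```

Note that both promises are created before awaiting; writing `await fetchDataA()` followed by `await fetchDataB()` would serialize the requests and lose the benefit.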
Think of it as a buffet where you only serve dishes when guests specifically ask for them, rather than keeping everything out all the time. A practical example is a modal on a web page: the data inside the modal isn't necessary until a user decides to open it by clicking a button. By applying lazy loading, we can hold off on fetching that data until the very moment it's required.

How To Implement Lazy Loading

For an effective lazy loading experience, it's essential to give users feedback that data is being fetched. A common approach is to display a spinner or a loading animation during the data retrieval process. This ensures that the user knows their request is being processed, even if the data isn't instantly available.

Lazy Loading Example in React

Let's illustrate lazy loading using a React component. This component fetches data for a modal only when the user clicks a button to view the modal's contents:

```javascript
import React, { useState } from 'react';

function LazyLoadedModal() {
  const [data, setData] = useState(null);
  const [isLoading, setIsLoading] = useState(false);
  const [isModalOpen, setIsModalOpen] = useState(false);

  const fetchDataForModal = async () => {
    setIsLoading(true);
    // Simulating an AJAX call to fetch data
    const response = await fetch('https://api.example.com/data');
    const result = await response.json();
    setData(result);
    setIsLoading(false);
    setIsModalOpen(true);
  };

  return (
    <div>
      <button onClick={fetchDataForModal}>
        Open Modal
      </button>
      {isModalOpen && (
        <div className="modal">
          {isLoading ? (
            <p>Loading...</p> // Spinner or loading animation can be used here
          ) : (
            <p>{JSON.stringify(data)}</p>
          )}
        </div>
      )}
    </div>
  );
}

export default LazyLoadedModal;
```

In the above example, the data for the modal is fetched only when the user clicks the "Open Modal" button. Until then, no unnecessary network request is made. While the data is being fetched, a loading message (or spinner) is displayed to indicate to the user that their request is in progress.
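Outside React, the defer-until-needed idea can be sketched in a few lines of plain JavaScript. The lazy helper below is a hypothetical illustration (not a library API): it runs the expensive loader only on first access and reuses the in-flight promise afterwards:

```javascript
// Wrap an async loader so it runs only when first requested,
// and at most once: later calls reuse the same promise.
function lazy(loadFn) {
  let promise = null;
  return () => {
    if (promise === null) {
      promise = loadFn(); // triggered on first access only
    }
    return promise;
  };
}

// Simulated expensive fetch; `loads` counts how often it actually runs.
let loads = 0;
const getModalData = lazy(async () => {
  loads += 1;
  return { title: 'Hello' };
});
// Nothing has been fetched yet at this point: loads is still 0.
```

Caching the promise (rather than the resolved value) also deduplicates concurrent callers: two clicks in quick succession trigger a single request.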
Conclusion

In today's fast-paced digital world, every millisecond counts. Users demand rapid responses, and businesses can't afford to keep them waiting. Performance optimization is no longer just a 'nice-to-have' but an absolute necessity for anyone serious about delivering a top-tier digital experience. Through techniques such as pre-fetching, memoization, concurrent fetching, and lazy loading, developers have a robust arsenal at their disposal to fine-tune and enhance their applications. These strategies, while diverse in their applications and methodologies, share a common goal: to ensure applications run as efficiently and swiftly as possible. However, it's important to remember that no single strategy fits all scenarios. Each application is unique, and performance optimization requires a judicious blend of understanding the application's needs, recognizing the users' expectations, and applying the right techniques effectively. It's an ongoing journey of refinement and learning.
There are various methods of visualizing three-dimensional objects in two-dimensional space. For example, most 3D graphics engines use perspective projection as the main form of projection. This is because perspective projection is an excellent representation of the real world, in which objects become smaller with increasing distance. But when the relative position of objects is not important, and for a better understanding of the size of objects, you can use parallel projections. They are more common in engineering and architecture, where it is important to maintain parallel lines. Since the birth of computer graphics, these projections have been used to render 3D scenes, long before hardware-accelerated 3D rendering was available. More recently, various forms of parallel projection have become a style choice for digital artists, and they are used to display objects in infographics and in digital art in general. The purpose of this article is to show how to create and manipulate isometric views in SVG and how to define these objects using, in particular, the JointJS library. To illustrate SVG's capabilities in creating parallel projections, we will use isometric projection as an example. This projection is one of the dominant projection types because it maintains the relative scale of objects along all axes.

Isometric Projection

Let's define what isometric projection is. First of all, it is a parallel type of projection in which all lines from a "camera" are parallel. This means that the scale of an object does not depend on the distance between the "camera" and the object. Specifically, in isometric projection (from the Greek for "equal measure"), scaling along each axis is the same. This is achieved by defining equal angles between all axes. In the following image, you can see how the axes are positioned in isometric projection. Keep in mind that in this article, we will be using a left-handed coordinate system.
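To make the geometry concrete, here is a small, library-free sketch of the textbook isometric mapping from 3D coordinates to 2D screen coordinates (the function name and formula are illustrative, not taken from JointJS or from the SVG transforms used later in the article):

```javascript
// Classic isometric projection: the three axes map to screen
// directions 120 degrees apart, and a unit step along any axis
// has the same on-screen length. Screen y grows downward, as in SVG.
const COS30 = Math.cos(Math.PI / 6); // ~0.8660
const SIN30 = Math.sin(Math.PI / 6); // 0.5

function projectIso(x, y, z) {
  return {
    x: (x - y) * COS30,
    y: (x + y) * SIN30 - z,
  };
}
```

A unit step along X lands at (0.866, 0.5), along Y at (-0.866, 0.5), and along Z at (0, -1): three directions of equal length, which is exactly the "equal measure" property described above.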
One of the features of the isometric projection is that it can be deconstructed into three different 2D projections: top, side, and front. For example, a cuboid can be represented by three rectangles, one on each 2D projection, which are then combined into one isometric view. The next image shows the separate projections of an object using the left-handed coordinate system.

Separate views of the orthographic projection

Then, we can combine them into one isometric view:

Isometric view of the example object

The challenge with SVG is that it contains 2D objects that all lie on one XY-plane. But we can overcome this by combining all projections in one plane and then separately applying a transformation to every object.

SVG Isometric View Transformations

In 3D, to create an isometric view, we can move the camera to a certain position, but SVG is purely a 2D format, so we have to create a workaround to build such a view. We recommend reading Cody Walker's article, which presents a method for creating isometric representations from 2D object views: top, side, and front projections. Based on the article, we need to create transformations for each 2D projection of the object separately. First, we rotate our plane by 30 degrees. Then, we skew our 2D image by -30 degrees. This transformation aligns our axes with the axes of the isometric projection. Finally, we use a scale operator to scale our 2D projection down vertically by 0.8602 to account for isometric foreshortening. Let's introduce some SVG features that will help us implement isometric projection. The SVG specification allows users to specify a particular transformation in the transform attribute of an SVG element, which applies a linear transformation to that element. To transform a 2D projection into an isometric view, we need to apply scale, rotate, and skew operators.
To represent the transformation in code, we can use the DOMMatrixReadOnly object, a browser API, to represent the transformation matrix. Using this interface, we can create the matrix as follows:

```javascript
const isoMatrix = new DOMMatrixReadOnly()
  .rotate(30)
  .skewX(-30)
  .scale(1, 0.8602);
```

This interface allows building a transformation matrix from our values, and we can then apply the resulting value to the transform attribute using the matrix function. In SVG, we can present only one 2D space at a time, so for our conversion we will use the top projection as the base projection. This is mostly because the axes in this projection correspond to the axes of a normal SVG viewport. To demonstrate SVG's possibilities, we will be using the JointJS library. We defined a rectangular grid in the XY-plane with a cell width of 20. Let's define the SVG for the elements of the top projection from the example. To properly render this object, we need to specify two polygons for the two levels of our object. We can also apply a translate transformation to an element in 2D space using DOMMatrix:

```javascript
// Translate transformation for the Top1 element
const matrix2D = new DOMMatrixReadOnly()
  .translate(200, 200);
```

```html
<!-- Top1 element -->
<polygon joint-selector="body" id="v-4" stroke-width="2" stroke="#333333"
  fill="#ff0000" fill-opacity="0.7"
  points="0,0 60,0 60,20 40,20 40,60 0,60"
  transform="matrix(1,0,0,1,200,200)">
</polygon>
<!-- Top2 element -->
<polygon joint-selector="body" id="v-6" stroke-width="2" stroke="#333333"
  fill="#ff0000" fill-opacity="0.7"
  points="0,0 20,0 20,40 0,40"
  transform="matrix(1,0,0,1,240,220)">
</polygon>
```

Then, we can apply our isometric matrix to the elements.
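DOMMatrixReadOnly is only available in browsers. For clarity (and for environments without the DOM API), the same rotate, skewX, and scale chain can be composed by hand. This is an illustrative re-implementation under SVG's matrix conventions, not part of any library:

```javascript
// Hand-rolled 2D affine matrices in SVG's [a b c d e f] layout:
// | a c e |
// | b d f |
const multiply = (m, n) => ({
  a: m.a * n.a + m.c * n.b,
  b: m.b * n.a + m.d * n.b,
  c: m.a * n.c + m.c * n.d,
  d: m.b * n.c + m.d * n.d,
  e: m.a * n.e + m.c * n.f + m.e,
  f: m.b * n.e + m.d * n.f + m.f,
});

const rad = (deg) => (deg * Math.PI) / 180;
const rotate = (deg) => ({
  a: Math.cos(rad(deg)), b: Math.sin(rad(deg)),
  c: -Math.sin(rad(deg)), d: Math.cos(rad(deg)), e: 0, f: 0,
});
const skewX = (deg) => ({ a: 1, b: 0, c: Math.tan(rad(deg)), d: 1, e: 0, f: 0 });
const scale = (sx, sy) => ({ a: sx, b: 0, c: 0, d: sy, e: 0, f: 0 });

// Same chain as new DOMMatrixReadOnly().rotate(30).skewX(-30).scale(1, 0.8602):
// transforms are post-multiplied, so the composite is R * K * S.
const isoMatrix = multiply(multiply(rotate(30), skewX(-30)), scale(1, 0.8602));
```

The resulting a and b components come out to roughly 0.866 and 0.5, i.e., the cosine and sine of 30 degrees, matching the rotation at the head of the chain.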
Also, we will add a translate transformation to position the elements in the right place:

```javascript
const isoMatrix = new DOMMatrixReadOnly()
  .rotate(30)
  .skewX(-30)
  .scale(1, 0.8602);

const top1Matrix = isoMatrix.translate(200, 200);
const top2Matrix = isoMatrix.translate(240, 220);
```

Isometric view without height adjustment

For simplicity, let's assume that our element's base plane is located on the XY plane. Therefore, we need to translate each top view so it appears to sit on top of the object. To do this, we can translate the projection by its Z coordinate in the scaled SVG space. The Top1 element has an elevation of 80, so we translate it by (-80, -80). Similarly, the Top2 element has an elevation of 40, so we translate it by (-40, -40). We can apply these translations to our existing matrices:

```javascript
const top1MatrixWithHeight = top1Matrix.translate(-80, -80);
const top2MatrixWithHeight = top2Matrix.translate(-40, -40);
```

Final isometric view of top projection

In the end, we will have the following transform attributes for the Top1 and Top2 elements. Note that they differ only in the last two values, which represent the translate transformation:

```javascript
// Top1 element
transform="matrix(0.8660254037844387,0.49999999999999994,-0.8165000081062317,0.47140649947346464,5.9,116.6)"

// Top2 element
transform="matrix(0.8660254037844387,0.49999999999999994,-0.8165000081062317,0.47140649947346464,26.2,184.9)"
```

To create an isometric view of the side and front projections, we need to make a net so we can place all projections on the 2D SVG plane. Let's create a net by attaching the side and front views, similar to a classic cube net. Then, we need to skewX the side and front projections by 45 degrees, which aligns the Z-axis for all projections.
After this transformation, we will get the following image:

Prepared 2D projection

Then, we can apply our isoMatrix to this object:

Isometric projection without depth adjustments

In every projection, there are parts that have a different third-axis coordinate value. Therefore, we need to adjust this depth coordinate for every projection, as we did with the top projection and its Z coordinate. In the end, we will get the following isometric view:

Final isometric view of the object

Using JointJS for the Isometric Diagram

JointJS allows us to create and manipulate such objects with ease thanks to its elements framework and wide set of tools. Using JointJS, we can define and control isometric objects to build powerful isometric diagrams. Remember the basic isometric transformation from the beginning of the article?

```javascript
const isoMatrix = new DOMMatrixReadOnly()
  .rotate(30)
  .skewX(-30)
  .scale(1, 0.8602);
```

In the JointJS library, we can apply this transformation to the whole object that stores all SVG elements, and then simply apply the object-specific transformations on top of it.

Isometric Grid Rendering

JointJS has great capabilities for rendering custom SVG markup. Utilizing JointJS, we can generate a path that is aligned to an untransformed grid and have it transformed automatically with the grid, thanks to the global paper transformation mentioned previously. You can see the grid and how we interpret the coordinate system in the demo below. Note that we can dynamically change the paper transformation, which allows us to change the view on the fly:

Isometric grid

Creating a Custom Isometric SVG Element

Here, we show a custom isometric SVG shape in JointJS. In our example, we use the isometricHeight property to store information about the third dimension and then use it to render our isometric object.
The following snippet shows how you can call the custom createIsometricElement function to set the object's properties:

```javascript
const element = createIsometricElement({
  isometricHeight: GRID_SIZE * 3,
  size: { width: GRID_SIZE * 3, height: GRID_SIZE * 6 },
  position: { x: GRID_SIZE * 6, y: GRID_SIZE * 6 }
});
```

In the following demo, you can see that our custom isometric element can be moved like an ordinary element on the isometric grid. You can change its dimensions by altering the parameters of the createIsometricElement function in the source code (when you click "Edit on CodePen"):

Custom isometric element on the isometric grid

Z-Index Calculation in Isometric Diagrams

One of the problems with an isometric view is placing elements according to their relative position. Unlike in a 2D plane, objects in an isometric view have perceived height and can be placed one behind the other. We can achieve this behavior in SVG by placing the elements into the DOM in the right order. To define the order in our case, we can use the JointJS z attribute, which sends an element to the background so that it can be overlapped or hidden by other elements as expected. You can find more information about this problem in a great article by Andreas Hager. We decided to sort the elements using a topological sorting algorithm. The algorithm consists of two steps: first, we create a special graph, and then we run a depth-first search on that graph to find the correct order of elements. As the first step, we populate the initial graph: for each object, we find all objects behind it by comparing the positions of their bottom sides. Let's illustrate this step with images. Take, for example, three elements positioned like this:

We have marked the bottom side of each object in the second image. Using this data, we will create a graph structure that models the topological relations between the elements.
In the image, you can see how we define the points on the bottom side: we can find the relative position of two elements by comparing the aMax and bMin points. If both the x and y coordinates of point bMin are less than the coordinates of point aMax, then object b is located behind object a.

Algorithm data in a 2D space

Comparing the three elements from our previous example, we can produce the following graph:

Topological graph

After that, we use a variation of the depth-first search algorithm to find the correct rendering order. The depth-first search visits graph nodes according to the visibility order, starting from the most distant one. Here is a library-agnostic example of the algorithm, where elements is an array of geometric rectangles exposing topLeft() and bottomRight() methods:

```javascript
const sortElements = (elements) => {
  // One node per element; `behind` collects the nodes drawn behind it.
  const nodes = elements.map((el) => {
    return {
      el: el,
      behind: [],
      visited: false,
      depth: null,
    };
  });

  for (let i = 0; i < nodes.length; ++i) {
    const a = nodes[i].el;
    const aMax = a.bottomRight();
    for (let j = 0; j < nodes.length; ++j) {
      if (i !== j) {
        const b = nodes[j].el;
        const bMin = b.topLeft();
        // b starts above-left of a's far corner => b is behind a
        if (bMin.x < aMax.x && bMin.y < aMax.y) {
          nodes[i].behind.push(nodes[j]);
        }
      }
    }
  }

  return depthFirstSearch(nodes);
};

const depthFirstSearch = (nodes) => {
  let depth = 0;
  const sortedElements = [];

  const visitNode = (node) => {
    if (!node.visited) {
      node.visited = true;
      for (let i = 0; i < node.behind.length; ++i) {
        if (node.behind[i] == null) {
          break;
        } else {
          visitNode(node.behind[i]);
          delete node.behind[i];
        }
      }
      node.depth = depth++;
      sortedElements.push(node.el);
    }
  };

  for (let i = 0; i < nodes.length; ++i) {
    visitNode(nodes[i]);
  }

  return sortedElements;
};
```

This method can be implemented easily using the JointJS library. In the following CodePen, we use a special JointJS event to recalculate the z-indexes of our elements whenever the position of an element changes.
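To see the ordering rule in action without any library, here is a self-contained toy version; the rect factory and its corner accessors are stand-ins for JointJS geometry objects, and the three boxes are an assumed example layout:

```javascript
// A minimal stand-in for a geometric rectangle with the two
// corner accessors the ordering rule relies on.
const rect = (x, y, width, height) => ({
  x, y, width, height,
  topLeft() { return { x: this.x, y: this.y }; },
  bottomRight() { return { x: this.x + this.width, y: this.y + this.height }; },
});

// Render order: an element is drawn after everything behind it.
// b is behind a when b's top-left is above-left of a's bottom-right.
const renderOrder = (elements) => {
  const nodes = elements.map((el) => ({ el, behind: [], visited: false }));
  for (const n of nodes) {
    const aMax = n.el.bottomRight();
    for (const m of nodes) {
      if (m === n) continue;
      const bMin = m.el.topLeft();
      if (bMin.x < aMax.x && bMin.y < aMax.y) n.behind.push(m);
    }
  }
  const sorted = [];
  const visit = (n) => {
    if (n.visited) return;
    n.visited = true;
    n.behind.forEach(visit); // depth-first: farthest elements first
    sorted.push(n.el);
  };
  nodes.forEach(visit);
  return sorted;
};

// Three boxes stepping down the isometric diagonal:
// `far` must be rendered first, `near` last, whatever the input order.
const far = rect(0, 0, 30, 30);
const mid = rect(40, 40, 30, 30);
const near = rect(80, 80, 30, 30);
```

Running `renderOrder([near, far, mid])` yields far, mid, near: back-to-front, which is exactly the DOM order SVG needs.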
As outlined above, we use the special z property of the element model to specify the rendering order and assign it during the depth-first traversal. (Note that the algorithm's behavior is undefined for intersecting elements, due to the nature of the implementation of isometric objects.)

Z-index calculations for isometric diagrams

The JointJS Demo

We have created a JointJS demo that combines all of these methods and techniques and also allows you to easily switch between 2D and isometric SVG markup. Crucially, as you can see, the powerful features of JointJS (which allow us to move elements, connect them with links, and create tools to edit them, among others) work just as well in the isometric view as they do in 2D. You can see the demo here. Throughout this article, we used our open-source JointJS library for illustration. If you would like to explore further, we invite you to a no-commitment 30-day trial of JointJS+, an advanced commercial extension of JointJS with additional powerful tools for creating delightful diagrams.