Volume 30 | April 2024

Languages & Frameworks

Adopt

No blips

Trial

  • 84. Astro

    The Astro framework is gaining popularity in the community. One of our teams has used Astro to build content-driven websites like blogs and marketing websites. Astro is a multi-page application framework that renders HTML on the server and minimizes the amount of JavaScript sent over the wire. Although Astro encourages sending only HTML, it supports, when appropriate, select active components written in the front-end JavaScript framework of your choice. It does this through its island architecture: islands are regions of interactivity within a single page whose JavaScript is downloaded only when needed. Most areas of the site are thus converted to fast, static HTML, and the JavaScript parts are optimized for parallel loading. Our team likes both its page rendering performance and its build speed. The Astro component syntax is a simple extension of HTML, and the learning curve is quite gentle.

  • 85. DataComPy

    Comparing DataFrames is a common task in data engineering, often done to compare the output of two data transformation approaches and make sure no meaningful deviations or inconsistencies have occurred. DataComPy is a Python library that facilitates the comparison of two DataFrames in pandas, Spark and more. The library goes beyond basic equality checks by providing detailed insights into discrepancies at both the row and column level. DataComPy also lets you specify an absolute or relative tolerance for the comparison of numeric columns, as well as known differences it need not highlight in its report. Some of our teams use it as part of their smoke testing suite; they find it efficient when comparing large, wide DataFrames and consider its reports easy to understand and act upon.
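    To give a feel for the API, here's a minimal sketch of a pandas comparison with an absolute tolerance; the column names and tolerance value are illustrative, not from any team's actual suite:

    ```python
    import datacompy
    import pandas as pd

    df1 = pd.DataFrame({"acct_id": [1, 2, 3], "balance": [100.00, 200.00, 300.00]})
    df2 = pd.DataFrame({"acct_id": [1, 2, 3], "balance": [100.001, 200.00, 305.00]})

    compare = datacompy.Compare(
        df1,
        df2,
        join_columns="acct_id",  # key(s) used to align rows across the two frames
        abs_tol=0.01,            # absolute tolerance for numeric comparisons
        df1_name="original",     # labels used in the report
        df2_name="candidate",
    )

    print(compare.matches())  # False: one balance differs beyond the tolerance
    print(compare.report())   # human-readable row- and column-level discrepancies
    ```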

  • 86. Pinia

    Pinia is a store library and state management framework for Vue.js. It uses declarative syntax and offers its own state management API. Compared to Vuex, Pinia provides a simpler API with less ceremony, offers Composition-style APIs and, most importantly, has solid type inference support when used with TypeScript. Pinia is endorsed by the Vue.js team as a credible alternative to Vuex and is currently the official state management library for Vue.js. Our teams are leveraging Pinia for its simplicity and ease of implementation.

  • 87. Ray

    Today's machine learning (ML) workloads are increasingly compute-intensive. As convenient as they are, single-node development environments such as your laptop cannot scale to meet these demands. Ray is a unified framework for scaling AI and Python code from laptop to cluster. Ray is essentially a well-encapsulated distributed computing framework with a series of AI libraries to simplify ML work. By integrating with other frameworks (e.g., PyTorch and TensorFlow), it can be used to build large-scale ML platforms. Companies like OpenAI and ByteDance use Ray heavily for model training and inference. We also use its AI libraries to help with distributed training and hyperparameter tuning on our projects. We recommend you try Ray when building scalable ML projects.
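    Ray's core API shows how little ceremony is needed to distribute plain Python; the following minimal sketch runs locally but scales to a cluster unchanged:

    ```python
    import ray

    ray.init()  # starts a local Ray runtime; pass an address to join a cluster

    @ray.remote
    def square(x):
        # Executed as a Ray task, scheduled across the available workers.
        return x * x

    futures = [square.remote(i) for i in range(8)]  # returns futures immediately
    print(ray.get(futures))  # [0, 1, 4, 9, 16, 25, 36, 49]
    ```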

Assess

  • 88. Android Adaptability

    Some mobile applications and games can be so demanding that they cause thermal throttling within a few minutes. In this state, CPU and GPU frequencies are reduced to help cool the device, but this also results in reduced frame rates in games. When the thermal situation improves, frame rates increase again, and the cycle repeats, making the software feel janky. Android Adaptability, a new set of libraries, allows application developers to respond to changing performance and thermal situations. The Android Dynamic Performance Framework (ADPF) includes the Thermal API, which provides information about the thermal state, and the Hint API, which helps Android choose the optimal CPU operating point and core placement. Teams using Unity will find the Unity Adaptive Performance package helpful, as it works with both APIs.

  • 89. Concrete ML

    Previously, we blipped the Homomorphic Encryption technique, which allows computations to be performed directly on encrypted data. Concrete ML is an open-source tool that applies this technique to enable privacy-preserving machine learning. Built on top of Concrete, it simplifies the use of fully homomorphic encryption (FHE) for data scientists, helping them automatically turn machine learning models into their homomorphic equivalents. Concrete ML's built-in models have APIs that are almost identical to their scikit-learn counterparts. You can also convert PyTorch networks to FHE with Concrete ML's conversion APIs. Note, however, that FHE with Concrete ML can be slow without tuned hardware.
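    The scikit-learn resemblance is easiest to see in code. This is a minimal sketch based on Concrete ML's documented workflow; the synthetic data and the n_bits value are illustrative (quantization settings affect both accuracy and FHE runtime):

    ```python
    import numpy as np
    from concrete.ml.sklearn import LogisticRegression

    X = np.random.rand(100, 4)
    y = (X[:, 0] > 0.5).astype(int)

    model = LogisticRegression(n_bits=8)  # quantization knob specific to Concrete ML
    model.fit(X, y)                       # training happens in the clear, as in scikit-learn
    model.compile(X)                      # compiles the model into an FHE circuit

    y_pred = model.predict(X[:5], fhe="execute")  # inference runs on encrypted data
    ```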

  • 90. Crabviz

    Crabviz is a Visual Studio Code plug-in to create call graphs. The graphs are interactive, which is essential when working with even moderately large codebases such as a microservice. They show types, methods, functions and interfaces grouped by file and also display function calling relationships and interface implementation relationships. Because Crabviz is based on the Language Server Protocol, it supports any number of languages, as long as the corresponding language server is installed. This means, though, that Crabviz is limited to static code analysis, which might not be sufficient for some use cases. The plug-in is written in Rust and is available on the Visual Studio Code Marketplace.

  • 91. Crux

    Crux is an open-source cross-platform app development framework written in Rust. Inspired by the Elm architecture, Crux keeps business logic code in a shared core and builds the UI layer with native frameworks like SwiftUI, Jetpack Compose, React/Vue or WebAssembly-based frameworks (like Yew). With Crux, you can write side-effect-free behavior code in Rust and share it across iOS, Android and the web.

  • 92. Databricks Asset Bundles

    The recent public preview release of Databricks Asset Bundles (DABs), included with Databricks CLI version 0.205 and above, is becoming the officially recommended way to package Databricks assets for source control, testing and deployment. It has started to replace dbx among our teams. DABs supports packaging the configuration of workflows, jobs and tasks, as well as the code to be executed in those tasks, as a bundle that can be deployed to multiple environments. It comes with templates for common types of assets and supports custom templates. While DABs includes templates for notebooks and supports deploying them to production, we continue to recommend against productionizing notebooks and instead encourage intentionally writing production code with the engineering practices that support the maintainability, resiliency and scalability needs of such workloads.

  • 93. Electric

    Electric is a local-first sync framework for mobile and web applications. Local-first is a development paradigm in which your application code talks directly to an embedded local database, and data syncs in the background via active-active database replication to a central database. With Electric, you get SQLite as the local embedded option and PostgreSQL as the central store. Although local-first greatly improves user experience, it is not without challenges, and the inventors of CRDTs have worked on the Electric framework to ease the pain.

  • 94. LiteLLM

    LiteLLM is a library for seamless integration with various large language model (LLM) providers' APIs that standardizes interactions through an OpenAI API format. It supports an extensive array of providers and models and offers a unified interface for completion, embedding and image generation. LiteLLM simplifies integration by translating inputs to match each provider's specific endpoint requirements. This is particularly valuable in the current landscape, where the lack of a standardized API specification among LLM providers complicates the inclusion of multiple LLMs in a project. Our teams have leveraged LiteLLM to swap underlying models in LLM applications, addressing a significant integration challenge. However, it's crucial to acknowledge that different models respond differently to identical prompts, so a consistent invocation method alone won't fully optimize completion performance. Note that LiteLLM has several other features, such as a proxy server, that are beyond the purview of this blip.
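    The unified interface is best seen in a small sketch; the model identifiers are illustrative, and the corresponding provider API keys are assumed to be set in the environment:

    ```python
    from litellm import completion

    messages = [{"role": "user", "content": "Summarize the CAP theorem in one sentence."}]

    # The same call shape works across providers; only the model string changes.
    openai_response = completion(model="gpt-3.5-turbo", messages=messages)
    anthropic_response = completion(model="claude-3-haiku-20240307", messages=messages)

    # Responses come back in the OpenAI format regardless of the provider.
    print(openai_response.choices[0].message.content)
    print(anthropic_response.choices[0].message.content)
    ```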

  • 95. LLaMA-Factory

    We continue to caution against rushing to fine-tune large language models (LLMs) unless it's absolutely critical; it comes with significant overhead in terms of cost and expertise. However, we think LLaMA-Factory can be useful when fine-tuning is needed. It's an open-source, easy-to-use fine-tuning and training framework for LLMs. With support for LLaMA, BLOOM, Mistral, Baichuan, Qwen and ChatGLM, it makes a complex concept like fine-tuning relatively accessible. Our teams successfully used LLaMA-Factory's LoRA tuning for a LLaMA 7B model, so if you have a need for fine-tuning, this framework is worth assessing.

  • 96. MLX

    MLX is an open-source array framework designed for efficient and flexible machine learning on Apple silicon. It lets data scientists and machine learning (ML) engineers access the integrated GPU, allowing them to choose the hardware best suited to their needs. The design of MLX is inspired by frameworks like NumPy, PyTorch and JAX, to name a few. One of the key differentiators is MLX's unified memory model, which eliminates the overhead of data transfers between the CPU and GPU, resulting in faster execution. This feature makes running models on devices such as iPhones feasible, opening a huge opportunity for on-device AI applications. Although niche, this framework is worth pursuing for the ML developer community.
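    For a flavor of the NumPy- and JAX-inspired design, here's a minimal sketch using MLX's lazy arrays and composable function transformations:

    ```python
    import mlx.core as mx

    a = mx.array([1.0, 2.0, 3.0])
    b = mx.array([4.0, 5.0, 6.0])

    c = a * b + 1.0  # computation is lazy...
    mx.eval(c)       # ...and materialized on demand, in unified memory

    # Gradients via function transformations, similar in spirit to JAX.
    def loss(w):
        return mx.sum((w * a - b) ** 2)

    grads = mx.grad(loss)(mx.array([1.0, 1.0, 1.0]))
    print(grads)
    ```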

  • 97. Mojo

    Mojo is a new AI-first programming language. It aims to bridge the gap between research and production by combining the Python syntax and ecosystem with systems programming and metaprogramming features. It’s the first language to take advantage of the new MLIR compiler backend and packs cool features like zero-cost abstraction, auto tuning, eager destruction, tail call optimization and better single instruction, multiple data (SIMD) ergonomics. We like Mojo a lot and encourage you to give it a try. The Mojo SDK is currently available for Ubuntu and macOS operating systems.

  • 98. Otter

    Otter is a contention-free cache library in Go. Although Go has several such libraries, we want to highlight Otter for two reasons: its excellent throughput and its clever implementation of the S3-FIFO algorithm for good cache hit ratio. Otter also supports generics, so you can use any comparable types as keys and any types as values.

  • 99. Pkl

    Pkl is a configuration language and tooling created by Apple for internal use and now open-sourced. The key feature of Pkl is its type and validation system, which allows configuration errors to be caught prior to deployment. It generates JSON, .plist, YAML and .properties files and has extensive IDE and language integrations, including code generation.

  • 100. Rust for UI

    The impact of Rust continues to grow, and many of the build and command-line tools we’ve covered recently are written in Rust. Now, we’re seeing movement in using Rust for UI development as well. The majority of teams who prefer to use the same language for code running in the browser and on the server opt to use JavaScript or TypeScript. However, with WebAssembly you can use Rust in the browser, and this is becoming a little more common now. Frameworks like Leptos and sauron focus on web development, while Dioxus and several other frameworks support cross-platform desktop and mobile app development in addition to web development.

  • 101. vLLM

    vLLM is a high-throughput and memory-efficient inferencing and serving engine for large language models (LLMs) that's particularly effective thanks to its implementation of continuous batching for incoming requests. It supports several deployment options, including distributed tensor-parallel inference and serving with the Ray run time, deployment in the cloud with SkyPilot and deployment with NVIDIA Triton, Docker and LangChain. Our teams have had good experience running dockerized vLLM workers on an on-prem virtual machine, integrating with an OpenAI-compatible API server, which, in turn, is leveraged by a range of applications, including IDE plugins for coding assistance and chatbots. Our teams use vLLM to run models such as CodeLlama 70B, CodeLlama 7B and Mixtral. Also notable is the engine's scaling capability: it takes only a couple of config changes to go from running a 7B model to a 70B one. If you're looking to productionize LLMs, vLLM is worth exploring.
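    As a minimal sketch of vLLM's offline inference API (the model name matches one mentioned above; the sampling settings are illustrative):

    ```python
    from vllm import LLM, SamplingParams

    # Loads weights from Hugging Face; requires a GPU with enough memory.
    llm = LLM(model="codellama/CodeLlama-7b-hf")
    params = SamplingParams(temperature=0.2, max_tokens=128)

    # Incoming requests are continuously batched under the hood for throughput.
    outputs = llm.generate(["def fibonacci(n):"], params)
    print(outputs[0].outputs[0].text)
    ```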

  • 102. Voyager

    Voyager is a navigation library built for Android's Jetpack Compose. It supports several navigation types, including Linear, BottomSheet, Tab and Nested, and its screen model integrates with popular frameworks like Koin and Hilt. When using Jetpack Compose in a multiplatform project, Voyager is a good choice to implement a common navigation pattern across all supported platforms. Development on Voyager has picked up again and the library reached version 1.0 in December 2023.

  • 103. WGPU

    wgpu is a graphics library for Rust based on the WebGPU API, notable for its ability to handle general-purpose graphics and compute tasks on the GPU efficiently. wgpu aims to fill the gap left by the phasing out of older graphics standards such as OpenGL and WebGL. It introduces a modern approach to graphics development that spans both native applications and web-based projects. Its integration with WebAssembly further enables graphics and compute applications to run in the browser. wgpu represents a step forward in making advanced graphics programming more accessible to web developers, with applications ranging from gaming to sophisticated web animations, positioning it as an exciting technology to assess.

  • 104. Zig

    Zig is a new language that shares many attributes with C but with stronger typing, easier memory allocation and support for namespacing, among a host of other features. Zig's aim is to provide a very simple language with straightforward compilation that minimizes side effects and delivers predictable, easy-to-trace execution. Zig also provides simplified access to LLVM's cross-compilation capability. Some of our developers have found this feature so valuable they're using Zig as a cross-compiler even though they're not writing Zig code. We see teams in the industry using Zig to help build C/C++ toolchains. Zig is a novel language worth looking into for applications where C is being considered or already in use.

Hold

  • 105. LangChain

    We mentioned some of the emerging criticisms of LangChain in the previous Radar. Since then, we've become even more wary of it. While the framework offers a powerful set of features for building applications with large language models (LLMs), we've found it hard to use and overcomplicated. LangChain gained early popularity and attention in the space, which made it a default for many developers. However, as LangChain tries to evolve and keep up with the fast pace of change, it has become harder and harder for developers to navigate its changing concepts and patterns. We've also found the API design to be inconsistent and verbose. As such, it often obscures what is actually going on under the hood, making it hard for developers to understand and control how LLMs and the various patterns around them actually work. We're moving LangChain to the Hold ring to reflect this. In many of our use cases, we've found that an implementation with minimal use of specialized frameworks is sufficient. Depending on your use case, you may also want to consider other frameworks such as Semantic Kernel, Haystack or LiteLLM.

Unable to find something you expected to see?

Each edition of the Radar features blips reflecting what we came across during the previous six months. We might have covered what you are looking for in a previous Radar already. We sometimes cull things just because there are too many to talk about. A blip might also be missing because the Radar reflects our experience; it is not based on a comprehensive market analysis.
