Over the last few decades, computational notebooks, first introduced by Wolfram Mathematica, have evolved to support scientific research, exploration and educational workflows. Naturally, with the likes of Jupyter and Databricks notebooks, they've become a great companion for data science workflows, providing a simple, intuitive, interactive computation environment that combines code for analyzing data with rich text and visualizations to tell a data story. Notebooks were designed as a medium for modern scientific communication and innovation. In recent years, however, we've seen a trend toward using notebooks to run the type of production-quality code typically used to drive enterprise operations, and notebook platform providers now advertise the use of their exploratory notebooks in production. This is a case of good intentions, democratizing programming for data scientists, implemented poorly and at the cost of scalability, maintainability, resiliency and all the other qualities that long-lived production code needs. We don't recommend productionizing notebooks; instead, we encourage empowering data scientists to build production-ready code with the right programming frameworks, thus simplifying continuous delivery tooling and abstracting complexity away through end-to-end ML platforms.
Jupyter Notebooks have gained popularity among data scientists, who use them for exploratory analysis, early-stage development and knowledge sharing. This rise in popularity has led to a trend of productionizing Jupyter Notebooks by providing the tools and support to execute them at scale. Although we wouldn't want to discourage anyone from using their tools of choice, we don't recommend using Jupyter Notebooks to build scalable, maintainable, long-lived production code: they lack effective version control, error handling, modularity and extensibility, among other basic capabilities required for production-ready software. Instead, we encourage developers and data scientists to work together on solutions that empower data scientists to build production-ready machine learning models using continuous delivery practices and the right programming frameworks. We caution against productionizing Jupyter Notebooks to work around inefficiencies in continuous delivery pipelines for machine learning or inadequate automated testing.
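To make the recommendation concrete, the sketch below shows the kind of refactoring we mean, assuming a Python and pandas workflow; the function, column and test names are hypothetical. Analysis logic that would otherwise accumulate in notebook cells lives in an ordinary module the notebook imports for exploration, while that same module is version controlled, code reviewed and exercised by automated tests in a delivery pipeline.

```python
# A minimal sketch, assuming a pandas-based workflow; the names below are
# illustrative, not a prescribed structure. Logic extracted from notebook
# cells into a plain module can be diffed, reviewed and unit tested.
import pandas as pd


def clean_orders(raw: pd.DataFrame) -> pd.DataFrame:
    """Drop incomplete rows and normalize column names."""
    cleaned = raw.dropna(subset=["order_id", "amount"]).copy()
    cleaned.columns = [col.strip().lower() for col in cleaned.columns]
    return cleaned


def total_revenue(orders: pd.DataFrame) -> float:
    """Aggregate revenue, failing loudly instead of silently mid-notebook."""
    if "amount" not in orders.columns:
        raise KeyError("expected an 'amount' column")
    return float(orders["amount"].sum())


def test_total_revenue() -> None:
    """A unit test a CI pipeline can run against every commit."""
    orders = pd.DataFrame({"order_id": [1, 2], "amount": [10.0, 5.0]})
    assert total_revenue(orders) == 15.0
```

The notebook remains the place for exploration, importing clean_orders and total_revenue like any other dependency, while a test runner such as pytest exercises the same functions in CI on every change.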