With its 2.0 release, TensorFlow retains its prominence as the industry's leading machine learning framework. TensorFlow began as a numerical processing package that gradually expanded to include libraries supporting a variety of ML approaches and execution environments, ranging from mobile CPUs to large GPU clusters. Along the way, a slew of frameworks became available to simplify network creation and training. At the same time, other frameworks, notably PyTorch, offered an imperative programming model that made code easier to write and debug. TensorFlow 2.0 now defaults to imperative flow (eager execution) and adopts Keras as the single high-level API. While these changes modernize TensorFlow's usability and make it more competitive with PyTorch, the release is a significant rewrite that often breaks backward compatibility: many tools and serving frameworks in the TensorFlow ecosystem won't immediately work with the new version. For the time being, consider whether you want to design and experiment in TensorFlow 2.0 but revert to version 1 to serve and run your models in production.
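To see what eager execution and the Keras-first API mean in practice, here is a minimal sketch, assuming TensorFlow 2.x is installed and importable as tensorflow; the layer sizes and random data are illustrative, not taken from the review:

```python
import numpy as np
import tensorflow as tf

# Eager execution is the default in TensorFlow 2.0: operations run
# immediately and return concrete values, with no session or graph setup.
x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
print(tf.matmul(x, x))  # the product is printed right away

# Keras is the single high-level API, bundled as tf.keras.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# Train on a tiny random data set, just to show the imperative workflow.
features = np.random.rand(32, 4).astype("float32")
targets = np.random.rand(32, 1).astype("float32")
model.fit(features, targets, epochs=2, verbose=0)
```

The same script in TensorFlow 1.x would require building a graph and running it inside a session; in 2.0 the code reads and debugs like ordinary Python.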
Google's TensorFlow is an open source machine-learning platform that can be used for everything from research to production, and it will run on hardware from a mobile CPU all the way to a large GPU compute cluster. It's an important platform because it makes implementing deep-learning algorithms much more accessible and convenient. Despite the hype, though, TensorFlow isn't anything new algorithmically: the deep-learning techniques it implements have been available in the public domain via academia for some time. It's also important to realize that most businesses are not yet doing even basic predictive analytics, and that jumping to deep learning likely won't help make sense of most data sets. For those who do have the right problem and data set, however, TensorFlow is a useful toolkit.