AWS SageMaker
In the first classes, we explored alternatives for ML deployment, both on-premise and in the cloud. Cloud services, Docker, and serverless resources let developers worry less about infrastructure and focus more on product development.
We learned that the ML lifecycle involves various challenges associated with model development, training, and deployment. Libraries, tracking tools, model registries, data versioning, and more aim to facilitate different stages of this cycle, and any assistance is welcome!
Today, we will work with AWS SageMaker, which provides a complete end-to-end workflow for ML. It makes it easier for developers and data scientists to build, train, and deploy ML models.
From the AWS documentation:
Amazon SageMaker is a fully managed machine learning service. With SageMaker, data scientists and developers can quickly and easily build and train machine learning models, and then directly deploy them into a production-ready hosted environment.
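To make this build-train-deploy workflow concrete, here is a minimal sketch using the SageMaker Python SDK with a scikit-learn estimator. The training script name, S3 path, IAM role ARN, and instance types below are placeholders you would replace with your own values.

```python
import sagemaker
from sagemaker.sklearn.estimator import SKLearn

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder IAM role

# Build: point SageMaker at a training script packaged with the scikit-learn container
estimator = SKLearn(
    entry_point="train.py",            # placeholder training script
    role=role,
    instance_type="ml.m5.large",
    instance_count=1,
    framework_version="1.2-1",
    sagemaker_session=session,
)

# Train: launch a managed training job reading data from S3 (placeholder path)
estimator.fit({"train": "s3://my-bucket/train/"})

# Deploy: create a real-time endpoint in a hosted environment
predictor = estimator.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.large",
)

# Invoke the endpoint, then clean it up to stop incurring costs
print(predictor.predict([[5.1, 3.5, 1.4, 0.2]]))
predictor.delete_endpoint()
```

The same fit/deploy pattern applies to the other framework estimators (TensorFlow, PyTorch, XGBoost, and so on); we will revisit these steps in more detail later.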
In the upcoming topics, we will explore some solutions offered by SageMaker and the problems it solves. Since the platform is packed with numerous features, we will only be able to cover a portion of them in this class.