Amazon SageMaker AI is a fully managed MLOps and generative AI platform from Amazon Web Services that covers the complete machine-learning lifecycle — from data preparation and experimentation through training, deployment, monitoring, and pipeline automation. The platform is an integral part of the Amazon SageMaker family, which also includes SageMaker Unified Studio, SageMaker Lakehouse, and SageMaker Catalog.
SageMaker Studio is a browser-based IDE that unifies notebooks, experiment tracking, debugging, profiling, and model management in a single interface. SageMaker JumpStart provides a catalog of ready-to-deploy foundation models, including the Llama, Mistral, DeepSeek, and Stable Diffusion families, with one-click deployment and fine-tuning that require no infrastructure code. SageMaker Pipelines is a native ML pipeline orchestrator with CI/CD integration, artifact versioning, and lineage tracking. Model Registry manages model versions with approval workflows that gate production deployment.
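To make the deployment flow concrete, here is a minimal sketch of the raw boto3 request payloads that a one-click JumpStart deployment automates behind the scenes. The function and all resource names (image URI, model artifact path, role ARN) are illustrative placeholders, not real resources; the dict shapes follow the `CreateModel` and `CreateEndpointConfig` APIs of the `sagemaker` boto3 client.

```python
# Sketch (assumed helper): build the two request payloads a real-time
# deployment needs. Nothing here calls AWS; the dicts would be passed to
# boto3's sagemaker client as create_model(**cm) / create_endpoint_config(**cfg).

def build_deploy_requests(name, image_uri, model_data_url, role_arn,
                          instance_type="ml.g5.2xlarge"):
    """Return (create_model, create_endpoint_config) request dicts."""
    create_model = {
        "ModelName": name,
        "PrimaryContainer": {
            "Image": image_uri,            # inference container image URI
            "ModelDataUrl": model_data_url # S3 path to model.tar.gz
        },
        "ExecutionRoleArn": role_arn,
    }
    create_endpoint_config = {
        "EndpointConfigName": f"{name}-config",
        "ProductionVariants": [{
            "VariantName": "AllTraffic",
            "ModelName": name,
            "InstanceType": instance_type,
            "InitialInstanceCount": 1,
        }],
    }
    return create_model, create_endpoint_config
```

A `create_endpoint` call referencing the endpoint config would then provision the actual endpoint.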
SageMaker supports distributed training on GPU and Trainium clusters with automatic model and data parallelism (SageMaker Distributed Training). Built-in algorithms and framework support for TensorFlow, PyTorch, MXNet, and scikit-learn let training jobs run on managed infrastructure with no server configuration. SageMaker Clarify detects bias in training data and explains model predictions using SHAP values.
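As a sketch of how data parallelism is switched on, the following builds the keyword arguments one would pass to a SageMaker Python SDK estimator (e.g. `sagemaker.pytorch.PyTorch`); the `distribution` dict enabling the `smdistributed` data-parallel library is the documented shape, while the script name and role ARN are placeholders. Nothing is launched here.

```python
# Assumed helper: assemble estimator kwargs for a data-parallel training job.
# In practice these would be passed to sagemaker.pytorch.PyTorch(**kwargs)
# and kwargs' .fit() call would start the managed job.

def data_parallel_estimator_kwargs(entry_point, role_arn, instance_count=2):
    return {
        "entry_point": entry_point,           # training script, e.g. "train.py"
        "role": role_arn,                     # execution role (placeholder)
        "framework_version": "2.0",
        "py_version": "py310",
        "instance_type": "ml.p4d.24xlarge",   # multi-GPU instances for data parallelism
        "instance_count": instance_count,
        # Documented switch for SageMaker's data-parallel library:
        "distribution": {"smdistributed": {"dataparallel": {"enabled": True}}},
    }
```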
The platform offers four hosting modes: real-time endpoints (low latency), serverless inference (no infrastructure management), asynchronous inference (large payloads), and batch transform (offline processing of large datasets). Auto-scaling keeps costs in line with traffic, while VPC deployment provides network isolation.
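The decision between the four modes can be summarized as a small heuristic. This is an illustrative rule of thumb, not an AWS API; the 6 MB real-time payload ceiling reflects a documented service quota at the time of writing and should be treated as approximate.

```python
# Illustrative heuristic (not part of SageMaker) for picking a hosting mode.

def choose_hosting_mode(payload_mb, needs_low_latency, is_offline_dataset):
    if is_offline_dataset:
        return "batch-transform"   # offline scoring of a whole dataset
    if payload_mb > 6:
        return "async-inference"   # large payloads are queued, not served inline
    if needs_low_latency:
        return "real-time"         # provisioned endpoint, lowest latency
    return "serverless"            # spiky or intermittent traffic, no instances to manage
```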
SageMaker Feature Store provides centralized ML feature storage with support for online serving (low-latency inference access), offline storage (historical training data), and streaming ingestion. Data Wrangler enables visual data preparation and transformation from over 40 sources — including Amazon S3, Redshift, Athena, and AWS Glue — without writing code.
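For online serving, feature rows are written through the `PutRecord` API of the `sagemaker-featurestore-runtime` boto3 client. The sketch below shapes a plain Python dict into that request format; the feature group name and values are illustrative, and no AWS call is made.

```python
# Assumed helper: convert a feature row into a PutRecord request payload.
# Feature Store represents every value as a string (ValueAsString).

def to_put_record(feature_group_name, row):
    return {
        "FeatureGroupName": feature_group_name,
        "Record": [
            {"FeatureName": name, "ValueAsString": str(value)}
            for name, value in row.items()
        ],
    }
```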
SageMaker AI meets FedRAMP High, FedRAMP Moderate, SOC 2 Type II, PCI DSS, and DoD Impact Level 5 requirements, and supports HIPAA and GDPR compliance. The platform supports VPC isolation, encryption at rest and in transit, identity management via AWS IAM Identity Center (with SAML 2.0, OIDC, Okta, and Microsoft Entra ID federation), and full audit logging through AWS CloudTrail. Resources can be scoped per project and per user with granular cost controls and budget alerts.
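Per-project scoping is typically enforced with tag-based IAM policies. The following is an illustrative least-privilege policy built with the generic `aws:ResourceTag` condition key; the tag name (`project`) and the action list are example choices, not a prescribed configuration.

```python
import json

# Illustrative policy generator: allow selected SageMaker actions only on
# resources tagged with the given project name.

def project_scoped_policy(project):
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": [
                "sagemaker:InvokeEndpoint",
                "sagemaker:DescribeTrainingJob",
            ],
            "Resource": "*",
            "Condition": {
                "StringEquals": {"aws:ResourceTag/project": project}
            },
        }],
    }

# The dict serializes directly to the JSON policy document IAM expects:
# json.dumps(project_scoped_policy("fraud-detect"), indent=2)
```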
SageMaker AI uses a pay-as-you-go model: charges are based on training instance runtime and endpoint uptime (billed per second), data processed, and, optionally, provisioned throughput for JumpStart foundation models. Per-project and per-user cost limits are available with budget alerts. The platform offers Standard and Enterprise 24/7 support tiers with a 99.9% SLA.
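Per-second billing makes cost estimation simple arithmetic. The helper below is a back-of-envelope sketch; the hourly rate is a caller-supplied input because actual prices vary by instance type and region, and no real prices are assumed here.

```python
# Back-of-envelope estimate: per-second endpoint cost from an hourly rate.

def endpoint_cost_usd(uptime_seconds, hourly_rate_usd, instance_count=1):
    """Estimated cost of keeping instance_count instances up for uptime_seconds."""
    return instance_count * uptime_seconds * (hourly_rate_usd / 3600.0)
```

For example, two hours of a single instance at a hypothetical $1.00/hour comes to about $2.00.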