Use Amazon DocumentDB to build no-code machine learning solutions in Amazon SageMaker Canvas


We’re excited to announce the launch of Amazon DocumentDB (with MongoDB compatibility) integration with Amazon SageMaker Canvas, allowing Amazon DocumentDB customers to build and use generative AI and machine learning (ML) solutions without writing code. Amazon DocumentDB is a fully managed native JSON document database that makes it simple and cost-effective to operate critical document workloads at virtually any scale without managing infrastructure. Amazon SageMaker Canvas is a no-code ML workspace offering ready-to-use models, including foundation models, and the ability to prepare data and build and deploy custom models.

In this post, we discuss how to bring data stored in Amazon DocumentDB into SageMaker Canvas and use that data to build ML models for predictive analytics. Without creating and maintaining data pipelines, you will be able to power ML models with your unstructured data stored in Amazon DocumentDB.

Solution overview

Let’s assume the role of a business analyst for a food delivery company. Your mobile app stores information about restaurants in Amazon DocumentDB because of its scalability and flexible schema capabilities. You want to gather insights on this data and build an ML model to predict how new restaurants will be rated, but find it challenging to perform analytics on unstructured data. You encounter bottlenecks because you need to rely on data engineering and data science teams to accomplish these goals.

This new integration solves these problems by making it simple to bring Amazon DocumentDB data into SageMaker Canvas and immediately start preparing and analyzing data for ML. Additionally, SageMaker Canvas removes the dependency on ML expertise to build high-quality models and generate predictions.

We demonstrate how to use Amazon DocumentDB data to build ML models in SageMaker Canvas in the following steps:

  1. Create an Amazon DocumentDB connector in SageMaker Canvas.
  2. Analyze data using generative AI.
  3. Prepare data for machine learning.
  4. Build a model and generate predictions.

Prerequisites

To implement this solution, complete the following prerequisites:

  1. Have AWS Cloud admin access with an AWS Identity and Access Management (IAM) user with the permissions required to complete the integration.
  2. Complete the environment setup using AWS CloudFormation through either of the following options:
    1. Deploy a CloudFormation template into a new VPC – This option builds a new AWS environment that consists of the VPC, private subnets, security groups, IAM execution roles, Amazon Cloud9, required VPC endpoints, and SageMaker domain. It then deploys Amazon DocumentDB into this new VPC. Download the template or quick launch the CloudFormation stack by choosing Launch Stack:
      Launch CloudFormation stack
    2. Deploy a CloudFormation template into an existing VPC – This option creates the required VPC endpoints, IAM execution roles, and SageMaker domain in an existing VPC with private subnets. Download the template or quick launch the CloudFormation stack by choosing Launch Stack:
      Launch CloudFormation stack

Note that if you’re creating a new SageMaker domain, you must configure the domain to be in a private VPC without internet access in order to add the connector to Amazon DocumentDB. To learn more, refer to Configure Amazon SageMaker Canvas in a VPC without internet access.

  3. Follow the tutorial to load sample restaurant data into Amazon DocumentDB (a minimal pymongo sketch follows this list if you prefer to load a few documents programmatically).
  4. Add access to Amazon Bedrock and the Anthropic Claude model within it. For more information, see Add model access.
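If you’d like to load a few documents programmatically instead of (or in addition to) following the tutorial, the following is a minimal pymongo sketch. The cluster endpoint, credentials, database and collection names, and document values are placeholders rather than part of the tutorial’s dataset; the field names mirror the columns referenced later in this post.

```python
# Minimal sketch: insert a few illustrative restaurant documents into Amazon DocumentDB.
# The endpoint, credentials, and database/collection names below are placeholders.
from pymongo import MongoClient

client = MongoClient(
    "mongodb://<user>:<password>@<docdb-cluster-endpoint>:27017/"
    "?tls=true&tlsCAFile=global-bundle.pem&replicaSet=rs0"
    "&readPreference=secondaryPreferred&retryWrites=false"
)
restaurants = client["business"]["restaurants"]  # hypothetical database and collection names

restaurants.insert_many([
    {
        "name": "Sample Curry House",
        "type_of_food": "Curry",
        "address line 2": "London",
        "rating": 5.5,
        "URL": "http://example.com/sample-curry-house",
        "restaurant_id": "0000001",
    },
    {
        "name": "Sample Pizza Place",
        "type_of_food": "Pizza",
        "address line 2": "Manchester",
        "rating": 4.0,
        "URL": "http://example.com/sample-pizza-place",
        "restaurant_id": "0000002",
    },
])
print("Documents in collection:", restaurants.count_documents({}))
```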

Create an Amazon DocumentDB connector in SageMaker Canvas

After you create your SageMaker domain, complete the following steps:

  1. On the Amazon DocumentDB console, choose No-code machine learning in the navigation pane.
  2. Under Choose a domain and profile, choose your SageMaker domain and user profile.
  3. Choose Launch Canvas to launch SageMaker Canvas in a new tab.

When SageMaker Canvas finishes loading, you will land on the Data flows tab.

  4. Choose Create to create a new data flow.
  5. Enter a name for your data flow and choose Create.
  6. Add a new Amazon DocumentDB connection by choosing Import data, then choose Tabular for Dataset type.
  7. On the Import data page, for Data Source, choose DocumentDB and Add Connection.
  8. Enter a connection name such as demo and choose your desired Amazon DocumentDB cluster.

Note that SageMaker Canvas will prepopulate the drop-down menu with clusters in the same VPC as your SageMaker domain.

  9. Enter a user name, password, and database name.
  10. Finally, select your read preference.

To protect the performance of primary instances, SageMaker Canvas defaults to Secondary, meaning that it will only read from secondary instances. When the read preference is Secondary preferred, SageMaker Canvas reads from available secondary instances, but will read from the primary instance if a secondary instance is not available. For more information on how to configure an Amazon DocumentDB connection, see Connect to a database stored in AWS.
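SageMaker Canvas manages this connection for you, but the same read preference semantics apply to any DocumentDB client. For context, here is a short pymongo sketch (with placeholder endpoint and credentials) showing the two options described above:

```python
# Read preference options for an Amazon DocumentDB cluster (placeholder connection details).
from pymongo import MongoClient

# Secondary: reads are served only by secondary instances.
secondary_only = MongoClient(
    "mongodb://<user>:<password>@<docdb-cluster-endpoint>:27017/"
    "?tls=true&tlsCAFile=global-bundle.pem&replicaSet=rs0"
    "&readPreference=secondary&retryWrites=false"
)

# Secondary preferred: reads go to secondaries when available, otherwise to the primary.
secondary_preferred = MongoClient(
    "mongodb://<user>:<password>@<docdb-cluster-endpoint>:27017/"
    "?tls=true&tlsCAFile=global-bundle.pem&replicaSet=rs0"
    "&readPreference=secondaryPreferred&retryWrites=false"
)
```

Either way, keeping analytical reads off the primary instance preserves write performance, which is why Canvas defaults to Secondary.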

  11. Choose Add connection.

If the connection is successful, you will see the collections in your Amazon DocumentDB database shown as tables.

  12. Drag your table of choice to the blank canvas. For this post, we add our restaurant data.

The first 100 rows are displayed as a preview.

  13. To start analyzing and preparing your data, choose Import data.
  14. Enter a dataset name and choose Import data.

Analyze data using generative AI

Next, we want to get some insights on our data and look for patterns. SageMaker Canvas provides a natural language interface to analyze and prepare data. When the Data tab loads, you can start chatting with your data with the following steps:

  1. Choose Chat for data prep.
  2. Gather insights about your data by asking questions like the samples shown in the following screenshots.

To learn more about how to use natural language to explore and prepare data, refer to Use natural language to explore and prepare data with a new capability of Amazon SageMaker Canvas.

Let’s get a deeper sense of our data quality by using the SageMaker Canvas Data Quality and Insights Report, which automatically evaluates data quality and detects abnormalities.

  3. On the Analyses tab, choose Data Quality and Insights Report.
  4. Choose rating as the target column and Regression as the problem type, then choose Create.

This will simulate model training and provide insights on how we can improve our data for machine learning. The complete report is generated in a few minutes.

Our report shows that 2.47% of rows in our target have missing values; we’ll address that in the next step. Additionally, the analysis shows that the address line 2, name, and type_of_food features have the most prediction power in our data. This suggests that basic restaurant information like location and cuisine can have an outsized influence on ratings.
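The report computes these statistics automatically; if you want to sanity-check them outside Canvas, a rough pandas equivalent might look like the following. It assumes the dataset has been exported to a hypothetical restaurants.csv file with the column names used in this post.

```python
# Rough, illustrative check of the report's findings (assumes a hypothetical CSV export).
import pandas as pd

df = pd.read_csv("restaurants.csv")

# Percentage of rows with a missing target value (the report showed 2.47%).
missing_pct = df["rating"].isna().mean() * 100
print(f"Missing target values: {missing_pct:.2f}%")

# A crude proxy for the prediction power of cuisine type: average rating per category.
print(df.groupby("type_of_food")["rating"].mean().sort_values(ascending=False).head())
```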

Prepare data for machine learning

SageMaker Canvas provides over 300 built-in transformations to prepare your imported data. For more information on the transformation features of SageMaker Canvas, refer to Prepare data with advanced transformations. Let’s add some transformations to get our data ready for training an ML model (a pandas sketch of the equivalent steps follows the list below).

  1. Navigate back to the Data flow page by choosing the name of your data flow at the top of the page.
  2. Choose the plus sign next to Data types and choose Add transform.
  3. Choose Add step.
  4. Let’s rename the address line 2 column to cities.
    1. Choose Manage columns.
    2. Choose Rename column for Transform.
    3. Choose address line 2 for Input column, enter cities for New name, and choose Add.
  5. Additionally, let’s drop some unnecessary columns.
    1. Add a new transform.
    2. For Transform, choose Drop column.
    3. For Columns to drop, choose URL and restaurant_id.
    4. Choose Add.
  6. Our rating feature column has some missing values, so let’s fill in those rows with the average value of this column.
    1. Add a new transform.
    2. For Transform, choose Impute.
    3. For Column type, choose Numeric.
    4. For Input columns, choose the rating column.
    5. For Imputing strategy, choose Mean.
    6. For Output column, enter rating_avg_filled.
    7. Choose Add.
  7. We can drop the rating column because we have a new column with filled values.
  8. Because type_of_food is categorical in nature, we’ll want to numerically encode it. Let’s encode this feature using the one-hot encoding technique.
    1. Add a new transform.
    2. For Transform, choose One-hot encode.
    3. For Input columns, choose type_of_food.
    4. For Invalid handling strategy, choose Keep.
    5. For Output style, choose Columns.
    6. For Output column, enter encoded.
    7. Choose Add.
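For reference, here is a minimal pandas sketch of the same four transformations (rename, drop, mean imputation, and one-hot encoding). It assumes the dataset is already loaded into a DataFrame named df with the column names used above; Canvas applies these steps for you, so this is purely illustrative.

```python
# Illustrative pandas equivalent of the Canvas transforms above (assumes a DataFrame `df`).
import pandas as pd

# 1. Rename the address line 2 column to cities.
df = df.rename(columns={"address line 2": "cities"})

# 2. Drop unnecessary columns.
df = df.drop(columns=["URL", "restaurant_id"])

# 3. Fill missing ratings with the column mean in a new column, then drop the original.
df["rating_avg_filled"] = df["rating"].fillna(df["rating"].mean())
df = df.drop(columns=["rating"])

# 4. One-hot encode type_of_food into new indicator columns.
encoded = pd.get_dummies(df["type_of_food"], prefix="encoded")
df = pd.concat([df.drop(columns=["type_of_food"]), encoded], axis=1)
```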

Build a model and generate predictions

Now that we have transformed our data, let’s train a numeric ML model to predict the ratings for restaurants.

  1. Choose Create model.
  2. For Dataset name, enter a name for the dataset export.
  3. Choose Export and wait for the transformed data to be exported.
  4. Choose the Create model link at the bottom left corner of the page.

You can also select the dataset from the Data Wrangler feature on the left of the page.

  5. Enter a model name.
  6. Choose Predictive analysis, then choose Create.
  7. Choose rating_avg_filled as the target column.

SageMaker Canvas automatically selects a suitable model type.

  8. Choose Preview model to ensure there are no data quality issues.
  9. Choose Quick build to build the model.

The model creation will take approximately 2–15 minutes to complete.

You can view the model status after the model finishes training. Our model has an RMSE of 0.422, which means the model typically predicts the rating of a restaurant within ±0.422 of the actual value, a solid approximation for the rating scale of 1–6.
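As a reminder of what the metric measures, RMSE is the square root of the mean squared difference between predicted and actual ratings. A quick NumPy sketch with made-up values (not our model’s actual predictions):

```python
# RMSE illustration with made-up ratings (not the model's actual output).
import numpy as np

actual = np.array([5.5, 4.0, 6.0, 3.5])
predicted = np.array([5.1, 4.5, 5.6, 3.9])

rmse = np.sqrt(np.mean((actual - predicted) ** 2))
print(f"RMSE: {rmse:.3f}")  # roughly how far predictions are from actual ratings on average
```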

  10. Finally, you can generate sample predictions by navigating to the Predict tab.

Clean up

To avoid incurring future charges, delete the resources you created while following this post. SageMaker Canvas bills you for the duration of the session, and we recommend logging out of SageMaker Canvas when you’re not using it. Refer to Logging out of Amazon SageMaker Canvas for more details.
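If you deployed the environment with one of the CloudFormation templates from the prerequisites, deleting that stack removes the resources it created. A minimal boto3 sketch, assuming a hypothetical stack name:

```python
# Delete the CloudFormation stack from the prerequisites (hypothetical stack name).
import boto3

cfn = boto3.client("cloudformation")
cfn.delete_stack(StackName="documentdb-canvas-demo")

# Optionally wait until deletion completes.
cfn.get_waiter("stack_delete_complete").wait(StackName="documentdb-canvas-demo")
```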

Conclusion

In this post, we discussed how you can use SageMaker Canvas for generative AI and ML with data stored in Amazon DocumentDB. In our example, we showed how an analyst can quickly build a high-quality ML model using a sample restaurant dataset.

We showed the steps to implement the solution, from importing data from Amazon DocumentDB to building an ML model in SageMaker Canvas. The entire process was completed through a visual interface without writing a single line of code.

To start your low-code/no-code ML journey, refer to Amazon SageMaker Canvas.


About the authors

Adeleke Coker is a Global Solutions Architect with AWS. He works with customers globally to provide guidance and technical assistance in deploying production workloads at scale on AWS. In his spare time, he enjoys learning, reading, gaming and watching sport events.

Gururaj S Bayari is a Senior DocumentDB Specialist Solutions Architect at AWS. He enjoys helping customers adopt Amazon’s purpose-built databases. He helps customers design, evaluate, and optimize their internet scale and high performance workloads powered by NoSQL and/or Relational databases.

Tim Pusateri is a Senior Product Manager at AWS where he works on Amazon SageMaker Canvas. His goal is to help customers quickly derive value from AI/ML. Outside of work, he loves to be outdoors, play guitar, see live music, and spend time with family and friends.

Pratik Das is a Product Manager at AWS. He enjoys working with customers looking to build resilient workloads and strong data foundations in the cloud. He brings expertise working with enterprises on modernization, analytical and data transformation initiatives.

Varma Gottumukkala is a Senior Database Specialist Solutions Architect at AWS based out of Dallas Fort Worth. Varma works with customers on their database strategy and architects their workloads using AWS purpose-built databases. Before joining AWS, he worked extensively with relational databases, NoSQL databases, and multiple programming languages for over 22 years.
