Scaling Personalization: Demystifying Personalized AI Recommendations with AWS!

Harshit Dawar


This article aims to explain the step-by-step process of creating a fully managed, intelligent, & scalable personalized recommendation system for your target audience/users/customers using AWS Personalize, AWS Lambda, & AWS API Gateway!

Source: Image by Growtika on Unsplash

If you are in search of the best way possible to create a smart, intelligent, highly scalable, secure, & fully managed personalized recommendation system with a clear understanding of its architecture, then you are at the right place; this article will act as a one-stop solution for you.

We live in a world where there are numerous options for performing any particular task, each with plenty of steps to follow, & choosing among them is difficult. The pain is even bigger because the steps of one option often differ significantly from the steps of the others.

As a solution to this problem in the case of a personalized recommendation system, this article will not only explain the architecture of the system that we are going to build but also showcase the architecture diagram as well as explain all the steps required to build the system.

So, that being said, let’s proceed to have some fun 😃!

Overview of the AI system & the Architecture!

Overview!

We are going to develop an intelligent, highly scalable, secure, & personalized recommendation system using AWS services that will be capable of recommending books to users.

User-item interaction data, i.e., the user-book interaction dataset, will be used to train the system, which can then recommend high-quality books that a user is most likely to enjoy. The system will expose a REST API endpoint providing personalized recommendations for a user, which can subsequently be utilized anywhere, for example, on websites & mobile apps, to enhance the user experience & increase retention.

Tools & Services required for the development!

To develop the desired system, we will be using the following tools & services:

  1. AWS Personalize
  2. AWS Lambda
  3. AWS API Gateway
  4. AWS S3 (only to import the data into AWS Personalize)
  5. Postman/Any CLI tool supporting the “curl” command

Hence, you should have an AWS account with the required privileges (admin privileges are desirable if you want minimal back-and-forth over permissions & ease of development).

The Architecture!

The architectural diagram of the system is showcased below:

Architecture of the system—Image by Author!

The above image illustrates the architecture of the system that we are going to develop. The system's working can be divided into the following steps:

  1. The user (any person) will send the “user_id” for which the recommendations need to be generated to the API endpoint (generated using AWS API Gateway).
  2. AWS API Gateway will send the user_id to AWS Lambda, where the processing logic is defined in a function.
  3. AWS Lambda will invoke the AWS Personalize campaign that is serving the AI model trained to recommend books to users.
  4. The recommendations will flow back through AWS Lambda & AWS API Gateway to the user as the API response.

Note: It's recommended that you be familiar with the AWS services utilized in this project, though an overview of each is provided in its respective development section further in this blog.

Development process of the system!

The development process is divided into 3 steps; each step is required to build the complete system. Once all the steps are completed, the final system will be ready for use.

Step 1: Setting up AWS Personalize!

This is the core step that will be responsible for the intelligence of the system.

AWS Personalize is a fully managed service by AWS that can be used to develop a real-time, highly scalable recommender system on the go. Whether you want personalized recommendations for a user, re-ranking of the items to be recommended, user segmentation, or finding items similar to others, this service is directly available for it. It is also directly integrable with other AWS services.

AWS Personalize also supports specialized domains for specific use cases, which are:

  1. E-commerce: Use it when a business needs to recommend the right products at the right time.
  2. Video on Demand: Use it when you need to increase engagement by recommending relevant content to users.
  3. Custom: Use it for any custom use case.
Domains in AWS Personalize—Image by Author!

For this project, “E-commerce” is the best domain in which the use case fits into; hence it will be used in the development.

To set up AWS Personalize, several steps need to be performed; let’s delve into the configuration. 🙌

Creating Dataset Group: It's a group of the relevant data for the use case. Every dataset group contains 3 datasets:

  • Users dataset (Optional): It's the dataset that is used only to provide user information specifically.
  • Items dataset (Optional): It's the dataset that is used only to provide item information specifically.
  • Item interactions dataset (Mandatory): It's the dataset that is used to provide user-item interaction information.

Let’s create the dataset group for our use case.

Log in to the AWS account & head towards the AWS Personalize service. When you land on the service page for the first time, it will look like:

AWS Personalize Landing Page—Image by Author!

Click on “Create dataset group” & you will see the following page.

AWS Personalize Dataset creation page—Image by Author!

Fill in the dataset name you want to use, then select “E-commerce”, & click on “Create group”. You will be landed on the page:

AWS Personalize Dataset group configuration page—Image by Author!

“Item interaction dataset” is the only required dataset, & in our project, only this dataset will be used. If you want, you can upload the 2 other datasets as well & experiment with them. Hence, click on “Item interactions dataset - required”, & you will land on the below screen.

A link to the dataset I have used is mentioned at the end of this article.

User-Item Data Importing—Image by Author!

Keep the default option selected & click on “Next”; this way, you can specify the AWS S3 path of your dataset (the user-interaction data that needs to be imported).

In case you want to use Data Wrangler to import data from other sources supported by it, then you can use that for the customized use case.

Now, you will land on the screen to configure the schema of the dataset.

Dataset Configuration Page—Image by Author!

In the “Dataset details”, you need to mention the name of the dataset & the schema you intend to use.

In “Schema Definition”, you need to mention the schema of the data to import.

Note: A specific schema is expected here by AWS Personalize service; you cannot keep the column names as per your wish. You need to follow the standards, & they are:

  • Any unique identifier of a user must have the column name “USER_ID”.
  • Any unique identifier of the item (in our case, a book) must have the column name “ITEM_ID”.
  • The column specifying the interaction value between user & item (for example: rating) must be named “EVENT_VALUE”.
  • A column specifying the timestamp is a must, & its name should be “TIMESTAMP”.
  • A column specifying the event type is a must, & its name should be “EVENT_TYPE”.

In case you don't have the timestamp & event value columns, you can create them synthetically. The remaining 3 columns you must definitely have; otherwise, your data is incomplete & can't be used for the recommendation system.

Example Schema:

You can use the below schema as it is by making your data compatible with the schema.

{
    "type": "record",
    "name": "Interactions",
    "namespace": "com.amazonaws.personalize.schema",
    "fields": [
        {
            "name": "USER_ID",
            "type": "string"
        },
        {
            "name": "ITEM_ID",
            "type": "string"
        },
        {
            "name": "TIMESTAMP",
            "type": "long"
        },
        {
            "name": "EVENT_VALUE",
            "type": "float"
        },
        {
            "name": "EVENT_TYPE",
            "type": "string"
        }
    ],
    "version": "1.0"
}
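If your raw data doesn't match this shape yet, the preparation can be sketched as follows. This is a minimal sketch: the input column names (`user`, `book`, `rating`) are hypothetical, & the TIMESTAMP & EVENT_TYPE values are synthesized as discussed above. Adjust it to your own data before importing.

```python
import csv
import io
import time

# Hypothetical raw ratings export with user, book & rating columns;
# rename these to match your own data.
raw = io.StringIO("user,book,rating\n10,0132350884,5\n10,1491957660,4\n")

out = io.StringIO()
writer = csv.DictWriter(
    out, fieldnames=["USER_ID", "ITEM_ID", "EVENT_VALUE", "EVENT_TYPE", "TIMESTAMP"]
)
writer.writeheader()
now = int(time.time())  # synthetic epoch-seconds timestamp
for row in csv.DictReader(raw):
    writer.writerow({
        "USER_ID": row["user"],
        "ITEM_ID": row["book"],
        "EVENT_VALUE": float(row["rating"]),
        "EVENT_TYPE": "rating",  # synthetic, single event type
        "TIMESTAMP": now,
    })

csv_text = out.getvalue()
print(csv_text.splitlines()[0])  # USER_ID,ITEM_ID,EVENT_VALUE,EVENT_TYPE,TIMESTAMP
```

The resulting CSV can then be uploaded to S3 & imported by the dataset import job described next.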

Now, click on “Next” (it's present at the bottom right of the page). You will land on the page to configure the data source in S3.

Dataset Import Job Part 1—Image by Author!

Here, you need to mention the name of the job that will import the data on your behalf; it can be anything. Also, you need to mention the S3 location of the dataset (the user-item interaction data that you want to import).

Note: If you have an event ingestion SDK & you want to import data incrementally using an API, that is also possible, but it is out of the scope of this article.

When you scroll down on the same page, you will see the details to attach the IAM role so that AWS Personalize can communicate with AWS S3.

Dataset Import Job Part 2—Image by Author!

Just create an IAM role for the AWS Personalize service that has full access to AWS S3 for ease; otherwise, you can restrict the access if you want (that's the best practice). Mention the ARN (Amazon Resource Name) of that role, or click on “Create a new role” & it will create a new role for you on the spot. Keep the rest of the things as they are, & do not click on “Start Import” as of now; one more important thing needs to be done first: adding a policy to the S3 bucket containing your dataset so that AWS Personalize can access it without any hassle.

Policy Template to add:

{
    "Version": "2012-10-17",
    "Id": "PersonalizeS3BucketAccessPolicy",
    "Statement": [
        {
            "Sid": "PersonalizeS3BucketAccessPolicy",
            "Effect": "Allow",
            "Principal": {
                "Service": "personalize.amazonaws.com"
            },
            "Action": [
                "s3:GetObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::amzn-s3-demo-bucket",
                "arn:aws:s3:::amzn-s3-demo-bucket/*"
            ]
        }
    ]
}
Screenshot of the policy adding template — Source

Link to AWS documentation explaining the policy-adding step (will take you to the documentation page from which the above screenshot is taken):

To add this policy, you can follow the steps:

Go to S3 Service -> Click on the respective S3 Bucket containing your dataset -> Click on Permissions -> Locate Bucket Policy -> Click on Edit -> Add the Policy

An example of added policy:

Example Bucket Policy for AWS Personalize—Image by Author!
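If you prefer code over console clicks, the same policy can also be attached with boto3. This is a sketch: the bucket name below is the placeholder from the template (substitute your own), & actually calling `attach_policy()` requires AWS credentials with permission to set bucket policies.

```python
import json

# Placeholder bucket name from the policy template; substitute your own.
bucket = "amzn-s3-demo-bucket"

# Same policy as shown above, built as a Python dict.
policy = {
    "Version": "2012-10-17",
    "Id": "PersonalizeS3BucketAccessPolicy",
    "Statement": [{
        "Sid": "PersonalizeS3BucketAccessPolicy",
        "Effect": "Allow",
        "Principal": {"Service": "personalize.amazonaws.com"},
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
    }],
}
policy_json = json.dumps(policy)

def attach_policy():
    # Run this only with valid AWS credentials configured.
    import boto3
    boto3.client("s3").put_bucket_policy(Bucket=bucket, Policy=policy_json)
```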

Now, click on the “Start Import” button on the Dataset import job configuration page. You will be landed on the below screen:

Dataset import job started—Image by Author!

This concludes the dataset import step; now we will proceed towards the creation of the real-time, highly scalable, & intelligent recommendation system.

Creating Recommendation System: This will be the smart AI model responsible for recommending books to the users.

Expand the “Custom Resources” tab on the left, & then click on “Solutions & Recipes”.

Solution & Recipes Page—Image by Author!

Now, click on “Create Solution”.

Configuring Solution & Recipe Part 1—Image by Author!

Provide a name for the solution; it can be anything. This article's scope is to develop an “item recommendation” system; hence, the corresponding solution type has been selected. In case you want to build a “User segmentation” system, you can do that as well.

The “Recipe” is the core algorithm, which you select based on the use case: personalized ranking of items, finding similar items, item popularity, user personalization, or others, all available in the “Recipe” list. Since we are developing a system to recommend items preferred by a user, the latest recipe in the “User personalization” category has been selected.

Click on “Next”, & you will be landed on the below screen:

Configuring Solution & Recipe Part 2—Image by Author!

Here, “Automatic training” is set to “Turn off” because this is a demo of the use case. For a production system, it's recommended to keep it turned on; it will retrain the AI model at the set frequency to get the best results from the complete data (existing & latest). Keep the rest of the settings as they are & click on “Next”; you will land on the review screen.

Configuring Solution & Recipe Part 3—Image by Author!

Review the solution & click on “Create Solution”.

Note: Creating a solution will take a few minutes, & it's an expensive service. Make sure you check the pricing & accordingly move ahead.

Creating a Campaign: This will deploy the recommendation system that has been created as of now.

Once again, expand the “Custom Resources” tab on the left & click on “Campaigns”.

Campaign Creation Part 1—Image by Author!

After clicking, you will be landed on:

Campaign Creation Part 2—Image by Author!

Now, click on “Create Campaign”, & you will be landed on the below screen:

Campaign Creation Part 3—Image by Author!

Now, fill in the campaign name you want, select the solution that has been created, & click on “Create campaign”.

It will take around 20 minutes for the campaign to get created, & then your recommendation system is ready to be leveraged.

Once the campaign is ready, you can see a screen like this:

Campaign Creation Completed—Image by Author!

At the top, the campaign name is mentioned, & in case you want to test the recommendations, then you can mention any user_id (present in your training dataset) in the “Test campaign results” section, & you will get the corresponding recommendations for that user.

This concludes step 1 of the complete system, & believe me, you have completed the biggest step. ☀️

Step 2: Setting up AWS Lambda!

Here, the system functionality to get recommendations based on a user_id will be set up.

AWS Lambda will be used for this setup. It's a serverless compute service offered by AWS that executes your function directly, without you worrying about any infrastructure, programming-language installation, or package installation.

A Python function is created that interacts with AWS Personalize by sending it the user_id & fetching the recommendations back.

Function to be used in AWS Lambda for fetching the recommendations based on a user_id-Code by Author!
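The function itself appears above as a screenshot; as a text sketch, a handler along these lines could do the job. Assumptions are flagged inline: the campaign ARN is a placeholder you must replace, & the response shape follows the boto3 `personalize-runtime` `get_recommendations` API.

```python
import json

def extract_user_id(event):
    """Pull user_id from the query-string mapping set up in API Gateway."""
    params = event.get("queryStringParameters") or {}
    return params.get("user_id")

def lambda_handler(event, context):
    user_id = extract_user_id(event)
    if not user_id:
        return {"statusCode": 400,
                "body": json.dumps({"error": "user_id query parameter is required"})}

    # boto3 ships with the Lambda runtime; it is imported here, after
    # validation, so the pure parsing logic above stays testable outside AWS.
    import boto3
    personalize = boto3.client("personalize-runtime")
    response = personalize.get_recommendations(
        campaignArn="<YOUR-CAMPAIGN-ARN>",  # replace with your campaign's ARN
        userId=str(user_id),
        numResults=25,  # default count; change it here if you need more or fewer
    )
    items = [item["itemId"] for item in response["itemList"]]
    return {"statusCode": 200,
            "body": json.dumps({"user_id": user_id, "recommendations": items})}
```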

The above code needs to be executed by AWS Lambda; let’s set it up quickly.

Open the AWS Lambda service.

Lambda service home page tab—Image by Author!

Click on Functions, & then on “Create function”, you will be landed on the below screen:

Lambda Configuration Part 1—Image by Author!

Since the code is provided directly, without any blueprint or container image, keep the default “Author from scratch” option as it is, provide a name for the function, & select a runtime with any version of Python that is compatible with the code; in the current scenario, Python v3.9 is selected. Keep the rest of the options as they are.

Click on “Create function”, & you will be landed on the below screen:

Lambda Configuration Part 2—Image by Author!

At the bottom, there is a code editor; just paste the code mentioned a few steps above there, with your campaign ARN, & click on “Deploy”.

Now, permissions for AWS Lambda to access AWS Personalize need to be added.

Click on “Configuration” & then on the “Permissions” tab. You will be presented with the screen shown below:

Lambda Configuration Part 3— Image by Author!

Click on the “Role name” hyperlink; it will open the role's permissions.

Lambda Configuration Part 4— Image by Author!

Click on “Add permissions”, then “Attach Policies”, & search for “Personalize”. You will get the screen as mentioned below:

Lambda Configuration Part 5— Image by Author!

Select “AmazonPersonalizeFullAccess” for ease, & then click on “Add permissions”. That's it; your AWS Lambda is configured properly now.

You will see the below screen after adding the permissions:

Lambda Configuration Part 6— Image by Author!

This concludes the second step of the complete system development. 🌕

Step 3: AWS API Gateway Setup!

This step will expose the recommendation system by creating an API for it.

AWS API Gateway is a service that can create multiple endpoints supporting multiple request types (like GET, POST, PUT, etc.) & security features.

Let’s implement it quickly.

Open the AWS API Gateway service page & click on “Get Started” (this will appear only if you are creating your first API), then click on “Create API”. You will be landed on the below screen:

AWS API Gateway Configuration Part 1— Image by Author!

Click on “Build” for “REST API”; you will then be presented with the below screen:

AWS API Gateway Configuration Part 2—Image by Author!

Fill in the API name & keep the rest of the things as they are. Click on “Create API”. You will then be presented with the below screen:

AWS API Gateway Configuration Part 3—Image by Author!

Click on “Create Resource”. You will then be presented with the below screen:

AWS API Gateway Configuration Part 4—Image by Author!

Check the “CORS” box & fill in the “Resource name”, which will be the endpoint route of the API; in the current scope, “recommend” is used (you can use any name). Then click on “Create Resource”. You will then be presented with the below screen:

AWS API Gateway Configuration Part 5—Image by Author!

Click on “/recommend” (or whichever endpoint name you have given), then click on “Create method”. You will then be presented with the below screen:

AWS API Gateway Configuration Part 6— Image by Author!

Select “Lambda Function” as the “Integration type”, then select the appropriate region & the Lambda function you have created. Specify “user_id” in the “URL query string parameters” & check “Required”; this ensures that whenever this API is called, “user_id” must be passed as a query parameter. Then hit the “Create method” button.

You will be presented with the following page:

AWS API Gateway Configuration Part 7— Image by Author!

Go to the “Integration request” tab & click on the “Edit” button.

AWS API Gateway Configuration Part 8— Image by Author!

After clicking on “Edit”, scroll down, & you will be presented with the following screen:

AWS API Gateway Configuration Part 9— Image by Author!

Click on “Add mapping template” in the “Mapping templates” section. Then add the following details as shown:

AWS API Gateway Configuration Part 10 — Image by Author!

This will ensure that the “user_id” is passed as an input parameter to the Lambda Function, so that it can use it to get the recommendations from AWS Personalize.

Template body code for your ease is:

{
    "queryStringParameters": {
        "user_id": "$input.params('user_id')"
    }
}
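The template above is VTL, but its effect is simple: it wraps the query-string value into a JSON body that Lambda receives as its event. Simulated in plain Python for illustration (real events may carry additional fields):

```python
import json

# The mapping template's only job is to substitute the query parameter
# into a JSON structure; simulate that substitution for user_id=10.
template = '{"queryStringParameters": {"user_id": "$input.params(\'user_id\')"}}'
rendered = template.replace("$input.params('user_id')", "10")
event = json.loads(rendered)
print(event)  # {'queryStringParameters': {'user_id': '10'}}
```

This is exactly the shape the Lambda function's parsing logic expects.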

Click on the “Save” button. This will configure the API.

If you go back to the Lambda function, you will see that the AWS API Gateway will be added as a trigger.

AWS Lambda Integration with AWS API Gateway — Image by Author!

Now, click on “Deploy API” for the API Deployment.

AWS API Gateway Configuration Part 12— Image by Author!

After clicking, you will be presented with the below screen:

AWS API Gateway Configuration Part 13 — Image by Author!

Mention any stage name you want for the deployment. Stages here signify the environment to deploy the application to; hence, following best practices, “development” is mentioned.

Now, click on “Deploy”. You will get the following screen with the API endpoint.

AWS API Gateway Configuration Part 14 — Image by Author!

Now, the complete system is deployed & ready for use.

Congratulate yourself for achieving this success by developing such a big system; you have really learned a lot. 😍

Let’s test the system as well!

Getting recommendations from the system!

In this article, Postman is used for hitting the deployed API; though, you can use any of the available methods, like the “curl” command, hitting the API from a programming language, or any other.

To get the recommendations, just copy the API Invoke URL & append it with the route endpoint created.

For example:

# Since we created the route endpoint with "/recommend", it is appended to the invoke URL!
<API Endpoint>/recommend
Getting the recommendations using Postman—Image by Author!

In the above screenshot, recommendations for user_id 10 are generated; by default, AWS Personalize provides 25 recommendations. You can alter that, if you want, in the function that has been created.
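Building the full request URL with the query parameter can be sketched as below; the invoke URL here is a made-up placeholder, so replace it with the one shown after deployment.

```python
from urllib.parse import urlencode

# Hypothetical invoke URL from your API Gateway deployment stage;
# replace it with the one shown after clicking "Deploy".
invoke_url = "https://abc123.execute-api.us-east-1.amazonaws.com/development"
request_url = f"{invoke_url}/recommend?{urlencode({'user_id': '10'})}"
print(request_url)
```

Paste this URL into Postman, or pass it to “curl”, to get the recommendations back as JSON.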

After getting the recommendations, you can then format them as per your requirement!

This concludes this amazing blog. I hope you enjoyed it a lot. Do let me know your thoughts in the comments, and don’t forget to follow me. Also, if you want me to write an article on some of the topics you are in search of, do reach out to me on LinkedIn or comment on any of my articles. I will be extremely happy to do the same.

I hope my article explains each and everything related to the topic with all the detailed concepts and explanations. Thank you so much for investing your time in reading my blog & boosting your knowledge. If you like my work, then I request you to applaud this blog & follow me on Medium, GitHub, & LinkedIn for more amazing content on multiple technologies and their integration!

Also, subscribe to me on Medium to get updates on all my blogs!
