Developing a personalized meal informer through RAG using AWS Bedrock!

This blog aims to explain the AWS Bedrock service in complete detail: why AWS Bedrock, what RAG is, its earth-shattering use cases and the need for it, and how to develop a personalized meal informer using these services. As a bonus, this blog will also help you understand the true potential of Generative AI.

Harshit Dawar
10 min read · Mar 16, 2024
Image Credit — Unsplash

So, we have already entered the world of Generative AI, but the most important problem that still exists is that people are not aware of its true potential: they are unaware of what exactly Generative AI is capable of, how many problems it can solve, and how much it can speed up a particular piece of work. Let’s discuss that, and then proceed with the development of a personalized meal informer using AWS Bedrock.

Most people think Generative AI is just an assistant or a model that can answer the questions asked of it, and that’s it for them; however, to their surprise, it is much more than that; this question-answering is just the start.

The real power of Generative AI lies in its integration with your existing workloads, automating them and attaching a very high-speed nitrous boost to them. A few examples are as follows:

  1. “Consider you are a tuition teacher and want to remind your students about their fees. You can just use Generative AI to send the alerts to the students via email or message, and that too just by saying it in your voice.” Isn’t that amazing? I know the answer is a big “yes”.
  2. Consider yourself a DevOps engineer: if you want to spin up some machines on top of a cloud platform, you can again do that with a simple voice command using Generative AI.
  3. Consider yourself an accountant: if you want to perform any kind of analysis, you can do that very easily, again with a simple voice command using Generative AI.
  4. and many more…

(If you want to know some more examples, do let me know through the comments on this story, or reach out to me by using the reach-out details present in my profile).

What is Retrieval Augmented Generation (RAG)?

In today’s world, where Generative AI is being adopted at a very high speed, everyone is talking about it and everyone wants to adopt it; however, existing LLMs, whether open-source or paid, can’t solve these use cases alone.

For example:

  1. Take the first example mentioned above, showcasing the power of Generative AI, where you have to send fee alerts to students. In this case, you are the one who has the students’ data; if you ask an LLM to do something for a particular student (send an alert, in the present case), it will simply fail because the required information is not available to the LLM.
  2. Suppose you run an e-commerce website and want to know the total sales for today. Again, the LLM won’t be able to answer this because the data is not with the LLM; it’s with you.
  3. and so on…

We have countless use cases like these that are not solvable by Generative AI alone.

To solve these use cases, we have to follow a strategy, which is:

  1. Put your data in a centralized place where it is converted into embeddings, which, at a very high level, are numeric representations of the data (embeddings are one of the most important topics of Natural Language Processing; I will be publishing a blog on them very soon). Words having similar meanings are represented by numbers very close to each other, and vice versa also holds here. This centralized place is known as a “Knowledge Base”, where your custom data (not on the internet) is present. (A short sketch of generating an embedding follows this list.)
  2. Provide the LLM access to your Knowledge Base.
  3. Ask anything related to your data.
  4. The LLM will check whether the answer to your question is present in the Knowledge Base. If it is, the answer will be returned to you; otherwise, a fixed message conveying that the LLM wasn’t able to find the answer to your question will be returned. The fixed message varies with the LLM.
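To make step 1 concrete, here is a minimal sketch of generating an embedding with Amazon Titan through the Bedrock runtime API, assuming boto3 is installed, credentials are configured, and you have the model access described later in this blog; the region and input text are just examples:

    import boto3
    import json

    bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

    # Convert one piece of text into its numeric representation (embedding).
    response = bedrock_runtime.invoke_model(
        modelId="amazon.titan-embed-text-v1",
        body=json.dumps({"inputText": "Paneer curry with rice"}),
        contentType="application/json",
        accept="application/json",
    )

    embedding = json.loads(response["body"].read())["embedding"]
    print(len(embedding))  # this Titan model returns a 1536-number vector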

This approach that is:

Question asked to LLM =====> LLM will search the Knowledge Base for the answers =====> Return the answer for the specific question

is exactly Retrieval Augmented Generation (RAG).
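Here is the same flow expressed as a small Python sketch. The helpers embed, vector_search, and generate are hypothetical placeholders for whatever embedding model, vector store, and LLM you plug in; as we will see, AWS Bedrock wires all three together for you:

    def answer_with_rag(question: str) -> str:
        # Step 1: the question is converted into an embedding.
        query_vector = embed(question)

        # Step 2: the Knowledge Base (a vector store) is searched for the
        # chunks of your custom data closest in meaning to the question.
        relevant_chunks = vector_search(query_vector, top_k=3)

        if not relevant_chunks:
            # The fixed "not found" message mentioned above.
            return "Sorry, the answer was not found in the Knowledge Base."

        # Step 3: the LLM generates an answer grounded only in those chunks.
        context = "\n".join(chunk.text for chunk in relevant_chunks)
        prompt = f"Answer only from this context:\n{context}\n\nQuestion: {question}"
        return generate(prompt)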

I believe this provides a clear understanding of the use cases that RAG can solve and how useful it is, which in itself clarifies the need for it.

Why AWS Bedrock?

You must be familiar with ChatGPT and other LLMs like Claude by Anthropic, Gemini by Google, Mistral by Mistral AI, Jurassic by AI21 Labs, Llama 2 by Meta, DALL-E by OpenAI, Midjourney, and so on. How many of them have you already worked with? If you have used all or even some of them, and you have good knowledge about them, then you must be aware of the fact that “each LLM has its own unique specialty”.

There are times when you want to solve multiple use cases, and in that case multiple LLMs must be used: you have no choice other than picking an appropriate LLM for each particular use case, and this continues until the complete goal is achieved by leveraging the power of multiple LLMs. But this is not very convenient, and it is a time-consuming process. To solve this issue, here comes “AWS Bedrock”.

AWS Bedrock is a one-stop solution for all these requirements. It supports multiple different LLMs that can solve requirements based on any of the below-mentioned use cases as of now:

  • Text
  • Chat
  • Image

It is not limited to just LLMs: AWS Bedrock provides you with inbuilt capabilities to create a Knowledge Base, Agents (a very interesting and powerful topic; I will cover their detailed use cases and implementation in my next blog), custom LLMs, LLM model evaluation, embeddings generation, and so on.

List of LLM families supported by AWS Bedrock (as of the date this blog was written):

  • AI21 Labs — Jurassic 2 Family
  • Amazon — Titan Family
  • Anthropic — Claude Family
  • Cohere — Command Family
  • Meta — Llama 2 Family
  • Mistral AI — Mistral Family
  • Stability AI — SDXL Family
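If you would like to see this list programmatically, the Bedrock control-plane API can enumerate the foundation models available in a region; a minimal sketch (the region is just an example):

    import boto3

    bedrock = boto3.client("bedrock", region_name="us-east-1")

    # Print every foundation model Bedrock exposes in this region,
    # along with its provider and output type (text, image, embedding).
    for model in bedrock.list_foundation_models()["modelSummaries"]:
        print(model["providerName"], "|", model["modelId"], "|", model["outputModalities"])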

Based on this knowledge, let’s start setting up AWS Bedrock and developing our personalized meal informer.

Setting up AWS Bedrock!

When you are using AWS Bedrock for the first time, you need to request access to the LLMs, because by default it doesn’t provide you access.

When you land on the AWS Bedrock page for the first time, it will look like this:

AWS Bedrock Landing Page — Image by Author!

Click on “Get Started”, and you will land on the “Getting Started” window of Bedrock. Click on “Model Access” there.

Getting Started AWS Bedrock — Image by Author!

Click on “Manage Model Access”, and it will provide you with the option to select the models to which you want access. Tick the checkboxes for the models you require, and then submit. After a few minutes, you will see “Access Granted” in front of those models, as you can see in the image below.

AWS Bedrock Model Access Tab — Image by Author!

However, the Anthropic, Mistral AI, and Llama 2 family models are only available after you submit use-case details and they are approved by the internal team of AWS.

Our use case, that is, building a Knowledge Base and asking questions based upon it, is only possible through the Claude models (as of now); hence, we need to submit the use-case details for Claude, and once the access is granted, we are good to go.

After submitting the use-case details, click on “Manage model access” again, tick the Anthropic models family, and then click on “Save Changes”. This will start the access allocation for the models, which will in turn change the “Access status” from “Use case details submitted” to “In Progress” (screenshot shown below), and within a few minutes you will be able to access them.

AWS Bedrock Access Status Change — Image by Author!

After the access is granted, you can see the “Access Status” as “Access Granted” for the Anthropic model family.

AWS Bedrock — Access Granted to Anthropic Models Family — Image by Author!
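Before moving on, you can verify the access from code with a quick test invocation of Claude; a minimal sketch, assuming your credentials and region (example shown) carry the access we just set up. Note that Claude v2.x on Bedrock expects the prompt/completion text format shown below:

    import boto3
    import json

    bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

    # Claude v2.x uses the "\n\nHuman: ... \n\nAssistant:" prompt format.
    response = bedrock_runtime.invoke_model(
        modelId="anthropic.claude-v2:1",
        body=json.dumps({
            "prompt": "\n\nHuman: Say hello in one sentence.\n\nAssistant:",
            "max_tokens_to_sample": 100,
        }),
    )

    print(json.loads(response["body"].read())["completion"])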

This marks the end of the “Setting up AWS Bedrock” Section and leads us to the “Developing Personalized Meal Informer” section!

Developing Personalized Meal Informer!

To achieve our goal, we need to perform 2 steps:

  1. Knowledge Base setup
  2. Chatting with the Personalized Meal Informer

Each step is explained below in absolute detail. Let’s get our hands dirty!

Note: Use an IAM user other than the “root” user, with admin rights, to perform the further practicals, because creation of a Knowledge Base is not permitted for the root user by AWS (as of now).

Part 1 ==> Knowledge Base Setup!

To create a Knowledge Base, click on the “Knowledge base” tab, and you will land on its homepage.

Knowledge Base Homepage — Image by Author!

Now, click on “Create Knowledge base”, and fill in the required details like the “Knowledge Base name”, the “Knowledge Base description” (it’s important; just ignore the “optional” keyword mentioned by AWS, because the description will act as a guide for the LLM to know whether to use this Knowledge Base or not), and the IAM permissions.

Knowledge Base Configuration Part 1 — Image by Author!

Click on “Next”.

Create an S3 bucket or use an existing S3 bucket to upload the data required to create the Knowledge Base. I have used an Excel sheet containing my meal plan to create the Knowledge Base. (A programmatic upload sketch follows the images below.)

Meal Plan — Image by Author!
Excel File uploaded to S3 — Image by Author!
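If you prefer uploading from code instead of the console, a minimal sketch (the bucket and file names are hypothetical; use your own):

    import boto3

    s3 = boto3.client("s3")

    # Upload the meal-plan sheet that will feed the Knowledge Base.
    s3.upload_file(
        Filename="meal_plan.xlsx",
        Bucket="my-meal-informer-data",  # hypothetical bucket name
        Key="meal_plan.xlsx",
    )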

Provide the details of this file in S3 in the Knowledge Base creation dialog box, keep the rest of the settings as they are, then click on “Next”.

Knowledge Base Configuration Part 2 — Image by Author!

Select a model to create the embeddings of the data uploaded for Knowledge Base creation. You can choose any model here; I have selected “Amazon Titan”. Keep the rest of the settings as they are. (Vector stores are a very important topic, but they are not required to be discussed in this blog and will be covered in another one; do let me know in the comments if you want me to pick this as the topic for my next blog.)

Click on “Next”.

Knowledge Base Configuration Part 3 — Image by Author!
Knowledge Base Configuration Part 4 — Image by Author!

Just click on “Create Knowledge base”, and it will take a few minutes to create the knowledge base for you.

Knowledge Base Configuration Part 5 — Image by Author!

Once the Knowledge Base setup is done, you will see a screen like the one shown below.

Knowledge Base Configuration Part 6 — Image by Author!
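For completeness, the same Knowledge Base can also be created from code with the bedrock-agent API; a rough sketch, in which the role ARN, the OpenSearch Serverless collection, and the index/field names are all hypothetical placeholders that you must provision beforehand (the console did this provisioning for us):

    import boto3

    bedrock_agent = boto3.client("bedrock-agent", region_name="us-east-1")

    kb = bedrock_agent.create_knowledge_base(
        name="personalized-meal-informer",
        description="My weekly meal plan; use it for any question about my meals.",
        roleArn="arn:aws:iam::123456789012:role/BedrockKBRole",  # hypothetical role
        knowledgeBaseConfiguration={
            "type": "VECTOR",
            "vectorKnowledgeBaseConfiguration": {
                "embeddingModelArn": "arn:aws:bedrock:us-east-1::foundation-model/amazon.titan-embed-text-v1",
            },
        },
        storageConfiguration={
            "type": "OPENSEARCH_SERVERLESS",
            "opensearchServerlessConfiguration": {
                "collectionArn": "arn:aws:aoss:us-east-1:123456789012:collection/abcd1234",  # hypothetical
                "vectorIndexName": "meal-plan-index",
                "fieldMapping": {
                    "vectorField": "embedding",
                    "textField": "text",
                    "metadataField": "metadata",
                },
            },
        },
    )

    print(kb["knowledgeBase"]["knowledgeBaseId"])
    # The S3 data source pointing at the uploaded file is then attached
    # in the same way with bedrock_agent.create_data_source(...).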

Part 2 ==> Chatting with the Personalized Meal Informer!

Once the Knowledge Base is created, you have to click on the “Sync” button shown in the green tab, or you can click on “Sync data source” (refer to the “Knowledge Base Configuration Part 6” figure). This will load all the data for the LLM to chat with.

Once the data is synced, its status will be shown as “Ready” in the Data Source tab.

Data Synced from Knowledge Base — Image by Author!
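The same sync can be triggered from code by starting an ingestion job; a minimal sketch, where the Knowledge Base and data source IDs are hypothetical placeholders for your own:

    import boto3
    import time

    bedrock_agent = boto3.client("bedrock-agent", region_name="us-east-1")

    # Kick off the sync (ingestion) of the S3 data source into the Knowledge Base.
    job = bedrock_agent.start_ingestion_job(
        knowledgeBaseId="KB12345678",  # hypothetical ID
        dataSourceId="DS12345678",     # hypothetical ID
    )["ingestionJob"]

    # Poll until the documents are embedded and indexed.
    while job["status"] not in ("COMPLETE", "FAILED"):
        time.sleep(10)
        job = bedrock_agent.get_ingestion_job(
            knowledgeBaseId="KB12345678",
            dataSourceId="DS12345678",
            ingestionJobId=job["ingestionJobId"],
        )["ingestionJob"]

    print("Sync finished with status:", job["status"])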

Click on “Select Model” (the button in orange) to select an LLM to act as your personalized Meal Informer, and then click “Apply”.

You will only see the Claude model family in the list (as of now); I have selected Claude v2.1 for this.

LLM selection for Personalized Meal Informer — Image by Author!

Now you are ready to use your Personalized Meal Informer. Some examples of the same are shown below:

Chat 1 — Personalized Meal Informer — Image by Author!
Chat 2 — Personalized Meal Informer — Image by Author!
Chat 3 — Personalized Meal Informer — Image by Author!
Chat 4 — Personalized Meal Informer — Image by Author!
Chat 5 — Personalized Meal Informer — Image by Author!

In case you ask anything that is not present in the Knowledge Base, the LLM will not search the internet for it or use its own memory to answer, as a normal LLM would; with a Knowledge Base, you will only get answers that are actually present in it!
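The same chat can also be driven from code through the RetrieveAndGenerate API, which performs the whole RAG flow (retrieve from the Knowledge Base, then generate with Claude) in a single call; a minimal sketch, with a hypothetical Knowledge Base ID and an example region:

    import boto3

    agent_runtime = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

    config = {
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "KB12345678",  # hypothetical ID
            "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-v2:1",
        },
    }

    first = agent_runtime.retrieve_and_generate(
        input={"text": "What is my lunch on Monday?"},
        retrieveAndGenerateConfiguration=config,
    )
    print(first["output"]["text"])

    # Passing the sessionId back keeps the conversation context for follow-ups.
    follow_up = agent_runtime.retrieve_and_generate(
        sessionId=first["sessionId"],
        input={"text": "And what is the dinner that day?"},
        retrieveAndGenerateConfiguration=config,
    )
    print(follow_up["output"]["text"])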

This concludes this amazing blog. I hope you enjoyed it a lot. Do let me know your thoughts in the comments, and don’t forget to follow me.

I hope my article explains everything related to the topic with all the detailed concepts and explanations. Thank you so much for investing your time in reading my blog and boosting your knowledge. If you like my work, then I request you to applaud this blog and follow me on Medium, GitHub, and LinkedIn for more amazing content on multiple technologies and their integration!

Also, subscribe to me on Medium to get updates on all my blogs!

