

How Amazon Personalize can enhance your website’s product recommendations


With the rapid development of machine learning in recent years, industry-wide data science applications have followed. People quickly realized that they don’t necessarily need to be top-notch scientists to take advantage of the new toys. You don’t have to keep up with all the most recent NeurIPS, ICLR or ICCV publications to be productive and deliver a new class of products and projects to your customers. Hell, in many cases (SaaS solutions come to mind) you don’t even need to understand the underlying mathematical ideas at all. This message is constantly repeated, especially by cloud vendors.

Enter content recommendation

Arguably, the most obvious application of machine learning in the ecommerce industry is content personalization. This is seen on most big ecommerce sites worldwide – they no longer show only the most popular offers on their front pages. The content you see while browsing is tailored strictly to your tastes. The more pages you visit and products you click on, the more data about your preferences is sent back to the website. The algorithm gets to know you better and recommends more relevant wares.

Newsletters with promotions are also no longer crafted manually. After all, there are hundreds of products with prices slashed every week. It isn’t really feasible to display them all to every customer. Instead, only a tiny subset of them is sent, but each client gets a different collection of products – only the ones with the highest buy probability are presented in the email in order to maximize CTR and conversion.

This is obviously related not only to the ecommerce industry – it is very much relevant when applied to news sites, movie aggregators or recipe sites. Whether you are a local business or a global powerhouse, you will almost certainly benefit from content personalization.


Amazon Personalize to the rescue

Now, you may be asking yourself: how would I implement a recommender on my website? As mentioned before, content recommendation methods are constantly being refined – which is good news. You can use a variety of existing models and ideas, from the easiest popularity rankings through pattern mining algorithms like PrefixSpan, all the way to deep learning methods. All of them have their pros and cons, but many of them require some expertise in data mining or machine learning. Thankfully, AWS couldn’t let such low-hanging fruit remain there for too long – their implementation of a SaaS content personalization tool called Amazon Personalize went into general availability a few months ago. This tool requires very little knowledge (if any) of data science, and its capabilities are relevant and beneficial to most companies.


There are a few main use cases it can prove useful for. Of course, it solves the most common one – “users who viewed this item also viewed that one” – with little hassle. This can work both for anonymous users (identifying items similar to the one currently viewed) and your loyal customers (taking their entire purchase/browsing history into account when making a recommendation).

You are also able to reorder an existing list of items to match the current user’s taste – useful in search engines, for example. Lastly, you may generate a list of products a given customer may be interested in.


Amazon Personalize works not only in real-time but also in batch mode. More on that later, but you either generate recommendations “offline”, based on S3 mechanisms, or provision an API endpoint to be used by your other services.

How do I use Amazon Personalize?

AWS needs some data first. The only accepted format is CSV, and the only accepted place for the data to live is Amazon S3. It is obviously tempting to use other AWS services to automate the process and periodically import your fresh data there, but there is no requirement to go all-in with AWS. You just push your data to S3, then create a Schema and a Dataset object, link them together and compose them with a DatasetGroup. When your data is ready, create and trigger a DatasetImportJob, which will move the data from the S3 bucket into the realm of Amazon Personalize. Keep in mind that you pay for every GB of transferred data. After the process finishes (the status changes to ACTIVE – it may take a while), you are ready to specify a Solution, in which you choose an algorithm and its objectives.


How do I train it?

At the heart of a recommendation system always lies a piece of computer science (and math) that decides which items should be recommended. Amazon Personalize calls these Solutions, made of Recipes. People familiar with data science will find the Recipes rather poor, though. There isn’t much to choose from, the available hyperparameters are pretty much non-existent, the train/validation split is hardcoded to 90%/10%, the metrics available to decide whether the model is fine or not are disappointing, and stuff like optimizers, learning rates and activation functions is unheard of (or irrelevant in some recipes). Please note, however, that if you do have people specialized in this field in your company, then you will most likely benefit more from using other AWS tools. Amazon Personalize is a SaaS solution and has to be simple enough to remain usable by a wide range of people. If you’re looking for managed computing power for Keras or PyTorch, you should look into AWS SageMaker and build the model yourself. This is still a good place to rapidly deliver a recommendation engine for your website or client, though. If you find the Amazon Personalize results lackluster, you can develop the model yourself later (and have a good baseline model to compare with).

There are three versions of Hierarchical Recurrent Neural Networks available – the basic version, another which also looks into item metadata (at the cost of longer training), and one that can deal with cold starts (useful if you frequently add new items without prior data on which users were interested in them). In theory, HRNNs look into the entire customer purchase history. If somebody has bought a lot of Apple accessories in the past, they might be interested in buying more of them. If it looks like somebody has just built a new apartment or had a baby, it will recommend certain items accordingly.

Apart from that, there is a simple popularity ranking which just returns the most popular items (duh), a Personalized-Ranking which re-ranks an already existing list of items (you have a list of items but want to show them in personalized order to a currently logged-in customer), and Item-to-Item similarities (SIMS), a collaborative filtering method. The last one is most likely based on the well-known matrix factorization method and looks at the entire data set to discover the standard “customers who viewed this item also viewed that one”. It’s a rather simple model, not context-aware.

Historically, there also used to be DeepFM (matrix factorization with a neural network instead of a dot product) and FFNN (feed-forward neural network, in which you could tune the eta, dropout ratio, batch size, etc.) available in private beta, but those didn’t make it to general availability. This suggests they either didn’t work very well or were too complicated for a SaaS solution. A pity, because I’d rather have more options available.

You pick the model depending on the business case you want to solve. To start the actual training, you create a SolutionVersion, which triggers the training. This may take a long time – it is not uncommon to wait dozens of hours until the status finally changes to ACTIVE. You are billed for every hour of training.


How do I recommend with it?

After you have put your data into Amazon, picked the (hopefully) right model and trained it, you can finally use the power of machine learning. To do this, you create a Campaign object, linking the SolutionVersion (the trained model) to it.

Utilizing the pretrained model is rather straightforward and boils down to either provisioning an endpoint for real-time predictions or using the batch prediction mechanism. The most obvious use case for the second one are marketing campaigns – this mechanism enables you to prepare hundred of thousands recommendations ready to get pushed into an external marketing automation solution. You pay for every 1,000 recommendations created this way.

If you wish to use the real-time predictions mechanism, then you first have to specify one parameter: minProvisionedTPS. This specifies how many transactions (recommendations) per second your system should be able to handle at all times. You might want to collect some metrics about the load on your system before specifying it though. You pay for every provisioned TPS, so set it too high and your recommendation engine will cost thousands; set it too low and your users may experience temporary service unavailability. Of course, whenever AWS detects you are using almost all of your provisioned capacity it will automatically scale the service up (to a maximum of 500 transactions per second). This, however, takes some time, and as we’ve said before, it may result in lost recommendations due to the API being unavailable. Remember to take this into consideration when specifying the minProvisionedTPS parameter.

You can, of course, tweak minProvisionedTPS yourself, provisioning a higher minimum amount in the daytime and reducing it at night.

As with most AWS services, to get a real-time prediction you either use the aws-cli (handy in shell scripts), an AWS SDK (handy when you don’t mind wiring AWS dependencies into your project) or the AWS API (if you prefer to treat AWS as a fully external service and just call the HTTP API like you always do). You get JSON results back and use them however you want.

When it comes to the batch mechanism, look into the CreateBatchInferenceJob API action, which operates purely in the Amazon S3 realm. It takes the input IDs from one S3 bucket and writes the output IDs to another specified bucket.


How do I retrain it?

Retraining the existing model is straightforward – you push the new data into an S3 bucket, trigger DatasetImportJob, wait until the import finishes, create a new SolutionVersion, wait until the training finishes and lastly update the existing Campaign, linking the freshly created SolutionVersion to it. The entire process is transparent to your applications, as the Campaign ARN stays fixed in place.

Should the new model perform worse, you can always quickly roll back to the previous SolutionVersion. As I’ve already said, it is tempting (and cost-effective) to automate the entire process using other AWS services and tools. Little code is needed – a smart mix of AWS Lambda (with AWS Step Functions if necessary), AWS Glue and AWS CloudWatch will spare you a lot of manual work. If the recommendation engine is doing fine, you should consider investing in such automation.

It is worth mentioning that Amazon Personalize may also work on real-time event data – you can track what your user is currently doing on the website and send the events to Amazon, which in turn will take them into account when making a prediction.


So are we doing it?

As you can see, it still requires a good bit of work from your IT department to deliver the recommendations. It does, however, relieve you from having a dedicated data team to do so. This is great news if data science isn’t your core business. The cost of AWS resources will be far lower than the cost of recruiting people from new, exotic specializations (though note that basic real-time usage will cost around a thousand dollars per month).

The good

It is easy. It gives you a solution rather quickly. It is approachable. It does not really require you to know much about machine learning. It is affordable. You don’t have to go all-in on AWS. It provisions the infrastructure and automatically scales up/down. Basically, like many other AWS services, it is just “good enough” – you will not get anything extraordinary from it, but you will definitely recommend the proper content to each user. Yes, it will sometimes be repetitive and annoying – have you ever bought or viewed one specific item like a ladder, only to have the entire internet decide that you must have bought a new house and surely you’ll be wanting all the construction materials in the world? Your recommendation engine made in Amazon Personalize will most likely act in exactly the same way. But still, for the effort required to set it up, I think it’s definitely a fair trade.

The bad

The first disappointment is that you can’t use a real-time predictions mechanism in a truly serverless manner. You must provision the minimum amount of computing power. I can understand the reasons why it isn’t available (loading the model into the memory may take quite a long time and AWS would have to have it always-on – this is exactly what you’re paying for with provisioned TPS), but as a customer I would expect more. Google Cloud Run solved the memory issue somehow, why can’t AWS do the same?

The second one is the complete lack of IaC support. The reasons vary, but I guess the main issue is that the training may take a lot of time and a Campaign object might stay inactive for hours (or days!) before it is actually accessible. Amazon Personalize is completely absent in the land of IaC. No Terraform, no CloudFormation and no AWS CDK for you. This may, of course, change in the future. You don’t have to wait for your AWS CloudFront distribution to finish deploying either, and yet that technology is fully supported – IaC tools simply skip the waiting phase. Why can’t they skip the training wait time of Amazon Personalize? You have to build your own pipelines, calling the API manually instead of using industry standards. Duh.

The good is also the bad – as mentioned before, it is rather disappointing for people coming from the data science world. Unlike, for example, Amazon Forecast, which lets you freely set all Prophet parameters, the provided models are static (in the sense that they are not really customizable). They also slashed the premade recipes, leaving only a few of them, narrowing your options even more.


The answer to the question “Is Amazon Personalize for us?” depends mostly on the scale of your company and expertise in data science projects. If you are in need of content personalization but have no prior machine learning experience, then wait no more – hop into Amazon Personalize and enjoy a drastic shift in the content you recommend to your users. It will certainly make a difference, while requiring little work from your IT department. The black box from Amazon can solve common recommendation tasks for you. However, if you are an experienced data science company, there isn’t much benefit to you from Amazon Personalize, except for rapid baseline model creation and sweet autoscaling features. This tool was made for software engineers, not data scientists, and will remain like this, as most of their AI Services are.



