
AI models are only as good as the data they are fed. But what happens when that data is no longer relevant? What kind of results can you expect when it is not a single anomaly in the data set, but the entire data set that has turned anomalous? That’s exactly what data scientists have been struggling with since the onset of Covid-19.

Uber faced a similar problem last year, when hundreds of terabytes of its historical data suddenly added no value to anomaly detection. The company then decided to build recency bias into its ML models.

“If you look at last year, then yes, this looks anomalous. But if you look at the time window, it’s a historical moving average. Based on what you see in the recent past, with the recency bias, where we look at the most recent data, it’s not really an anomaly, considering this is the direction in which the trend is moving. These models can catch such variations. There is a recency bias to look at the most recent data and give more weightage to the recent average than to something that happened months ago,” Pallavi Rao, Staff Software Engineer at Uber, told ETCIO.
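In practice, a recency-biased average can be as simple as an exponentially weighted mean, in which recent observations carry more weight than older ones. The sketch below is purely illustrative; Uber’s internal models are not public, and the metric, numbers and smoothing factor here are assumptions.

```python
import numpy as np

def recency_weighted_average(values, alpha=0.5):
    """Exponentially weighted average: recent points count for more.

    alpha close to 1 -> strong recency bias; alpha close to 0 -> long memory.
    The value 0.5 is an arbitrary illustration, not Uber's setting.
    """
    avg = values[0]
    for x in values[1:]:
        avg = alpha * x + (1 - alpha) * avg
    return avg

# A metric whose level shifted sharply (e.g. at the onset of Covid-19):
# the plain mean is still pulled up by the old regime, while the
# recency-weighted average tracks the level the metric has moved to.
trips_per_hour = np.array([1000, 980, 1010, 990, 400, 380, 390, 410])
print(np.mean(trips_per_hour))                   # 695, dominated by history
print(recency_weighted_average(trips_per_hour))  # ~437, close to the recent level
```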

Uber doesn’t follow a centralized anomaly detection framework. Instead, it looks at anomalies on a use-case-by-use-case basis.

“Sometimes the problem may not require you to have a sophisticated model. Sometimes it may be as simple as saying, here’s my standard deviation, this is what I see, and then suddenly there’s a deviation from that median. How far away we are from that median will tell me whether there’s an anomaly or not,” Rao explained.

This deviation is what Uber refers to as the ‘Z-score’: a simple metric that tells data scientists how many standard deviations an observation sits from the average. In certain cases that is not enough, and that’s where the team at Uber applies statistical models. One example is the historical moving average: Uber looks at what the average was historically, keeps moving that average over the time window, and uses it to project how the average should trend.
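A minimal sketch of that kind of check, assuming a simple univariate metric and a conventional three-sigma cutoff (neither of which is taken from Uber), might look like this:

```python
import numpy as np

def z_score_anomaly(history, observation, threshold=3.0):
    """Flag an observation that sits more than `threshold` standard
    deviations away from the historical average. The 3-sigma cutoff is
    a conventional default, not a figure from Uber."""
    mean, std = np.mean(history), np.std(history)
    if std == 0:
        return False
    return abs((observation - mean) / std) > threshold

history = [102, 98, 101, 99, 100, 103, 97]
print(z_score_anomaly(history, 101))  # False: within the usual spread
print(z_score_anomaly(history, 160))  # True: far outside the usual spread
```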

“Then, based on the average trending, we make a forecast: given that this is trending this way, in the next one hour it should be here. If our forecast doesn’t match what has actually happened, then it’s an anomaly. We forecasted that the average would be X, but we suddenly saw Y, which is very different from our forecast; that means the data we are seeing is not really tracking the average. Those are called statistical models, where we use forecasting and the difference between forecast and actual to predict if there’s an anomaly. But yes, it depends on how sophisticated you want your anomaly detection to be. It can be as simplistic as a ‘Z-score’ or as sophisticated as a statistical model,” Rao said.
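A hedged sketch of that forecast-versus-actual idea follows; the window size, tolerance and signup metric are illustrative assumptions rather than Uber’s settings.

```python
import numpy as np

def moving_average_forecast(series, window=24):
    """Forecast the next point as the average of the last `window` observations."""
    return float(np.mean(series[-window:]))

def forecast_anomaly(series, actual, window=24, tolerance=0.25):
    """Flag an anomaly when the actual value strays more than `tolerance`
    (as a relative error) from the moving-average forecast. Window and
    tolerance are illustrative assumptions."""
    forecast = moving_average_forecast(series, window)
    return abs(actual - forecast) > tolerance * abs(forecast), forecast

# 48 hours of a roughly stable metric, then an hour that falls well short of it.
rng = np.random.default_rng(0)
hourly_signups = rng.normal(500, 20, size=48)
flagged, forecast = forecast_anomaly(hourly_signups, actual=310)
print(round(forecast), flagged)  # forecast near 500, so 310 is flagged
```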

Building local context

For a company that operates in multiple countries, applying the same ML model everywhere can be challenging, particularly once local cultural nuances are taken into account. This becomes critical when trying to predict customer satisfaction even before the customer puts in a rating.

“When I’m trying to predict and attribute satisfaction to a user, there are a lot of cultural differences that come into play, and a lot of logistical factors too. Take a simple cultural difference: in certain cultures, not greeting the rider might be considered rude, whereas in certain other cultures, riders are very happy if there is no interaction with the driver at all. Typically, the way it is handled is that we create data sets per region or per country. So when we are training a model, the model is the same, the way the model is built is the same, the algorithm is the same, but it is trained on different data sets, which are bifurcated by either a region, a country or a cohort,” Rao said.
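In outline, that pattern of one algorithm trained on region-specific slices of data could look like the sketch below. The feature names, labels and the choice of logistic regression are hypothetical, used only to illustrate the per-region split Rao describes.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical trips: a couple of behavioural features, a satisfaction label
# and the region each trip belongs to. None of this is real Uber data.
trips = pd.DataFrame({
    "greeted_rider": [1, 1, 0, 0, 1, 0, 1, 0],
    "wait_minutes":  [3, 8, 2, 10, 4, 3, 9, 2],
    "satisfied":     [1, 0, 1, 0, 1, 1, 0, 1],
    "region":        ["IN", "IN", "US", "US", "IN", "US", "IN", "US"],
})
features = ["greeted_rider", "wait_minutes"]

# Same algorithm everywhere; one model fitted per region's slice of the data.
models = {}
for region, region_df in trips.groupby("region"):
    model = LogisticRegression()
    model.fit(region_df[features], region_df["satisfied"])
    models[region] = model

# Scoring a new trip uses the model trained on that trip's region.
new_trip = pd.DataFrame([{"greeted_rider": 0, "wait_minutes": 12}])
print(models["IN"].predict(new_trip[features]))
```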

Customer support is a good source of information for companies like Uber to not just track customer satisfaction but also to identify possible issues with the product itself.

“For example, our users are complaining that they cannot log in, and then we suddenly see a spike of those tickets on the customer support site. If we’re just sitting on it, we might not even realize it. But we are constantly analysing that, and when we see a spike of such issues, we realize there must be something wrong going on with the product; there must be a bug or a feature issue that is causing this spike. Unless we watch for that spike, and unless the model sees that anomaly, we won’t be able to detect it,” Rao explained.
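Applied to support tickets, the same idea reduces to counting tickets per issue category over time and flagging a count that jumps well above its recent baseline. The categories, counts and the doubling rule below are made-up illustrations, not Uber’s thresholds.

```python
import pandas as pd

# Hypothetical hourly ticket counts per issue category.
tickets = pd.DataFrame({
    "hour":     [h for h in range(6) for _ in range(2)],
    "category": ["login", "payment"] * 6,
    "count":    [12, 30, 15, 28, 11, 31, 14, 29, 95, 30, 110, 32],
})

for category, grp in tickets.groupby("category"):
    counts = grp.sort_values("hour")["count"]
    baseline = counts.iloc[:-2].mean()   # trailing average before the latest hours
    latest = counts.iloc[-1]
    if latest > 2 * baseline:            # crude "spike" rule, purely illustrative
        print(f"Spike in '{category}' tickets: {latest} vs baseline {baseline:.0f}")
```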

Another example came when the company started enforcing masks for its drivers. Not many drivers paid attention to the new rule at first, and that led to authentication failures each time they tried to start a ride.

“If the mask verification was failing, we would suddenly see a spike in support issues. Users may not be directly reporting on the app, saying, ‘I have an issue’; it comes in as a support issue. So by analysing something else, by analysing support issues, we are able to detect some of the product issues,” Rao said.
