The Filipino freelancers driving the world’s autonomous vehicles

As more self-driving vehicles hit highways across the globe, our roadways are becoming increasingly reliant on the artificial intelligences steering them. But for every driver replaced, scores of freelance data workers take their place behind the wheel.

One quarter after order books for the new Lexus NX 2022 opened, the Japanese luxury car manufacturer began accepting orders for the Nimble Crossover from customers in the Philippines. Pulling from its parent company Toyota’s treasure trove of autonomous vehicle patents—the largest of any company in the world—the new Lexus model’s Safety System 3.0 provides steering input to help drivers avoid objects on the road, guides drifting vehicles back into their lanes, and even reads nearby road signs, displaying their information on the car’s onboard touchscreen display.

These new offerings follow a decade of rising investment in autonomous vehicles, a $31.85 billion industry expected to grow to $37.22 billion next year. As nations like Singapore and the Netherlands lead the pack in autonomous vehicle readiness, and massive markets like the United States and China drive up demand, the world’s roadways are becoming increasingly reliant on the artificial intelligences behind the wheels of these self-driving cars.

But the promise of fully autonomous vehicles is still a long way off. These vehicles can detect obstacles and respond to them with impressive accuracy, but only because of the massive amounts of training data continuously labeled and refined behind the scenes by scores of human analysts. “Driverless” is a work in progress—one that’s incredibly reliant on cheap, outsourced labor from emerging economies. While Lexus’ AI-assisted offering will bring many of these newer technologies to Philippine highways for the first time, the nation—specifically its labor force—has been fueling the industry’s growth for years.

Driverless is not human-less

Ashley Nunes, MIT Researcher

Self-driving cars navigate streets using a combination of cameras, radar sensors, lidar sensors, GPS antennas, and other tools to map out their surroundings. This machine vision allows them to “see” what’s around them. But seeing is just the first part. A runaway dog, a vehicle with a broken tail light at night, the sudden onset of a rain shower, or a pedestrian waiting at an intersection—these vehicles need to be able to respond to obstacles and situations as they arise, just like human drivers can. In order to do that, these self-driving cars need to be outfitted with powerful decision-making algorithms, ones fed with immense amounts of highly accurate and detailed data. 

As companies rushed about a decade ago to build the first self-driving car, they began hiring workers who could build training datasets. These datasets usually contain hundreds of thousands of images and videos captured from actual drives, each needing categorization and localization. Workers known as microtaskers label what these images depict, so that a machine-learning algorithm can slowly learn to differentiate a tree from a stop sign from a child.
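For readers curious what a “labeled” frame actually looks like, here is a minimal, hypothetical sketch in Python. The field names and layout are illustrative only, loosely modeled on common bounding-box annotation formats, not any specific platform’s schema:

```python
# Hypothetical single record from a training dataset. A microtasker has
# drawn one box per object and assigned each a category; field names are
# illustrative, not a real platform's schema.
frame_annotation = {
    "image": "drive_0042/frame_001337.jpg",   # captured from an actual drive
    "width": 1920,
    "height": 1080,
    "labels": [
        # Each label: a category plus a box as (x, y, width, height) in pixels.
        {"category": "stop_sign",  "bbox": (1520, 210, 64, 64)},
        {"category": "pedestrian", "bbox": (880, 540, 90, 210)},
        {"category": "tree",       "bbox": (40, 120, 300, 700)},
    ],
}

def categories(annotation):
    """Return the set of object categories labeled in one frame."""
    return {label["category"] for label in annotation["labels"]}

print(sorted(categories(frame_annotation)))
# → ['pedestrian', 'stop_sign', 'tree']
```

Multiplied across hundreds of thousands of frames, records like this are what a machine-learning model actually “sees” during training.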

To be sure, the work is important; it teaches computers how to react in real-world settings. It is expected to reduce the number of vehicular accidents significantly, and is foreseen to have a positive environmental impact as well. However, the work is incredibly tedious. To generate all these data points, many companies turned to the existing global outsourcing industry and put the microtaskers to work.

Anton S., 24, is a freelance microtasker who charges $20 an hour to draw little boxes around objects in visual scenes. “It sounds like a lot of money when you think about it in terms of hourly rates for actual work done,” he said. “Most people my age aren’t pulling six figure salaries for 8 hours a day, 5 days a week with basically just twelve months of experience. I’ve been lucky that I’ve had 55 jobs that all went well. But, my rate is actually three times cheaper than an entry-level image recognition analyst in the US. It’s a very healthy stream of income right now, but there is still a big salary divide based on where you happen to be generating your data from. But as far as I know, it’s the same data we’re parsing in Pasig as they do in Paris.”

Anton’s rates are firmly on the lower end of the wage spectrum: the majority of image recognition professionals in Asia charge upwards of $40 for an hour of work, while a select few American freelancers charge approximately $150 an hour. Through their image analysis work, microtaskers like Anton are helping build the neural networks that AI technologies need to automate driving, enabling computers to assess their surroundings quickly and coherently. These analysts accomplish all this by drawing color-coded boxes around things picked up by vehicles’ machine vision sensors: pedestrians, signs, and even raindrops.
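The article doesn’t say how clients score a microtasker’s boxes, but a standard measure in the field is intersection-over-union (IoU), which compares a worker’s drawn box against a reference box. A generic sketch, not any particular platform’s method:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes.

    Returns 1.0 for a perfect match, 0.0 for no overlap at all.
    """
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Width and height of the overlap rectangle (zero if the boxes miss).
    iw = max(0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

print(iou((0, 0, 10, 10), (0, 0, 10, 10)))  # → 1.0
```

A sloppily drawn box that only half-covers a pedestrian scores poorly on this measure, which is one reason a “simple yellow square” has to be drawn with care.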

If you’re wondering why something as simple as a yellow square is a pivotal piece of the connected-car puzzle, it pays to note that human visual systems are the benchmark against which self-driving cars classify objects, perform edge detection, track lanes, and expand visibility range, according to a recent article on deep learning. This echoes the now-famous line from MIT researcher, Harvard fellow, and Forbes contributing journalist Ashley Nunes: “driverless is not human-less.”

Nunes’ research on the history of technology suggests that such advances might reduce the need for human labor, but they seldom, if ever, eliminate that need entirely. Regulators in the US and elsewhere have not yet signed off on the use of safety-critical algorithms “without there being some accompanying human oversight,” as per Nunes’ findings. This is why autonomous vehicle technology requires a veritable armada of people working to generate data and train networks efficiently—inevitably raising costs for human capital, exchanging driver salaries for outsourced analyst salaries, and, as Nunes said, “exchanging the gig economy for inefficiency”.

If I can afford not to take the next task from the same client or contractor, I choose a better gig. When I can’t afford it, I try not to have any questions.

Anton S., Freelance Microtasker

Over the last few years, experts have noted that many microtasking and third-party outsourcing firms have had to change the way they operate to meet the mounting requirements placed on microtaskers. First, platforms introduced stringent quality control measures to ensure jobs for autonomous vehicle clients come back with very few mistakes; an 88% accuracy rate is considered low and necessitates a warning. On top of labeling the data points, some microtaskers also train new workers to label, and check and correct completed tasks, taking on the role of a de facto team leader without the authority or salary bump.
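To illustrate the kind of quality gate described above, here is a minimal sketch of a reviewed-task flow. The function and field names are hypothetical, not a real platform’s API; only the 88% threshold comes from the article:

```python
# A batch of reviewed labeling tasks must clear an accuracy threshold
# (88%, per the article) or the worker is flagged for a warning.
# This review flow is an assumption for illustration, not a real API.
ACCURACY_WARNING_THRESHOLD = 0.88

def review_batch(task_results):
    """task_results: list of booleans, True if a reviewer accepted the label."""
    if not task_results:
        return {"accuracy": None, "warning": False}
    accuracy = sum(task_results) / len(task_results)
    return {
        "accuracy": accuracy,
        "warning": accuracy < ACCURACY_WARNING_THRESHOLD,
    }

# 44 accepted labels out of 50 reviewed is exactly 88%: just clears the bar.
print(review_batch([True] * 44 + [False] * 6))
# → {'accuracy': 0.88, 'warning': False}
```

Under a gate like this, a single extra rejected label in a batch of fifty would tip a worker below the line.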

Platforms and contractors also insist on separating their clients from their workers, the latter of whom are unable to raise concerns, give feedback, offer observations, provide analysis and insight, or even ask basic questions about the tasks they are assigned.

“The usual response to any question is ‘that’s above your paygrade’, or sometimes ‘that question will make it very difficult for me to hire you for the client again’,” Anton said. “When that happens, I make sure to finish the task as quickly as possible so that I don’t have time to notice how unfair that answer really is. Then if I can afford not to take the next task from the same client or contractor, I choose a better gig. When I can’t afford it, I try not to have any questions.” 

These high-stakes, high-turnover environments are largely fueled by the rapid pace at which the autonomous vehicle market is growing—as well as the immense scrutiny the sector has faced in recent years. Three cases in particular have cast the spotlight on autonomous driving and the risks that accompany this technology, with users-in-charge in the US held liable for anything untoward that may happen.

The first case occurred in 2020, when an Arizona grand jury indicted Rafaela Vasquez, a former safety driver in Uber’s self-driving car project, for the death of pedestrian Elaine Herzberg in Tempe, Arizona in 2018. The crash occurred after dark on a well-lit stretch of road. Herzberg was crossing the multilane road with her bicycle when the Uber SUV struck her at 38 miles per hour. Footage from a driver-facing camera showed that Vasquez had been watching The Voice, a reality show, on her phone just seconds before the collision. Vasquez was charged with negligent homicide committed with a “dangerous instrument”. Uber struck a quick settlement with Herzberg’s heirs, and shelved its fledgling self-driving technology program for several years.

The second case occurred in May 2021, when the California Highway Patrol arrested Param Sharma, 25, for riding down a highway in the back seat of his self-driving Tesla. Sharma was only the tip of the iceberg for Tesla-related incidents. The National Highway Traffic Safety Administration (NHTSA) has opened investigations into more than two dozen Tesla crashes, including a fatal incident in April 2021 that police said occurred with nobody in the driver’s seat. Tesla’s Full Self-Driving system—a step up from the standard Autopilot feature—enables a car to automatically change lanes, navigate highway on-ramps and exits, and recognize stop signs and traffic lights. But even in its most advanced iteration, the system has major flaws. Tesla warned the software’s beta testers to be vigilant, as the feature may “do the wrong thing at the worst time.” Reports have shown Teslas driving through stop signs, slamming on the brakes for yield signs even when the merge was clear, and stopping at every exit while going around a traffic circle.

Following Sharma’s stunt, a man whose Tesla was said to be on Autopilot when it crashed into a car, killing two people, was charged with vehicular manslaughter. This third case, that of 27-year-old limousine service driver Kevin George Aziz Riad, is likely the first in which a motorist using a partially automated driving system has been accused of a felony over a fatal accident.

According to reports, Riad’s 2016 Tesla Model S collided with a Honda Civic in Gardena, California, on Dec. 29, 2019, killing Gilberto Alcazar Lopez and Maria Guadalupe Nieves-López. A civil case naming Riad and Tesla Motors Inc. as defendants alleges that the car was traveling at an “excessively high rate of speed” when it crashed. Riad and his companion, a woman in the Tesla, were hospitalized with non-life-threatening injuries.

When asked about the manslaughter charges against Riad, the National Transportation Safety Board (NTSB) issued a statement saying there is no vehicle on sale that can drive itself. And whether or not a car is using a partially automated system, the agency said, “every vehicle requires the human driver to be in control at all times.” The NTSB added that all state laws hold human drivers responsible for the operation of their vehicles. Though automated systems can help drivers avoid crashes, the agency said, the technology must be used responsibly.

It really looks like AI will be built with FI—Filipino Input. It’s exciting and I’m glad to be part of it.

Jan A., Freelance Microtasker

Across the pond from the US and its embattled autonomous vehicle sector, the UK is now considering removing criminal liability for driverless car accidents, as proposed by government law experts. The proposal, outlined to Parliament, details a shift in legal responsibility ahead of self-driving vehicles hitting the streets later this year. The switch could mean the end of speeding tickets for users-in-charge, because they won’t be held accountable for mistakes made by their cars.

According to Jan A., a freelance microtasker working on image localization, absolving users-in-charge of criminal liability “feels like a huge leap… in the wrong direction.” Jan speculates that the responsibility for car accidents may somehow eventually fall on the freelancers and analysts who do the heavy lifting in image recognition and deep learning, jeopardizing the industry’s burgeoning revenue streams in the Philippines. She recommended instead that legislation and regulation ensure humans retain oversight of autonomous mobility technology, as is the case with autopilot technology in aviation.

Another major hurdle is that driverless tech still needs to catch up with its marketing spiel. In May 2021, The New York Times reported that self-driving cars are still overwhelmed by the multitude of scenarios they encounter on the road. Things like road flares or fog may be normal sights to human drivers, but these seemingly mundane instances continue to befuddle machines, causing them to course-correct unnecessarily and potentially figure in accidents. Companies may need to pump billions of dollars more into R&D to perfect the tech and train the networks exhaustively, according to experts. Meanwhile, evolving challenges are seen and felt most keenly in the tasks required of workers like Jan and Anton. For example, in a category called Atmospherics, microtaskers are asked to label each drop of water in a rainstorm, so that cars do not mistake the raindrops for obstacles.

Florian Alexander Schmidt, professor at the Dresden University of Applied Sciences, noted that the most important innovation to emerge from self-driving cars is not the cars themselves, but the massive pool of labor that the industry has inadvertently created. Many freelancers and microtaskers on popular platforms have since trained AI in medical technology and smart home devices, and even populated recycling models for world governments. Now, independent contractors and microtasking platforms are trying to break other technology tasks down into smaller pieces, or allow people to do them on their phones, opening up jobs for an even greater number of Filipino workers, both at home and abroad.

“The only chinks in the chain I can see would be the expensive internet connection costs and dismal broadband speeds, the politicized 5G tech rollout, and a wide skill gap for tech workers,” Jan said. “But other than that, it really looks like AI will be built using FI: Filipino input. It’s exciting, and I’m glad to be a part of it.”

I.W. Gonzales
