Many computer vision interns lack AI ethics training – Protocol

The computer vision research community is behind on AI ethics, but it’s not just a research problem. Practitioners say the ethics disconnect persists as young computer vision scientists make their way into the ranks of corporate AI.
Dismissive attitudes toward ethical considerations can hinder business goals to operationalize principles promised in splashy mission statements and press releases.
Computer vision forms the foundation for AI-based technology products, offering immense potential for helping spot disease symptoms or ensuring that autonomous vehicles accurately recognize objects on the road. But the same techniques also form the underpinnings of tech with immense potential for personal harm and societal damage — from discriminatory facial recognition-fueled surveillance and disinformation-spreading deepfakes to controversial tech used to detect people’s emotional states.
The possible negative impacts of AI that computer vision researchers help bring to life are getting more attention, prompting AI businesses to emphasize the importance of ethical considerations guiding how their products are built. Yet over the last several years, the computer vision community has been reluctant to recognize connections between the research advancements and cool math problem-solving achievements celebrated at one of its most prestigious annual conferences, and the possible uses for that tech once it is baked into apps and software products.
This year, that began to change, albeit slowly.

For the first time, the Computer Vision and Pattern Recognition Conference — a global event that this year attracted companies including Amazon, Google, Microsoft and Tesla to recruit new AI talent — “strongly encouraged” researchers whose papers were accepted to the conference to include a discussion of the potential negative societal impacts of their research in their submission forms.
“Because of the much more real impact that computer vision is playing in people’s lives, we instituted this process for the authors to discuss both the limitations of their papers [and] also potential social limits,” said Dimitris Samaras, a program chair of this year’s CVPR conference, and a professor and director of the Computer Vision Lab at Stony Brook University.
“It’s mostly so that people – authors – are forced to think and frame their work in a way that impacts are recognized as early as possible, and if necessary, [mitigated],” Samaras told Protocol.
The policy shift ruffled some feathers. Academics are “super aware” of the potential impact of their research on the real world, said one conference attendee who asked not to be named. But because researchers cherish their academic freedom, he said, asking them to predict the future applications of research that may be at a very early stage and years away from product viability restricts that independence.
“They are not good at telling you what the applications of their research are. It’s not their job,” he said.
“That is exactly what pisses me off,” said Timnit Gebru, founder and executive director of the Distributed Artificial Intelligence Research Institute, and a researcher with a Ph.D. in computer vision. “[Computer vision researchers] have convinced themselves that it’s not their job.”
While presenting a workshop on fairness, accountability, transparency and ethics in computer vision at CVPR in 2020, Gebru said she experienced what she considered a general disregard for ethical considerations and the human rights impacts of computer vision-based technologies used for border surveillance, autonomous and drone warfare and law enforcement.
Gebru told Protocol she is now “done” with CVPR and has soured on the computer vision field because of the “inability for them to be introspective.”

“We personally believe it is the researcher’s job,” Samaras said, regarding consideration of computer vision’s ethical implications.

This isn’t just a research problem though. Some AI practitioners say that the ethics disconnect continues past the research phase as people like the young computer scientists vying for tech jobs at CVPR make their way into the ranks of corporate AI. There, dismissive attitudes toward ethical considerations can hinder business goals to operationalize ethics principles promised in splashy mission statements and press releases.
“I think that was one of my frustration points in my tech career,” said Navrina Singh, a computer engineer and founder and CEO of Credo AI, which sells software for keeping track of data governance and audit reviews in the machine learning development process.
“As technologists, we were incentivized to build the highest-performing systems and put them out on the market quickly to get business outcomes,” said Singh. “And anytime we would talk about compliance and governance, the technologists were like, ‘Oh, this is not my problem. That’s not my space. That’s not my incentive structure.’”
CVPR attendance has doubled in the past five years; this year’s show attracted around 10,000 attendees, over half of whom participated in person, according to conference organizers.
The 2022 CVPR conference was held at the convention center in New Orleans, where a growing number of surveillance cameras installed throughout the city are plugged into a real-time law enforcement crime center. The city is currently considering lifting a ban on facial recognition and other surveillance tech established just two years ago.
In its new ethics guidelines, CVPR organizers listed some examples of negative impacts of computer vision. “Could it be used to collect or analyze bulk surveillance data to predict immigration status or other protected categories, or be used in any kind of criminal profiling?” they asked. “Could it be used to impersonate public figures to influence political processes, or as a tool of hate speech or abuse?”

Left: Computer vision researchers attend a workshop held at CVPR in New Orleans. Right: Amazon Science recruited interns at the CVPR conference, where the company held workshops on its Amazon Go computer vision tech. Photos: Kate Kaye/Protocol
Some researchers who presented their work at the conference acknowledged the possible downsides. In a paper about high-resolution face-swapping via latent semantics, researchers wrote, “Although not the purpose of this work, realistic face swapping can potentially be misused for deepfakes-related applications.” To limit the deepfake potential of their research, the authors proposed restricting how the model is released for use and developing deepfake-detection techniques.
However, because CVPR merely encouraged researchers to include an impact assessment in their papers, and did not require them to include that information in their published papers available for viewing outside the conference review process, many make no mention of the ethical implications of their work. For example, another publicly available research paper accepted at this year’s conference, detailing region-aware face-swapping — which can be used to enable deepfakes — does not include any social impact statements.
In fact, researchers were only asked to tell reviewers whether or not their work might have a social impact. “You could say that it’s a pure math paper [so] there isn’t social impact. If reviewers agree with you, there’s nothing to say,” Samaras said.
Some researchers bristle at the increased concern around ethics, in part because they are producing incremental work that could have many future applications, just like any tool might.
“It’s not the techniques that are bad; it’s the way you use it. Fire could be bad or good depending on what you are doing with it,” said François Brémond, a cognitive and computer vision researcher and research director at Inria, the French national research institute for digital science and technology, in an interview at the CVPR conference.
Brémond suggested there is too much focus on potentially negative uses of some computer vision research, particularly when it is designed to help people. His current work involves the use of computer vision to detect key points on faces to gauge subtle changes in expressions of autistic individuals or people with Alzheimer’s. The early-stage research could help decipher signs of internal changes or symptoms and help health care workers better understand their patients, he said.

Controversy over facial expression detection and analysis software led Microsoft to pull it from general use, but retain it in an app used to help people with vision impairment.
Brémond said he saw no reason to include a social impact section in a paper he presented at CVPR because it addressed generalized video action-detection research rather than something directly related to a specific use. The research had no “direct, obvious link to a negative social impact,” Brémond wrote in an email last week. He explained that he is already required to provide information to Inria’s administration regarding the ethical issues associated with his research.
It’s no wonder CVPR program chairs — including Samaras and Stefan Roth, a computer science professor in the Visual Inference Lab at Germany’s Technical University of Darmstadt — aren’t pushing too hard.
“Our decision to make that gradual was a conscious decision,” said Roth. “The community as a whole is not at this point yet. If we make a very radical change, then the reviewers will not really know how to basically take that into account in the review process,” he said, referencing those who review papers submitted to the conference.
“We were trying to break a little bit of ground in that direction. And it’s certainly not going to be the last version of that for CVPR,” Roth said.
Changing hearts and minds may come, but slowly, said Olga Russakovsky, an assistant professor in Princeton University’s department of computer science, during an interview at the conference where she gave a presentation on fairness in visual recognition.
“Most folks here are trained as computer scientists, and computer science training does not have an ethics component,” she said. “It evokes this visceral reaction of, ‘Oh, I don’t know ethics. And I don’t know what that means.’”

The vast majority of tutorials, workshops and research papers presented at CVPR made little or no mention of ethical considerations. Instead, trending subjects included neural rendering and the use of multimodal data — data that comes in a variety of modes, such as text, images and videos — to train large machine learning models.
One particularly hot topic this year: CLIP, or contrastive language-image pre-training, a neural network from OpenAI that learns visual concepts from natural language supervision.
“It’s getting much more on the radar of a lot of people,” said Samaras, noting that he counted 20 papers presented at CVPR that incorporated CLIP.
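For readers unfamiliar with how CLIP works, its core idea can be sketched with toy numbers: images and captions are embedded into a shared vector space, and an image is matched to whichever caption’s embedding is most similar. The embeddings below are invented for illustration — the real model learns them from hundreds of millions of image-text pairs and uses much larger vectors.

```python
import numpy as np

def normalize(v):
    """Scale a vector to unit length so dot products equal cosine similarity."""
    return v / np.linalg.norm(v)

# Pretend 4-dimensional embedding space; real CLIP embeddings are 512+ dimensions.
image_embedding = normalize(np.array([0.9, 0.1, 0.0, 0.2]))  # e.g., a photo of a dog
caption_embeddings = {
    "a photo of a dog": normalize(np.array([0.8, 0.2, 0.1, 0.1])),
    "a photo of a cat": normalize(np.array([0.1, 0.9, 0.0, 0.3])),
    "a diagram":        normalize(np.array([0.0, 0.1, 0.9, 0.4])),
}

# Cosine similarity between the image and each candidate caption.
scores = {text: float(image_embedding @ emb) for text, emb in caption_embeddings.items()}

# A softmax over the similarities turns them into zero-shot "probabilities."
logits = np.array(list(scores.values()))
probs = np.exp(logits) / np.exp(logits).sum()

best_caption = max(scores, key=scores.get)
print(best_caption)  # the caption whose embedding sits closest to the image's
```

This matching step is what lets CLIP classify images it was never explicitly trained to label — and also why it inherits whatever biases are present in the web data its embeddings were learned from.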
CLIP happened to be a topic of conversation at another AI conference, in Seoul, during the same week in late June when CVPR was held. But in this case, CLIP was not celebrated.
“CLIP is an English-language model trained on internet content gathered based on data from an American website (Wikipedia), and our results indicate that CLIP reflects the biases of the language and society which produced the data on which it was trained,” researchers wrote in a paper they presented at FAccT. The growing global conference is dedicated to research focused on fairness, accountability and transparency in sociotechnical systems such as AI.
While FAccT surely reached its endemic audience of AI ethics researchers, more than 2,000 people from the computer vision community who may have learned from that ethics-focused conference — including 460 from South Korea — were thousands of miles away in New Orleans at CVPR, advancing their craft with relatively minimal concern for the societal implications of their work. If anything, the physical separation of the simultaneous events symbolized the disconnect between the computer scientists pushing computer vision ahead and the researchers hoping to infuse it with ethical considerations.
But FAccT organizers hope to spread their message beyond the ethics choir, said Alice Xiang, a general co-chair of this year’s FAccT conference and head of Sony Group’s AI ethics office. “One of the goals we had as organizers of that is to try to make it as much of a big tent as possible. And that is something that we do sometimes worry about: whether practitioners who actually develop AI technologies might feel that this is just a conversation for ethicists.”

But cross-pollination could be a long time coming, Xiang said.
“We’re still at a point in AI ethics where it’s very hard for us to properly assess and mitigate ethics issues without the partnership of folks who are intimately involved in developing this technology,” she said. “We still have a lot of work to do in that intersection in terms of bringing folks along and making them realize some of these issues.”
Kate Kaye is an award-winning multimedia reporter digging deep and telling print, digital and audio stories. She covers AI and data for Protocol. Her reporting on AI and tech ethics issues has been published in OneZero, Fast Company, MIT Technology Review, CityLab, Ad Age and Digiday and heard on NPR. Kate is the creator of RedTailMedia.org and is the author of “Campaign ’08: A Turning Point for Digital Media,” a book about how the 2008 presidential campaigns used digital media and data.
Many people might think of the Google Play Store when they want to download a new app. But the Google Play Store is much more than that: It creates revenue for small businesses and provides jobs for many employees at those businesses. Google Play connects developers with over 2.5 billion monthly active users around the globe, helping to generate over $120 billion in revenues for developers, to date.
Purnima Kochikar, Vice President, Google Play Partnerships
As part of an exclusive fireside chat, Purnima Kochikar, Vice President, Google Play Partnerships, sat down with Protocol to discuss how Google helps developers succeed by giving them the tools to turn ideas into apps, build an audience to receive those apps and get feedback needed to create the best possible app to change people’s lives.
What do you find most gratifying about your role overseeing all aspects of the Google Play app ecosystem?
We started as a very tiny team, and over the past 10 years that I’ve been at Google, we’ve generated over $120 billion in revenue for developers, many who are entrepreneurs or work at small businesses.
I have the best job in the world – it is a huge responsibility and incredibly humbling. It’s also inspirational to provide business and technical consulting to help developers build apps that change people’s lives.
Because we sit on a large platform, we can look at best practices and guide developers to the best tools and technologies. We also have generated 2.5 billion monthly active users, which creates a ready-made global audience for the amazing creativity of app developers.
I’ve always said that Android and Google Play are blank canvases, and developers are the artists who paint on them. I love seeing how developers turn ideas into reality.
What have you learned while leading the Google Play Store?
One of the most important things that I’ve learned is that imagination and creativity are not constrained to the places where we think they are. You always think about Silicon Valley. We think about New York. We think about London and the big cities. But our developers come from everywhere — most are also small businesses, like the brick-and-mortar companies you see in your town or neighborhood. You can have a great idea sitting in Nashville, Tennessee, or San Diego, or New York, or San Francisco. But when you offer those ideas through Google Play, you have a global audience waiting for you.
One of my favorite examples is GoNoodle, which is a small business in Nashville. A creative entrepreneur there saw joy when people get together and get healthy. Now, 95% of elementary schools in the U.S. use the app, which shares interesting ways for kids to get moving and focus on their health. This level of reach would have been unthinkable in the past, but now the app builder has the platform and distribution model to give all schools access to the app.
You mentioned developer creativity. Let’s talk about the role that creativity plays in the app development process.
Small businesses have amazing ideas. And they really understand their customers. They want to create truly amazing apps and really focus on their users. But some stumble with the practical reality. We give developers the tools and technology to turn their creative ideas into apps that users can download and then use to change both their lives and their users’ lives.
One of the ways that we help small businesses succeed is Google Play Academy, which is a self-serve education platform. A developer anywhere in the world can access our videos, blogs and tips to understand how to both build and publish a great app. Because small companies don’t have lengthy testing times like bigger companies do, we also created the alpha-beta program where a developer can invite users to test their apps. We also provide tools and templates, such as pre-registration, to help generate an audience for their app before it’s even published.
The statistic that I’m most proud of is that in 2021, more than 2 million jobs existed in the United States, thanks to Android and Google Play. These aren’t jobs at Google, these are jobs that exist because developers grew their small businesses after publishing apps on the Play Store.
Many small businesses were hit especially hard during the pandemic – did you see any effects of that for the Google Play ecosystem?
During the past two years there has been a big debate between life and livelihood. A lot of people had to make a choice between the two. Those who could work from home didn’t have to make that hard choice because we could have both life and livelihood — and tech was the reason people could have both.
Mobile apps let small businesses digitize, pretty much overnight. We saw small restaurants use apps to make food delivery possible so people could stay home and order food. The apps helped them stay open and even created new jobs, especially in delivery. During the pandemic, most of the delivery apps in the Play Store hit 10-year KPIs – meaning that they saw engagement in just two years that most apps take 10 years to hit. It’s been truly fascinating — and truly humbling — to see what apps made possible in the middle of some of the worst times of our lives.
You’ve shared several differences in the apps and developers — location, size, ideas. Is there something that most apps have in common?
Each app developer truly focuses on a problem that’s near and dear to their heart, something that sparked their imagination, or something they feel deeply about. They each truly believe that they have a solution to make their community, their country, or the world better – and they aspire for their app to be used by a lot of people – they want to succeed.
Interestingly, most app developers with apps in the Play Store are actually small businesses, and they’re experiencing the benefits and challenges that come with starting out — even today’s big businesses started as small businesses.
People often ask me if they can succeed, because they don’t see other successful people who look like themselves. We need more women starting companies. We need more underrepresented groups starting companies. So, we are investing in this area. We want to make sure there are more people and companies like each of us on Google Play — so that the next kid who has a dream believes that they can be successful with their app.
I’m super excited about a program we launched in 2017 called Change the Game. Many people are surprised to learn that in 2020, 41% of people playing digital games were women. We still need more people to build apps for women. We want women and girls to know that they can create games, and so with Change the Game, we’re helping them build game apps – helping to support and empower women as game players and creators. We want to help developers succeed, and this is one of the many ways we are doing that.
When researching for this article, I was surprised that 97% of developers do not pay any fees to Google Play. What are some other things that people get wrong about Google?
Many people don’t realize the many ways developers benefit from Google Play and that the core DNA of Android is open. From the minute that developers get a creative idea, they have every tool they need to build the app, understand the security policies, launch the app and gain a global audience.
Another common misconception is that apps must be downloaded from a single location. But there are alternative app stores that are available on Android — and developers can distribute their apps through a website, meaning their creativity is not constrained. Developers have choices about where to distribute.
Most importantly, I truly want people to get to know the developers and the value that they’re finding on Play. You can read their amazing stories on our We Are Play site. Take a few minutes and download their apps to see firsthand how they can help you change your own life.
Skeptics say there’s a better way to protect warehouse workers: redesign jobs to make them safe.
If the devices can consistently and accurately identify risky movements, the data could help companies shirk responsibility for creating jobs that cause injuries.
Anna Kramer is a reporter at Protocol (Twitter: @ anna_c_kramer, email: akramer@protocol.com), where she writes about labor and workplace issues. Prior to joining the team, she covered tech and small business for the San Francisco Chronicle and privacy for Bloomberg Law. She is a recent graduate of Brown University, where she studied International Relations and Arabic and wrote her senior thesis about surveillance tools and technological development in the Middle East.
There’s a leaderboard that ranks you against your teammates. There’s a vibrating plastic rectangle that straps to your hip, your belt or your back. Earn enough points, and you might even win a prize: headphones or maybe a flat-screen TV.
But this isn’t laser tag — this is an industrial warehouse.
And this isn’t a game: It’s a system that is supposed to reduce the chance that you might blow out your back or your knees working in one of the countless warehouses, factories, data centers or delivery companies that make up the backbone of the country’s tech industry.
Tech startups like Kinetic, Modjoul and StrongArm, inspired by the popularity of Fitbits, Apple Watches and other consumer wearables, are selling similar devices to companies like Walmart and Amazon with the promise they’ll reduce the nagging — and costly — problem of worker injuries. The pitch: Give watches, belts or harnesses to every worker, track their movements and record their data, buzz them whenever they make an unsafe move, and watch injury rates drop.

The pitch is working, even as workplace safety experts point out the obvious: that the human body is not designed to repeat at rapid-fire pace the tasks required of many of today’s warehouse jobs. Fitting workers with these gadgets could enable companies to avoid bigger, long-term fixes to their workplaces, all while allowing them to record workers’ every twist, fall and bend, skeptics say.
In April, Amazon chose Modjoul for its first round of investments from its new $1 billion Industrial Innovation Fund (Amazon declined to comment further about its investment). Walmart had implemented StrongArm’s wearable tech in 18 buildings and across more than 6,000 workers as of May 2021, and a Walmart spokesperson confirmed in June 2022 that the company is continuing to roll out the technology to more facilities.
These wearable tech companies gamify safety. Most of them give workers a safety score or allot points every day to rank employees against their peers, turning their scores into a competition and urging them to do better when they next clock in. StrongArm goes so far as to call workers “Industrial Athletes” and names its demo devices for famous athletes (I used the “Alex Morgan” device when I tested the system in June).
These tools — whether belts, watches or other wearables — work in the same general way. They record movements like bends, twists and lifts, and then use proprietary algorithms to calculate when those movements cross into territory that could cause an injury, prompting either the worker or the manager to correct how someone is moving. Some record location data when workers move through a facility; some have microphones. All of them promise that they do not collect biometric or sensitive health information.
The analytics tools provide an immense trove of granular data about every worker. StrongArm’s platform shows every worker’s safety score over any time span and specifies the time of day when the riskiest movements occur, both on average across the workplace and for each worker. It even charts “tilt speed,” which is the rate at which someone bends at an angle, and counts “forward bends,” as well as the time of day when they happen. The platform can generate charts for how “athletes” compare to their peers and how facilities compare to industry averages.
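As a rough illustration of how such a platform might turn raw movement events into a worker’s safety score: the event names, tilt-speed cutoff and penalty values below are invented for the sketch, since vendors’ actual scoring algorithms are proprietary.

```python
from collections import Counter

# Hypothetical cutoff: a forward bend faster than this is treated as risky.
RISKY_TILT_SPEED = 60.0  # degrees per second (assumed value)

def safety_score(events):
    """Score a shift from 0-100: start at 100, subtract a penalty per risky event.

    Each event is a (kind, tilt_speed_deg_per_s) pair. A forward bend is
    penalized only when its tilt speed exceeds the cutoff; a twist is
    always penalized a smaller amount.
    """
    score = 100.0
    counts = Counter()
    for kind, tilt_speed in events:
        counts[kind] += 1
        if kind == "forward_bend" and tilt_speed > RISKY_TILT_SPEED:
            score -= 5  # fast, stooped lift
        elif kind == "twist":
            score -= 2
    return max(score, 0.0), counts

# One toy shift of movement events recorded by the wearable.
shift = [
    ("forward_bend", 80.0),  # fast bend -> penalty
    ("forward_bend", 30.0),  # slow, controlled bend -> no penalty
    ("twist", 45.0),
    ("forward_bend", 90.0),  # fast bend -> penalty
]
score, counts = safety_score(shift)
print(score, counts["forward_bend"])  # 88.0 3
```

Ranking workers by a number like this is exactly what makes the leaderboards possible — and, as the skeptics quoted below note, what makes the same data usable for less benign purposes.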

We want to make sure that you are able to play with your grandkids and go fish on the weekend when you retire.
Most of these companies were conceived within the last 10 years and moved out of pilot and beta testing just before or during COVID-19. As the market continues to grow, some industry experts worry about the potential for abuse. If the devices can consistently and accurately identify risky movements — still an “if,” according to experts interviewed by Protocol — the data could help companies shirk responsibility for creating jobs that cause injuries. For example, if I received a poor safety score and then pulled a muscle in my back, the injury could be blamed on a failure to move properly as indicated in my safety score, even though the strain of performing the job led to the injury. The data could also be used punitively: A company could generate a list of the least “safe” workers and fire them before they get hurt.
“These companies will not protect workers with highly repetitive jobs where they are forced to do forceful, stressful … positions over and over and again, twisting and turning and bending their wrists and elbows and contorting their body,” said Debbie Berkowitz, a fellow at Georgetown University’s initiative on labor and the working poor as well as a former senior policy official for the Occupational Safety and Health Administration. “The job will still be dangerous.”
All of the company leaders interviewed by Protocol insisted that their technology is not built for punitive or masking purposes. StrongArm has companies pledge not to use its data punitively; Modjoul COO and founder Jen Thorson said that its data focuses on identifying companywide problems. These leaders say their tools are intended to help companies identify poorly designed jobs and reduce injury risk, saving them money on worker compensation payouts and reducing turnover in a tight labor market.

“How we view it and how we think about it is, this is not different from the gloves that protect your hand from laceration,” Thorson said. “This just makes sure we are protecting that part that’s unprotected. We want to make sure that you are able to play with your grandkids and go fish on the weekend when you retire.”
The story of these startups exemplifies a recurring theme in the tech industry. New technology revolutionizes an industry — in this case, ecommerce and delivery — and causes new problems along the way. Startup founders then design more new technologies that promise to address those problems, usually by collecting vast amounts of data and creating proprietary algorithms to analyze it.
Amazon’s shipping and fulfillment revolution has transformed consumer expectations for widespread and almost immediate access to any good at any time. Every major Amazon competitor has been forced to evolve its warehousing and delivery models in response, leading to a dramatic expansion of the industry and the number of people employed in it, as well as major changes to the physical jobs themselves.
Fitting workers with wearables could enable companies to avoid bigger, long-term fixes to their workplaces. Photo: Anna Kramer/Protocol
Heavily roboticized warehouses — especially Amazon’s — have reduced the amount of walking and carrying the average worker needs to do to fill and ship a package.
Instead, workers now perform more specific, repetitive jobs that complement the robotic systems, usually involving constant picking, turning, placing and scanning while standing in one place. Those continual motions, especially at the high speeds required for most workers to meet productivity expectations (and the consumer demand that powers it all), cause musculoskeletal injuries.
Injury data reflects these changes: In 2016, the number of reported injuries in warehousing and storage in the U.S. was just over 14,000, according to Bureau of Labor Statistics occupational injury estimates compiled by Protocol. Those numbers steadily increased over the next five years, hitting almost 25,000 in 2020 — approximately a 78% increase. By far the greatest number of injuries fall into the “overexertion and bodily reaction” category, at more than 11,500 such injuries in 2020. Nearly half of all 25,000 reported injuries in 2020 were sprains, strains or tears.
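The percent-change arithmetic behind those figures checks out against the rounded totals cited above (the precise BLS estimates, which are slightly different, give the ~78% figure):

```python
# Rounded totals from the BLS estimates cited above.
injuries_2016 = 14_000
injuries_2020 = 25_000
overexertion_2020 = 11_500

pct_increase = (injuries_2020 - injuries_2016) / injuries_2016 * 100
overexertion_share = overexertion_2020 / injuries_2020

print(f"{pct_increase:.0f}% increase")          # ~79% with these rounded totals
print(f"{overexertion_share:.0%} of injuries")  # 46%: the largest single category
```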

The problems are especially bad at Amazon’s warehouses, which had an average injury rate at about double Walmart’s, the ecommerce giant’s biggest competitor, from the beginning of 2017 to the end of 2020. Walmart and Amazon are the first- and second-largest private-sector employers in the U.S., respectively; Amazon has averaged somewhere between six and nine injuries for every 100 employees and Walmart has averaged between three and four per 100 employees since 2018, according to an analysis of OSHA data by the research arm of the Strategic Organizing Center, which represents a collection of major unions.
“Like other companies in the industry, we saw an increase in recordable injuries during this time from 2020 to 2021 as we trained so many new people — however, when you compare 2021 to 2019, our recordable injury rate declined more than 13% year over year,” Kelly Nantel, an Amazon spokesperson, wrote in a statement to Protocol in April 2022. “While we still have more work to do and won’t be satisfied until we are excellent when it comes to safety, we continue to make measurable improvements in reducing injuries and keeping employees safe.”
The problem has attracted the attention of the Department of Labor, which announced an audit of what OSHA has done to “address the increase in severe injuries at warehouse and order fulfillment facilities of online and other retailers,” according to a December 2021 memo.

“These wearables reduced Walmart’s warehouse injuries 64%.” After getting this pitch four times from StrongArm’s press team, I asked the company to let me try out its device. As a notoriously injury-prone person, I felt particularly suited to give these wearables a workout.
On a sweaty Wednesday in June, I hobbled into StrongArm’s Brooklyn office about 24 hours after twisting my knee, meaning that it was basically guaranteed I would be unable to move in an injury-free way during our demo. I would be putting StrongArm through its paces.

A black rectangular device, about the size and shape of a flip phone, tracked my movements. Photo: Anna Kramer/Protocol
I chose soccer star Alex Morgan for my demo character and picked up the clearly well-used, dinged-up black rectangular device — about the size and shape of a flip phone — that would track my movements. The sign-in monitor showed that on the previous day, the device’s user had scored a 77 for safety, fairly normal for the average worker and probably pretty upsetting for the real Alex Morgan. The black rectangle was slipped into a small backpack that fit over my shoulders, and StrongArm’s product team showed me how to get a reaction from the device.
I picked up the bag containing my heavy laptop and put it back down again, angering the little gadget tucked between my shoulder blades. It started to vibrate wildly as soon as I bent toward the ground, causing me to shoot upright in surprise. I lifted a chair over my head and dropped it to the floor, and my shoulders buzzed again. I leaned awkwardly over a low table to take notes and felt the same warning. Alex Morgan’s safety score probably dropped precipitously during the hour I limped around, testing its limits.
The easiest way to set it buzzing? Simple toe touches.
The toe touches illustrate the pros and cons of a device like StrongArm’s. Leaning over to lift something without bending your knees is one of the easiest ways to hurt yourself. The more vertical your back and the more bent your knees, the less strain you put on your body.
But if your job requires you to bend over again and again, a vibrating device won’t change that; it will, however, irritate the worker wearing it. StrongArm knows this, so its product is designed to stop vibrating if the wearer persists with a dangerous movement despite repeated vibrations. My injured knee made it impossible to bend down in a safe manner; after about five buzzes, the device stayed still and sulkily silent, even when I twisted sideways in a manner that was obviously dangerous.

In an ideal world, this scenario could flag managers and safety professionals that a job is inherently high-risk, making them realize that something about the way it’s performed needs to change. On StrongArm’s demo analytics dashboard, one of the tools available for managers shows a heat map of a prototype warehouse floor, with red indicating spots of high-risk motion where workers consistently score poorly. Theoretically, this would enable a company to focus on addressing problems in these areas of the workplace.
This is where the technology holds the most promise. But it’s also where the tech falls short. Berkowitz believes these bells and whistles detract attention from what every warehouse designer, safety professional, ergonomist and even most people walking down the street know: You shouldn’t be bending down again and again to lift things. Companies like Walmart and Amazon don’t need a device to flag high-risk jobs, she said.
“Warehouses have the data, they know who is reporting to their first aid stations, they have a record of everybody that has come in with hand pain, wrist pain, back pain, shoulder pain. They know exactly what job it was,” Berkowitz said. “It’s well-known what positions are neutral for the body and those that are not neutral, like raising your elbow, bending your wrist, if you’re doing it over and over again and you’re doing it in a forceful way.”
One problem with relying on tech to calculate workplace injury risks is the danger of trusting bad data, said Richard Goggins, an ergonomist who has worked for Washington state’s Labor and Industries department (the state-level OSHA agency) for more than 25 years. “If you don’t know you’re getting bad data and you make decisions based on that, you could say this job appears fine and this job appears risky when that may not be the case,” said Goggins, who spoke to Protocol not as an official spokesperson for the agency but based on his ergonomics expertise.
Goggins said he hasn’t decided if the wearables are worth the hype. He has observed warehousing companies in Washington implement these devices but said that they aren’t eager to share the data with Labor and Industries when he and his team arrive for investigations or inspections.

But companies like Walmart have clearly decided there’s something behind the hype that’s worth paying for. On the day I arrived at the StrongArm offices, the Brooklyn warehouse was stacked high with teetering boxes and shelves overflowing with equipment. The company was preparing to move to a space many times bigger than this one, a manager told me. StrongArm’s pallet shipments of equipment are now so large they can’t fit through the doors of the current office, and there’s nowhere near enough space for the number of employees who want to come in for work every day.
David Karbt, StrongArm’s production director, pulled a much smaller, sleeker, palm-sized black square out of one of the boxes. The new iteration of the company’s wearable wasn’t quite ready for me to test, but it can be worn on a belt instead of between the shoulders and could eventually collect environmental data like temperature and sound levels.
StrongArm’s data-gathering is meant to help workers perform tasks more safely. Photo: Anna Kramer/Protocol
“This is going to be standard issue,” said Karbt, who sees the wearables eventually becoming status quo in warehouse-type settings. “We have the tech to solve this problem, and there’s a big return on investment.”
While I meandered around the front room of the Brooklyn warehouse, doing my lopsided bends, squats and toe touches, carrying my backpack across the room, I asked Karbt and Jervon Ralph, a production manager, whether they worry about the data being abused to penalize workers. “That would be very deflating,” Karbt said as Ralph nodded in agreement.
Ralph used to work in warehousing and started to feel back pain in his early 20s, which is what steered him to work for StrongArm. “One of the older employees told me to get a back brace, and I knew this is a big issue if, at 20, some guy is telling me to get a back brace,” he said.
While I couldn’t see my own safety score — the data had to upload after I removed the device from between my shoulders — Karbt assured me that it would have plummeted after all the ways we’d deliberately tanked it. If I had come back the next day, Alex Morgan would probably have a bright red square indicating very unsafe behavior.

In that case, StrongArm would advise coaching of the employee and ensuring that the job is designed to be performed in the safest way possible. But it’s impossible to know if companies will heed this advice. “The hope is always that employers will take that coaching approach,” Goggins said.
Berkowitz said the solution is far simpler. “Workers are working so fast that they don’t have time to lift properly,” she said. “We all know that repetitive stressful movements cause injuries and the way to decrease them is to redesign jobs.”
Anna Kramer is a reporter at Protocol (Twitter: @ anna_c_kramer, email: akramer@protocol.com), where she writes about labor and workplace issues. Prior to joining the team, she covered tech and small business for the San Francisco Chronicle and privacy for Bloomberg Law. She is a recent graduate of Brown University, where she studied International Relations and Arabic and wrote her senior thesis about surveillance tools and technological development in the Middle East.
The Democrats have an ambitious legislative agenda following the July recess. Chip subsidies hang in the balance, and Sen. Mitch McConnell wants Democrats to stay focused on them.
USICA — and its $52 billion in subsidies for domestic chip manufacturing — no longer seems to be a sure thing.
Hirsh Chitkara ( @HirshChitkara) is a reporter at Protocol focused on the intersection of politics, technology and society. Before joining Protocol, he helped write a daily newsletter at Insider that covered all things Big Tech. He’s based in New York and can be reached at hchitkara@protocol.com.
Congressional Democrats are in need of a signature legislative win ahead of the midterms, and the United States Innovation and Competition Act could be exactly that. Chipmakers including Intel, Samsung and TSMC have forged ahead with plans to build fabrication facilities in the U.S., trusting that Congress will eventually sort things out and pass the bill.
But with Congress currently suspended for a tense July recess, the USICA — and its $52 billion in subsidies for domestic chip manufacturing — no longer seems to be a sure thing.
The industry’s patience is wearing thin. In June, Intel made the largely symbolic move of delaying the groundbreaking ceremony for its Ohio fabrication facility, which could ultimately involve an investment of around $100 billion. A TSMC board member also recently warned that the timing of the company’s $12 billion Arizona factory’s completion would depend on Congress passing the subsidies.
Major industry players are heading to D.C. to lobby for the subsidies, which have significant bipartisan support but are ensnared in thousand-page-long bills, their political baggage dragging them down.

IBM’s vice president of government and regulatory affairs, Chris Padilla, said the company plans to fly employees to D.C. in the coming weeks to hold hundreds of meetings with policymakers. “We’re planning to spend this month making an all-out push,” he told Protocol.
Unfortunately for Democrats, Senate Minority Leader Mitch McConnell understands their need for a legislative win and intends to use it against them. At the end of June, he tweeted: “Let me be perfectly clear: there will be no bipartisan USICA as long as Democrats are pursuing a partisan reconciliation bill.”
Meanwhile, Senate Majority Leader Chuck Schumer is forging ahead with an ambitious legislative agenda that includes USICA and a new version of the failed Build Back Better bill. The resurrected Build Back Better deal is expected to introduce prescription drug pricing reforms, extend Affordable Care Act subsidies, raise some taxes and provide as much as $300 billion in subsidies for green energy initiatives. Further complicating things, Democrats also face pressure to make good on long-awaited promises to pass their tech antitrust bills and to codify Roe v. Wade.
“What could happen from all this is that you wind up getting nothing,” said Padilla. He added that as Congress nears the end of session, “everything gets linked to everything else — and then what you get usually is a grand bargain or nothing.”
Even if the Democrats go along with McConnell’s demand to only focus on USICA, the reconciliation process won’t be easy. The House and Senate both already passed versions of the bill, but with major differences in partisan slants: The House’s Competes Act passed with only one Republican vote in February, while the Senate USICA bill passed with a healthy bipartisan coalition of 68 votes at the end of March.
“It is critical to get the bipartisan Innovation Act to the president’s desk before the end of August,” Rep. Ro Khanna, one of the bill’s co-sponsors, told Protocol. “We cannot continue to blow by deadline after deadline for a bill to create thousands of good-paying jobs and ensure the United States remains competitive with China.”

Key differences between the House and Senate versions center on trade and immigration: Only the House version of the bill includes a renewal of trade adjustment assistance programs and plans to expand the immigration system for high-skilled tech workers. The proposed immigration reforms exempt foreign-born workers with doctoral degrees in STEM fields from annual green-card limits that often force them to leave the U.S.
These difficulties won’t easily be resolved, so the chip industry has a backup plan: get the subsidies stripped out and passed on their own.
One Republican staffer told Protocol the USICA package seems to be falling down the priority list for Democrats. The staffer said Democrats are now more inclined to focus on issues that would appeal to voters ahead of midterms, such as the drug pricing reforms.
The semiconductor subsidies aren’t likely to do much for voters, unless they happen to live near proposed facilities. Congressional approval ratings have continued their steady decline, hitting just 16% in June, while inflation, gun control and abortion access have emerged as the key issues heading into the elections. McConnell’s bargain would force the Democrats to prioritize USICA, with the intended consequence of hamstringing the reconciliation bill, which includes tax hikes.
That leaves the possibility that nothing at all gets passed — not USICA, and not the skinny package focusing on subsidies. In that case, the chip industry would likely find itself working with a Republican, business-friendly Congress after midterms, and it would still have a shot at getting the subsidies then.
That makes it all a question of timing: The chip industry and its champions in government have argued that nations such as Japan, Germany and France will lure chipmakers away from the U.S. with their own subsidy packages, which are already on the table.
“Mark my words … if Labor Day comes and goes and this Chips Act isn’t passed by Congress, these companies will not wait, and they will expand in other countries,” Commerce Secretary Gina Raimondo warned at the end of June.

“CEOs have lost control of this.”
Joe Williams is a writer-at-large at Protocol. He previously covered enterprise software for Protocol, Bloomberg and Business Insider. Joe can be reached at JoeWilliams@Protocol.com. To share information confidentially, he can also be contacted on a non-work device via Signal (+1-309-265-6120) or JPW53189@protonmail.com.
The office of the CEO has long been a cherished bully pulpit, a haven for business leaders to influence public policy and push causes important to their companies, employees or themselves.
In 2016, top U.S. business leaders banded together to oppose North Carolina’s controversial “bathroom bill,” with some halting local projects and events in the state. The protest, a significant milestone that helped usher in the rise of CEO activism, was projected to cost the state an estimated $3.8 billion by 2030. More recently, many CEOs quickly mobilized to condemn Vladimir Putin’s unjustified attack on Ukraine and to halt business with Russia, a move that could have a profound impact on the region.
But as CEO advocacy becomes the norm, the increased cadence with which bosses deploy the megaphone could ultimately undermine the influential perch and lead executives to become just another voice in a sea of outrage.
It’s clear that the U.S. political system is broken. Elected officials have almost fully entrenched themselves in their respective political camps, emerging solely to throw grenades at the opposing troops and, every so often, vote on legislation that actually improves the lives of those they represent. You know, do their jobs.

The world has changed dramatically since Salesforce CEO Marc Benioff took that fateful stand in 2015 against then-Indiana Gov. Mike Pence. There’s constantly a new reason to be mortified: a deadly pandemic, mass shootings, insurrection, the destruction of civil rights, monstrous hate crimes; the list goes on and on.
CEOs still remain a relatively trusted voice as belief in most major institutions, including government and the press, reaches near-crisis levels. The challenge now becomes balancing how to sustain that respected reputation amid a never-ending stream of chaos.
However, CEOs remain hesitant to stick their necks out too far, particularly if it means undermining the company’s financial performance. And, frankly, there are many stakeholders who would probably prefer that.
Such is the reality in business. But it is one money grab that could be mutually beneficial for many stakeholders besides investors.
To date, CEOs have largely resisted the more sweeping actions workers have called for that go beyond strongly worded statements.
The exacerbation of the political divide in the U.S. is likely to force the same future for businesses as the country itself: a corporate America increasingly divided by “blue” or “red” labels. Sadly, business leaders are one of the few remaining glimmers of hope in an increasingly distrustful society. That means it’s now time for CEOs to decide which side to take.
