Feeling Broke? Now We Have AI To Decide Our Next Pay Raise
- Feb 9
- 6 min read

There’s a new kind of tension in workplaces that most of us haven’t fully learned how to describe. It isn’t just the sound of keyboards clicking, Zoom calls pinging, or productivity targets looming. It’s the creeping sense that one of the most personal parts of work, whether you get a raise, a promotion, or even keep your job, isn’t always being decided by another human being. Instead, some of those decisions are now being influenced or guided by artificial intelligence tools embedded in HR systems.
When I first learned about this, I felt a mix of fascination and discomfort. It makes perfect sense to want efficiency in large organizations. I’ve seen HR teams buried under paper, spreadsheets, emails, and performance ratings that pull them in every direction. The promise of AI swooping in to do some of that heavy lifting seems good on the surface. But then you realize something strange has happened: decisions that used to involve gut instinct, empathy, and a conversation between two people are now shaped by models trained on patterns and predictions. And that shift feels, well, oddly personal.
According to a recent report out of Washington, a growing number of companies are using AI tools such as ChatGPT, Google’s Gemini, or specialized HR agents to help draft performance reviews and guide decisions about promotions, raises, and even layoffs. More than 60 percent of surveyed managers said they use such systems to help with key employee decisions, and almost half admitted they let these tools influence high-stakes outcomes. Some even do so with minimal human oversight.
Think about that for a moment. Your raise, something you’ve worked for all year, could be informed by an algorithm that sees you as a set of performance data points. Your next promotion might be weighed against patterns the AI has learned from hundreds of other employees rather than the unique contributions you make. When that sinks in, it’s hard not to feel unsettled, as if the warmth and care we hope for in human judgment have been replaced with something clinical and distant.
There’s a promising side to this transformation too. At its best, AI can help level the playing field. It can reduce the burden of repetitive tasks for overloaded HR teams, allow for faster identification of performance trends, and, when used thoughtfully and with transparency, even surface patterns that help eliminate long-standing biases. Early adopters report that these tools save time and reduce administrative workload, giving human staff more bandwidth to focus on the strategic and empathetic interactions that really matter.
But the reality on the ground is often messier. Many managers using these tools lack formal training in how to use AI responsibly, and a troubling number let the AI make decisions with little human review. That means the very efficiency that is supposed to help could end up harming people if misunderstandings or errors go unchecked. The risk of lawsuits over unfair decisions is becoming more than a theoretical worry.
Emotionally, this trend stirs something deep in us because our work isn’t just a transaction. It’s tied to our identity, dignity, and sense of accomplishment. When the question “Did I do enough?” becomes “Did the algorithm think I did enough?”, it can trigger anxiety, skepticism, and even resentment. You start wondering if the AI really sees you or just interprets a handful of metrics. You wonder if a human would understand the context of your efforts, your late nights juggling deadlines, or the quiet leadership you show in moments that don’t fit neatly into a spreadsheet.
And there’s a broader societal challenge. We’ve built systems that increasingly rely on AI without building the social infrastructure to make people trust those systems. Surveys suggest that while organizations embrace AI in HR, many HR professionals themselves admit they don’t trust AI to make workforce decisions without significant human oversight. That trust gap isn’t just a technical problem; it’s a human one.
So where does this leave us as a society? It leaves us in a kind of uneasy transition. On one hand, there’s a genuine opportunity to make human resource practices more efficient, consistent, and data-informed. On the other hand, there’s a deep emotional and ethical responsibility we haven’t yet figured out how to carry. We need AI that assists humans, not replaces the nuanced judgment that defines fairness and compassion in the workplace.
We are learning new ways of working alongside machines. What matters most now isn’t just the capability of AI but the human choices we make about how, when, and why we let it influence decisions that shape people’s lives. Because a raise isn’t just a number. It’s validation. Recognition. Security. And that matters to us in ways no algorithm, however advanced, can ever fully understand.
The growing use of AI in HR decisions is quietly reshaping society in ways that go far beyond office walls. It changes how people experience work, how fairness is understood, and how power is distributed between individuals and institutions. What makes its impact so strong is that work is not just economic. It is emotional, social, and deeply tied to identity.
One of the most immediate effects on society is a shift in how people feel about being evaluated. When AI tools influence hiring, promotions, performance reviews, or layoffs, many workers begin to feel watched rather than understood. Even if the system is meant to be neutral, the awareness that an algorithm is involved can create anxiety. People start to wonder whether their effort is being seen in full context or reduced to metrics, patterns, and scores. Over time, this can erode trust, not just in employers, but in the idea that work is a place where human judgment and empathy still matter.
At the same time, AI in HR can change social norms around productivity and behavior. When decisions are influenced by data-driven systems, employees may adapt themselves to what they think the algorithm rewards. This can lead to more performative work cultures, where people optimize for visibility, measurable output, or constant availability rather than meaningful contribution. Society slowly absorbs the message that being human at work is less valuable than being legible to a system.
There is also a broader impact on inequality. In theory, AI has the potential to reduce bias by standardizing decisions and flagging unfair patterns. In practice, it can just as easily reinforce existing inequalities if it is trained on biased historical data or used without careful oversight. Groups that already face disadvantages may find those disadvantages quietly automated. Because algorithms are often opaque, affected individuals may not even know why a decision went against them, making it harder to challenge or correct injustice. This creates a sense of powerlessness that can spread beyond the workplace into how people relate to institutions in general.
Another societal effect is how responsibility becomes blurred. When AI influences HR decisions, accountability can feel diluted. Managers may defer to the system, HR departments may rely on vendor tools, and companies may hide behind technology when outcomes are questioned. This diffusion of responsibility makes it harder for people to feel heard or validated when something goes wrong. Over time, society risks normalizing decisions that deeply affect lives without clear human ownership.
There are also cultural consequences. Work has long been one of the main places where people experience recognition and dignity. When feedback, advancement, or termination is shaped by AI, that sense of dignity can feel fragile. People may begin to internalize algorithmic judgments as objective truth, even when those judgments are incomplete or flawed. This can affect self-worth, motivation, and mental health, especially in already stressful economic conditions.
Still, the impact is not entirely negative. If used carefully, AI can reduce administrative burden, free HR professionals to focus on human relationships, and help organizations make more consistent decisions. In societies struggling with scale, complexity, and limited resources, these tools can support better organization and planning. The key difference lies in whether AI is used as an assistant or an authority.
Ultimately, the societal effect of AI in HR is not just about technology. It is about what kind of society we want to build around work. One where efficiency quietly overrides empathy, or one where technology supports human judgment rather than replacing it. The choices being made now will shape how future generations experience fairness, dignity, and belonging in their working lives.


