A “Ponder” for Human Support
By: Trey Clark

As a computer science student, a tinkerer, and a young person navigating the job market, I’ve become far too familiar with AI. Tangentially related: I’m also taking a specialty in philosophy, which rarely means much beyond “I think a lot,” and that’s exactly what I plan to use it for here. The goal of this blog is not to take a definitive stance on AI, conduct some extensive audit, or anything of that nature, but rather just to “ponder” what the emergence of AI means for support garnered through human connection.
What do I mean by “support”? We’ll take an easy example: an “at-risk teen” who drinks with his friends and gets caught would be assigned a parole officer during his probation. Franklin University defines the role of a parole officer as to “provide social services to assist in rehabilitation … Make recommendations for actions involving formulation of rehabilitation plan and treatment of offender, including conditional release and education and employment stipulations”. This makes sense: a teen is making poor life decisions, so someone who cares should be there to make the effects of those choices clear, in hopes the teen becomes someone they are proud to be. This can’t be done by some simple machine; it must be human connection to meet the definition, and no machine can do this job the way a self-service register or a 3D rendering algorithm can do theirs … is what I would have said a year ago. “Human” support being replaced by A.I. is already being examined by equivalent-supervision.com in their article, “The Pros and Cons of AI Case Management and Parole Supervision”.
Without taking the “I hate progress” standpoint, let’s look at what equivalent-supervision actually claims: the pros revolve around cost-effectiveness and efficiency, which the American justice system is lacking, and the cons revolve around bias and fairness. To say “a computer trained off human decisions may be biased towards a specific race, gender, occupation, family life, etc.” is just a roundabout way of saying “humans are biased and unfair, and that may be reflected more blatantly when robots mimic them”.
Well, this is horrible, and it can be seen in a staggering number of statistics: according to the NAACP, “African Americans are incarcerated at more than 5 times the rate of whites”, and according to the Berkeley School of Public Health, “Relative to white students, Black students were 3.6 times more likely to have been suspended out of school … 3.4 times more likely to have been expelled”. Machine learning is, at its core, pattern recognition, and what we call “AI” is often just a large model doing exactly that, so this outcome makes sense. In the same way I can see a shiny red fruit and recognize “apple”, AI can see race or some other biased feature and stamp “incarcerate” or “expel”. But again, this is nothing new, as I explained in the last paragraph.
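Since this is, after all, a CS student’s blog, here is a minimal sketch of that point in Python (assuming NumPy and scikit-learn). Everything in it is invented for illustration, including the data, the group labels, and the numbers; it is not any real parole or case-management system. It just shows how a model trained on biased historical decisions reproduces the disparity even when the underlying “risk” is identical across groups.

```python
# A toy sketch of "bias in, bias out" (all data here is synthetic and invented
# for illustration; this is not any real parole or case-management system).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# A protected attribute (group 0 vs. group 1) that *should* be irrelevant,
# and an underlying "risk" score distributed identically across both groups.
group = rng.integers(0, 2, n)
risk = rng.normal(0.0, 1.0, n)

# Biased historical labels: past decision-makers handed down punitive
# outcomes more often for group 1, even at the same underlying risk.
past_decision = (risk + 1.0 * group + rng.normal(0.0, 0.5, n)) > 0.5

# Train a plain classifier on those historical decisions.
X = np.column_stack([group, risk])
model = LogisticRegression().fit(X, past_decision)

# Same risk, different group: the model has learned the disparity as a pattern.
for g in (0, 1):
    p = model.predict_proba([[g, 0.0]])[0, 1]
    print(f"group {g}, identical risk: P(punitive decision) = {p:.2f}")
```

Nobody told the model to treat the groups differently; the disparity was sitting in the training labels, and pattern recognition did the rest.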
So the next question becomes, “Okay, AI can be and is unfair because humans are unfair, so if it has the same outcome, then what’s the difference?” and that’s where we hit a predicament: how do we quantify human connection? We can quantify bias, as I did in the last paragraph, but if I look only at the outcomes of human support, then I become as robotic as the AI I’m pondering on now. Obviously the end result is “more support”, which is outcome-oriented, but what is the goal for the recipient of human support? Is it an outcome-oriented result, fitting into a statistic, or is it a new perspective or understanding that lets them live happily? This isn’t rhetorical; I legitimately can’t answer it, and I doubt it can be answered definitively. Obviously people who hate their lives and seek support to reform want to land in the “nicer” statistic, but at-risk teens aren’t chasing the “nicer” statistic itself; they simply want to achieve the goals set out by their parole officer and be happy doing it.
With that very important question simply shrugged at, I would like to move on to another danger of AI substituting for human connection: it will be a temporary fix rather than a permanent solution. Suppose the at-risk teen is given an AI parole officer, the AI has enough safeguards that it does its job “right” to the best of its ability, and the teen goes off to live their life. Is this all fine and dandy? Yes, but let’s revisit the argument that “AI is unfair because humans are unfair”. What is being done to change that? When a parole officer is unfair, there is some chance of change; the media can take hold of the story and push for social justice. But what do we do about a machine that is unfair? Business executives throw up their hands, say “we’re trying”, and then nothing? Perhaps some internal “AI affairs” body would have to be created to hold a machine accountable the way we hold a person accountable? Again, these are all ponderings, not a definitive argument.
Now let’s bring the broader scope of America and AI’s place in it into view. We can take a basic approach and define A.I. as a tool to increase efficiency. The emergence of power tools didn’t eliminate the need for a labor force; it strengthened it, as larger tasks could be undertaken more quickly; take the steam engine, for example. So could A.I. have a similar effect? I think the answer is yes; the progression of society can only benefit from increased efficiency. The question becomes: progression towards what, exactly? Let’s take a horrific example: the cotton gin. In 1793, Eli Whitney developed a machine to separate cotton fibers from their prickly seeds, which made cotton far more profitable, as it could be cleaned and sold at record speeds. This meant plantations in the South could now compete with northern industrialization and create more profit off the backs of the enslaved, making the South far more dependent on slavery, to the point that it fueled a civil war nearly seventy years later.
My source for the cotton gin information opens with “Progress has different meanings for different people… what was progress for white people was enslavement and further degradation for African Americans”, which sums up the point of this whole paragraph. Sure, increased efficiency can produce and yield more within society, but whom is this progress for? With the emergence of self-service checkouts, the people who had staffed the registers didn’t benefit. With the emergence of faster factory equipment, the workers operating it rarely benefited, but the owner of the factory did. A.I. looks likely to follow the same pattern; it is simply reaching the educated and privileged communities now. First simple labor was automated, and now more complex labor is being automated.
Tying this to education, somewhat because I have to or my boss won’t let me publish this: educators will be put out of jobs if their role of human support is replaced. Who benefits from this? The students? Maybe, through increased efficiency. The teachers? No, unless somehow more teaching positions open up at the same or better pay. Society? Again, maybe; if students truly do learn and grow better, then yes, society would benefit. But tying back to my paragraph on bias, the real question becomes: society for whom?








