As the use of artificial intelligence (AI) and automated research processes continues to grow, sophisticated algorithms are being used across the marketing industry to maximize efficiency by providing quick and accurate information. ChatGPT and research automation technologies have recently become popular tools that media professionals and all sorts of other people are tapping into – so much so that questions about their objectivity arise. In today’s blog post, we’re asking: can ChatGPT and research automation be truly objective? Please note that we use the term ChatGPT here as shorthand for any tool that can generate content and solve tasks. We discuss the potential benefits – and drawbacks – of these intelligent technologies with a humorous yet informed eye.
Do they help us or do we help them?
I don’t trust something or someone else to do my research or cross-check data for me. The reason is that I can’t know for sure whether they presented objective findings or were thorough in their research. In fact, there are some nuances to researching that just cannot be taught or expressed. Will ChatGPT be able to overhear a conversation at the coffee machine and take that into account? I doubt it, at least for now…
It is like pottery – you work the clay yourself instead of having an assistant do it for you. That way you get a feel for the material and stay in touch with it, and you can interact with it better on the potter’s wheel. You get a sense of how malleable it is, how far you can stretch it, and what kind of ceramic product it would be suited for.

In fact, I believe there is something to gain from going on that journey of discovering new information while validating facts. Sure, automation tools are there to assist me when I’m strapped for time, but I much prefer to do that digging myself. It depends on the task – if the alternative is scrolling through microfiche in a library, I am very thankful for online search engines that help me scour digitized news pieces for what I want. But if you’re trying to get to the bottom of a matter, you want to hear from the source.
There are implications beyond the initial solutions
Imagine police getting their information from computers instead of interacting with the victims, perpetrators, and everyone else involved. Would victims of rape or abuse feel comfortable having a machine convey the harm done to them when filing a report or as part of a trial? Can a machine capture how much a victim froze during the situation, or the turmoil that haunts them after the incident?
On the other hand, one might be less scared and more willing to come forward quickly if one could skip the process of plucking up the courage to verbalize to a human what happened. Maybe it would help get things moving, lowering the barriers to reporting such crimes – barriers that stem from the feelings of shame that follow when something really bad happens to someone.

I get how having a ‘machine’ intermediary could also help eliminate bias, especially when it comes to appearances. Maybe these humans do not mean to judge, but through socialization they carry some innate bias they didn’t even know about. It’s their gut feeling, they say. In such cases, I embrace the beauty of neutrality – having computers in between that human interaction for the sake of objectivity. Can computers really be objective, though?
What you input is what you get. These tools are great aggregators. They’re great for getting things done quickly – searching through online publications to see how many times they mention “5G”, for example. But a tool is only as good as its database, its search queries, and how it is programmed and trained to present that data. We know from our school days how history, and the mere way information is presented, can skew a story. Everything depends on the information available where you come from and on how it is culturally acceptable to collect and present facts.
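To make that aggregation point concrete, here is a minimal sketch of the “5G” counting example in Python. The article snippets and the count_mentions helper are hypothetical stand-ins for whatever corpus and tooling you actually use; the takeaway is that the result is entirely determined by what sits in the database and how the query pattern is written – change either, and the “objective” number changes with it.

```python
import re

def count_mentions(articles: list[str], keyword: str) -> int:
    """Count whole-word, case-insensitive occurrences of keyword across articles."""
    pattern = re.compile(rf"\b{re.escape(keyword)}\b", re.IGNORECASE)
    return sum(len(pattern.findall(text)) for text in articles)

# Two toy snippets standing in for a real publication corpus.
articles = [
    "Telecom operators expand their 5G coverage across the region.",
    "Analysts debate whether 5G or fiber will win; 5G rollouts continue.",
]

print(count_mentions(articles, "5G"))  # prints 3 – but only for THIS corpus and THIS pattern
```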
It’s about trust and what we do with it
All I want to express is that, as with any project, the output you get can only be as good as what you put in. Without good content, we cannot work out a story that will live up to expectations. If you need information “quick and dirty”, tools like ChatGPT and research automation definitely help save time. But I am sure there is still a lot of room for play for humans who take on the full process themselves. We can embrace these tools and innovations. And beyond “trust issues”, I believe in seeing and collecting things firsthand in order to know for sure they are true.

Are we all that trusting now? Do we take things at face value more than we used to? Can tools powered by lines of code be objective? How thorough can machines be? So many questions about how the tools in our daily lives affect our habits and, eventually, our own learning and behavior. With Uber, we forget how to flag down taxis. With delivery apps, many of us find it harder to step out for groceries. With instant messaging, we get a hell of a lot more impatient waiting for a response. In fact, I rarely pick up the phone to make a call anymore.
So many questions, triggered by the realization that we are truly entering the age of automation, where simplifying daily life to free up time and energy for higher-value activities could possibly backfire. The question is: what will the impact on us humans be if we adapt to AI and tools like ChatGPT, just as we adopted smartphones and the Internet itself? It is going to be really interesting to watch.
Photo credit: The featured image was created by Maksim Chernyshev. The pottery photo was taken by Samy Koka. The picture showing a woman with a futuristic visor was prepared by Ionut Dragoi. The image of a woman posing with a baseball bat was taken by Shane McMahon. All images are symbolic and decorative.