Digital Physiognomy and What Apps Could Do with Your Data


Physiognomy is a pseudoscience. That means it may have passed as legitimate at some point in history, but with our current knowledge, we don’t consider it a true science. So far, so good. But something is happening in this and similar spaces, and we need to pay attention to it right now.

What is Physiognomy?

One of the Oxford definitions of the term describes it as “the assessment of someone’s character or personality from their face and other external bodily features.” The concept dates back to ancient times. While this sort of scientific racism and profiling became popular again in the 18th and 19th centuries, it is widely dismissed as actual science by today’s understanding of physiology and psychology.

Physiognomy used in a political caricature, 1798

There’s a lot to read about physiognomy. In a nutshell, the method describes how certain visual features of a person, primarily focusing on the face, could indicate habits and character traits. If I lived back in the day and happened to have a particularly flat nose, maybe they would judge me to be more likely to be a criminal. Just like that, without me even saying a single word.

Why is this sort of method returning?

Okay, so it’s 2021, and we, fortunately, understand what science is and what pseudoscience is. Maybe our theories of today will be discarded by future researchers as pseudoscience, but that’s called progress. As of now, we have no physiognomy practitioners running around judging and profiling people just based on their looks, but we do something similar. Or at least, we let machines do the judging, whether you want to call it AI or machine learning.

Technological progress allows us to leverage facial recognition hardware and software not only to identify people but also to go further. Systems attempt to assess the mood of a person on camera, just as they classify people into demographic groups, such as age or gender, based on what the algorithm considers most likely. We do that, for instance, in digital signage ad targeting.

Human physiognomies next to animal physiognomies, 1820

Equipped with facial recognition technology, the exact same storefront display would present different ads to a man returning from work and looking tired, to a cheerful group of women passing by, and to a child on its way to school. This is happening right now, and it is merely an application of what we have learned from science and research. It is, or could be, an aspect of marketing technology, MarTech for short.
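To make the storefront scenario concrete, here is a minimal sketch of how such a signage system might match ads to viewers. Everything in it is hypothetical: the attribute names, the toy ad inventory, and the scoring rule are invented for illustration, and a real system would get its `ViewerEstimate` from a facial analysis model rather than hard-coded values.

```python
# Hypothetical sketch: pick an ad based on viewer attributes that a
# facial analysis model is assumed to have estimated. All names and
# rules here are invented for illustration.

from dataclasses import dataclass

@dataclass
class ViewerEstimate:
    age_group: str   # e.g. "child", "adult", "senior"
    mood: str        # e.g. "tired", "cheerful", "neutral"

# Toy ad inventory: each ad targets one age group and one mood.
ADS = [
    {"name": "energy_drink", "age_group": "adult", "mood": "tired"},
    {"name": "fashion_sale", "age_group": "adult", "mood": "cheerful"},
    {"name": "school_supplies", "age_group": "child", "mood": "neutral"},
]

def pick_ad(viewer: ViewerEstimate) -> str:
    """Return the ad whose targeting best matches the estimated viewer."""
    def score(ad):
        # One point per matching attribute (booleans sum as 0 or 1).
        return (ad["age_group"] == viewer.age_group) + (ad["mood"] == viewer.mood)
    return max(ADS, key=score)["name"]

print(pick_ad(ViewerEstimate(age_group="adult", mood="tired")))    # energy_drink
print(pick_ad(ViewerEstimate(age_group="child", mood="neutral")))  # school_supplies
```

The point of the sketch is that the ad logic itself is trivial; the ethically loaded part is the upstream model that estimates age and mood from a face in the first place.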

It’s a bit like how tracking on the Internet works, where you see the ads most likely to interest you, only transferred into the real world. But what could be the next step? Would someone consider using some mechanic from physiognomy in our modern-day times? Why wouldn’t companies leverage the results of their research in order to increase profits? That’s what businesses do.

Who could potentially benefit from this?

Companies that are currently impacted by increased privacy-shielding functions and new laws protecting people online might find it interesting to consider new means of improving their ad targeting. For a moment, put yourself in the shoes of Facebook. The social network company has used facial recognition software for a long time. Since 2017, it has scanned every uploaded image and video against its user database to alert users in case others have uploaded media featuring them. Of course, this is all subject to a user’s privacy settings, but it’s mostly enabled.

So, on the one hand, you have a library of user preferences and activities, and on the other hand, you have pictures of the users. Taking into account the 2.7 billion monthly active users, as of the second quarter of 2020, you have a good amount of faces to compare with each other, and you can check whether certain groups of people, grouped by their faces’ visual features, share any interests, rather than relying only on demographic data and the now-limited Internet-browsing history. Even if the outcome of any matches isn’t perfect, you could still run A/B tests with the ads that you show to the users and verify your results. Based on how you look, they might find that many users who share a particular facial feature also enjoy the same soft drink or chocolate. In that case, even without trackers or cookies, they could serve me ads that I am more likely to react to, based on my own profile picture.
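The correlation idea above can be sketched in a few lines. This is purely illustrative and not anything a platform has confirmed doing: the users, the coarse facial "feature" labels, and the preferences are all invented, standing in for what would really be face embeddings and behavioral data.

```python
# Hypothetical sketch: group users by a coarse facial feature, then
# check which product preference dominates each group. All data here
# is invented for illustration.

from collections import Counter, defaultdict

users = [
    {"feature": "A", "liked": "soft_drink"},
    {"feature": "A", "liked": "soft_drink"},
    {"feature": "A", "liked": "chocolate"},
    {"feature": "B", "liked": "chocolate"},
    {"feature": "B", "liked": "chocolate"},
]

def top_preference_by_feature(users):
    """For each facial-feature group, find its most common preference."""
    groups = defaultdict(Counter)
    for u in users:
        groups[u["feature"]][u["liked"]] += 1
    return {feat: prefs.most_common(1)[0][0] for feat, prefs in groups.items()}

model = top_preference_by_feature(users)
print(model)  # {'A': 'soft_drink', 'B': 'chocolate'}
```

A new user with feature "A" and no browsing history would then be shown a soft drink ad, and an A/B test against a control group could later confirm or refute whether the facial correlation actually predicts anything.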

Facebook has already acquired many companies that deal with such technologies. FacioMetrics, for instance, was integrated for a possible “smile-to-like” feature in their app. If they can read out emotional feedback from the users directly, the quality of preference data would be even better; comprehending a smile is even more complicated than just measuring the features of someone’s facial physiology. They are also working on posting content based on brain scans. It does not appear far-fetched to think of leveraging the same data for even better results with even less technical effort. It’s maybe not good science, but it might work for them on some level.

Could this be a new level for a digital twin?

Now, Facebook is not just Facebook. The group also consists of Giphy, Oculus VR, WhatsApp, and Instagram, among other services. The number of users and the amount of preference data could be cross-checked on the other platforms as well, making ad targeting based on facial features even more accurate. Why is this good? By improving ad targeting, their ad clients can sell their services or goods more efficiently, and Facebook can charge them more. A win-win scenario for the companies.

Such a new sort of digital physiognomy could also be interesting for dating platform companies looking to increase their matchmaking efficiency. For instance, Match Group owns 45 dating and matchmaking services across the globe; OkCupid and Tinder might be amongst the more popular apps here. The theoretical pool of user data on appearance, preferences, and historic activity could help improve their algorithms further and, therefore, allow them to charge for better service tiers.

Is this a problem?

It’s certainly not a problem for a company to increase its revenue, but could it be a problem for the users? As long as the companies adhere to laws and regulations, the users’ only options are to disable such features, request not to be part of this research with their data, or delete their account and request the removal of all their data. So the user always has a choice, but they need to proactively inform themselves, read the terms of service, and actively reject such practices from the social network or platform provider.


As long as a company is transparent about all of this and allows users to opt out or delete their accounts together with all their data, this does not seem to be a problem. If people, and ultimately politicians, feel that any aspect of this practice should be changed or outlawed, they will do so. Then it will affect all users in the region where a particular law or restriction is enacted. For example, EU data privacy law won’t affect US-based users, and vice versa. If you’re lonely and looking for a significant other without wasting years dating people who don’t match you well, it might be okay for some to barter with personal data to achieve this faster.

While there is no immediate threat, even if this were to happen, it’s a user’s obligation to read, understand, and agree to the terms of service, or to reject them and stop using the service. As long as the companies stay transparent about how they generate the revenue to offer a service they don’t charge the users for, and as long as the users can decide on their own whether they want to use the service and what sort of personal data they’d like to share, all of this is not problematic. There might be ethical implications of visual-based profiling, including differentiation of skin color in the evaluation of data. Still, the rest is up to the users of a service, as they are ultimately the currency for such a business.

Photo credit: The feature image was done by Marcos Amaral. The illustration “Doublûres of Characters” was done by James Gillray. The illustration “human physiognomies next to animal physiognomies” was done by Charles Le Brun, credit to Wellcome Images. The photo towards the end of the article was taken by Chester Wade.
Source: Oxford Reference / Facebook help article / Statista data / Josh Constine (TechCrunch) / Alisha Mahmood (TechAcute)


Christopher Isak