By Lenore Devore, B.S. Journalism 1984

When Juliana Fernandes works with other researchers, the University of Florida College of Journalism and Communications’ (UFCJC) Advertising assistant professor brings a wealth of social science knowledge about how disinformation moves quickly through social media. But she also learns a lot from colleagues in the Wertheim College of Engineering’s Department of Electrical and Computer Engineering.
“They have so much knowledge on how to extract meaningful information from all sorts of content and doing that in an automated fashion,” said Fernandes, who was recently elected secretary of the American Academy of Advertising for 2025-2026. “In communications, we are just now starting to apply computational methods to our research. Just a few years ago, if I wanted to analyze content from ads, I had to do it manually.”
She appreciates the different tools and techniques that make it easier and faster for her to understand very large data sets like those used in two recent projects. And she’s noticed that computer scientists’ research is more applied, with the intention of developing something, such as a tool, a model or software, she said, whereas her research is more theoretical.
“It’s interesting to see other perspectives and how they think about the same problems and questions that we have, but in a very different, applied way,” she said.
The two interdisciplinary research projects she’s worked on with computer scientists focused on:
Studying why social media users find disinformation ads so engaging, which was published in the peer-reviewed Journal of Interactive Advertising.
Developing a framework to investigate influence cues in online texts to detect deception, which was published in the peer-reviewed Frontiers in Computer Science.
For the first project, Fernandes and her colleagues used 3,200 ads (of 80,000 pieces of online text) that were publicly available from the 2016 election, when Congress investigated Russian influence. The ads were labeled as “disinformation” by the U.S. House Permanent Select Committee on Intelligence.
“The research was all about trying to identify what makes disinformation ads engaging in social media, in particular on Facebook,” she said. “What makes people want to click and pay attention to these types of ads? What is it about the content of the ad and some attributes of the ad that actually made people click on that ad?”
The five researchers found four factors that made disinformation ads on Facebook more engaging:
- The ads did not contain many words. “Short sentences would lead people to click more on that ad,” she said.
- They used familiar, informal language, the things we are used to saying.
- The ads were big buys. “Whoever posted those ads actually kept them up for a long period of time so they would reach more people and consequently people would have more chances to engage with that ad.”
- Ads that had a positive sentiment or emotion were more engaging.
The research is important, Fernandes said.
“The more knowledge we have of how these things disseminate and specifically what is the content, what is in these types of messages and how they influence us, whether it is just by clicking on it or by learning there is a certain amount of emotionality or a principle of persuasion, such as the use of authority – I think that’s very useful for society to know,” she said.
The challenge: translating the findings into something the average person can easily use. She and her colleagues suggest a label, much like the nutrition labels found on food – “something that would highlight the presence or absence of these persuasion cues.”
Some social media platforms already use labels that say something like “this content might contain misleading info,” but they don’t say what the misinformation is, she said.
“The concept of labeling online already exists. We usually don’t process information deeply, in very elaborate ways, especially when using social media, when you’re just forwarding things to your friends and family. There could be an application for that.”
For the second project, she worked with six others, using 1,000 pieces of online text from mainstream news media and 2,000 pieces of deceptive content – some of which was from the election data set. The goal was to see if there is a way to use machine learning to quickly identify whether something is deceptive.
Her role was to devise a coding scheme in which recruited students manually labeled each piece of text into several categories, identifying the influence cues used in the text. Among the categories she developed:
- Persuasion principles: things like the notion of scarcity (it ends today; you have to do this now) or the concept of social proof (if there are people doing that, you are missing out), among others.
- Framing a message to potentially cause a gain or loss – the possible benefits or losses if you do or don’t perform a given action or behavior.
- Emotional salience – positive or negative, depending on the way the text was written.
- Objectivity vs. subjectivity – was it based on evidence?
- Blame or guilt – pointing to another person as responsible for wrongdoing or bad things that have happened.
- Emphasis – is the text in all caps, italics or bold face? Does it have multiple exclamation or question marks, or anything used to call attention to the ad?
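Some of these cues lend themselves to simple automated checks before machine learning even enters the picture. As a rough illustration only – the keyword lists and function below are assumptions for this sketch, not the study’s actual codebook – a rule-based pass might flag scarcity, social proof and emphasis cues like this:

```python
import re

# Hypothetical keyword lists -- illustrative, not the researchers' codebook.
SCARCITY_CUES = ["ends today", "last chance", "act now", "limited time"]
SOCIAL_PROOF_CUES = ["everyone is", "thousands of people", "don't miss out"]

def detect_cues(text: str) -> dict:
    """Flag a few influence cues in a piece of online text."""
    lower = text.lower()
    words = re.findall(r"[A-Za-z']+", text)
    return {
        "scarcity": any(cue in lower for cue in SCARCITY_CUES),
        "social_proof": any(cue in lower for cue in SOCIAL_PROOF_CUES),
        # Emphasis: all-caps words or repeated exclamation/question marks.
        "emphasis": any(w.isupper() and len(w) > 2 for w in words)
                    or bool(re.search(r"[!?]{2,}", text)),
    }
```

For example, `detect_cues("HURRY, this offer ends today!!!")` would flag both scarcity and emphasis. Labels produced this way could then serve as features for the kind of machine-learning classifier the project aimed to build.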
The researchers found that all of these cues could be used to influence people to act, Fernandes said.
“Social media information disseminates so quickly, you could be sharing something that is misleading, false, deceptive or disinformation, and it’s difficult to stop that,” she said. “It’s almost impossible. The moment you learn how online text is being disseminated or what kind of elements they have, something new shows up. The tools you developed to do this are already obsolete. It’s a very complicated process.”
Fernandes got her bachelor's degree in journalism from Unisinos in Brazil, then earned her master’s and doctorate in mass communications from UFCJC, praising the strong Ph.D. program that she saw as a chance to further her career, work closely with other Ph.D. students and be part of the next generation of mentors.
She was an assistant professor at Florida International University and the University of Miami before returning to the UFCJC in 2019. “Coming back to teach, mentor and do research at my alma mater is truly very special to me. When I arrived in the U.S. some 20 years ago, I could barely articulate a correct sentence in English, and UF and CJC helped me in so many ways to develop into the scholar, teacher and person I am today. I could not say no to this opportunity.”