This post is an edit of a luncheon keynote given to the MMPA’s 2017 Summit on April 26, parts of which were also included in a May 2 address to the graduating students in the MCTC Design Program. Its focus is on the role of the idea person in marketing, advertising and publishing, which has never been more challenging. Originally published on Medium.
I’m keen on the idea of Idea People; and especially the Business of Ideas. I believe that idea discovery, idea articulation, idea presentation, idea optimization and idea distribution are all scientific and artful, in equal measure.
As Idea People, we are also agitators. I’ll paraphrase Robert Grudin, who describes us in his book The Grace of Great Things: “Many [Idea People] initially are seen as troublemakers simply because their vigorous and uncompromising analysis exposes problems that previously had been ignored.” Grudin warns that, “Creativity is dangerous. We cannot open ourselves to new insight without endangering the security of prior assumptions. Creative achievement”—and that’s what I believe all of us Idea People are all about — “Creative achievement is… an adventure. Its pleasure is not the comfort of the safe harbor, but the thrill of the reaching sail.”
So onward we sail.
Now, here’s the thing: We’ve been here before.
Every innovation, ever, offers threats and opportunities to job roles, to the kind of work we do, to industries, to culture. And in each case, innovations (like artificial intelligence) offer a sense of what the author Neil Postman might refer to as “magic.” But I prefer Arthur C. Clarke’s Third Law definition.
“Any sufficiently advanced technology is indistinguishable from magic.”
If we work backwards from today, consider…
First, the magic of Desktop publishing.
— Which threatened the idea of how we print, who prints, when and where we print, the notion of control of image and design
— Simultaneously, desktop publishing created more content by specific authors and publishers for more specific audiences. The magic of desktop publishing created new ways for more people to be idea people.
Then consider the magic of Photography.
— Which threatened the idea of visual expression and the recording of images and likenesses as defined by the hand and eye through painting and drawing
— And yet Photography created a new way of seeing, a new Art. Photography expanded our understanding of the world around us and helped deliver a boom in publishing.
Going further back, consider the magic of Printing.
— Which threatened oral traditions and the power of those few who could speak and tell stories
— Meanwhile, printing created a need for literacy, a need for teachers, the expansion of nations and belief systems.
So we’ve been here before, over the millennia. New innovations arrive and the residents panic. And sometimes rightly so.
Let’s begin by talking about the idea of Automation, of robots and scale; of simple, repetitive work once done by humans then handled by machines and now handled by software. The stats can look grim.
“83% of US jobs paying less than $20 per hour will be subject to automation or replacement. While up to 47% of all US jobs are in danger of being made irrelevant due to technological advancements, with most job losses due to occur amongst the undereducated.” So says a January 2017 report authored by The Obama White House titled, “Preparing for the Future of Artificial Intelligence.” [Source via Scott Abel @ The Content Wrangler]
In a “Robot Proof Jobs” report from the consultants at McKinsey, we hear, “Across all occupations in the US economy, one-third of the time spent in the workplace involves collecting and processing data. Both activities (collecting and processing) have a technical potential for automation exceeding 60 percent.” The report continues, “And it’s not just entry-level workers or low-wage clerks who collect and process data; people whose annual incomes exceed $200,000 spend some 31 percent of their time doing those things, as well.”
Bringing things closer to home, James Somers writes recently in The Atlantic that, “Newspapers and magazines used to have a rather coarse model of their audience. It used to be that they couldn’t be sure how many people read each of their articles; they couldn’t see on a dashboard how much social traction one piece got as against the others. They were more free to experiment, because it was never clear ex-ante what kind of article was likely to fail. This could, of course, lead to deeply indulgent work that no one would read; but it could also lead to unexpected magic.”
There’s the crux of it. Can automation help scale our labors in the continuous search for unexpected magic?
As Idea People, we ought to look at Automation for its ability to serve our readers, to enable the audience rather than deceive them.
So, yes, please, Automate processes that make reading and enjoying your product easier. Automate the means for your audience to engage, on their terms, versus yours. Just don’t try to automate unexpected magic.
Not when you could have an artificial intelligence create it for you. Right?
It’s abundantly clear that “Artificial Intelligence” is the buzzword du jour. And not without merit.
Stanford organizational sociologist, R. David Dixon Jr., writes, “We humans are largely only still involved in the process because we’re still the cheapest option for whatever task we’re doing. Cheaper because the technology is currently too expensive or non-existent, and cheaper because wages can always be lowered. As technology advances, however, humans are increasingly less effective and more expensive than good machines. This is true not just for those working at the ground floor, but also for the managers above them.”
Wait, it gets better!
Dixon continues, “As artificial intelligence and machine learning develops, particularly in their ability to understand and contribute in natural human conversation, humans will reach the end of their usefulness in an increasing number of industries and systems entirely.”
How’s everyone feeling? Who’s excited to return to work tomorrow?
We’re already seeing this story evolve within the financial services industry. Paraphrasing from The New York Times in March of this year… “The investment firm BlackRock laid out an ambitious plan to consolidate 11% of its actively managed mutual funds ($30 billion in assets) with peers that rely more on algorithms and models to pick stocks. As part of the restructuring, seven of BlackRock’s 53 stock pickers are expected to step down from their funds. At least 36 employees connected to the funds are leaving the firm.”
The researchers at Forrester posit that today, 38% of enterprises are already using artificial intelligence (AI), growing to 62% by 2018. Forrester is predicting a 300% increase in AI investments in 2017 compared to 2016 and IDC believes AI will be a $47 billion market by 2020. [Source]
Oh, and some of the Idea People at Coca-Cola have announced they want to use AI to help make advertising.
Well, let’s not cower under our afghans just yet.
At this point, it’s worth asking the question, what, exactly, is Artificial Intelligence? Or as Neil Postman reminded us back in 1985, “…in every tool we create, an idea is embedded that goes beyond the function of the thing itself.” So, what’s the idea embedded behind Artificial Intelligence?
The term was coined by Stanford professor John McCarthy in the 1950s. And we know that intelligence, artificial or not, is rooted — as Ad Age editor Kate Kaye writes, “in the tsunami of data generated by digitized systems, and the availability of relatively inexpensive and fast cloud computing.”
So AI, in short, is predicated upon data. And lots of it. Data easily connected, easily parsed, and inexpensively processed — to generate what looks like and smells like and wiggles and wobbles like—thinking.
The Defense Advanced Research Projects Agency’s Information Innovation Office has weighed in via YouTube, and suggested we distinguish between three different waves of AI. (A big hat tip to Roey Tzezana at Futurism.com for summarizing DARPA’s lengthy video.)
Summarizing Tzezana’s summary:
“First Wave artificial intelligence systems are capable of implementing simple logical rules for well-defined problems, but are incapable of learning, and have a hard time dealing with uncertainty.” “With first wave AI, parameters for each type of situation are identified in advance by human experts. As a result, first wave systems find it difficult to tackle new kinds of situations. They also have a hard time abstracting — taking knowledge and insights derived from certain situations, and applying them to new problems.”
In other words, first wave AI only knows what it knows. Take voice activation. As examples of first wave artificial intelligence, Alexa or Google Home can only give you answers they have access to, for questions they comprehend.
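To make that concrete, here’s a minimal sketch of what first-wave, rule-based logic looks like. The keywords and canned answers are invented purely for illustration — the point is that every parameter is fixed in advance by a human, and anything outside the rules fails:

```python
# A toy first-wave "assistant": hand-written rules, no learning.
# The keywords and canned responses below are invented for illustration.

RULES = {
    "weather": "It is sunny today.",  # fixed in advance by a human expert
    "time": "It is 9:00 AM.",
}

def answer(question: str) -> str:
    """Match the question against known keywords; fail on anything new."""
    for keyword, response in RULES.items():
        if keyword in question.lower():
            return response
    return "Sorry, I don't understand."  # no ability to abstract or learn

print(answer("What's the weather like?"))      # a question it was built for
print(answer("Should I bring an umbrella?"))   # a new situation: it fails
```

The system only knows what it knows: a novel question about umbrellas is, semantically, a weather question, but without a hand-written rule for it the system is stuck.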
Summarizing Tzezana again: In Second Wave AI systems…
“Engineers and programmers don’t bother with teaching precise and exact rules for the systems to follow. Instead, they develop statistical models for certain types of problems, and then ‘train’ these models on many various samples to make them more precise and efficient.”
For example, consider how we’re training AIs to recognize images of cats or faces, or the recent advancements in both accuracy and speed of Google Translate. These second wave AIs are using complex models to compare and hypothesize the accuracy of a response. It’s closer to what you and I do when thinking, but it’s not yet human thinking.
Finally, Tzezana summarizes, Third Wave artificial intelligence will go beyond leveraging models we humans create, to “discover by themselves the logical rules which shape their decision-making process.” Sounds almost human, doesn’t it? But let’s be clear that third wave AI is — at least according to DARPA — decades away from reality.
But why? The answer is Data.
As Joe Lonsdale, a co-founder of Palantir and general partner at investment firm 8VC, noted recently, “Before artificial intelligence can tackle some of the harder problems — it will take years if not decades to [figure out how to] structure the data these systems will ingest.”
Ah, data. The soil upon which intelligence takes root. Currently our data is messy, dissimilar, inconsistent. Julie Fleischer at Neustar calls it “a swamp: an opaque, poorly understood mess.” Which is why Lonsdale and others claim, “AI is decades away from matching human creativity.”
If the data is a mess, so too is the intelligence.
Despite Move 37.
It’s true AlphaGo’s historic win against Lee Sedol, the world’s best Go player, in Match 2 was unexpected. But we haven’t seen evidence Google’s artificial intelligence understands its own achievement. Yes, the artificial intelligence won, but did it even know it won — or what winning means? As technology pundit Shelly Palmer puts it, “AlphaGo is dangerous to 9-dan [level] Go masters, but harmless to people who optimize media purchases.”
By way of another example, take Minnesota’s own Lucy, an artificial intelligence focused on marketing services from the team at Equals 3 Media. Lucy is powered by IBM’s Watson. Lucy’s intelligence can certainly help you get closer, help you focus, help you distill insights to fuel an idea. But Lucy isn’t going to suggest you Think Small. Or suggest you put Andy Warhol in a soup can on your magazine cover.
It still takes the brainpower of Idea People to connect the dots.
And it also takes thoughtful UX and UI to benefit from artificial intelligence. It takes amazing Design. Remember, artificial intelligence can’t yet organize and design itself. How we humans experience AI — how we interact with it, how we query, how results or actions are delivered, how confusion is resolved — oftentimes matters much more than the intelligence itself.
But the clock is ticking.
Seth Godin says, “The question each of us has to ask is simple (but difficult): What can I become quite good at that’s really difficult for a computer to do one day soon? How can I become so resilient, so human and such a linchpin that shifts in technology won’t be able to catch up?”
It’s not clear yet whether we are headed down Orwell’s dark path or Huxley’s bright, yet equally dark one. Because the artificial has not yet learned to be curious the way Idea People like you and me are curious.
So I believe the one-word answer to Godin’s question, and to the threat of both automation and artificial intelligence, is Curiosity.
Curiosity demands we seek a further, less obvious, less assured horizon. As Grudin puts it in The Grace of Great Things, “One must cultivate a leaning for the problematic, a chronic attraction to things that do not totally fit, agree or make sense. …To think creatively is to walk at the edge of chaos. In thinking the original, we risk thinking the ridiculous.”
Now, I don’t believe Curiosity is ridiculous. Perhaps Niccolò Machiavelli put it best…
“And one ought to consider that there is nothing more difficult to pull off, more chancy to succeed in, or more dangerous to manage, than the introduction of a new order of things.”
Now, I’m not saying Curiosity gets us out of harm’s way. Far from it.
This new order of things — a world increasingly run via Automation and AI — is unavoidable. And it is driven by sharp, extremely curious minds.
What matters most now is our reaction to these developments. Don’t throw up your hands. Don’t fold. Instead, be even more curious. Can our thinking, our ideas outpace technologies which might appear to threaten our existence?
And the thing I’m most curious about is how Idea People like you can enhance our publications, our content, our engagement through automation and/or artificial intelligence.
As VC Joe Lonsdale put it, “There’s just a huge gap between how the biggest industries in America currently run, and how they will run with the best IT and with the best computer science.” So I’m curious — what if you editors, you publishers, writers and designers thought of yourselves as technologists? How might your product evolve, what new products would emerge — from curious Idea People seeking to apply the benefits of AI to the sustained, periodic shipment of words, images and motion to subscribers?
Thomas H. Davenport and Julia Kirby, authors of Only Humans Need Apply, put it this way: “Instead of viewing these machines as competitive interlopers, we see them as partners and collaborators in creative problem solving.”
We should explore. We should embrace and prototype. What kind of two-week sprint will your team run, starting tomorrow, to understand and leverage artificial intelligence or automation inside your organization? What new experience of your publication is waiting to be revealed as a result?
Jonas Prising, the CEO of ManpowerGroup, the multinational human resources consulting firm, says, “In an environment where new skills emerge as fast as others become extinct, employability is less about what you already know and more about your capacity to learn.”
So thank you for this opportunity to talk today.
I must admit I am not a scientist. I am not a software developer. I can’t spool up an artificial intelligence on Amazon Web Services. But I can ask questions and I can learn. In learning about AI and automation I’ve found I am not afraid of the future of Idea People. I’m bullish on our abilities to derive opportunity from the evolution of technology.
I believe the long term, passionate, purposeful thinkers in this room will discover unique, robust and profitable ways to benefit from automation and artificial intelligence. If we remain curious.
I’ll leave you with a last, favorite quote, from Boston Philharmonic conductor Benjamin Zander and his wife Rosamund, from their book, The Art of Possibility.
“Grace comes from owning the risks we take in a world by and large immune to our control.”
Thank you very much.