Don’t rush to be embarrassed by the first version of your product

The intent behind good quotes gets lost over time, so they are often misunderstood and misused when applied out of context. Reid Hoffman’s quote – “If you are not embarrassed by the first version of your product, you’ve launched too late.” – is a great example of this loss of intent.

I’ve seen this quote used as an excuse to justify a crappy v1/first version of a product. I haven’t heard Reid talk about this in person – but, I’m fairly certain that wasn’t his intent.

There are two good reasons to be embarrassed about v1 (in hindsight). The first is the most common – you didn’t know better and/or couldn’t do better with the tools available. The first website I put together looked horrendous. I didn’t understand the basics of web design and it was also built on an early version of Adobe Dreamweaver. Now, however, I have slightly better design skills and, more importantly, have access to amazing tools. Thanks to the likes of Bootstrap and services like WordPress, it is very easy to build a good website.

The second is the result of prioritizing one killer use case/risky assumption for your product and ignoring everything else. You may still be embarrassed by the first version – but, you’ll still have served that basic user/customer need.

Source: Unknown – thank you to whoever made this.

The truth is that you’ll be embarrassed by nearly everything you ship. Over time, your skills will improve, the tools will get more sophisticated, and your understanding of the user/customer need will get better. So, you don’t need to deliberately cut corners now to ship something you’ll eventually be embarrassed by. Time will take care of that. The key, instead, is not to knowingly do something you will regret.

So, the two questions I’d suggest asking are –

  • Is what we are shipping helping us learn what we want to learn while providing value to the user?
  • Is this our best effort based on what we know/have access to now?

If the answers to both are yes, ship away. Even if you are eventually embarrassed by what you ship, this approach will ensure you have no regrets.

How we were sold tobacco, bacon and the ideal of thin women

Edward Bernays is one of the most influential people of the 20th century. He is considered the father of “Public Relations” and changed how we think about mass marketing and advertising at scale. And, yet, it is likely you’ve never heard of him.

Despite his enduring impact on the world, he remains obscure, and there are many reasons for that. Chief among them is a reluctance among the folks in his industry to talk about his work. So, you don’t hear marketing professors or advertising executives mention him or his work. That silence denies us some fascinating lessons that might shape how we think about the attention economy.

Edward Bernays and Propaganda
Edward Bernays was an Austrian American whose family moved to the United States in the 1890s. He spent the early part of his career as a medical editor and press agent. In both these roles, he showcased an ability to take strong positions on certain causes and successfully solicit support from the public as well as from elite figures like the Rockefellers and the Roosevelts.

After the US entered World War I, he was recruited by the US Government’s “Committee on Public Information” to build support for the war domestically. Since a large portion of Americans had recently fled Europe, the war didn’t make much sense to them. But, Bernays coined the phrase “Make the World Safe for Democracy,” which became the meme President Woodrow Wilson needed. It gave the senseless war a higher purpose. And, Bernays began referring to his work as “psychological warfare.”

Bernays also added significant artillery to his propaganda techniques by incorporating the lessons of his uncle, a psychologist who had published work on how individuals are driven by unconscious needs, desires, and fears. That uncle, Sigmund Freud, was until then a relative unknown, scorned by European society. But his nephew made him and his work famous in the United States and ensured he attained fame and prestige. Bernays applied his uncle’s insights to great effect by manipulating public opinion through mass media. As he became the world’s foremost expert in propaganda, he realized it was as powerful a tool in peacetime as it was during war.

So, after the war, he moved to New York and decided to counsel companies on propaganda. However, since the word propaganda was controversial and its alternative, “advertising,” was too mundane, he decided to rename it “Public Relations.”

Why tobacco and bacon are great for you
Sadly, Bernays’ counsel was sold to anyone who cared to pay him well for it. And, chief among his clients were the tobacco companies and the pork industry. In his work with them, he demonstrated his skills as a master campaign strategist.

For example, he staged the “Torches of Freedom” event during the 1929 Easter Day Parade as a means of conflating smoking and women’s rights. Tens of millions of women threw off their shackles to claim their right to smoke in public.

His media strategy involved persuading women to smoke cigarettes instead of eating. He began by promoting the ideal of thin women, using photographers and artists in newspapers and magazines to promote their “special beauty.” He then had medical authorities promote cigarettes over sweets.

Bernays also pioneered the covert use of third parties. For instance, he convinced a doctor to write to 5,000 physicians asking them to confirm that they’d recommend heavy breakfasts. 4,500 physicians wrote back and agreed. He arranged for these findings to be published in newspapers across the country alongside the statement that “bacon and eggs should be a central part of breakfast.” Sales of bacon went up.

 

The Engineering of Consent
Bernays called his brand of mass manipulation the “engineering of consent.” He worked with every major political power of his day, providing the tools for non-coercive control of the mind. In 1928, he crystallized some of his lessons in his book “Propaganda.” Here’s a passage that describes his thinking about the importance of his work in society —

“The conscious and intelligent manipulation of the organized habits and opinions of the masses is an important element in democratic society. Those who manipulate this unseen mechanism of society constitute an invisible government which is the true ruling power of our country. …We are governed, our minds are molded, our tastes formed, our ideas suggested, largely by men we have never heard of. This is a logical result of the way in which our democratic society is organized. Vast numbers of human beings must cooperate in this manner if they are to live together as a smoothly functioning society. …In almost every act of our daily lives, whether in the sphere of politics or business, in our social conduct or our ethical thinking, we are dominated by the relatively small number of persons…who understand the mental processes and social patterns of the masses. It is they who pull the wires which control the public mind.”

Enid Blyton and Bernays’ popularity
When I was growing up in India, books from “Enid Blyton” (a British children’s author) were recommended reading for all kids. I was a huge Enid Blyton fan myself. One of the enduring memories I have of her storytelling is her focus on food and her insistence — via various characters in her books — that “bacon and eggs was the best breakfast in the world.” As I grew up and learnt more about bacon, I couldn’t understand why she said that.

Now I do.

Edward Bernays held the masses in contempt. That’s why we don’t know much about him. He simply didn’t care about popularity in the eyes of those he held in contempt. As the Enid Blyton story shows, the impact of his techniques on society is undeniable. Every marketing and PR campaign since has used his techniques to shape our minds. We study his work in every case study on mass marketing — without ever referring to him.

The incredible jump in the proportion of British workers who voted “leave” in the two months before the EU referendum would not have been possible if it weren’t for Bernays’ techniques.

 

3 notes to ponder 

(1) Many column inches have been devoted to the role ads and social media have played in the tumultuous political climate of the past couple of years. Here’s what scares me — a large proportion of the population is responding by consuming and discussing events on private messaging tools like Whatsapp. The Reuters Institute confirms this trend among younger Americans. If you think polarized social news feeds can be propaganda carriers, you haven’t experienced the power of Whatsapp in spreading lies (see this and this).

(2) How would we go about learning marketing and public relations if we studied the life and work of Edward Bernays? I understand why professors and executives don’t want to talk about him. Discussing his beliefs and techniques can seem akin to touting the power of the dark arts. But, every useful tool has its dark side. And, the founding story of the PR industry is a great example of that. It is not an example we should avoid. Instead, it is a story we must learn from. It’ll make us all better marketers and, perhaps, better human beings.

(3) I was one among many who were surprised by the size of the impact of social media tools on global politics. In retrospect, there were many warning signs. But, hindsight is always 20/20. That said, I don’t think I’d have been anywhere near as surprised if I’d read the Edward Bernays story. The story of the propaganda maestro from a hundred years ago, it turns out, is very relevant today.

History doesn’t repeat itself — but it rhymes. That is why any attempt to understand the present and predict the future is futile if it isn’t preceded by an understanding of history.


Links for additional reading

Invisible asymptotes

Eugene Wei, who used to run Product at Hulu and Flipboard, has a fantastic post out on Invisible Asymptotes. An invisible asymptote is the ceiling of a growth curve if we proceed down a certain path.

For example, Amazon’s invisible asymptote (and that of most e-commerce businesses) in the early days was shipping. People hated shipping fees and bought considerably more once they were on Amazon Prime. While such insights are obvious in retrospect, these asymptotes aren’t easy to identify. And, to that end, he offers two thought-provoking insights.

The first is that customers are excellent at telling us what they don’t want or don’t like. Product managers spend a lot of time optimizing their funnels and learning more about who reaches the bottom. This is great in the early days as survival depends on strong product-market fit with one group. However, as a company grows, we identify our invisible asymptotes by understanding who falls out at the top of the funnel. That’s how we expand our offering.

The second is about how he finds successful people to be much more conscious of their own personal asymptotes at a much earlier age than others. Somebody he knew determined in grade school that she’d never be a world-class tennis player or pianist. Another knew a year into a job that he wouldn’t be the best programmer at his company, so he switched into management; he rose to become CEO.

His final two paragraphs bring both these takeaways together beautifully –

By discovering their own limitations early, they are also quicker to discover vectors on which they’re personally unbounded. Product development will always be a multi-dimensional problem, often frustratingly so, but the value of reducing that dimensionality often costs so little that it should be more widely employed.

This isn’t to say a person needs to aspire to be the best at everything they do. I’m at peace with the fact that I’ll likely always be a middling cook, that I won’t win the Tour de France, and that I’m destined to be behind a camera and not in front of it. When it comes to business, however, and surviving in the ruthless Hobbesian jungle, where much more is winner-take-all than it once was, the idea that you can be whatever you want to be, or build whatever you want to build, is a sure path to a short, unhappy existence.

Phone detoxing

I’ve been experimenting with phone detoxing over weekends in the past few months. While my normal approach has been to play hide-and-seek with the phone by putting it in some obscure place and forgetting about it, I decided to do a complete switch-off this weekend. My 3 lessons from switching the phone off for a 60-hour period –

1. I missed 3 use cases – i) Waze/maps when we were driving, ii) the ability to call or contact each other when we split up in a crowded area, and iii) Whatsapp to send the occasional message to framily.

2. I did not miss the following – i) Checking if there’s any new email or message just because the phone is close by and ii) Reading articles on my phone – I prefer a larger screen, even though the phone is really convenient.
Overall, I can’t say I missed the phone all that much. I did cheat a bit by sending some messages from my wife’s phone to coordinate with friends – but, it was minimal. I enjoyed doing all my writing and reading on a larger screen – it was more targeted and intentional than reflexively picking up my phone.

3. I’ve been disconnecting from work email for a full 48 hours between Friday evening and Sunday evening for a few months now. And, while that has enabled me to be better engaged through the weekend, there was something wonderfully liberating about switching off completely. We normally associate detoxing with the body. But, there’s something to be said for a detox for the mind.

I look forward to doing this more.

From AI doomsday to IA, Orwell and Social Support

Was the invention of the axe a good thing or a bad thing? The axe was among the first simple machines — a breakthrough in technology that propelled humanity forward. It helped our ancestors chop wood and hunt. But, it was also used as a weapon in war.

Every incredible advance has had a dark side. We have reduced infant mortality thanks to advances in ultrasound technology. And, yet, the same technology has also enabled female infanticide. Industrial farming has helped us feed billions of humans with fewer humans involved in agriculture than ever before. However, it has also resulted in the routine, horrible treatment of farm animals.

Given this context, it is often amusing to see the discussion around artificial intelligence. We see talk of doomsday one day (“all the jobs are going away”) and techno-optimism on another (“AI is going to help us by freeing us from repetitive tasks”). Of late, I’ve been seeing more media devoted to the latter. It is worth examining both sides of the conversation.

Not doomsday. The central hypothesis behind the no-doomsday view is that we’re moving into a world of IA or “Intelligence Augmentation.” The idea here is that AI is great at finding answers, but it is on us to find questions. We’ll find new and interesting questions to keep us occupied while AI eliminates repetitive tasks and makes us more efficient. And, we’ll use our ingenuity to create new jobs that don’t exist today — just as we created “Yoga instructor” or “Zumba instructor” jobs after the industrial revolution.

One example is a painting robot featured on Wired (see video — 4 mins) that increased the productivity of human laborers by 4x while taking over all the repetitive tasks. You’ve probably come across similar stories.

The recent surge in positivity is also thanks to an OECD research report that classified ~10% of American jobs as high risk. This is much lower than previous forecasts that labelled ~50% of jobs as high risk.

Maybe doomsday. From The Atlantic on WalMart’s future workforce —

Walmart executives have sketched a picture of the company’s future that features more self-checkouts and a grocery-delivery business — soon escalating to 100 cities from a pilot program in six cities. Personal shoppers will fill plastic totes with avocados and paper towels from Walmart store shelves, and hand off packages to crowdsourced drivers idling in the parking lot. Assembly will be outsourced, too: Workers on Handy, an online marketplace for home services, will mount televisions and assemble furniture.

Such examples are a dime a dozen these days. More automation promises more returns to shareholders => happier executives and boards.

Of course, it is also easy to counter the examples of optimism. The same painting robot (featured above) that increased the productivity of human laborers by 4x is a great place to start. At some point — assuming other painting firms invest in robots — we will have 4x the painting capacity at hand. Will there be enough painting jobs to go around?

And, the same OECD report that said risks of “massive technological unemployment” are overblown also cautioned that we face a “further polarisation of the labour market” between highly paid workers and jobs that may be “relatively low paid and not particularly interesting.”

This Economist graph summarizing some of the findings was particularly interesting.

Notice how the percentage of jobs at risk of automation decreases as a country gets richer?

The polarization the report warns about may not be limited to high-skill and low-skill jobs, then. There is reason to believe that we might see a growing schism between richer and poorer countries.

The truth likely lies somewhere in the middle. All this brings us back to the story of the axe. Every technology breakthrough has a dark side. The challenge, then, is to not get caught up in all the techno-optimism that accompanies the emergence of a breakthrough technology and to make the effort to think through the second and third order consequences.

As we’ve seen in the revelations about the effects of social media in the past 2 years, the absence of such thought can have serious long term consequences.

So, how do we proceed?

My recommendation would be to stop any debate about whether we’re heading toward an AI-induced doomsday and, instead, ask the following three questions —

1. Are we clear on what we’re talking about when it comes to AI? There are three major domains of AI that we discuss –

  • AGI or Artificial General Intelligence. This is when robots become capable of being human (a.k.a. Westworld). Scientists like Alan Turing and John McCarthy envisioned this some 60–70 years ago and we’re no closer to it now than we were then.
  • IA or Intelligence Augmentation. A classic current example of this is a search engine, as it augments our memory and factual knowledge. Many of the machine learning applications today are in this domain.
  • II or Intelligence Infrastructure. An example of this would be machine learning powered security systems that make use of a web of devices (infrastructure) to make human environments safer or more supportive. While we’re still in the early days, there’s plenty of investment in start-ups and fledgling companies directed here.

It is important to be clear about these domains because a lot of mainstream discussion bandwidth is spent on the dangers of Artificial General Intelligence. That is a waste of time.

Instead, our discussions should center around IA and II, where we’ve made plenty of progress using techniques like Deep Learning. And, while both extend human capabilities, in the near term they also automate tasks that currently employ large groups of humans.

2. Are we conscious of the possible dark side of AI — specifically the use of artificial intelligence for surveillance?
The Economist outlined this in a piece about the Workplace of the future —

And surveillance may feel Orwellian — a sensitive matter now that people have begun to question how much Facebook and other tech giants know about their private lives. Companies are starting to monitor how much time employees spend on breaks. Veriato, a software firm, goes so far as to track and log every keystroke employees make on their computers in order to gauge how committed they are to their company. Firms can use AI to sift through not just employees’ professional communications but their social-media profiles, too. The clue is in Slack’s name, which stands for “searchable log of all conversation and knowledge”.

The good news is that most of the preceding portions of the article talked about the benefits of algorithms in the workplace — fairer pay rises and promotions, improved productivity and so on.

It will be on us to strike a good balance.

3. Are we designing the right social support systems to prepare us?
In a great New York Times piece titled “The Robots Are Coming, and Sweden Is Fine,” I found 3 notes fascinating –

  • “In Sweden, if you ask a union leader, ‘Are you afraid of new technology?’ they will answer, ‘No, I’m afraid of old technology,’” says the Swedish minister for employment and integration, Ylva Johansson. “The jobs disappear, and then we train people for new jobs. We won’t protect jobs. But we will protect workers.”
  • 80% of Swedes express positive views about robots and artificial intelligence versus 72% of Americans who declared themselves “worried” per a Pew Research survey.
  • The challenge, of course, is taxation. Taxes are ~60% in Sweden and are a key part of the social contract.

While the thought of ~60% taxes in the US would be morally repulsive, it is unclear how long we’ll be able to sustain the current reality.

German economist Heiner Flassbeck shared a powerful graph showing the declining share of public wealth in rich countries (except Norway).

Net public wealth in the US and UK is now negative. Low public wealth limits the government’s ability to regulate the economy, redistribute income and mitigate rising inequality.

Regardless of artificial intelligence, income inequality has been rising everywhere.

If AI is expected to further increase the level of inequality, we’ll need to double down on the discussion on social support systems.

For the record, I’m not optimistic that this will happen. Our ability to prepare for changes before they hurt us is poor (see: climate change).

But, I’m hopeful that we can begin by changing how we approach conversations around AI. Maybe next time we hear a conversation about sentient machines, we’ll put a stop to it and focus on the actual issues, like Orwellian uses of data and investing in social support systems to counter inequality. Maybe that, in turn, will mean thoughtful uses of AI in the organizations we’re part of.

And, maybe, just maybe, we’ll succeed in making the transition to a world with Intelligence augmentation and Intelligence Infrastructure in the coming decades a lot less painful…


Links for additional reading

  • How to Become a Centaur — on MITpress
  • The Painting Robot that didn’t take away anyone’s job — on Wired
  • A respite from the robots (but a retraining emergency) — on Axios
  • Machines will take fewer jobs but low-skilled workers will still be badly hit — on The Financial Times
  • OECD research visual — on The Economist
  • The Artificial Intelligence revolution hasn’t happened yet — on Medium
  • The origins of Artificial Intelligence — on Rodney Brooks’ blog
  • The workplace of the future — on The Economist
  • AI State of the Union on YouTube
  • The Exponential View — a curated newsletter that is the source of many of these links — Thanks Azeem
  • The robots are coming, and Sweden is fine — on The New York Times (a must read)
  • How inequality is evolving and why — on Flassbeck Economics (another must read)

Services I’m thankful for

For anyone on the web

Spycloud’s free service will tell you if your passwords showed up in any breaches. It is a must-have.

The Exponential View by Azeem Azhar is an excellent weekly newsletter that curates some of the most thought provoking articles on changes in technology, politics and society. Azeem is a great curator and I’d recommend subscribing.

WordPress.com is great if you want to run a blog on your own domain without worrying about getting hacked or phished. For $36, they take good care of you (chat support during weekdays)

Relatedly, I just started working with Feedblitz to manage email subscribers on this blog. As I migrated, I realized that many subscribers from the past decade were stuck in Feedburner confirmation purgatory. Thanks to Feedblitz, I feel confident it’ll be better in the next decade.

Breevy is a great text expander for Windows. I use Breevy’s text expansion capabilities when I fill forms on the web as well as for any recurring phrases I use over email at work. They have a 30 day trial and cost $34.95 for a permanent license across all your devices.

As you can see, many of the above are paid services. I’ve belatedly begun to appreciate the benefits of paying for software. That said, there are a couple of free services I love. I don’t know what I’d do without Lastpass. Microsoft OneNote is beyond brilliant. And, Unsplash has a wonderful collection of free photos that you can use without worrying about copyright.

United States focused

Credit Karma monitors any password breaches as well – in addition to keeping tabs on your credit score.

Trim is a personal finance assistant whose team will negotiate those exorbitant AT&T and Comcast bills down on your behalf. They’ve brought my bills down by $260 this year.

A real world service – Earthbaby is a compostable diaper service that ensures you don’t make the landfill problem worse. I can’t recommend them strongly enough. I wish Earthbaby was available in every location on the planet. Sadly, it is a service limited to areas around San Francisco. I’m hoping there’s a similar service near you.

Quantum Computing — Superposition Of Optimism And Pessimism

‘Nature is quantum, goddamn it! So if we want to simulate it, we need a quantum computer.’

Those words from Richard Feynman’s keynote at the “First Conference on the Physics of Computation” were what researchers in the then-nascent field of quantum computing needed to hear. They gave the field the boost it needed to begin a multi-decade search for the first working quantum computer.

Classical computing. In 1948, MIT Professor Claude Shannon published a landmark paper — “A Mathematical Theory of Communication.” In this paper, he laid out the basic elements of communication (transmitter, channel, receiver, etc.) and popularized the term “bit” as a unit of information.

The bit remains the building block of the computers and phones that we use today. And, thanks to Shannon’s model and breakthroughs from numerous other researchers, we were able to use computers to solve many hitherto unsolved problems.

However, classical computers cannot tackle certain very complex problems because doing so would require insane amounts of computation. For example, cryptography relies on the fact that no classical computer can easily break a large number into its prime factors (e.g. 15 has 2 prime factors — 3 and 5). A 232-digit number took scientists two years to factor using hundreds of classical computers.
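
To get a feel for why this is hard classically, here is a minimal Python sketch of the most naive approach, trial division. (The record factorizations use far more sophisticated methods such as the general number field sieve, but even the best known classical algorithms become impractical once numbers reach hundreds of digits.)

```python
# A minimal sketch of classical factoring by trial division.
# It is instant for small numbers like 15, but the number of candidate
# divisors grows with the square root of n, i.e. exponentially in the
# number of digits; a 232-digit number is hopelessly out of reach this way.

def trial_division(n):
    """Return the prime factors of n by testing every candidate divisor."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(trial_division(15))            # [3, 5]
print(trial_division(4294967297))    # [641, 6700417]
```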

So, why can quantum computers do more? Superposition and entanglement. Classical computers encode and manipulate information as strings of binary digits — 1 or 0. Quantum bits (qubits), on the other hand, can exist in a superposition of the states 1 and 0. This means that a qubit, when measured, comes out as 1 or 0 with some probability.
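
As a rough illustration (a toy simulation, not how real quantum hardware is programmed), a single qubit can be modeled as two amplitudes whose squared magnitudes give the probabilities of measuring 0 or 1:

```python
import numpy as np

# Toy model of one qubit: a pair of amplitudes (a, b) with |a|^2 + |b|^2 = 1.
# Measurement yields 0 with probability |a|^2 and 1 with probability |b|^2.

qubit = np.array([1, 1]) / np.sqrt(2)    # equal superposition of 0 and 1

probabilities = np.abs(qubit) ** 2
print(probabilities)                      # [0.5 0.5]

# Ten simulated measurements; each one comes out 0 or 1 at random.
print(np.random.choice([0, 1], size=10, p=probabilities))
```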

Next, to perform computation, these qubits must exist in an interdependent state where changing the behavior of one can affect the others — this is called entanglement. This means that operations on qubits count for more than operations on simple bits. While computational capacity increases linearly as we add bits to a classical computer, it increases exponentially as we add qubits to a quantum computer. So, adding an additional qubit roughly doubles the computational power => a 50-qubit computer can work with 2⁵⁰ states at once, versus just 2 for a single qubit.
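
One back-of-the-envelope way to see that exponential growth: simulating an n-qubit state on a classical machine means storing 2ⁿ complex amplitudes. The sketch below assumes 16 bytes per amplitude:

```python
# How much classical memory a full n-qubit state vector would need,
# assuming 16 bytes per complex amplitude (double precision).

BYTES_PER_AMPLITUDE = 16

for n in (10, 20, 30, 40, 50):
    amplitudes = 2 ** n
    memory_bytes = amplitudes * BYTES_PER_AMPLITUDE
    print(f"{n} qubits -> {amplitudes:,} amplitudes, {memory_bytes:,} bytes")

# 30 qubits already need ~17 GB; 50 qubits need ~18 petabytes, which is why
# each additional qubit roughly doubles what the machine can represent.
```

That gap is also, roughly, why ~50 qubits is often cited as the point beyond which classical machines can no longer simulate a quantum computer directly.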

Error rate. The challenge, however, is error rate. Random fluctuations, from extra heat in qubits for example, can change the state of a qubit and derail the calculation. So, quantum computing can only live up to its potential if all the qubits work “in coherence.”

This is a problem because a high error rate takes away any benefits of using a quantum computer.

(Source and thanks: Quanta Magazine)

Reasons for optimism and pessimism. 2017 brought reasons for optimism. IBM announced that it had created a 50-qubit quantum computer. Google and Microsoft have also been investing heavily. In addition, IBM has made a 5-qubit computer freely available to researchers since 2016.

(Source and thanks to: Technology Review)

While all this is good, a solution to the error-rate problem hasn’t yet emerged. While some researchers believe error rates will be the reason quantum computers never make it to the mainstream, researchers around the world are working hard on the problem. Some believe the solution will be finding a way to work with the noise rather than eliminating it. Could there, for example, be quantum algorithms that generate useful results despite the noise? There is no guarantee we’ll find a solution, however.

The next hairy question is which problems quantum computers will help us solve. While there’s a debate here, it is clear they won’t be a cure-all for all sorts of problems. Besides, if a quantum computer’s error rate is hard to predict and its calculations can’t be checked, how can we conclude that a problem has been solved correctly?

We have more questions than answers at this point.

Where does all this leave us? I really struggled with putting this post together. I postponed writing it for 2 straight weeks, despite reading and watching the articles and video below at least a couple of times, because I wasn’t sure I understood the topic well enough to write about it. I finally decided I’d ship my draft today no matter what.

That, in some ways, gets to the challenge with quantum computing. It is hard to understand how it works, why it is better and, thus, what it could do.

My layman’s synthesis, then, is as follows –

  • After nearly four decades of research, we’ve made a lot of progress in quantum computing in the past 3 years. This is thanks to our ability to build quantum computers with more potential computing power than any classical computer.
  • Quantum computers will not replace classical computers. Instead, we will use them to help us solve certain kinds of problems. For example, they may help us make progress in understanding the workings of complex chemical reactions which may, in turn, help us cure challenging diseases. They may also do better at complex optimization problems than the best deep learning algorithms.
  • However, realizing this potential requires us to figure out how to deal with the “coherence” problem that results in error rates.
  • There are various groups working on solving error rates. The solution might be to build algorithms that work despite them. As a result, getting more researchers and programmers to work on quantum computers is going to be key. That said, there is no guarantee that we will find a solution.

It feels like we’re still a couple of decades away from quantum computers hitting the mainstream. But, making such predictions on technology I barely understand is likely foolhardy anyway.

So, I’ll end with a note on the topic. All my reading led me to the conclusion that the way to think about the outlook on quantum computing is simply to think of it as a superposition of optimism and pessimism. :)


Links for additional reading

  • Shor’s algorithm to solve factorization with quantum computers — on Wikipedia
  • Hello Quantum World — on MIT Technology Review
  • Serious quantum computers are here — what are we going to do with them? — on MIT Technology Review
  • Outlook is cloudy on the era of quantum computing — on Quanta Magazine (a must read)
  • What sort of problems are quantum computers good for? — on Forbes
  • Quantum computing explained by an IBM researcher — on YouTube
  • The Exponential View — a newsletter by Azeem Azhar