Posts Tagged: hitting

Hitting the Books: NASA’s Kathy Sullivan and advances in orbital personal hygiene

For the first couple of decades of its existence, NASA was the epitome of an Old Boys’ Club, its astronaut ranks pulled exclusively from the Armed Services’ test pilot programs, which at the time were staffed entirely by men. Glass ceilings weren’t the only things broken when Sally Ride, Judy Resnik, Kathy Sullivan, Anna Fisher, Margaret “Rhea” Seddon and Shannon Lucid were admitted to the program in 1978 — numerous spaceflight systems had to be reassessed to accommodate a more diverse workforce. In The Six: The Untold Story of America’s First Women Astronauts, journalist Loren Grush chronicles the trials and challenges these women faced — from institutional sexism to grueling survival training to the personal pressures that the public life of an astronaut entails — in their efforts to reach orbit.

[Cover image: Scribner]

Adapted from The Six: The Untold Story of America’s First Women Astronauts by Loren Grush. Copyright © 2023 by Loren Grush. Excerpted with permission by Scribner, a division of Simon & Schuster, Inc.


Above the Chisos Mountains sprawling across Big Bend National Park in West Texas, Kathy [Sullivan, PhD, the third American woman to fly in space and a future NOAA administrator] sat in the back seat of NASA’s WB-57F reconnaissance aircraft as it climbed higher into the sky. The pilot, Jim Korkowski, kept his eye on the jet’s altimeter as they ascended. They’d just passed sixty thousand feet, and they weren’t done rising. It was a dizzyingly high altitude, but the plane was made to handle such extremes.

Inside the cockpit, both Kathy and Jim were prepared. They were fully outfitted in the air force’s high-altitude pressure suits. To the untrained observer, the gear looked almost like actual space suits. Each ensemble consisted of a bulky dark onesie, with thick gloves and a thick helmet. The combination was designed to apply pressure to the body as the high-altitude air thinned away and made it almost impossible for the human body to function.

The duo eventually reached their target height: 63,300 feet. At that altitude, their pressure suits were a matter of life and death. The surrounding air pressure was so low that their blood could start to boil if their bodies were left unprotected. But with the suits on, it was an uneventful research expedition. Kathy took images with a specialized infrared camera that could produce color photos, and she also scanned the distant terrain in various wavelengths of light.

They spent just an hour and a half over Big Bend, and the flight lasted just four hours in total. While it may have seemed a quick and easy flight, Kathy made history when she reached that final altitude above West Texas on July 1, 1979. In that moment, she flew higher than any woman ever had, setting an unofficial world aviation record.

The assignment to train with the WB-57 had scared her at first, but Kathy wound up loving those high-flying planes. “That was very fun, other than this little bit of vague concern that, ‘Hope this doesn’t mean I’m falling off the face of the Earth,’” Kathy said. The assignment took her on flights up north to Alaska and down south to Peru. As she’d hoped, she received full qualification to wear the air force’s pressure suits, becoming the first woman to do so. Soon, donning a full-body suit designed to keep her alive became second nature to her.

NASA officials had also sought her out to test a new piece of equipment they were developing for future Shuttle astronauts, one that would let people relieve themselves while in space. During the Apollo and Gemini eras, NASA developed a relatively complex apparatus for astronauts to pee in their flight suits. It was, in essence, a flexible rubber cuff that fit around the penis, which then attached to a collection bag. The condom-like cuffs came in “small,” “medium,” and “large” (though Michael Collins claimed the astronauts gave them their own terms: “extra large,” “immense,” and “unbelievable”). It was certainly not a foolproof system. Urine often escaped from beneath the sheath.

Cuffs certainly weren’t going to work once women entered the astronaut corps. While the Space Shuttle had a fancy new toilet for both men and women to use, the astronauts still needed some outlet for when they were strapped to their seats for hours, awaiting launch or reentry. And if one of the women was to do a spacewalk, she’d need some kind of device during those hours afloat. So, NASA engineers created the Disposable Absorption Containment Trunk (DACT). In its most basic form it was . . . a diaper. It was an easy fix in case astronauts needed to urinate while out of reach of the toilet. It was designed to absorb fecal matter, too, though the women probably opted to wait until they reached orbit for that.

Kathy was the best person to test it out. Often during her high-altitude flights, she’d be trapped in her pressure suit for hours on end, creating the perfect testing conditions to analyze the DACT’s durability. It worked like a charm. And although the first male Shuttle fliers stuck to the cuffs, eventually the DACT became standard equipment for everyone.

After accumulating hundreds of hours in these pressure suits, Kathy hoped to leverage her experience into a flight assignment, one that might let her take a walk outside the Space Shuttle one day. As luck would have it, she ran into Bruce McCandless II in the JSC gym one afternoon. He was the guy to know when it came to spacewalks. NASA officials had put him in charge of developing all the spacewalk procedures and protocols, and at times he seemed to live in the NASA pools. Plus, he was always conscripting one of Kathy’s classmates to do simulated runs with him in the tanks. Kathy wanted to be next. Projecting as much confidence as she could, she asked him to consider her for his next training run.

It worked. Bruce invited Kathy to accompany him to Marshall Space Flight Center in Alabama to take a dive in the tank there. The two would be working on spacewalk techniques that might be used one day to assemble a space station. However, the Space Shuttle suits still weren’t ready to use yet. Kathy had to wear Apollo moonwalker Pete Conrad’s suit, just like Anna had done during her spacewalk simulations. But while the suit swallowed tiny Anna, it was just slightly too small for Kathy, by about an inch. When she put it on, the suit stabbed her shoulders, while parts of it seemed to dig into her chest and back. She tried to stand up and nearly passed out. It took all her strength to walk over to the pool before she flopped into the tank. In the simulated weightless environment, the pain immediately evaporated. But it was still a crucial lesson in space-suit sizes. The suits have to fit their wearers perfectly if the spacewalk is going to work. 

The session may have started off painfully, but once she began tinkering with tools and understanding how to maneuver her arms to shift the rest of her body, she was hooked. She loved spacewalking so much that she’d go on to do dozens more practice dives throughout training.

But it wasn’t enough to practice in the pool. She wanted to go orbital. 

This article originally appeared on Engadget at https://www.engadget.com/hitting-the-books-the-six-loren-grush-scribner-143032524.html?src=rss


Hitting the Books: Beware the Tech Bro who comes bearing gifts

American entrepreneurs have long fixated on extracting the maximum economic value out of, well, really any resource they can get their hands on — from Henry Ford’s assembly line to Tony Hsieh’s Zappos Happiness Experience Form. The same is true in the public sector, where overambitious streamlining of Texas’ power grid contributed to the state’s massive 2021 winter power crisis, which killed more than 700 people. In her riveting new book, Optimal Illusions: The False Promise of Optimization, UC Berkeley applied mathematician and author Coco Krumme explores our historical fascination with optimization and how that pursuit has often led to unexpected and unwanted consequences in the systems we’re streamlining.

In the excerpt below, Krumme explores the recent resurgence of interest in Universal Basic (or Guaranteed) Income, and the contrasting approaches of tech evangelists like Sam Altman and Andrew Yang and of social workers like Aisha Nyandoro, founder of the Magnolia Mother’s Trust, to the difficult questions of who should receive financial support, and how much.

[Cover image: Riverhead Books]

Excerpted from Optimal Illusions: The False Promise of Optimization by Coco Krumme. Published by Riverhead Books. Copyright © 2023 by Coco Krumme. All rights reserved.


False Gods

California, they say, is where the highway ends and dreams come home to roost. When they say these things, their eyes ignite: startup riches, infinity pools, the Hollywood hills. The last thing on their minds, of course, is the town of Stockton.

Drive east from San Francisco and, if traffic cooperates, you’ll be there in an hour and a half or two, over the long span of slate‑colored bay, past the hulking loaders at Oakland’s port, skirting rich suburbs and sweltering orchards and the government labs in Livermore, the military depot in Tracy, all the way to where brackish bay waters meet the San Joaquin River, where the east‑west highways connect with Interstate 5, in a tangled web of introductions that ultimately pitches you either north toward Seattle or south to LA.

Or you might decide to stay in Stockton, spend the night. There’s a slew of motels along the interstate: La Quinta, Days Inn, Motel 6. Breakfast at Denny’s or IHOP. Stockton once had its place in the limelight as a booming gold‑rush supply point. In 2012, the city filed for bankruptcy, the largest US city until then to do so (Detroit soon bested it in 2013). First light reveals a town that’s neither particularly rich nor desperately poor, hitched taut between cosmopolitan San Francisco on one side and the agricultural central valley on the other, in the middle, indistinct, suburban, and a little sad.

This isn’t how the story was supposed to go. Optimization was supposed to be the recipe for a more perfect society. When John Stuart Mill aimed for the greater good, when Allen Gilmer struck out to map new pockets of oil, when Stan Ulam harnessed a supercomputer to tally possibilities: it was in service of doing more, and better, with less. Greater efficiency was meant to be an equilibrating force. We weren’t supposed to have big winners and even bigger losers. We weren’t supposed to have a whole sprawl of suburbs stuck in the declining middle.

We saw how overwrought optimizations can suddenly fail, and the breakdown of optimization as the default way of seeing the world can come about equally fast. What we face now is a disconnect between the continued promises of efficiency, the idea that we can optimize into perpetuity, and the reality all around: the imperfect world, the overbooked schedules, the delayed flights, the institutions in decline. And we confront the question: How can we square what optimization promised with what it’s delivered?

Sam Altman has the answer. In his mid-thirties, with the wiry, frenetic look of a college student, he’s a young man with many answers. Sam’s biography reads like a leaderboard of Silicon Valley tropes and accolades: an entrepreneur, upper‑middle‑class upbringing, prep school, Stanford Computer Science student, Stanford Computer Science dropout, where dropping out is one of the Valley’s top status symbols. In 2015, Sam was named a Forbes magazine top investor under age thirty. (That anyone bothers to make a list of investors in their teens and twenties says as much about Silicon Valley as about the nominees. Tech thrives on stories of overnight riches and the mythos of the boy genius.)

Sam is the CEO and cofounder, along with electric‑car‑and‑rocket‑ship‑magnate Elon Musk, of OpenAI, a company whose mission is “to ensure that artificial general intelligence benefits all of humanity.” He is the former president of the Valley’s top startup incubator, Y Combinator, was interim CEO of Reddit, and is currently chairman of the board of two nuclear‑energy companies, Helion and Oklo. His latest venture, Worldcoin, aims to scan people’s eyeballs in exchange for cryptocurrency. As of 2022, the company had raised $125 million in funding from Silicon Valley investors.

But Sam doesn’t rest on, or even mention, his laurels. In conversation, he is smart, curious, and kind, and you can easily tell, through his veneer of demure agreeableness, that he’s driven as hell. By way of introduction to what he’s passionate about, Sam describes how he used a spreadsheet to determine the seven or so domains in which he could make the greatest impact, based on weighing factors such as his own skills and resources against the world’s needs. Sam readily admits he can’t read emotions well, treats most conversations as logic puzzles, and not only wants to save the world but believes the world’s salvation is well within reach.

A 2016 profile in The New Yorker sums up Sam like this: “His great weakness is his utter lack of interest in ineffective people.”

Sam has, however, taken an interest in Stockton, California.

Stockton is the site of one of the most publicized experiments in Universal Basic Income (UBI), a policy proposal that grants recipients a fixed stipend, with no qualifications and no strings attached. The promise of UBI is to give cash to those who need it most and to minimize the red tape and special interests that can muck up more complex redistribution schemes. On Sam’s spreadsheet of areas where he’d have impact, UBI made the cut, and he dedicated funding for a group of analysts to study its effects in six cities around the country. While he’s not directly involved in Stockton, he’s watching closely. The Stockton Economic Empowerment Demonstration was initially championed by another tech wunderkind, Facebook cofounder Chris Hughes. The project gave 125 families $500 per month for twenty‑four months. A slew of metrics was collected in order to establish a causal relationship between the money and better outcomes.

UBI is nothing new. The concept of a guaranteed stipend has been suggested by leaders from Napoleon to Martin Luther King Jr. The contemporary American conception of UBI, however, has been around just a handful of years, marrying a utilitarian notion of societal perfectibility with a modern‑day faith in technology and experimental economics.

Indeed, economists were among the first to suggest the idea of a fixed stipend, first in the context of the developing world and now in America. Esther Duflo, a creative star in the field and Nobel Prize winner, is known for her experiments with microloans in poorer nations. She’s also unromantic about her discipline, embracing the concept of “economist as plumber.” Duflo argues that the purpose of economics is not grand theories so much as on‑the‑ground empiricism. Following her lead, the contemporary argument for UBI owes less to a framework of virtue and charity and much more to the cold language of an econ textbook. Its benefits are described in terms of optimizing resources, reducing inequality, and thereby maximizing societal payoff.

The UBI experiments under way in several cities, a handful of them funded by Sam’s organization, have data‑collection methods primed for a top‑tier academic publication. Like any good empiricist, Sam spells out his own research questions to me, and the data he’s collecting to test and analyze those hypotheses.

Several thousand miles from Sam’s Bay Area office, a different kind of program is in the works. When we speak by phone, Aisha Nyandoro bucks a little at my naive characterization of her work as UBI. “We don’t call it universal basic income,” she says. “We call it guaranteed income. It’s targeted. Invested intentionally in those discriminated against.” Aisha is the powerhouse founder of the Magnolia Mother’s Trust, a program that gives a monthly stipend to single Black mothers in Jackson, Mississippi. The project grew out of her seeing the welfare system fail miserably for the very people it purported to help. “The social safety net is designed to keep families from rising up. Keep them teetering on edge. It’s punitive paternalism. The ‘safety net’ that strangles.”

Bureaucracy is dehumanizing, Aisha says, because it asks a person to “prove you’re enough” to receive even the most basic of assistance. Magnolia Mother’s Trust is unique in that it is targeted at a specific population. Aisha reels off facts. The majority of low‑income women in Jackson are also mothers. In the state of Mississippi, one in four children live in poverty, and women of color earn 61 percent of what white men make. Those inequalities affect the community as a whole. In 2021, the trust gave $1,000 per month to one hundred women. While she’s happy her program is gaining exposure as more people pay attention to UBI, Aisha doesn’t mince words. “I have to be very explicit in naming race as an issue,” she says.

Aisha’s goal is to grow the program and provide cash, without qualifications, to more mothers in Jackson. Magnolia Mother’s Trust was started around the same time as the Stockton project, and the nomenclature of guaranteed income has gained traction. One mother in the program writes in an article in Ms. magazine, “Now everyone is talking about guaranteed income, and it started here in Jackson.” Whether or not it all traces back to Jackson, whether the money is guaranteed and targeted or more broadly distributed, what’s undeniable is that everyone seems to be talking about UBI.

Influential figures, primarily in tech and politics, have piled on to the idea. Jack Dorsey, the billionaire founder of Twitter, with his droopy meditation eyes and guru beard, wants in. In 2020, he donated $15 million to experimental efforts in thirty US cities.

And perhaps the loudest bullhorn for the idea has been wielded by Andrew Yang, another product of Silicon Valley and a 2020 US presidential candidate. Yang is an earnest guy, unabashedly dorky. Numbers drive his straight‑talking policy. Blue baseball caps for his campaign are emblazoned with one short word: MATH.

UBI’s proponents see the potential to simplify the currently convoluted American welfare system, to equilibrate an uneven playing field. By decoupling basic income from employment, it could free some people up to pursue work that is meaningful.

And yet the concept, despite its many proponents, has managed to draw ire from both ends of the political spectrum. Critics on the right see UBI as an extension of the welfare state, as further interference into free markets. Left‑leaning critics bemoan its “inefficient” distribution of resources: Why should high earners get as much as those below the poverty line? Why should struggling individuals get only just enough to keep them, and the capitalist system, afloat?

Detractors on both left and right default to the same language in their critiques: that of efficiency and maximizing resources. Indeed, the language of UBI’s critics is all too similar to the language of its proponents, with its randomized control trials and its view of society as a closed economic system. In the face of a disconnect between what optimization promised and what it delivered, the proposed solution involves more optimizing.

Why is this? What if we were to evaluate something like UBI outside the language of efficiency? We might ask a few questions differently. What if we relaxed the suggestion that dollars can be transformed by some or another equation into individual or societal utility? What if we went further than that and relaxed the suggestion of measuring at all, as a means of determining the “best” policy? What if we put down our calculators for a moment and let go of the idea that politics is meant to engineer an optimal society in the first place? Would total anarchy ensue?

Such questions are difficult to ask because they don’t sound like they’re getting us anywhere. It’s much easier, and more common, to tackle the problem head‑on. Electric‑vehicle networks such as Tesla’s, billed as an alternative to the centralized oil economy, seek to optimize where charging stations are placed, how batteries are created, how software updates are sent out — and by extension, how environmental outcomes take shape. Vitamins fill the place of nutrients leached out of foods by agriculture’s maximization of yields; these vitamins promise to optimize health. Vertical urban farming also purports to solve the problems of industrial agriculture, by introducing new optimizations in how light and fertilizers are delivered to greenhouse plants, run on technology platforms developed by giants such as SAP. A breathless Forbes article explains that the result of hydroponics is that “more people can be fed, less precious natural resources are used, and the produce is healthier and more flavorful.” The article nods only briefly to downsides, such as high energy, labor, and transportation costs. It doesn’t mention that many grains don’t lend themselves easily to indoor farming, nor the limitations of synthetic fertilizers in place of natural regeneration of soil.

In working to counteract the shortcomings of optimization, have we only embedded ourselves deeper? For all the talk of decentralized digital currencies and local‑maker economies, are we in fact more connected and centralized than ever? And less free, insofar as we’re tied into platforms such as Amazon and Airbnb and Etsy? Does our lack of freedom run deeper still, by dint of the fact that fewer and fewer of us know exactly what the algorithms driving these technologies do, as more and more of us depend on them? Do these attempts to deoptimize in fact entrench the idea of optimization further?

A 1952 novel by Kurt Vonnegut highlights the temptation, and also the threat, of de-optimizing. Player Piano describes a mechanized society in which the need for human labor has mostly been eliminated. The remaining workers are those engineers and managers whose purpose is to keep the machines online. The core drama takes place at a factory hub called Ilium Works, where “Efficiency, Economy, and Quality” reign supreme. The book is prescient in anticipating some of our current angst — and powerlessness — about optimization’s reach.

Paul Proteus is the thirty‑five‑year‑old factory manager of the Ilium Works. His father served in the same capacity, and like him, Paul is one day expected to take over as leader of the National Manufacturing Council. Each role at Ilium is identified by a number, such as R‑127 or EC‑002. Paul’s job is to oversee the machines.

At the time of the book’s publication, Vonnegut was a young author disillusioned by his experiences in World War II and disheartened as an engineering manager at General Electric. Ilium Works is a not‑so‑thinly‑veiled version of GE. As the novel wears on, Paul tries to free himself, to protest that “the main business of humanity is to do a good job of being human beings . . . not to serve as appendages to machines, institutions, and systems.” He seeks out the elusive Ghost Shirt Society with its conspiracies to break automation, and he attempts to restore an old homestead with his wife. He tries, in other words, to organize a way out of the mechanized world.

His attempts prove to be in vain. Paul fails and ends up mired in dissatisfaction. The machines take over, riots ensue, everything is destroyed. And yet, humans’ love of mechanization runs deep: once the machines are destroyed, the janitors and technicians — a class on the fringes of society — quickly scramble to build things up again. Player Piano depicts the outcome of optimization as societal collapse and the collapse of meaning, followed by the flimsy rebuilding of the automated world we know.

This article originally appeared on Engadget at https://www.engadget.com/hitting-the-books-optimal-illusions-coco-krumme-riverhead-books-143012184.html?src=rss

Hitting the Books: Meet Richard Arkwright, the world’s first tech titan

You didn’t actually believe all those founder myths about tech billionaires like Bezos, Jobs and Musk pulling themselves up by their bootstraps from some suburban American garage, did you? In reality, our corporate kings have been running the same playbook since the 18th century, when Lancashire’s own Richard Arkwright wrote it. Arkwright is credited with developing a means of forming cotton fully into thread — technically, he didn’t invent or design the machine itself, but he developed the overarching system that let it run at scale — and with spinning that success into financial fortune. Never mind the fact that his 24-hour production lines were operated by boys as young as seven pulling 13-hour shifts.

In Blood in the Machine: The Origins of the Rebellion Against Big Tech — one of the best books I’ve read this year — LA Times tech reporter Brian Merchant lays bare the inhumane cost of capitalism wrought by the industrial revolution and celebrates the workers who stood against those first tides of automation: the Luddites. 

[Cover image: Hachette Book Group]

Excerpted from Blood in the Machine: The Origins of the Rebellion Against Big Tech by Brian Merchant. Published by Hachette Book Group. Copyright © 2023 by Brian Merchant. All rights reserved.


The first tech titans were not building global information networks or commercial space rockets. They were making yarn and cloth. 

A lot of yarn, and a lot of cloth. Like our modern-day titans, they started out as entrepreneurs. But until the nineteenth century, entrepreneurship was not a cultural phenomenon. Businessmen took risks, of course, and undertook novel efforts to increase their profits. Yet there was not a popular conception of the heroic entrepreneur, of the adventuring businessman, until long after the birth of industrial capitalism. The term itself was popularized by Jean-Baptiste Say, in his 1803 work A Treatise on Political Economy. An admirer of Adam Smith’s, Say thought that The Wealth of Nations was missing an account of the individuals who bore the risk of starting new businesses; he called this figure the entrepreneur, translating it from the French as “adventurer” or “undertaker.”

For a worker, aspiring to entrepreneurship was different than merely seeking upward mobility. The standard path an ambitious, skilled weaver might pursue was to graduate from apprentice to journeyman weaver, who rented a loom or worked in a shop, to owning his own loom, to becoming a master weaver and running a small shop of his own that employed other journeymen. This was customary. 

In the eighteenth and nineteenth centuries, as now in the twenty-first century, entrepreneurs saw the opportunity to use technology to disrupt longstanding customs in order to increase efficiencies, output, and personal profit. There were few opportunities for entrepreneurship without some form of automation; control of technologies of production grants its owner a chance to gain advantage or take pay or market share from others. In the past, like now, owners started small businesses at some personal financial risk, whether by taking out a loan to purchase used handlooms and rent a small factory space, or by using inherited capital to procure a steam engine and a host of power looms.

The most ambitious entrepreneurs tapped untested technologies and novel working arrangements, and the most successful irrevocably changed the structure and nature of our daily lives, setting standards that still exist today. The least successful would go bankrupt, then as now. 

In the first century of the Industrial Revolution, one entrepreneur looms above the others, and has a strong claim on the mantle of the first of what we’d call a tech titan today. Richard Arkwright was born to a middle-class tailor’s family and originally apprenticed as a barber and wigmaker. He opened a shop in the Lancashire city of Bolton in the 1760s. There, he invented a waterproof dye for the wigs that were in fashion at the time, and traveled the country collecting hair to make them. In his travels across the Midlands, he met spinners and weavers, and became familiar with the machinery they used to make cotton garments. Bolton was right in the middle of the Industrial Revolution’s cotton hub hotspot. 

Arkwright took the money he made from the wigs, plus the dowry from his second marriage, and invested it in upgraded spinning machinery. “The improvement of spinning was much in the air, and many men up and down Lancashire were working at it,” Arkwright’s biographer notes. James Hargreaves had invented the spinning jenny, a machine that automated the process of spinning cotton into a weft — halfway into yarn, basically — in 1767. Working with one of his employees, John Kay, Arkwright tweaked the designs to spin cotton entirely into yarn, using water or steam power. Without crediting Kay, Arkwright patented his water frame in 1769 and a carding engine in 1775, and attracted investment from wealthy hosiers in Nottingham to build out his operation. He built his famous water-powered factory in Cromford in 1771.

His real innovation was not the machinery itself; several similar machines had been patented, some before his. His true innovation was creating and successfully implementing the system of modern factory work. 

“Arkwright was not the great inventor, nor the technical genius,” as the Oxford economic historian Peter Mathias explains, “but he was the first man to make the new technology of massive machinery and power source work as a system — technical, organizational, commercial — and, as a proof, created the first great personal fortune and received the accolade of a knighthood in the textile industry as an industrialist.” Richard Arkwright Jr., who inherited his business, became the richest commoner in England.

Arkwright père was the first start‑up founder to launch a unicorn company, we might say, and the first tech entrepreneur to strike it wildly rich. He did so by marrying the emergent technologies that automated the making of yarn with a relentless new work regime. His legacy is alive today in companies like Amazon, which strive to automate as much of their operations as is financially viable, and to introduce highly surveilled worker-productivity programs.

Often called the grandfather of the factory, Arkwright did not invent the idea of organizing workers into strict shifts to produce goods with maximal efficiency. But he pursued the “manufactory” formation most ruthlessly, and most vividly demonstrated the practice could generate huge profits. Arkwright’s factory system, which was quickly and widely emulated, divided his hundreds of workers into two overlapping thirteen-hour shifts. A bell was rung twice a day, at 5 a.m. and 5 p.m. The gates would shut and work would start an hour later. If a worker was late, they sat the day out, forfeiting that day’s pay. (Employers of the era touted this practice as a positive for workers; it was a more flexible schedule, they said, since employees no longer needed to “give notice” if they couldn’t work. This reasoning is reminiscent of that offered by twenty-first-century on‑demand app companies.) For the first twenty-two years of its operation, the factory was worked around the clock, mostly by boys like Robert Blincoe, some as young as seven years old. At its peak, two-thirds of the 1,100-strong workforce were children. Richard Arkwright Jr. admitted in later testimony that they looked “extremely dissipated, and many of them had seldom more than a few hours of sleep,” though he maintained they were well paid. 

The industrialist also built on‑site housing, luring whole families from around the country to come work his frames. He gave them one week’s worth of vacation a year, “but on condition that they could not leave the village.” Today, even our most cutting-edge consumer products are still manufactured in similar conditions, in imposing factories with on‑site dormitories and strictly regimented production processes, by workers who have left home for the job. Companies like Foxconn operate factories where the regimen can be so grueling it has led to suicide epidemics among the workforce. 

The strict work schedule and a raft of rules instilled a sense of discipline among the laborers; long, miserable shifts inside the factory walls were the new standard. Previously, of course, similar work was done at home or in small shops, where shifts were not so rigid or enforced. 

Arkwright’s “main difficulty,” according to the early business theorist Andrew Ure, did not “lie so much in the invention of a proper mechanism for drawing out and twisting cotton into a continuous thread, as in [. . .] training human beings to renounce their desultory habits of work and to identify themselves with the unvarying regularity of the complex automaton.” This was his legacy. “To devise and administer a successful code of factory discipline, suited to the necessities of factory diligence, was the Herculean enterprise, the noble achievement of Arkwright,” Ure continued. “It required, in fact, a man of a Napoleon nerve and ambition to subdue the refractory tempers of workpeople.”

Ure was hardly exaggerating, as many workers did in fact view Arkwright as akin to an invading enemy. When he opened a factory in Chorley, Lancashire, in 1779, a crowd of stockingers and spinners broke in, smashed the machines, and burned the place to the ground. Arkwright did not try to open another mill in Lancashire. 

Arkwright also vigorously defended his patents in the legal system. He collected royalties on his water frame and carding engine until 1785, when the court decided that he had not actually invented the machines but had instead copied their parts from other inventors, and threw the patents out. By then, he was astronomically wealthy. Before he died, he would be worth £500,000, or around $425 million in today’s dollars, and his son would expand and entrench his factory empire.

The success apparently went to his head — he was considered arrogant, even among his admirers. In fact, arrogance was a key ingredient in his success: he had what Ure described as “fortitude in the face of public opposition.” He was unyielding with critics when they pointed out, say, that he was employing hundreds of children in machine-filled rooms for thirteen hours straight. That for all his innovation, the secret sauce in his groundbreaking success was labor exploitation.

In Arkwright, we see the DNA of those who would attain tech titanhood in the ensuing decades and centuries. Arkwright’s brashness rhymes with that of bullheaded modern tech executives who see virtue in a willingness to ignore regulations and push their workforces to extremes, or who, like Elon Musk, would gleefully wage war with perceived foes on Twitter rather than engage any criticism of how he runs his businesses. Like Steve Jobs, who famously said, “We’ve always been shameless about stealing great ideas,” Arkwright surveyed the technologies of the day, recognized what worked and could be profitable, lifted the ideas, and then put them into action with an unmatched aggression. Like Jeff Bezos, Arkwright hypercharged a new mode of factory work by finding ways to impose discipline and rigidity on his workers, and adapting them to the rhythms of the machine and the dictates of capital— not the other way around. 

We can look back at the Industrial Revolution and lament the working conditions, but popular culture still lionizes entrepreneurs cut in the mold of Arkwright, who made a choice to employ thousands of child laborers and to institute a dehumanizing system of factory work to increase revenue and lower costs. We have acclimated to the idea that such exploitation was somehow inevitable, even natural, while casting aspersions on movements like the Luddites as being technophobic for trying to stop it. We forget that working people vehemently opposed such exploitation from the beginning. 

Arkwright’s imprint feels familiar to us, in our own era where entrepreneurs loom large. So might a litany of other first-wave tech titans. Take James Watt, the inventor of the steam engine that powered countless factories in industrial England. Once he was confident in his product, much like a latter-day Bill Gates, Watt sold subscriptions for its use. With his partner, Matthew Boulton, Watt installed the engine and then collected annual payments that were structured around how much the customer would save on fuel costs compared to the previous engine. Then, like Gates, Watt would sue anyone he thought had violated his patent, effectively winning himself a monopoly on the trade. The Mises Institute, a libertarian think tank, argues that this had the effect of constraining innovation on the steam engine for thirty years.

Or take William Horsfall or William Cartwright. These were men who were less innovative than relentless in their pursuit of disrupting a previous mode of work as they strove to monopolize a market. (The word innovation, it’s worth noting, carried negative connotations until the mid-twentieth century or so; Edmund Burke famously called the French Revolution “a revolt of innovation.”) They can perhaps be seen as precursors to the likes of Travis Kalanick, the founder of Uber, the pugnacious trampler of the taxi industry. Kalanick’s business idea — that it would be convenient to hail a taxi from your smartphone — was not remarkably inventive. But he had intense levels of self-determination and pugnacity, which helped him overrun the taxi cartels and dozens of cities’ regulatory codes. His attitude was reflected in Uber’s treatment of its drivers, who, the company insists, are not employees but independent contractors, and in the endemic culture of harassment and mistreatment of the women on staff.

These are extreme examples, perhaps. But to disrupt long-held norms for the promise of extreme rewards, entrepreneurs often pursue extreme actions. Like the mill bosses who shattered 19th-century standards by automating cloth-making, today’s start‑up founders aim to disrupt one job category after another with gig work platforms or artificial intelligence, and encourage others to follow their lead. There’s a reason Arkwright and his factories were both emulated and feared. Even two centuries later, many tech titans still are.

This article originally appeared on Engadget at https://www.engadget.com/hitting-the-books-blood-in-the-machine-brian-merchant-hachette-book-group-143056410.html?src=rss

Hitting the Books: The programming trick that gave us DOOM multiplayer

Since its release in 1993, id Software’s DOOM franchise has become one of modern gaming’s most easily recognizable IPs. The series has sold more than 10 million copies to date and spawned myriad RPG spinoffs, film adaptations and even a couple of tabletop board games. But the first game’s debut turned out to be a close-run thing, as id Software cofounder John Romero describes in an excerpt from his new book DOOM GUY: Life in First Person. With a mere month before DOOM was scheduled for release in December 1993, the id team found itself still polishing and tweaking lead programmer John Carmack’s novel peer-to-peer multiplayer architecture, ironing out level designs — at a time when the studio’s programmers were also its QA team — and introducing everybody’s favorite killer synonym to the gamer lexicon.

[Cover image: Abrams Press]

Excerpted from DOOM GUY: Life in First Person by John Romero. Copyright © 2023 by John Romero. Published and reprinted by permission of Abrams Press, an imprint of ABRAMS. All rights reserved.


In early October, we were getting close to wrapping up the game, so progress quickened. On October 4, 1993, we issued the DOOM beta press release version, a build of the game we distributed externally to journalists and video game reviewers to allow them to try the game before its release. Concerned about security and leaks, we coded the beta to stop running on DOS systems after October 31, 1993. We still had useless pickups in the game, like the demonic daggers, demon chests, and other unholy items. I decided to get rid of those things because they made no sense to the core of the game and they rewarded the player with a score, which was a holdover from Wolfenstein 3-D. I removed the concept of having lives for the same reason. It was enough to have to start the level over after dying.

There was still one missing piece from the game, and it was a substantial one. We hadn’t done anything about the multiplayer aspect. In modern game development, multiplayer would be a feature factored in from day one, and architected accordingly, in an integrated fashion. Not with DOOM. It was November, and we were releasing in a month.

I brought it up to Carmack. “So when are we going to make multiplayer mode?”

The short answer was that Carmack was ready to take it on. Looking from the outside in, I suspect some might wonder if I wasn’t just more than a bit concerned since we were hoping to ship in 1993. After all, John had never programmed a multiplayer game before. The truth is that I never had a doubt, not for a second. Back in March, Carmack had already done some innovative network programming in DoomEd. He wanted to play around with the distributed objects system in NeXT-STEP, so he added the ability to allow multiple people who were running DoomEd to edit the same level. I could see him drawing lines and placing objects on my screen from his computer. Then, I’d add to his room by making a hallway, and so on.

For multiplayer, Carmack’s plan was to explore peer-to-peer networking. It was the “quick and dirty” solution instead of a client-server model. Instead of one central computer controlling and monitoring all the action between two and four players, each computer would run the game and sync up with the others. Basically, the computers send each other updates at high speed over the local network. The speed of Carmack’s network programming progress was remarkable. He had some excellent books on networking, and fortunately, those books were clearly written and explained the process of using IPX* well. In a few hours, he was communicating between two computers, getting the IPX protocol running so he could send information packets to each computer. I’d worked with him for three years and was used to seeing incredible things on his screen, but this was awe-inspiring, even for him. In a matter of hours, he got two PCs talking to each other through a command-line-based tool, which proved he could send information across the network. It was the foundation needed to make the game network-capable. It was great for two players, and good for four, so we capped it at that. We were still on track to deliver on our promise of the most revolutionary game in history before the end of the year.
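For readers curious what that peer-to-peer “sync up” looks like in practice, here is a minimal, hypothetical sketch of the lockstep idea Romero describes, written in C. It is not id’s code: the struct fields, the in-memory “mailbox” standing in for IPX packets, and the tic counts are illustrative assumptions, though the type name ticcmd_t loosely echoes the one in the released DOOM source. The point is simply that each peer broadcasts its input for a tic and advances the simulation only once it holds every peer’s input for that same tic, so identical simulations stay in step without a central server.

/* A self-contained sketch of lockstep peer-to-peer play: every peer runs the
 * whole game and advances one tic only after it has every other peer's input
 * command for that tic. Networking is faked with in-memory mailboxes here;
 * the real game exchanged packets over IPX on a LAN. */
#include <stdio.h>

#define MAX_PEERS 4   /* network DOOM capped games at four players */
#define NUM_TICS  5   /* run a handful of tics for the demo */

typedef struct {
    int forward;      /* forward/backward movement for this tic */
    int turn;         /* turning for this tic */
    int fire;         /* 1 if the fire button was held */
} ticcmd_t;

/* "Network": mailbox[peer][tic] holds the command that peer sent for that tic. */
static ticcmd_t mailbox[MAX_PEERS][NUM_TICS];
static int      received[MAX_PEERS][NUM_TICS];

/* Each peer broadcasts its local input for the current tic. */
static void broadcast_cmd(int peer, int tic, ticcmd_t cmd)
{
    mailbox[peer][tic] = cmd;
    received[peer][tic] = 1;
}

/* A peer may only advance once every peer's command for the tic has arrived. */
static int all_cmds_arrived(int num_peers, int tic)
{
    for (int p = 0; p < num_peers; p++)
        if (!received[p][tic])
            return 0;
    return 1;
}

/* Every peer runs the identical simulation on identical inputs, so their
 * game states stay in sync without any central server. */
static void run_tic(int num_peers, int tic)
{
    for (int p = 0; p < num_peers; p++) {
        ticcmd_t c = mailbox[p][tic];
        printf("tic %d: player %d forward=%d turn=%d fire=%d\n",
               tic, p, c.forward, c.turn, c.fire);
    }
}

int main(void)
{
    int num_peers = 2;

    for (int tic = 0; tic < NUM_TICS; tic++) {
        /* Each peer samples its controls (made-up demo values here) and broadcasts them. */
        for (int p = 0; p < num_peers; p++) {
            ticcmd_t cmd = { .forward = p + tic, .turn = -p, .fire = tic % 2 };
            broadcast_cmd(p, tic, cmd);
        }
        /* A real peer would stall here until the other players' packets arrive. */
        if (all_cmds_arrived(num_peers, tic))
            run_tic(num_peers, tic);
    }
    return 0;
}

The trade-off Romero hints at is built into that structure: with only a few peers on a fast LAN, waiting for everyone’s commands each tic is cheap, which is one reason capping the game at four players kept the “quick and dirty” approach workable.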

Carmack called me into his office to tell me he had it working. Both PCs in his office had the game open, and they were syncing up with two characters facing one another. On one PC, Carmack veered his character to the right. On the other monitor, that same character, appearing in third person, moved to the left. It was working!

“Oh my God!” I yelled, throwing in some other choice words to convey my amazement. “That is fucking incredible.”

When I’d first truly visualized the multiplayer experience, I was building E1M7. I was playing the game and imagined seeing two other players firing rockets at each other. At the time, I thought, “This is going to be astonishing. There is nothing like this. This is going to be the most amazing game planet Earth has ever seen.” Now, the moment had finally arrived.

I rushed to my computer and opened the game, connecting to Carmack’s computer.

When his character appeared on screen, I blasted him out of existence, screaming with delight as I knocked “John” out of the game with a loud, booming, bloody rocket blast. It was beyond anything I had ever experienced before and even better than I imagined it could be.

It was the future, and it was on my screen.

“This is fucking awesome!” I yelled. “This is the greatest thing ever!”

I wasn’t kidding. This was the realization of everything we put into the design months earlier. I knew DOOM would be the most revolutionary game in history, but now, it was also the most fun, all-consuming game in history. Now that all the key elements of our original design were in place, it was obvious. DOOM blew away every other game I’d ever played. From that moment on, if I wasn’t playing DOOM or working on DOOM, I was thinking about DOOM.

Kevin, Adrian, and Jay began running the game in multiplayer mode, too, competing to blow away monsters and each other. They were yelling just as much as I did, cheering every execution, groaning when they were killed and had to respawn. I watched them play. I saw the tension in their bodies as they navigated the dark, detailed world we’d created. They were hunters and targets, engaged in a kill-or-be-killed battle, not just with monsters, but with other, real people. Players were competing in real time with other people in a battle to survive. I thought of boxing or an extreme wrestling match, where you go in a cage to fight. This was much more violent, more deadly. It was all simulated, of course, but in the moment, it felt immediate. It was a new gaming experience, and I searched for a way to describe it.

“This is deathmatch,” I said. The team latched onto the name. It instantly articulated the sinister, survival vibe at the heart of DOOM.

In mid-November, we buckled down, getting in the “closing zone,” where you begin finalizing all areas of the game one by one. Now that Carmack had multiplayer networking figured out, we needed to fine-tune the gameplay and functionality, delivering two multiplayer modes—one in which players work together to kill monsters and demons, and the other where players try to kill each other (usually without monsters around). The first mode was called co-op, short for cooperative. The second, of course, was deathmatch.

Another important word needed to be coined. Deathmatch was all about getting the highest kill count in a game to be judged the winner. What would we call each kill? Well, we could call it a kill, but that felt like a less creative solution to me. Why don’t we have our own word? I went to the art room to discuss this with Kevin and Adrian.

“Hey guys, for each kill in a deathmatch we need a word for it that is not ‘kill,’” I said.

Kevin said, “Well, maybe we could use the word ‘frag.’”

“That sounds like a cool word, but what does it mean?” I asked.

“In the Vietnam War,” Kevin explained, “if a sergeant told his fire team to do something horrifically dangerous, instead of agreeing to it, they would throw a fragmentation grenade at the sergeant and call it friendly fire. The explanation was ‘Someone fragged the sarge!’”

“So, in a deathmatch we’re all fragging each other!” I said.

“Exactly.”

And that is how “frag” entered the DOOM lexicon. 

The introduction of deathmatch and co-op play profoundly affected the possibility space of gameplay in the levels. Crafting an enjoyable level for single-player mode with lots of tricks and traps was complex enough, but with the addition of multiplayer we had to be aware of other players in the level at the same time, and we had to make sure the single-player-designed level was fun to play in these new modes. Our levels were doing triple duty, and we had little time to test every possible situation, so we needed some simple rules to ensure quality. Since multiplayer gameplay was coming in quickly near the end of development, I had to define all the gameplay rules for co-op and deathmatch. We then had to modify every game map so that all modes worked in all difficulty levels. These are the rules I came up with quickly to help guide level quality:

  • Multiplayer Rule 1: A player should not be able to get stuck in an area without the possibility of respawning.

  • Multiplayer Rule 2: Multiple players (deathmatch or co-op mode) require more items; place extra health, ammo, and powerups.

  • Multiplayer Rule 3: Try to evenly balance weapon locations in deathmatch.

  • Multiplayer Rule 4: In deathmatch mode, try to place all the weapons in the level regardless of which level you’re in.

Additionally, we had to make all the final elements for the game: the intermissions and various menus had to be designed, drawn, and coded; the installation files needed to be created, along with the text instruction files, too. We also had to write code to allow gamers to play these multiplayer modes over their modems, since that was the hardware many people had in 1993. Compared to our previous games, the development pace on DOOM had been relatively relaxed, but in November our to-do list was crowded. Fortunately, everything fell into place. The last job for everyone was to stress-test DOOM.

Preparing for release, we knew we needed someone to handle our customer support, so earlier in the year, we’d hired Shawn Green, who quit his job at Apogee to join us. Throughout development, at every new twist and turn, we kept Shawn up to date. He had to know the game inside out to assist gamers should any issues arise. Shawn also helped us by testing the game as it went through production.

I noted earlier that id Software never had a Quality Assurance team to test our releases. For three years, John, Tom, and I doubled as the id QA team. We played our games on our PCs, pounding multiple keys, literally banging on keyboards to see if our assaults could affect the game. On the verge of release, and with more people than ever before in the office, we spent thirty hours playing DOOM in every way we could think of—switching modes, hitting commands—running the game on every level in every game mode we had, using every option we added to the game to see if there were any glitches.

Things were looking good. We decided to run one last “burn-in” test, a classic test for games where the developers turn the game on and let it run overnight. We ran DOOM on every machine in the office. The plan was to let it run for hours to see if anything bad happened. After about two hours of being idle, the game froze on a couple screens. The computers seemed to be okay—if you hit “escape” the menu came up—but the game stopped running.

We hadn’t seen a bug like this during development, but Carmack was on the case. He was thinking and not saying a word, evidently poring over the invisible engine map in his head. Ten minutes passed before he figured it out. He concluded that we were using the timing chip in the PC to track the refresh of the screen and process sound, but we weren’t clearing the timing chip counter when the game started, which was causing the glitch. Ironically, this logic had been part of the engine from day one, so it was surprising we hadn’t noticed it before.
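Romero doesn’t show the code, but the class of bug he describes, a free-running timer counter that is never zeroed when the game starts, is easy to illustrate. The sketch below is a hypothetical reconstruction in C, not id’s actual fix: the function names are invented, the two-hour idle period is borrowed from the anecdote purely for illustration, and 35 tics per second is DOOM’s standard tic rate. The repair has the same shape as the one described: snapshot the counter at startup and measure elapsed tics against that baseline instead of assuming the count began at zero.

/* A hypothetical illustration of an uncleared hardware timer counter.
 * If game logic assumes the count started at zero, a machine that has been
 * idling for hours hands the game a huge "elapsed tics" value. */
#include <stdio.h>
#include <stdint.h>

static uint32_t timer_ticks;       /* bumped by the (simulated) timer interrupt */
static uint32_t game_start_ticks;  /* snapshot taken when the game starts */

static void timer_interrupt(void) { timer_ticks++; }

/* Buggy version: assumes the counter began at zero when the game launched. */
static uint32_t elapsed_tics_buggy(void) { return timer_ticks; }

/* Fixed version: clear/baseline the counter at startup. */
static void     game_init(void)          { game_start_ticks = timer_ticks; }
static uint32_t elapsed_tics_fixed(void) { return timer_ticks - game_start_ticks; }

int main(void)
{
    /* Simulate the machine sitting idle for two hours at 35 tics per second. */
    const uint32_t idle_ticks = 2UL * 60 * 60 * 35;
    for (uint32_t i = 0; i < idle_ticks; i++)
        timer_interrupt();

    game_init();          /* the game finally starts */
    timer_interrupt();    /* one real tic of gameplay */

    printf("buggy elapsed tics: %u\n", (unsigned)elapsed_tics_buggy()); /* 252001 */
    printf("fixed elapsed tics: %u\n", (unsigned)elapsed_tics_fixed()); /* 1 */
    return 0;
}

Whether the real engine tripped over an overflow or a bad elapsed-time calculation, the cure Romero recounts amounts to the same idea: treat the timer as free-running and reset your own baseline when the game starts.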

He sat down at his computer, fixed the bug, and made a new build of the game. We put the update on all the machines and held our breath for the next two hours.

Problem solved.

That was the last hurdle. We were ready to launch. That day, December 10, would be DOOM Day.

***

* IPX is an acronym for Internetwork Packet Exchange. In sum, it is a way in which computers can talk to one another.

This article originally appeared on Engadget at https://www.engadget.com/hitting-the-books-doom-guy-john-romero-abrams-press-143005383.html?src=rss

Some OnePlus smartphones are nearly 20 percent off, hitting record low prices

A pair of popular OnePlus smartphones just went on sale, hitting record low prices for both. The company’s flagship OnePlus 11 5G went down from $700 to $600, a savings of $100. The budget-friendly OnePlus Nord N30 5G got even, well, friendlier with a $50 discount, dropping the cost to $250 from $300. If you’re shopping for a smartphone, this is a good time to take the plunge.

We praised the OnePlus 11 as a “back-to-basics flagship smartphone,” noting its gorgeous 120Hz 6.7-inch OLED display, the fantastic battery life, 100W quick-charging and improved camera system when compared to its predecessor. In other words, the 11 was already a bargain at $700, as modern iPhones and Samsung phones cost upwards of $1,000. Today’s sale makes the bargain even harder to resist.

The OnePlus Nord N30 takes a more modest approach, as this is absolutely a low-priced smartphone rather than a flagship. However, it’s one of the best budget-friendly phones around and a great choice for anyone looking for a no-frills device that gets the job done. The specs are fantastic for the price, with a Snapdragon 695 processor, 8GB of RAM, 128GB of storage and a crisp 120Hz IPS display. Not many cheap phones can match this set of features.

These phones aren’t perfect, as the N30 lacks waterproofing and the 11 isn’t the most exciting flagship model in the world, but the list of pros far outweighs the list of cons. OnePlus phones aren’t widely available at retail outlets, so this sale is limited to Amazon and Best Buy.


This article originally appeared on Engadget at https://www.engadget.com/some-oneplus-smartphones-are-nearly-20-percent-off-hitting-record-low-prices-184540056.html?src=rss

Hitting the Books: Why we haven’t made the ‘Citizen Kane’ of gaming

Steven Spielberg’s wholesome sci-fi classic, E.T. the Extra-Terrestrial, became a cultural touchstone following its release in 1982. The film’s hastily developed (as in, “you have five weeks to get this to market”) Atari 2600 tie-in game became a cultural touchstone for entirely different reasons.

In his new book, The Stuff Games Are Made Of, Pippin Barr, an experimental game maker and assistant professor in design and computation arts at Concordia University in Montreal, deconstructs the game design process using an octet of his own previous projects to shed light on specific aspects of how games could better be put together. In the excerpt below, Dr. Barr muses on what makes good cinema versus good games and why the storytelling goals of those two mediums may not necessarily align.

[Cover image: MIT Press]

Excerpted from The Stuff Games Are Made Of by Pippin Barr. Reprinted with permission from The MIT Press. Copyright 2023.


In the Atari 2600 video game version of the film E.T. the Extra-Terrestrial (Spielberg 1982), also called E.T. the Extra-Terrestrial (Atari 1982), the defining experience is falling into a pit. It’s cruelly fitting, then, that hundreds of thousands of the game’s physical cartridges were buried in a landfill in 1983. Why? It was one of the most spectacular failures in video game history. Why? It’s often put front and center as the worst game of all time. Why? Well, when you play it, you keep falling into a pit, among other things …

But was the video game E.T. so terrible? In many ways it was a victim of the video game industry’s voracious hunger for “sure fire” blockbusters. One strategy was to adapt already-popular movies like Raiders of the Lost Ark or, yes, E.T. the Extra-Terrestrial. Rushed to market with a development time of only five weeks, the game inevitably lacked the careful crafting of action-oriented gameplay backed by audience testing that other Atari titles had. I would argue, though, that its creator, Howard Scott Warshaw, found his way into a more truthful portrayal of the essence of the film than you might expect.

Yes, in the game E.T. is constantly falling into pits as he flees scientists and government agents. Yes, the game is disorienting in terms of understanding what to do, with arcane symbols and unclear objectives. But on the other hand, doesn’t all that make for a more poignant portrayal of E.T.’s experience, stranded on an alien planet, trying to get home? What if E.T. the Extra-Terrestrial is a good adaptation of the film, and just an unpopular video game?

The world of video games has admired the world of film from the beginning. This has led to a long-running conversation between game design and the audiovisual language of cinema, from cutscenes to narration to fades and more. In this sense, films are one of the key materials games are made of. However, even video games’ contemporary dominance of the revenue competition has not been quite enough to soothe a nagging sense that games just don’t measure up. Roger Ebert famously (and rather overbearingly) claimed that video games could “never be art,” and although we can mostly laugh about it now that we have games like Kentucky Route Zero and Disco Elysium, it still hurts. What if Ebert was right in the sense that video games aren’t as good at being art as cinema is?

Art has seldom been on game studios’ minds in making film adaptations. From Adventures of Tron for the Atari 2600 to Toy Story Drop! on today’s mobile devices, the video game industry has continually tried for instant brand recognition and easy sales via film. Sadly, the resulting games tend just to lay movie visuals and stories over tried-and-true game genres such as racing, fighting, or match 3. And the search for films that are inherently “video game-y” hasn’t helped much either. In Marvel’s Spider-Man: Miles Morales, Spider-Man ends up largely as a vessel for swinging and punching, and you certainly can’t participate in Miles’s inner life. So what happened to the “Citizen Kane of video games”?

A significant barrier has been game makers’ obsession with the audiovisual properties of cinema, the specific techniques, rather than some of the deeply structural or even philosophical opportunities. Film is exciting because of the ways it unpacks emotion, represents space, deploys metaphor, and more. To leverage the stuff of cinema, we need to take a close look at these other elements of films and explore how they might become the stuff of video games too. One way to do that in an organized way is to focus on adaptation, which is itself a kind of conversation between media that inevitably reveals much about both. And if you’re going to explore film adaptation to find the secret recipe, why not go with the obvious? Why not literally make Citizen Kane (Welles 1941) into a video game? Sure, Citizen Kane is not necessarily the greatest film of all time, but it certainly has epic symbolic value. Then again, Citizen Kane is an enormous, complex film with no car chases and no automatic weapons. Maybe it’s a terrible idea.

As video games have ascended to a position of cultural and economic dominance in the media landscape, there has been a temptation to see film as a toppled Caesar, with video games in the role of a Mark Antony who has “come to bury cinema, not to praise it.” But as game makers, we haven’t yet mined the depths offered by cinema’s rich history and its exciting contemporary voices. Borrowing cinema’s visual language of cameras, points of view, scenes, and so on was a crucial step in figuring out how video games might be structured, but the stuff of cinema has more to say than that. Citizen Kane encourages us to embrace tragedy and a quiet ending. The Conversation shows us that listening can be more powerful than action. Beau Travail points toward the beauty of self-expression in terrible times. Au Hasard Balthazar brings the complex weight of our own responsibilities to the fore.

There’s nothing wrong with an action movie or an action video game, but I suggest there’s huge value in looking beyond the low-hanging fruit of punch-ups and car chases to find genuinely new cinematic forms for the games we play. I’ll never play a round of Combat in the same way, thanks to the specter of Travis Bickle psyching himself up for his fight against the world at large. It’s time to return to cinema in order to think about what video games have been and what they can be. Early attempts to adapt films into games were perhaps “notoriously bad” (Fassone 2020), but that approach remains the most direct way for game designers to have a conversation with the cinematic medium and to come to terms with its potential. Even if we accept the idea that E.T. was terrible, which I don’t, it was also different and new.

This is bigger than cinema, though, because we’re really talking about adaptation as a form of video game design. While cinema (and television) is particularly well matched, all other media from theater to literature to music are teeming with ideas still untried in the youthful domain of video games. One way to fast-track experimentation is of course to adapt plays, poems, and songs. To have those conversations. There can be an air of disdain for adaptations compared to originals, but I’m with Linda Hutcheon (2012, 9) who asserts in A Theory of Adaptation that “an adaptation is a derivation that is not derivative — a work that is second without being secondary.” As Jay Bolter and Richard Grusin (2003, 15) put it, “what is new about new media comes from the particular ways in which they refashion older media.” This is all the more so when the question is how to adapt a specific work in another medium, where, as Hutcheon claims, “the act of adaptation always involves both (re-)interpretation and then (re-)creation.” That is, adaptation is inherently thoughtful and generative; it forces us to come to terms with the source materials in such a direct way that it can lay our design thinking bare—the conversation is loud and clear. As we’ve seen, choosing films outside the formulas of Hollywood blockbusters is one way to take that process of interpretation and creation a step further by exposing game design to more diverse cinematic influences.

Video games are an incredible way to explore not just the spaces we see on-screen, but also “the space of the mind.” When a game asks us to act as a character in a cinematic world, it can also ask us to think as that character, to weigh our choices with the same pressures and history they are subject to. Hutcheon critiques games’ adaptive possibilities on the grounds that their programming has “an even more goal-directed logic than film, with fewer of the gaps that film spectators, like readers, fill in to make meaning.” To me, this seems less like a criticism and more like an invitation to make that space. Quiet moments in games, as in films, may not be as exhilarating as a shoot-out, but they can demand engagement in a way that a shoot-out can’t. Video games are ready for this.

The resulting games may be strange children of their film parents, but they’ll be interesting children too, worth following as they grow up. Video game film adaptations will never be films, nor should they be—they introduce possibilities that not only recreate but also reimagine cinematic moments. The conversations we have with cinema through adaptation are ways to find brand new ideas for how to make games. Even the next blockbuster.

Yeah, cinema, I’m talkin’ to you.

This article originally appeared on Engadget at https://www.engadget.com/hitting-the-books-the-stuff-games-are-made-of-pippin-barr-mit-press-143054954.html?src=rss

Hitting the Books: The thirty-year quest to make WiFi a connectivity reality

The modern world of consumer tech wouldn’t exist as we know it if not for the near-ubiquitous connectivity that Wi-Fi internet provides. It serves as the wireless link bridging our mobile devices and smart home appliances, enabling our streaming entertainment and connecting us to the global internet. 

In his new book, Beyond Everywhere: How Wi-Fi Became the World’s Most Beloved Technology, Greg Ennis, who co-authored the proposal that became the technical basis for Wi-Fi before founding the Wi-Fi Alliance and serving as its VP of Technology for a quarter century, guides readers through the fascinating (and sometimes frustrating) genesis of this now-everyday technology. In the excerpt below, Ennis recounts the harrowing final days of pitching and presentations before the IEEE 802.11 Wireless LAN standards committee ultimately adopted his team’s candidate protocol, and examines the influence that Bob Metcalfe — inventor of Ethernet and founder of 3Com — had on Wi-Fi’s eventual emergence.

white writing on a blue background the V is a WiFi signal strength indicator
Post Hill Press

Excerpted from Beyond Everywhere: How Wi-Fi Became the World’s Most Beloved Technology (c) 2023 by Greg Ennis. Published by Post Hill Press. Used with permission.


With our DFWMAC foundation now chosen, the work for the IEEE committee calmed down into a deliberate process for approving the actual text language for the standard. There were still some big gaps that needed to be filled in—most important being an encryption scheme—but the committee settled into a routine of developing draft versions of the MAC sections of the ultimate standard document. At the January 1994 meeting in San Jose, I was selected to be Technical Editor of the entire (MAC+PHY) standard along with Bob O’Hara, and the two of us would continue to serve as editors through the first publication of the final standard in 1997. 

The first draft of the MAC sections was basically our DFWMAC specification reformatted into the IEEE template. The development of the text was a well-established process within IEEE standards committees: as Bob and I would complete a draft, the members of the committee would submit comments, and at the subsequent meeting, there would be debates and decisions on improvements to the text. There were changes made to the packet formats, and detailed algorithmic language was developed for the operations of the protocol, but by and large, the conceptual framework of DFWMAC was left intact. In fact, nearly thirty years after DFWMAC was first proposed, its core ideas continue to form the foundation for Wi-Fi.

 While this text-finalization process was going on, the technology refused to stand still. Advances in both radio communications theory and circuit design meant that higher speeds might be possible beyond the 2-megabit maximum in the draft standard. Many companies within the industry were starting to look at higher speeds even before the original standard was finally formally adopted in 1997. Achieving a speed greater than 10 megabits — comparable to standard Ethernet — had become the wireless LAN industry’s Holy Grail. The challenge was to do this while staying within the FCC’s requirements — something that would require both science and art. 

Faster is always better, of course, but what was driving the push for 10 megabits? What wireless applications were really going to require 10-megabit speeds? The dominant applications for wireless LANs in the 1990s were the so-called “verticals” — for example, Symbol’s installations that involved handheld barcode scanners for inventory management. Such specialized wireless networks were installed by vertically integrated system providers offering a complete service package, including hardware, software, applications, training, and support, hence the “vertical” nomenclature. While 10-megabit speeds would be nice for these vertical applications, it probably wasn’t necessary, and if the cost were to go up, such speeds wouldn’t be justifiable. So instead, it would be the so-called “horizontal” market — wireless connectivity for general purpose computers — that drove this need for speed. In particular, the predominantly Ethernet-based office automation market, with PCs connected to shared printers and file servers, was seen as requiring faster speeds than the IEEE standard’s 2 megabits.

Bob Metcalfe is famous in the computer industry for three things: Ethernet, Metcalfe’s Law, and 3Com. He co-invented Ethernet; that’s simple enough and would be grounds for his fame all by itself. Metcalfe’s Law — which, of course, is not actually a law of physics but nonetheless seems to have real explanatory power — states that the value of a communication technology is proportional to the square of the number of connected devices. This intuitively plausible “law” explains the viral snowball effect that can result from the growing popularity of a network technology. But it would be Metcalfe’s 3Com that enters into our Wi-Fi story at this moment.
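As a rough illustration of that quadratic scaling, here is a minimal Python sketch; the proportionality constant and the device counts are hypothetical, chosen only to show how doubling the number of connected devices roughly quadruples a network's notional value under Metcalfe's Law.

```python
# Minimal sketch of Metcalfe's Law as described above: network value is
# taken to be proportional to the square of the number of connected
# devices. The constant k and the device counts are hypothetical.

def metcalfe_value(devices: int, k: float = 1.0) -> float:
    """Notional value of a network with `devices` connected devices."""
    return k * devices ** 2

if __name__ == "__main__":
    for n in (1_000, 2_000, 4_000):
        # Each doubling of devices roughly quadruples the notional value.
        print(f"{n:>6} devices -> value {metcalfe_value(n):,.0f}")
```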

Metcalfe invented Ethernet while working at PARC, the Xerox Palo Alto Research Center. PARC played a key role in developing many of the most important technologies of today, including window-based graphic computer interfaces and laser printing, in addition to Ethernet. But Xerox is famous for “Fumbling the Future,” also the title of a 1999 book documenting how “Xerox invented, then ignored, the first personal computer,” since the innovations developed at PARC generally ended up being commercialized not by Xerox but by Apple and others. Not surprisingly, Metcalfe decided he needed a different company to take his Ethernet invention to the market, and in 1979, he formed 3Com with some partners.

This was the same year I joined Sytek, which had been founded just a couple of months prior. Like 3Com, Sytek focused on LAN products, although based on broadband cable television technology in contrast to 3Com’s Ethernet. But whereas Sytek concentrated on hardware, 3Com decided to also develop their own software supporting new LAN-based office applications for shared PC access to data files and printers. With these software products in combination with their Ethernet technology, 3Com became a dominant player in the booming office automation market during the nineties that followed the introduction of personal computers. Bob Metcalfe was famously skeptical about wireless LANs. In the August 16, 1993, issue of InfoWorld, he wrote up his opinion in a piece entitled “Wireless computing will flop — permanently”:

This isn’t to say there won’t be any wireless computing. Wireless mobile computers will eventually be as common as today’s pipeless mobile bathrooms. Porta-potties are found on planes and boats, on construction sites, at rock concerts, and other places where it is very inconvenient to run pipes. But bathrooms are still predominantly plumbed. For more or less the same reasons, computers will stay wired.

Was his comparison of wireless to porta-potties just sour grapes? After all, this is coming from the inventor of Ethernet, the very archetype of a wired network. In any event, we were fortunate that Metcalfe was no longer involved with 3Com management in 1996 — because 3Com now enters our story as a major catalyst for the development of Wi-Fi. 

3Com’s strategy for wireless LANs was naturally a subject of great interest, as whatever direction they decided to take was going to be a significant factor in the market. As the premier Ethernet company with a customer base that was accustomed to 10-megabit speeds, it was clear that they wouldn’t take any steps unless the wireless speeds increased beyond the 2 megabits of the draft IEEE standard. But might they decide to stay out of wireless completely, like Bob Metcalfe counselled, to focus on their strong market position with wired Ethernet? And if they did decide to join the wireless world, would they develop their own technology to accomplish this? Or would they partner with an existing wireless developer? The task of navigating 3Com through this twisted path would fall to a disarmingly boyish business development whiz named Jeff Abramowitz, who approached me one afternoon quite unexpectedly. 

Jeff tapped me on the shoulder at an IEEE meeting. “Hey, Greg, can I talk with you for a sec?” he whispered, and we both snuck quietly out of the meeting room. “Just wondering if you have any time available to take on a new project.” He didn’t even give me a chance to respond before continuing with a smile: “10 megabits. Wireless Ethernet.” The idea of working with the foremost Ethernet company on a high-speed version of 802.11 obviously enticed me, and I quickly said, “Let’s get together next week.”

He told me that they had already made some progress towards an internally developed implementation, but that in his opinion, it was more promising for them to partner with one of the major active players. 3Com wanted to procure a complete system of  wireless LAN products that they could offer to their customer base, comprising access points and plug-in adapters (“client devices”) for both laptops and desktops. There would need to be a Request for Proposal developed, which would, of course, include both technical and business requirements, and Jeff looked to me to help formulate the technical requirements. The potential partners included Symbol, Lucent, Aironet, InTalk, and Harris Semiconductor, among others, and our first task was to develop this RFP to send out to these companies. 

Symbol should need no introduction, having been my client and having played a major role in the development of the DFWMAC protocol that was selected as the foundation for the 802.11 standard. Lucent may sound like a new player, but in fact, this is simply our NCR Dutch colleagues from Utrecht — including Wim, Cees, Vic, and Bruce — under a new corporate name, NCR having been first bought by AT&T and then spun off into Lucent. Aironet is similarly an old friend under a new name — back at the start of our story, we saw that the very first wireless LAN product approved by the FCC was from a Canadian company called Telesystems, which eventually was merged into Telxon, with Aironet then being the result of a 1994 spinoff focusing on the wireless LAN business. And in another sign of the small-world nature of the wireless LAN industry at this time, my DFWMAC co-author, Phil Belanger, had moved from Xircom to Aironet in early 1996. 

The two companies here who are truly new to our story are InTalk and Harris. InTalk was a small startup founded in 1996 in Cambridge, England (and then subsequently acquired by Nokia), whose engineers were significant contributors to the development of the final text within the 802.11 standard. Harris Corporation was a major defense contractor headquartered in Melbourne, Florida, who leveraged their radio system design experience into an early wireless LAN chip development project. Since they were focused on being a chip supplier rather than an equipment manufacturer, we didn’t expect them to submit their own proposal, but it was likely that other responders would incorporate their chips, so we certainly viewed them as an important player. 

Over the first couple of months in 1997, Jeff and I worked up a Request for Proposal for 3Com to send out, along with a 3Com engineer named David Fisher, and by March we were able to provide the final version to various candidate partners. Given 3Com’s position in the general LAN market, the level of interest was high, and we indeed got a good set of proposals back from the companies we expected, including Symbol, Lucent, InTalk, and Aironet. These companies, along with Harris, quickly became our focus, and we began a process of intense engagement with all of them over the next several months, building relationships in the process that a year later would ultimately lead to the formation of the Wi-Fi Alliance. 

Bob Metcalfe’s wireless skepticism had been soundly rejected by the very company he founded, with 3Com instead adopting the mantle of wireless evangelism. And Wireless Ethernet, soon to be christened Wi-Fi, was destined to outshine its wired LAN ancestor.

This article originally appeared on Engadget at https://www.engadget.com/hitting-the-books-beyond-everywhere-greg-ennis-post-hill-press-143010153.html?src=rss

Hitting the Books: In England’s industrial mills, even the clocks worked against you

America didn’t get around to really addressing child labor until the late ’30s, when Roosevelt’s New Deal took hold and the Public Contracts Act raised the minimum age to 16. Before then, kids could often look forward to spending the majority of their days doing some of the most dangerous and delicate work required on the factory floor. It’s something today’s kids can look forward to as well.

In Hands of Time: A Watchmaker’s History, venerated watchmaker Rebecca Struthers explores how the practice and technology of timekeeping have shaped and molded the modern world through her examination of history’s most acclaimed timepieces. In the excerpt below, however, we take a look at 18th- and 19th-century Britain, where timekeeping was used as a means of social coercion to keep both adult and child workers pliant and productive.

it looks like the inner workings of an intricate timepiece with the title written around the outer bezel edge
HarperCollins

Excerpted from Hands of Time: A Watchmaker’s History by Rebecca Struthers. Published by Harper. Copyright © 2023 by Rebecca Struthers. All rights reserved.


Although Puritanism had disappeared from the mainstream in Europe by the time of the Industrial Revolution, industrialists, too, preached redemption through hard work — lest the Devil find work for idle hands to do. Now, though, the goal was productivity as much as redemption, although the two were often conveniently conflated. To those used to working by the clock, the provincial workers’ way of time appeared lazy and disorganized and became increasingly associated with unchristian, slovenly ways. Instead ‘time thrift’ was promoted as a virtue, and even as a source of health. In 1757, the Irish statesman Edmund Burke argued that it was ‘excessive rest and relaxation [that] can be fatal producing melancholy, dejection, despair, and often self-murder’ while hard work was ‘necessary to health of body and mind’.

Historian E.P. Thompson, in his famous essay ‘Time, Work-Discipline and Industrial Capitalism’, poetically described the role of the watch in eighteenth-century Britain as ‘the small instrument which now regulated the rhythms of industrial life’. It’s a description that, as a watchmaker, I particularly enjoy, as I’m often ‘regulating’ the watches I work on — adjusting the active hairspring length to get the watch running at the right rate — so they can regulate us in our daily lives. For the managerial classes, however, their watches dictated not just their own lives but also those of their employees.

In 1850 James Myles, a factory worker from Dundee, wrote a detailed account of his life working in a spinning mill. James had lived in the countryside before relocating to Dundee with his mother and siblings after his father was sentenced to seven years’ transportation to the colonies for murder. James was just seven years old when he managed to get a factory job, a great relief to his mother as the family were already starving. He describes stepping into ‘the dust, the din, the work, the hissing and roaring of one person to another’. At a nearby mill the working day ran for seventeen to nineteen hours and mealtimes were almost dispensed with in order to eke the very most out of their workers’ productivity, ‘Women were employed to boil potatoes and carry them in baskets to the different flats; and the children had to swallow a potato hastily … On dinners cooked and eaten as I have described, they had to subsist till half past nine, and frequently ten at night.’ In order to get workers to the factory on time, foremen sent men round to wake them up. Myles describes how ‘balmy sleep had scarcely closed their urchin eyelids, and steeped their infant souls in blessed forgetfulness, when the thumping of the watchmen’s staff on the door would rouse them from repose, and the words “Get up; it’s four o’clock,” reminded them they were factory children, the unprotected victims of monotonous slavery.’

Human alarm clocks, or ‘knocker-uppers’, became a common sight in industrial cities.* If you weren’t in possession of a clock with an alarm (an expensive complication at the time), you could pay your neighborhood knocker-upper a small fee to tap on your bedroom windows with a long stick, or even a pea shooter, at the agreed time. Knocker-uppers tried to concentrate as many clients within a short walking distance as they could, but were also careful not to knock too hard in case they woke up their customer’s neighbors for free. Their services became more in demand as factories increasingly relied on shift work, expecting people to work irregular hours.

Once in the workplace, access to time was often deliberately restricted and could be manipulated by the employer. By removing all visible clocks other than those controlled by the factory, the only person who knew what time the workers had started and how long they’d been going was the factory master. Shaving time off lunch and designated breaks and extending the working day for a few minutes here and there was easily done. As watches started to become more affordable, those who were able to buy them posed an unwelcome challenge to the factory master’s authority.

An account from a mill worker in the mid-nineteenth century describes how: ‘We worked as long as we could see in the summer time, and I could not say what hour it was when we stopped. There was nobody but the master and the master’s son who had a watch, and we did not know the time. There was one man who had a watch … It was taken from him and given into the master’s custody because he had told the men the time of day …’

James Myles tells a similar story: ‘In reality there were no regular hours: masters and managers did with us as they liked. The clocks at factories were often put forward in the morning and back at night, and instead of being instruments for the measurement of time, they were used as cloaks for cheatery and oppression. Though it is known among the hands, all were afraid to speak, and a workman then was afraid to carry a watch, as it was no uncommon event to dismiss anyone who presumed to know too much about the science of Horology.’

Time was a form of social control. Making people start work at the crack of dawn, or even earlier, was seen as an effective way to prevent working-class misbehavior and help them to become productive members of society. As one industrialist explained, ‘The necessity of early rising would reduce the poor to a necessity of going to Bed bedtime; and thereby prevent the Danger of Midnight revels.’ And getting the poor used to temporal control couldn’t start soon enough. Even children’s anarchic sense of the present should be tamed and fitted to schedule. In 1770 English cleric William Temple had advocated that all poor children should be sent from the age of four to workhouses, where they would also receive two hours of schooling a day. He believed that there was:

considerable use in their being, somehow or other, constantly employed for at least twelve hours a day, whether [these four-year-olds] earn their living or not; for by these means, we hope that the rising generation will be so habituated to constant employment that it would at length prove agreeable and entertaining to them …

Because we all know how entertaining most four-year-olds would find ten hours of hard labor followed by another two of schooling. In 1772, in an essay distributed as a pamphlet entitled A View of Real Grievances, an anonymous author added that this training in the ‘habit of industry’ would ensure that, by the time a child was just six or seven, they would be ‘habituated, not to say naturalized to Labour and Fatigue.’ For those readers with young children looking for further tips, the author offered examples of the work most suited to children of ‘their age and strength’, chief being agriculture or service at sea. Appropriate tasks to occupy them include digging, plowing, hedging, chopping wood and carrying heavy things. What could go wrong with giving a six-year-old an ax or sending them off to join the navy?

The watch industry had its own branch of exploitative child labour in the form of what is known as the Christchurch Fusee Chain Gang. When the Napoleonic Wars caused problems with the supply of fusee chains, most of which came from Switzerland, an entrepreneurial clockmaker from the south coast of England, called Robert Harvey Cox, saw an opportunity. Making fusee chains isn’t complicated, but it is exceedingly fiddly. The chains, similar in design to a bicycle chain, are not much thicker than a horse’s hair, and are made up of links that are each stamped by hand and then riveted together. To make a section of chain the length of a fingertip requires seventy-five or more individual links and rivets; a complete fusee chain can be the length of your hand. One book on watchmaking calls it ‘the worst job in the world’. Cox, however, saw it as perfect labor for the little hands of children and, when the Christchurch and Bournemouth Union Workhouse opened in 1764 down the road from him to provide accommodation for the town’s poor, he knew where to go looking. At its peak, Cox’s factory employed around forty to fifty children, some as young as nine, under the pretext of preventing them from being a financial burden. Their wages, sometimes less than a shilling a week (around £3 today), were paid directly to their workhouse. Days were long and, although they appear to have had some kind of magnification to use, the work could cause headaches and permanent damage to their eyesight. Cox’s factory was followed by others, and Christchurch, this otherwise obscure market town on the south coast, would go on to become Britain’s leading manufacturer of fusee chains right up until the outbreak of the First World War in 1914.

The damage industrial working attitudes to time caused to poor working communities was very real. The combination of long hours of hard labor, in often dangerous and heavily polluted environments, with disease and malnutrition caused by abject poverty, was toxic. Life expectancy in some of the most intensive manufacturing areas of Britain was incredibly low. An 1841 census of the Black Country parish of Dudley in the West Midlands found that the average was just sixteen years and seven months.

This article originally appeared on Engadget at https://www.engadget.com/hitting-the-books-hands-of-time-rebecca-struthers-harper-143034889.html?src=rss

Hitting the Books: The dangerous real-world consequences of our online attention economy

If reality television has taught us anything, it’s that there’s not much people won’t do if offered enough money and attention. Sometimes, even just the latter. Unfortunately for the future prospects of our civilization, modern social media has seized on those same character foibles and optimized them at a global scale, sacrificing them at the altar of audience growth and engagement. In Outrage Machine, writer and technologist Tobias Rose-Stockwell walks readers through the inner workings of these modern technologies, illustrating how they’re designed to capture and keep our attention, regardless of what they have to do to get it. In the excerpt below, Rose-Stockwell examines the human cost of feeding the content machine through a discussion of YouTube personality Nikocado Avocado’s rise to internet stardom.

 

lots of angry faces, black text white background
Legacy Lit

Excerpted from OUTRAGE MACHINE: How Tech Amplifies Discontent, Disrupts Democracy—And What We Can Do About It by Tobias Rose-Stockwell. Copyright © 2023 by Tobias Rose-Stockwell. Reprinted with permission of Legacy Lit. All rights reserved.


This Game Is Not Just a Game

Social media can seem like a game. When we open our apps and craft a post, the way we look to score points in the form of likes and followers distinctly resembles a strange new playful competition. But while it feels like a game, it is unlike any other game we might play in our spare time.

The academic C. Thi Nguyen has explained how games are different: “Actions in games are screened off, in important ways, from ordinary life. When we are playing basketball, and you block my pass, I do not take this to be a sign of your long-term hostility towards me. When we are playing at having an insult contest, we don’t take each other’s speech to be indicative of our actual attitudes or beliefs about the world.” Games happen in what the Dutch historian Johan Huizinga famously called “the magic circle”— where the players take on alternate roles, and our actions take on alternate meanings.

With social media we never exit the game. Our phones are always with us. We don’t extricate ourselves from the mechanics. And since the goal of the game designers of social media is to keep us there as long as possible, it’s an active competition with real life. With a constant type of habituated attention being pulled into the metrics, we never leave these digital spaces. In doing so, social media has colonized our world with its game mechanics.

Metrics are Money

While we are paid in the small rushes of dopamine that come from accumulating abstract numbers, metrics also translate into hard cash. Acquiring these metrics doesn’t just provide us with hits of emotional validation. They are transferable into economic value that is quantifiable and very real.

It’s no secret that the ability to consistently capture attention is an asset that brands will pay for. A follower is a tangible, monetizable asset worth money. If you’re trying to purchase followers, Twitter will charge you between $2 and $4 to acquire a new one using their promoted accounts feature.

If you have a significant enough following, brands will pay you to post sponsored items on their behalf. Depending on the size of your following on Instagram, for instance, these payouts can range from $75 per post (to an account with two thousand followers), up to hundreds of thousands of dollars per post (for accounts with hundreds of thousands of followers).

Between 2017 and 2021, the average cost for reaching a thousand Twitter users (the metric advertisers use is CPM, or cost per mille) was between $5 and $7. It costs that much to get a thousand eyeballs on your post. Any strategies that increase how much your content is shared also have a financial value.

Let’s now bring this economic incentive back to Billy Brady’s accounting of the engagement value of moral outrage. He found that adding a single moral or emotional word to a post on Twitter increased the viral spread of that content by 17 percent per word. All of our posts to social media exist in a marketplace for attention — they vie for the top of our followers’ feeds. Our posts are always competing against other people’s posts. If outraged posts have an advantage in this competition, they are literally worth more money.
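To make the arithmetic concrete, here is a minimal Python sketch that combines the figures quoted above: a CPM of roughly $5 to $7 per thousand impressions and Brady’s finding of about 17 percent more viral spread per moral-emotional word. The baseline reach, the mid-range CPM of $6, and the choice to compound the per-word boost are illustrative assumptions, not figures from the book.

```python
# Back-of-the-envelope sketch of the attention economics described above.
# Assumptions (not from the book): a hypothetical baseline reach of
# 100,000 impressions, a mid-range CPM of $6, and a compounding ~17%
# lift in spread for each moral or emotional word added to a post.

def boosted_reach(base_impressions: float, moral_words: int, lift: float = 0.17) -> float:
    """Expected impressions after applying the per-word outrage lift."""
    return base_impressions * (1 + lift) ** moral_words

def attention_value(impressions: float, cpm_dollars: float = 6.0) -> float:
    """Dollar value of that reach, priced at advertiser CPM rates."""
    return impressions / 1000.0 * cpm_dollars

if __name__ == "__main__":
    base = 100_000  # hypothetical baseline reach
    for words in (0, 1, 3):
        reach = boosted_reach(base, words)
        print(f"{words} moral words -> {reach:,.0f} impressions, ~${attention_value(reach):,.2f}")
```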

For a brand or an individual, if you want to increase the value of a post, then including moral outrage, or linking to a larger movement that signals its moral conviction, might increase the reach of that content by at least that much. Moreover, it might actually improve the perception and brand affinity by appealing to the moral foundations of the brand’s consumers and employees, increasing sales and burnishing their reputation. This can be an inherently polarizing strategy, as a company that picks a cause to support, whose audience is morally diverse, might then alienate a sizable percentage of their customer base who disagree with that cause. But these economics can also make sense — if a company knows enough about its consumers’ and employees’ moral affiliations — it can make sure to pick a cause-sector that’s in line with its customers.

Since moral content is a reliable tool for capturing attention, it can also be used for psychographic profiling for future marketing opportunities. Many major brands do this with tremendous success — creating viral campaigns that utilize moral righteousness and outrage to gain traction and attention among core consumers who have a similar moral disposition. These campaigns also often get a secondary boost due to the proliferation of pile-ons and think pieces discussing these ad spots. Brands that moralize their products often succeed in the attention marketplace.

This basic economic incentive can help to explain how and why so many brands have begun to link themselves with online cause-related issues. While it may make strong moral sense to those decision-makers, it can make clear economic sense to the company as a whole as well. Social media provides measurable financial incentives for companies to include moral language in their quest to burnish their brands and perceptions.

But as nefarious as this sounds, moralization of content is not always the result of callous manipulation and greed. Social metrics do something else that influences our behavior in pernicious ways.

Audience Capture

In the latter days of 2016, I wrote an article about how social media was diminishing our capacity for empathy. In the wake of that year’s presidential election, the article went hugely viral, and was shared with several million people. At the time I was working on other projects full time. When the article took off, I shifted my focus away from the consulting work I had been doing for years, and began focusing instead on writing full time. One of the by-products of that tremendous signal from this new audience is the book you’re reading right now.

A sizable new audience of strangers had given me a clear message: This was important. Do more of it. When many people we care about tell us what we should be doing, we listen.

This is the result of “audience capture”: how we influence, and are influenced by, those who observe us. We don’t just capture an audience — we are also captured by their feedback. This is often a wonderful thing, provoking us to produce more useful and interesting works. As creators, the signal from our audience is a huge part of why we do what we do.

But it also has a dark side. The writer Gurwinder Boghal has explained the phenomenon of audience capture for influencers by illustrating the story of a young YouTuber named Nicholas Perry. In 2016, Perry began a YouTube channel as a skinny vegan violinist. After a year of getting little traction online, he abandoned veganism, citing health concerns, and shifted to uploading mukbang (eating show) videos of him trying different foods for his followers. These followers began demanding more and more extreme feats of food consumption. Before long, in an attempt to appease his increasingly demanding audience, he was posting videos of himself eating whole fast-food menus in a single sitting.

He found a large audience with this new format. In terms of metrics, this new format was overwhelmingly successful. After several years of following his audience’s continued requests, he amassed millions of followers, and over a billion total views. But in the process, his online identity and physical character changed dramatically as well. Nicholas Perry became the personality Nikocado — an obese parody of himself, ballooning to more than four hundred pounds, voraciously consuming anything his audience asked him to eat. Following his audience’s desires caused him to pursue increasingly extreme feats at the expense of his mental and physical health.

a horrifying before and after
Legacy Lit

Nicholas Perry, left, and Nikocado, right, after several years of building a following on YouTube. Source: Nikocado Avocado YouTube Channel.

Boghal summarizes this cross-directional influence.

When influencers are analyzing audience feedback, they often find that their more outlandish behavior receives the most attention and approval, which leads them to recalibrate their personalities according to far more extreme social cues than those they’d receive in real life. In doing this they exaggerate the more idiosyncratic facets of their personalities, becoming crude caricatures of themselves.

This need not apply only to influencers. We are signal-processing machines. We respond to the types of positive signals we receive from those who observe us. Our online audiences reflect back to us their opinion of our behavior, and we adapt to fit it. The metrics (likes, followers, shares, and comments) available to us now on social media allow us to measure that feedback far more precisely than we previously could, leading us to internalize what is “good” behavior.

As we find ourselves more and more inside of these online spaces, this influence becomes more pronounced. As Boghal notes, “We are all gaining online audiences.” Anytime we post to our followers, we are entering into a process of exchange with our viewers — one that is beholden to the same extreme engagement problems found everywhere else on social media.

This article originally appeared on Engadget at https://www.engadget.com/hitting-the-books-the-dangerous-real-world-consequences-of-our-online-attention-economy-143050602.html?src=rss

Hitting the Books: Sputnik’s radio tech launched a revolution in bird migration research

“Birds fly South for the winter and North for the summer” has historically proven to be only slightly less reliable a maxim than the sun always rising in the East and setting in the West. Humanity has been fascinated by the comings and goings of our avian neighbors for millennia, but the whys and hows of their transitory travel habits remained largely a mystery until recent years. In Flight Paths, science author Rebecca Heisman details the fascinating history of modern bird migration research and the pioneering ornithologists who helped the field take off. In the excerpt below, Heisman recalls the efforts of Dr. Bill Cochran, a trailblazer in radio-tagging techniques, to track his airborne, and actively transmitting, quarry across the Canadian border.

flock of birds in flight over a blue ombre cover
HarperCollins

From Flight Paths, Copyright © 2023 By Rebecca Heisman. Reprinted here with permission of Harper, an imprint of HarperCollins Publishers


Follow That Beep

Swainson’s thrush looks a bit like a small brown version of its familiar cousin the American robin. Its gray-brown back contrasts with a pale, spotted chest and pale “spectacle” markings around its eyes. These thrushes are shy birds that forage for insects in the leaf litter on the forest floor, where they blend in with the dappled light and deep shadows. Birders know them by their fluting, upward-spiraling song, which fills the woods of Canada and the northern United States with ethereal music in summer. But they don’t live there year-round; they spend the winters in Mexico and northern South America, then return north to breed.

On the morning of May 13, 1973, a Swainson’s thrush pausing on its journey from its winter home to its summer home blundered into a mist net in east-central Illinois. The researchers who gently pulled it from the net went through all the usual rituals—weighing and measuring it, clasping a numbered metal band around its leg—but they added one unusual element: a tiny radio transmitter weighing just five-thousandths of an ounce. They carefully trimmed the feathers from a small patch on the bird’s back, then used eyelash glue to cement the transmitter, mounted on a bit of cloth, in place against the bird’s skin. (Generations of ornithologists have learned exactly where to find the eyelash glue at their local cosmetics store. Designed to not irritate the delicate skin of the eyelids when attaching false eyelashes, it doesn’t irritate birds’ skin, either, and wears off after weeks or months.)

When the thrush was released, it probably shuffled its feathers a few times as it got used to its new accessory, then returned to resting and foraging in preparation for continuing its trek. At only around 3 percent of the bird’s total body weight, the transmitter wouldn’t have impeded the bird noticeably as it went about its daily routine. Then, around 8:40 that evening, after the sun had dipped far enough below the horizon that the evening light was beginning to dim, the thrush launched itself into the air, heading northwest.

It would have had no way of knowing that it was being followed. Bill Cochran — the same engineer who, a decade and a half earlier, had rigged up a tape recorder with a bicycle axle and six thousand feet of tape so that Richard Graber could record a full night of nocturnal flight calls — had been waiting nearby in a converted Chevy station wagon with a large antenna poking out of a hole in the roof. When the thrush set out into the evening sky, Cochran and a student named Charles Welling were following on the roads below.

All they could see in the deepening night was the patch of highway illuminated by their headlights, but the sound of the wavering “beep . . . beep . . . beep” of the transmitter joined them to the thrush overhead as if by an invisible thread. They would keep at it for seven madcap nights, following the thrush for more than 930 miles before losing the signal for good in rural southern Manitoba on the morning of May 20.

Along the way, they would collect data on its altitude (which varied from 210 to 6,500 feet), air and ground speed (eighteen to twenty-seven and nine to fifty-two miles per hour, respectively, with the ground speed depending on the presence of headwinds or tailwinds), distance covered each night (65 to 233 miles), and, crucially, its heading. Because they were able to stick with the bird over such a long distance, Cochran and Welling were able to track how the precise direction the bird set out in each night changed as its position changed relative to magnetic north. The gradual changes they saw in its heading were consistent with the direction of magnetic north, providing some of the first real-world evidence that migrating songbirds use some sort of internal magnetic compass as one of their tools for navigation. Today Bill Cochran is a legend among ornithologists for his pioneering work tracking radio-tagged birds on their migratory odysseys. But it wasn’t birds that first drew him into the field of radio telemetry; it was the space race.

From Sputnik to Ducks

In October 1957, the Soviet Union launched the world’s first artificial satellite into orbit. Essentially just a metal sphere that beeped, Sputnik 1 transmitted a radio signal for three weeks before its battery died. (It burned up in the atmosphere in January 1958.) That signal could be picked up by anyone with a good radio receiver and antenna, and scientists and amateur radio enthusiasts alike tracked its progress around and around Earth.

It caused a sensation around the world — including in Illinois, where the University of Illinois radio astronomer George Swenson started following the signals of Sputnik 1 and its successors to learn more about the properties of Earth’s atmosphere. Around 1960, Swenson got permission to design a radio beacon of his own to be incorporated into a Discoverer satellite, the U.S. answer to the Sputnik program. In need of locals with experience in electrical engineering to work on the project, he recruited Bill Cochran (who still had not officially finished his engineering degree — he wouldn’t complete the last class until 1964) to assist.

Cochran, as you may recall, had spent the late 1950s working at a television station in Illinois while studying engineering on the side and spending his nights helping Richard Graber perfect his system for recording nocturnal flight calls. By 1960, no longer satisfied with flight calls alone as a means of learning about migration, Graber had procured a small radar unit and gotten Cochran a part-time job with the Illinois Natural History Survey helping operate it. But along the way, Cochran had apparently demonstrated “exceptional facility with transistor circuits,” which is what got him the job with Swenson. It was the transistor, invented in 1947, that ultimately made both the space race and wildlife telemetry possible.

The beating heart of a radio transmitter is the oscillator, usually a tiny quartz crystal. When voltage is applied to a crystal, it changes shape ever so slightly at the molecular level and then snaps back, over and over again. This produces a tiny electric signal at a specific frequency, but it needs to be amplified before being sent out into the world. Sort of like how a lever lets you turn a small motion into a bigger one, an amplifier in an electrical circuit turns a weak signal into a stronger one.

Before and during World War II, amplifying a signal required controlling the flow of electrons through a circuit using a series of vacuum-containing glass tubes. Vacuum tubes got the job done, but they were fragile, bulky, required a lot of power, and tended to blow out regularly; owners of early television sets had to be adept at replacing vacuum tubes to keep them working. In a transistor, the old-fashioned vacuum tube is replaced by a “semiconductor” material (originally germanium, and later silicon), allowing the flow of electrons to be adjusted up or down by tweaking the material’s conductivity. Lightweight, efficient, and durable, transistors quickly made vacuum tubes obsolete. Today they’re used in almost every kind of electric circuit. Several billion of them are transisting away inside the laptop I’m using to write this.

As transistors caught on in the 1950s, the U.S. Navy began to take a special interest in radio telemetry, experimenting with systems to collect and transmit real-time data on a jet pilot’s vital signs and to study the effectiveness of cold-water suits for sailors. These efforts directly inspired some of the first uses of telemetry for wildlife research. In 1957, scientists in Antarctica used the system from the cold-water suit tests to monitor the temperature of a penguin egg during incubation, while a group of researchers in Maryland borrowed some ideas from the jet pilot project and surgically implanted transmitters in woodchucks. [ed: Although harnesses, collars, and the like are also commonly used for tracking wildlife today, surgically implanting transmitters has its advantages, such as eliminating the chance that an external transmitter will impede an animal’s movements.] Their device had a range of only about twenty-five yards, but it was the first attempt to use radio telemetry to track animals’ movements. The Office of Naval Research even directly funded some of the first wildlife telemetry experiments; navy officials hoped that radio tracking “may help discover the bird’s secret of migration, which disclosure might, in turn, lead to new concepts for the development of advanced miniaturized navigation and detection systems.”

Cochran didn’t know any of this at the time. Nor did he know that the Discoverer satellites he and Swenson were building radio beacons for were, in fact, the very first U.S. spy satellites; he and Swenson knew only that the satellites’ main purpose was classified. Working with a minimal budget, a ten-pound weight limit, and almost no information about the rocket that would carry their creation, they built a device they dubbed Nora-Alice (a reference to a popular comic strip of the time) that launched in 1961. Cochran was continuing his side job with the Illinois Natural History Survey all the while, and eventually someone there suggested trying to use a radio transmitter to track a duck in flight.

“A mallard duck was sent over from the research station on the Illinois River,” Swenson later wrote in a coda to his reminiscences about the satellite project. “At our Urbana satellite-monitoring station, a tiny transistor oscillator was strapped around the bird’s breast by a metal band. The duck was disoriented from a week’s captivity, and sat calmly on the workbench while its signal was tuned in on the receiver. As it breathed quietly, the metal band periodically distorted and pulled the frequency, causing a varying beat note from the receiver.”

Swenson and Cochran recorded those distortions and variations on a chart, and when the bird was released, they found they could track its respiration and wing beats by the changes in the signal; when the bird breathed faster or beat its wings more frequently, the distortions sped up. Without even meaning to, they’d gathered some of the very first data on the physiology of birds in flight.

An Achievement of Another Kind

Bill Cochran enjoys messing with telemarketers. So, when he received a call from a phone number he didn’t recognize, he answered with a particularly facetious greeting.

“Animal shelter! We’re closed!”

“Uh . . . this is Rebecca Heisman, calling for Bill Cochran?”

“Who?”

“Is this Bill Cochran?”

“Yes, who are you?”

Once we established that he was in fact the radio telemetry legend Bill Cochran, not the animal shelter janitor he was pretending to be, and I was the writer whom he’d invited via email to give him a call, not a telemarketer, he told me he was busy but that I could call him back at the same time the next day.

Cochran was nearly ninety when we first spoke in the spring of 2021. Almost five decades had passed since his 1973 thrush-chasing odyssey, but story after story from the trek came back to him as we talked. He and Welling slept in the truck during the day when the thrush landed to rest and refuel, unwilling to risk a motel in case the bird took off again unexpectedly. While Welling drove, Cochran controlled the antenna. The base of the column that supported it extended down into the backseat of their vehicle, and he could adjust the antenna by raising, lowering, and rotating it, resembling a submarine crewman operating a periscope.

At one point, Cochran recalled, he and Welling got sick with “some kind of flu” while in Minnesota and, unable to find a doctor willing to see two eccentric out-of-towners on zero notice, just “sweated it out” and continued on. At another point during their passage through Minnesota, Welling spent a night in jail. They were pulled over by a small-town cop (Cochran described it as a speed trap but was adamant that they weren’t speeding, claiming the cop was just suspicious of the weird appearance of their tracking vehicle) but couldn’t stop for long or they would lose the bird. Welling stayed with the cop to sort things out while Cochran went on, and after the bird set down for the day, Cochran doubled back to pick him up.

“The bird got a big tailwind when it left Minnesota,” Cochran said. “We could barely keep up, we were driving over the speed limit on those empty roads — there aren’t many people in North Dakota — but we got farther and farther behind it, and finally by the time we caught up with it, it had already flown into Canada.”

Far from an official crossing point where they could legally enter Manitoba, they were forced to listen at the border as the signal faded into the distance. The next day they found a border crossing (heaven knows what the border agents made of the giant antenna on top of the truck) and miraculously picked up the signal again, only to have their vehicle start to break down. “It overheated and it wouldn’t run, so the next thing you know Charles is out there on the hood of the truck, pouring gasoline into the carburetor to keep it running,” Cochran recalled. “And every time we could find any place where there was a ditch with rainwater, we improvised something to carry water out of the ditch and pour it into the radiator. We finally managed to limp into a town to get repairs made.”

Cochran recruited a local pilot to take him up in a plane in one last attempt to relocate the radio-tagged bird and keep going, but to no avail. The chase was over. The data they had collected would be immortalized in a terse three-page scientific paper that doesn’t hint at all the adventures behind the numbers.

That 1973 journey wasn’t the first time Cochran and his colleagues had followed a radio-tagged bird cross-country, nor was it the last. After his first foray into wildlife telemetry at George Swenson’s lab, Cochran quickly became sought after by wildlife biologists throughout the region. He first worked with the Illinois Natural History Survey biologist Rexford Lord, who was looking for a more accurate way to survey the local cottontail rabbit population. Although big engineering firms such as Honeywell had already tried to build radio tracking systems that could be used with wildlife, Cochran succeeded where others had failed by literally thinking outside the box: instead of putting the transmitter components into a metal box that had to be awkwardly strapped to an animal’s back, he favored designs that were as small, simple, and compact as possible, dipping the assembly of components in plastic resin to seal them together and waterproof them. Today, as in Cochran’s time, designing a radio transmitter to be worn by an animal requires making trade-offs among a long list of factors: a longer antenna will give you a stronger signal, and a bigger battery will give you a longer-lasting tag, but both add weight. Cochran was arguably the first engineer to master this balancing act.

The transmitters Cochran created for Lord cost eight dollars to build, weighed a third of an ounce, and had a range of up to two miles. Attaching them to animals via collars or harnesses, Cochran and Lord used them to track the movements of skunks and raccoons as well as rabbits. Cochran didn’t initially realize the significance of what he’d achieved, but when Lord gave a presentation about their project at a 1961 mammalogy conference, he suddenly found himself inundated with job offers from biologists. Sharing his designs with anyone who asked instead of patenting them, he even let biologists stay in his spare room when they visited to learn telemetry techniques from him. When I asked him why he decided to go into a career in wildlife telemetry rather than sticking with satellites, he told me he was simply more interested in birds than in a job “with some engineering company making a big salary and designing weapons that’ll kill people.”

This article originally appeared on Engadget at https://www.engadget.com/hitting-the-books-flight-paths-rebecca-heisman-harper-publishing-143053788.html?src=rss

Hitting the Books: The women who made ENIAC more than a weapon

After Mary Sears and her team had revolutionized the field of oceanography, but before Katherine G. Johnson, Dorothy Vaughan and Mary Jackson helped put John Glenn into orbit, a cadre of women programmers working for the US government faced an impossible task: train ENIAC, the world's first modern computer, to do more than quickly calculate artillery trajectories. Though they succeeded — without the aid of a guide or manual, no less — their names and deeds were lost to history until author Kathy Kleiman, through a Herculean research effort of her own, brought their stories to light in Proving Ground: The Untold Story of the Six Women Who Programmed the World's First Modern Computer.

Proving Grounds Cover
Grand Central Publishing

Excerpted from the book Proving Ground: The Untold Story of the Six Women Who Programmed the World’s First Modern Computer by Kathy Kleiman. Copyright © 2022 by First Byte Productions, LLC. Reprinted with permission of Grand Central Publishing. All rights reserved.


Demonstration Day, February 15, 1946

The Moore School stood ready as people began to arrive by train and trolley. John and Pres, as well as the engineers and deans and professors of the university, wore their best suits and Army officers were in dress uniform with their medals gleaming. The six women wore their best professional skirt suits and dresses.

Kay and Fran manned the front door of the Moore School. As the scientists and technologists arrived, some from as far as Boston, the two women welcomed them warmly. They asked everyone to hang up their heavy winter coats on the portable coat racks that Moore School staff had left nearby. Then they directed them down the hall and around the corner to the ENIAC room.

Just before 11:00 a.m., Fran and Kay ran back to be in the ENIAC room when the demonstration began.

As they slid into the back of the room, everything was at the ready. At the front of the great ENIAC U, there was space for some speakers, a few rows of chairs, and plenty of standing room for invited guests and ENIAC team members. Across the room, Marlyn, Betty, and Jean stood in the back and the women smiled to each other. Their big moment was about to begin. Ruth stayed outside, pointing late arrivals in the right direction.

The room was packed and was filled with an air of anticipation and wonder as people saw ENIAC for the first time.

Demonstration Day started with a few introductions. Major General Barnes started with the BRL officers and Moore School deans and then presented John and Pres as the co-inventors. Then Arthur came to the front of the room and introduced himself as the master of ceremonies for the ENIAC events. He would run five programs, all using the remote control box he held in his hand.

The first program was an addition. Arthur hit one of the buttons and the ENIAC whirled to life. Then he ran a multiplication. His expert audience knew that ENIAC was calculating it many times faster than any other machine in the world. Then he ran the table of squares and cubes, and then sines and cosines. So far, Demonstration Day was the same as the one two weeks earlier, and for this sophisticated audience, the presentation was pretty boring.

But Arthur was just getting started and the drama was about to begin. He told them that now he would run a ballistics trajectory three times on ENIAC.

He pushed the button and ran it once. The trajectory “ran beautifully,” Betty remembered. Then Arthur ran it again, a version of the trajectory without the punched cards printing, and it ran much faster. Punched cards actually slowed things down a little bit.

Then Arthur pointed everyone to the grids of tiny lights at the top of the accumulators and urged his attendees to look closely at them in the moments to come. He nodded to Pres, who stood against the wall, and suddenly Pres turned off the lights. In the black room, only a few small status lights were lit on the units of ENIAC. Everything else was in darkness.

With a click of the button, Arthur brought the ENIAC to life. For a dazzling twenty seconds, the ENIAC lit up. Those watching the accumulators closely saw the 100 tiny lights twinkle as they moved in a flash, first going up as the missile ascended to the sky, and then going down as it sped back to earth, the lights forever changing and twinkling. Those twenty seconds seemed at once an eternity and instantaneous.

Then the ENIAC finished, and darkness filled the room again. Arthur and Pres waited a moment, and then Pres turned on the lights and Arthur announced dramatically that ENIAC had just completed a trajectory faster than it would take a missile to leave the muzzle of artillery and hit its target. “Everybody gasped.”

Less than twenty seconds. This audience of scientists, technologists, engineers, and mathematicians knew how many hours it took to calculate a differential calculus equation by hand. They knew that ENIAC had calculated the work of a week in fewer than two dozen seconds. They knew the world had changed.

Climax complete, everyone in the room was beaming. The Army officers knew their risk had paid off. The ENIAC engineers knew their hardware was a success. The Moore School deans knew they no longer had to be worried about being embarrassed. And the ENIAC Programmers knew that their trajectory had worked perfectly. Years of work, effort, ingenuity, and creativity had come together in twenty seconds of pure innovation.

Some would later call this moment the birth of the “Electronic Computing Revolution.” Others would soon call it the birth of the Information Age. After those precious twenty seconds, no one would give a second look to the great Mark I electromechanical computer or the differential analyzer. After Demonstration Day, the country was on a clear path to general-purpose, programmable, all-electronic computing. There was no other direction. There was no other future. John, Pres, Herman, and some of the engineers fielded questions from the guests, and then the formal session finished. But no one wanted to leave. Attendees surrounded John and Pres, Arthur and Harold.

The women circulated. They had taken turns running punched cards through the tabulator and had stacks of trajectory printouts to share. They divided up the sheets and moved around the room to hand them out. Attendees were happy to receive a trajectory, a souvenir of the great moment they had just witnessed.

But no attendee congratulated the women. Because no guest knew what they had done. In the midst of the announcements and the introductions of Army officers, Moore School deans, and ENIAC inventors, the Programmers had been left out. “None of us girls were ever introduced as any part of it” that day, Kay noted later.

Since no one had thought to name the six young women who programmed the ballistics trajectory, the audience did not know of their work: thousands of hours spent learning the units of ENIAC, studying its “direct programming” method, breaking down the ballistics trajectory into discrete steps, writing the detailed pedaling sheets for the trajectory program, setting up their program on ENIAC, and learning ENIAC “down to a vacuum tube.” Later, Jean said, they “did receive a lot of compliments” from the ENIAC team, but at that moment they were unknown to the guests in the room.

And at that moment, it did not matter. They cared about the success of ENIAC and their team, and they knew they had played a role, a critical role, in the success of the day. This was a day that would go down in history, and they had been there and played an invaluable part.


Hitting the Books: The mad science behind digging really huge holes

Sure, you could replace the President with a self-aware roboclone, take the moon hostage, threaten to release a millennia-old eldritch horror to wreak unspeakable terror upon the populace, or just blow up a few financial servers in your pursuit of global dominion, but a savvy supervillain knows that the true path to power is through holes — the deeper, the better.

In the excerpt below from his newest book, author Ryan North spelunks into the issues surrounding extreme mining and how the same principles that brought us the Kola Superdeep Borehole could be leveraged to dominate humanity, or turn a tidy profit. And, if you're not digging the whole hole scheme, How to Take Over the World has designs for every wannabe Brain, from pulling the internet's proverbial plug to bioengineering a dinosaur army — even achieving immortality if the first few plans fail to pan out.

How to Take Over the World cover
Riverhead Books

From HOW TO TAKE OVER THE WORLD: Practical Schemes and Scientific Solutions for the Aspiring Supervillain by Ryan North published on March 15, 2022 by Riverhead, an imprint of Penguin Publishing Group, a division of Penguin Random House LLC. Copyright © 2022 Ryan North.


The world’s deepest hole, as of this writing, is the now-abandoned Kola Superdeep Borehole, located on the Kola Peninsula in Russia, north of the Arctic Circle. It’s a hole 23 centimeters (cm) in diameter, and it was started in May 1970 with a target depth of 15,000m. By 1989, Soviet scientists had reached a depth of 12,262m, but they found they were unable to make further progress due to a few related issues. The first was that temperatures were increasing faster than they’d expected. They’d expected to encounter temperatures of around 100°C at that depth but encountered 180°C heat instead, which was damaging their equipment. That, combined with the type of rock found and the pressure at those depths, was causing the rock to behave in a way that was almost plastic. Whenever the drill bit was removed for maintenance or repair, rocks would move into the hole to fill it. Attempts to dig deeper were made for years, but no hole ever made it farther than 12,262m, and the scientists were forced to conclude that there was simply no technology available at the time that could push any deeper. The Soviet Union dissolved in 1991 in an unrelated event, drilling stopped in 1992, the site was shut down, and the surface-level opening to the hole was welded closed in 1995. Today, the drill site is an abandoned and crumbling ruin, and that still-world-record-holding maximum depth, 12,262m, is less than 0.2% of the way to the Earth’s center, some 6,371 km below.

So, that’s a concern.

But that was back in the ’90s, and we humans have continued to dig holes since! The International Ocean Discovery Program (IODP) has a plan to dig through the thinner oceanic crust, hoping to break through to the mantle and recover the first sample of it taken in place — but this project, estimated to cost $1 billion USD, has not yet been successful. Still, a ship built for the project, the Chikyū, briefly held the world record for deepest oceanic hole (7,740m below sea level!) until it was surpassed by the Deepwater Horizon drilling rig, which dug a hole 10,683m below sea level and then exploded.

The evidence here all points to one depressing conclusion: the deepest holes humanity has ever made don’t go nearly far enough, and they’ve already reached the point where things get too hot — and too plastic — to continue.

But these holes were all dug not by supervillains chasing lost gold but by scientists, a group largely constrained by their “ethical principles” and “socially accepted morals.” To a supervillain, the solution here is obvious. If the problem is that the rocks are so hot that they’re damaging equipment and flowing into the hole, why not simply make a hole wide enough that some slight movement isn’t catastrophic, and cool enough so the rocks are all hardened into place? Why not simply abandon the tiny, 23cm-diameter boreholes of the Soviets and the similarly sized drill holes of the IODP, and instead think of something bigger? Something bolder?

Something like a colossal open-pit mine?

Such a mine would minimize the effects of rocks shifting by giving them a lot more room to shift — and us a lot more time to react — before they become a problem. You could keep those rocks cool and rigid with one of the most convenient coolants we have: cold liquid water. On contact with hot rocks or magma, water turns to steam, carrying that heat up and away into the atmosphere, where it can disperse naturally — while at the same time cooling the rocks so that they remain both solid enough to drill and rigid enough to stay in place. It would take an incredible amount of water, but lucky for us, Earth’s surface is 71% covered with the stuff!

So if you build a sufficiently large open-pit mine next to the ocean and use a dam to allow water to flow into the pit to cool the rocks as needed, then you’ll be the proud owner of a mine that allows you to reach greater depths, both literal and metaphorical, than anyone else in history! This scheme has the added benefit that, if we’re clever, we can use the steam that’s generated by cooling all that hot rock and magma to spin turbines, which could then generate more power for drilling. You’ll build a steam engine that’s powered by the primordial and nigh-inexhaustible heat of the Earth herself.

The exact dimensions of open-pit mines vary depending on what’s being mined, but they’re all shaped like irregular cones, with the biggest part at ground level and the smallest part at the bottom of the pit. The open-pit mine that’s both the world’s largest and deepest is the Bingham Canyon copper mine in Utah: it’s been in use since 1906, and in that time it has produced a hole in the Earth’s crust that’s 4km wide and 1.2km deep. Using those dimensions as a rough guide produces the following chart:

Chart from How to Take Over the World
Penguin Random House
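
The chart itself doesn't survive the excerpt, but its arithmetic is easy to recreate. The sketch below scales Bingham Canyon's rough width-to-depth ratio up to a few target depths; the layer depths are approximate textbook values I've assumed, not figures taken from the book.

    # Scale the Bingham Canyon mine (about 4 km wide, 1.2 km deep) up to
    # deeper targets, keeping the same cone-like width-to-depth ratio.
    # Layer depths below are rough, assumed values for illustration only.
    WIDTH_TO_DEPTH = 4.0 / 1.2
    EARTH_DIAMETER_KM = 12_742

    targets_km = {
        "bottom of the continental crust": 35,
        "bottom of the lower mantle": 2_890,
        "bottom of the outer core": 5_150,
        "center of the Earth": 6_371,
    }

    for name, depth in targets_km.items():
        width = depth * WIDTH_TO_DEPTH
        print(f"{name}: ~{width:,.0f} km wide, "
              f"about {width / EARTH_DIAMETER_KM:.0%} of Earth's diameter")

Run as written, the lower-mantle width comes out to roughly three-quarters of Earth's diameter and both core figures exceed it, which matches the conclusions drawn in the passage that follows.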

… and here we have another problem. Just reaching the bottom of the crust needs a hole over five times the length of the island of Manhattan, dozens of times wider than any other hole made by humanity, and easily large enough to be seen from space. Reaching the bottom of the lower mantle would require a hole so huge that its opening would encompass 75% of the Earth’s diameter, and to do the same with the outer and inner cores requires holes that are wider than the Earth itself.

Even if you could turn almost half the Earth into an open-pit mine cooled by seawater, the steam created by cooling a pit that size would effectively boil the oceans and turn the Earth into a sauna, destroying the climate, collapsing food chains, and threatening all life on the planet — and that’s before you even reach the hostage-taking phase, let alone the part where you plunder forbidden gold! Things get even bleaker once you take into account the responses from the governments you’d upset by turning their countries into holes; the almost inconceivable amount of time, energy, and money required to move that much matter; where you’d put all that rock once you dug it up; or the true, objective inability for anyone, no matter how well funded, ambitious, or self-realized, to possibly dig a hole this huge.

So.

That’s another concern.

It pains me to say this, but… there is absolutely no way, given current technology, for anyone to dig a hole to the center of the Earth no matter how well funded they are, even if they drain the world’s oceans in the attempt. We have reached the point where your ambition has outpaced even my wildest plans, most villainous schemes, and more importantly strongest and most heat-resistant materials. Heck, we’re actually closer to immortal humans (see Chapter 8) than we are to tunneling to the Earth’s core. It’s unachievable. Impossible. There’s simply no way forward.

It’s truly, truly hopeless. It’s hard for me to admit it, but even the maddest science can’t realize every ambition.

I’m sorry. There’s nothing more I can do.

. . . for that plan, anyway!

But every good villain always has a Plan B, one that snatches victory from the jaws of defeat. And heck, if you’ve got your heart set on digging a hole, making some demands, and becoming richer than Midas and Gates and Luthor in the process—who am I to stop you?

You’re going to sidestep the issues of heat and pressure in the Earth’s core by staying safely inside the crust, within the depth range of holes we already know how to dig. And you’re going to sidestep the issues of legality that tend to surround schemes to take the Earth’s core hostage by instead legally selling access to your hole to large corporations and the megarich, who will happily pay through their noses for the privilege. Why?

Because instead of digging down, you’re going to dig sideways. Instead of mining gold, you’re going to mine information. And unlike even the lost gold of the Earth’s core, this mine is practically inexhaustible.

It all has to do with stock trading. In the mid-twentieth century, stock exchanges had trading floors, which were actual, physical floors where offers to buy and sell were shouted, out loud, to other traders. It was noisy and chaotic, but it ensured everyone on the trading floor had, in theory, equal access to the same information. Those floor traders were later supplemented by telephone trading, and then almost entirely replaced by electronic trading, which is how most stock exchanges operate today. At the time, both telephone and electronic trading could be pitched as simply a higher-tech version of the same floor trading that already existed, but they also did something more subtle: they moved trading from the trading floor to outside the exchanges themselves, where everyone might not have access to the same information.

Turns out, there’s money to be made from that.


Hitting the Books: Amiga and the birth of 256-color gaming

With modern consoles offering gamers graphics so photorealistic that they blur the line between CGI and reality, it's easy to forget just how cartoonishly blocky they were in the 8-bit era. In his new book, Creating Q*Bert and Other Classic Arcade Games, legendary game designer and programmer Warren Davis recalls his halcyon days imagining and designing some of the biggest hits to ever grace an arcade. In the excerpt below, Davis explains how the industry made its technological leap from 8- to 12-bit graphics.

qbert
Santa Monica Press

©2021 Santa Monica Press


Back at my regular day job, I became particularly fascinated with a new product that came out for the Amiga computer: a video digitizer made by a company called A-Squared. Let’s unpack all that slowly.

The Amiga was a recently released home computer capable of unprecedented graphics and sound: 4,096 colors! Eight-bit stereo sound! There were image manipulation programs for it that could do things no other computer, including the IBM PC, could do. We had one at Williams not only because of its capabilities, but also because our own Jack Haeger, an immensely talented artist who’d worked on Sinistar at Williams a few years earlier, was also the art director for the Amiga design team.

Video digitization is the process of grabbing a video image from some video source, like a camera or a videotape, and converting it into pixel data that a computer system (or video game) could use. A full-color photograph might contain millions of colors, many just subtly different from one another. Even though the Amiga could only display 4,096 colors, that was enough to see an image on its monitor that looked almost perfectly photographic.

Our video game system still could only display 16 colors total. At that level, photographic images were just not possible. But we (and by that I mean everyone working in the video game industry) knew that would change. As memory became cheaper and processors faster, we knew that 256-color systems would soon be possible. In fact, when I started looking into digitized video, our hardware designer, Mark Loffredo, was already playing around with ideas for a new 256-color hardware system.

Let’s talk about color resolution for a second. Come on, you know you want to. No worries if you don’t, though, you can skip these next few paragraphs if you like. Color resolution is the number of colors a computer system is capable of displaying. And it’s all tied in to memory. For example, our video game system could display 16 colors. But artists weren’t locked into 16 specific colors. The hardware used a “palette.” Artists could choose from a fairly wide range of colors, but only 16 of them could be saved in the palette at any given time. Those colors could be programmed to change while a game was running. In fact, changing colors in a palette dynamically allowed for a common technique used in old video games called “color cycling.”
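
As a rough sketch of why color cycling was so cheap (illustrative only, not tied to any particular hardware): the indices stored in screen memory never change between frames, only the palette entries rotate.

    # Color cycling: rotate the palette, not the pixels. Every pixel that
    # points at slot N simply shows slot N's new color on the next frame.
    palette = [(0, 0, 64), (0, 0, 128), (0, 0, 192), (0, 0, 255)]  # assumed shades of blue

    def cycle(palette):
        """Shift every palette entry down one slot, wrapping the last to the front."""
        return palette[-1:] + palette[:-1]

    for frame in range(4):
        print(frame, palette)
        palette = cycle(palette)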

For the hardware to know what color to display at each pixel location, each pixel on the screen had to be identified as one of those 16 colors in the palette. The collection of memory that contained the color values for every pixel on the screen was called “screen memory.” Numerically, it takes 4 bits (half a byte) to represent 16 numbers (trust me on the math here), so if 4 bits = 1 pixel, then 1 byte of memory could hold 2 pixels. By contrast, if you wanted to be able to display 256 colors, it would take 8 bits to represent 256 numbers. That’s 1 byte (or 8 bits) per pixel.

So you’d need twice as much screen memory to display 256 colors as you would to display 16. Memory wasn’t cheap, though, and game manufacturers wanted to keep costs down as much as possible. So memory prices had to drop before management approved doubling the screen memory.
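
To make the memory arithmetic concrete, here is a quick back-of-envelope calculation; the resolution is an assumed example, since the excerpt doesn't give one.

    # Screen memory needed for a palette-indexed display: bits per pixel
    # is the number of bits required to index into the palette.
    WIDTH, HEIGHT = 320, 240  # assumed resolution, for illustration only

    def screen_memory_bytes(palette_size: int) -> int:
        bits_per_pixel = (palette_size - 1).bit_length()  # 16 -> 4 bits, 256 -> 8 bits
        return WIDTH * HEIGHT * bits_per_pixel // 8

    print(screen_memory_bytes(16))   # 38,400 bytes at 4 bits per pixel
    print(screen_memory_bytes(256))  # 76,800 bytes -- exactly double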

Today we take for granted color resolutions of 24 bits per pixel (which potentially allows up to 16,777,216 colors and true photographic quality). But back then, 256 colors seemed like such a luxury. Even though it didn’t approach the 4,096 colors of the Amiga, I was convinced that such a system could result in close to photo-realistic images. And the idea of having movie-quality images in a video game was very exciting to me, so I pitched to management the advantages of getting a head start on this technology. They agreed and bought the digitizer for me to play around with.

The Amiga’s digitizer was crude. Very crude. It came with a piece of hardware that plugged into the Amiga on one end, and to the video output of a black-and-white surveillance camera (sold separately) on the other. The camera needed to be mounted on a tripod so it didn’t move. You pointed it at something (that also couldn’t move), and put a color wheel between the camera and the subject. The color wheel was a circular piece of plastic divided into quarters with different tints: red, green, blue, and clear.

When you started the digitizing process, a motor turned the color wheel very slowly, and in about thirty to forty seconds you had a full-color digitized image of your subject. “Full-color” on the Amiga meant 4 bits of red, green, and blue—or 12-bit color, resulting in a total of 4,096 colors possible.

It’s hard to believe just how exciting this was! At that time, it was like something from science fiction. And the coolness of it wasn’t so much how it worked (because it was pretty damn clunky) but the potential that was there. The Amiga digitizer wasn’t practical—the camera and subject needed to be still for so long, and the time it took to grab each image made the process mind-numbingly slow—but just having the ability to produce 12-bit images at all enabled me to start exploring algorithms for color reduction.

Color reduction is the process of taking an image with a lot of colors (say, up to the 16,777,216 possible colors in a 24-bit image) and finding a smaller number of colors (say, 256) to best represent that image. If you could do that, then those 256 colors would form a palette, and every pixel in the image would be represented by a number—an “index” that pointed to one of the colors in that palette. As I mentioned earlier, with a palette of 256 colors, each index could fit into a single byte.

But I needed an algorithm to figure out how to pick the best 256 colors out of the thousands that might be present in a digitized image. Since there was no internet back then, I went to libraries and began combing through academic journals and technical magazines, searching for research done in this area. Eventually, I found some! There were numerous papers written on the subject, each outlining a different approach, some easier to understand than others. Over the next few weeks, I implemented a few of these algorithms for generating 256 color palettes using test images from the Amiga digitizer. Some gave better results than others. Images that were inherently monochromatic looked the best, since many of the 256 colors could be allotted to different shades of a single color.
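
The excerpt doesn't name the algorithms in those papers, but one of the simplest approaches from that literature, often called the popularity method, gives the flavor: keep the most frequent colors and map every pixel to the nearest surviving one. A minimal sketch in Python, not the author's actual code:

    from collections import Counter

    def build_palette(pixels, size=256):
        """Pick the `size` most frequently occurring colors in the image."""
        return [color for color, _ in Counter(pixels).most_common(size)]

    def nearest_index(color, palette):
        """Index of the palette entry closest to `color` (squared RGB distance)."""
        r, g, b = color
        return min(range(len(palette)),
                   key=lambda i: (palette[i][0] - r) ** 2
                               + (palette[i][1] - g) ** 2
                               + (palette[i][2] - b) ** 2)

    def quantize(pixels, palette):
        """Replace every pixel with a one-byte index into the palette."""
        return [nearest_index(p, palette) for p in pixels]

    # `pixels` would be a flat list of (r, g, b) tuples from a digitized frame.

Monochromatic source images fare best under any such scheme for exactly the reason Davis gives: most of the 256 slots end up devoted to shades of a single hue.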

During this time, Loffredo was busy developing his 256-color hardware. His plan was to support multiple circuit boards, which could be inserted into slots as needed, much like a PC. A single board would give you one surface plane to draw on. A second board gave you two planes, foreground and background, and so on. With enough planes, and by having each plane scroll horizontally at a slightly different rate, you could give the illusion of depth in a side-scrolling game.

All was moving along smoothly until the day word came down that Eugene Jarvis had completed his MBA and was returning to Williams to head up the video department. This was big news! I think most people were pretty excited about this. I know I was, because despite our movement toward 256-color hardware, the video department was still without a strong leader at the helm. Eugene, given his already legendary status at Williams, was the perfect person to take the lead, partly because he had some strong ideas of where to take the department, and also due to management’s faith in him. Whereas anybody else would have to convince management to go along with an idea, Eugene pretty much had carte blanche in their eyes. Once he was back, he told management what we needed to do and they made sure he, and we, had the resources to do it.

This meant, however, that Loffredo’s planar hardware system was toast. Eugene had his own ideas, and everyone quickly jumped on board. He wanted to create a 256-color system based on a new CPU chip from Texas Instruments, the 34010 GSP (Graphics System Processor). The 34010 was revolutionary in that it included graphics-related features within its core. Normally, CPUs would have no direct connection to the graphics portion of the hardware, though there might be some co-processor to handle graphics chores (such as Williams’ proprietary VLSI blitter). But the 34010 had that capability on board, obviating the need for a graphics co-processor.

Looking at the 34010’s specs, however, revealed that the speed of its graphics functions, while well-suited for light graphics work such as spreadsheets and word processors, was certainly not fast enough for pushing pixels the way we needed. So Mark Loffredo went back to the drawing board to design a VLSI blitter chip for the new system.

Around this time, a new piece of hardware arrived in the marketplace that signaled the next generation of video digitizing. It was called the Image Capture Board (ICB), and it was developed by a group within AT&T called the EPICenter (which eventually split from AT&T and became Truevision). The ICB was one of three boards offered, the others being the VDA (Video Display Adapter, with no digitizing capability) and the Targa (which came in three different configurations: 8-bit, 16-bit, and 24-bit). The ICB came with a piece of software called TIPS that allowed you to digitize images and do some minor editing on them. All of these boards were designed to plug in to an internal slot on a PC running MS-DOS, the original text-based operating system for the IBM PC. (You may be wondering . . . where was Windows? Windows 1.0 was introduced in 1985, but it was terribly clunky and not widely used or accepted. Windows really didn’t achieve any kind of popularity until version 3.0, which arrived in 1990, a few years after the release of Truevision’s boards.)

A little bit of trivia: the TGA file format that’s still around today (though not as popular as it once was) was created by Truevision for the TARGA series of boards. The ICB was a huge leap forward from the Amiga digitizer in that you could use a color video camera (no more black-and-white camera or color wheel), and the time to grab a frame was drastically reduced—not quite instantaneous, as I recall, but only a second or two, rather than thirty or forty seconds. And it internally stored colors as 16-bits, rather than 12 like the Amiga. This meant 5 bits each of red, green, and blue—the same that our game hardware used—resulting in a true-color image of up to 32,768 colors, rather than 4,096. Palette reduction would still be a crucial step in the process. The greatest thing about the Truevision boards was they came with a Software Development Kit (SDK), which meant I could write my own software to control the board, tailoring it to my specific needs. This was truly amazing! Once again, I was so excited about the possibilities that my head was spinning.
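
For reference, 16-bit storage with 5 bits per component packs like this; the exact bit layout here is an assumption for illustration, not a claim about the ICB's internal format.

    def pack_rgb555(r5, g5, b5):
        """Pack 5-bit red, green, and blue into one 16-bit word (top bit unused),
        giving 2**15 = 32,768 representable colors."""
        return ((r5 & 0x1F) << 10) | ((g5 & 0x1F) << 5) | (b5 & 0x1F)

    def unpack_rgb555(word):
        return (word >> 10) & 0x1F, (word >> 5) & 0x1F, word & 0x1F

    assert pack_rgb555(31, 0, 31) == 0x7C1F  # full-intensity magenta round-trips
    assert unpack_rgb555(0x7C1F) == (31, 0, 31)

Going from 15 usable bits per pixel down to an 8-bit palette index is exactly why the reduction step remained crucial.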

I think it’s safe to say that most people making video games in those days thought about the future. We realized that the speed and memory limitations we were forced to work under were a temporary constraint. We realized that whether the video game industry was a fad or not, we were at the forefront of a new form of storytelling. Maybe this was a little more true for me because of my interest in filmmaking, or maybe not. But my experiences so far in the game industry fueled my imagination about what might come. And for me, the holy grail was interactive movies. The notion of telling a story in which the player was not a passive viewer but an active participant was extremely compelling. People were already experimenting with it under the constraints of current technology. Zork and the rest of Infocom’s text adventure games were probably the earliest examples, and more would follow with every improvement in technology. But what I didn’t know was if the technology needed to achieve my end goal—fully interactive movies with film-quality graphics—would ever be possible in my lifetime. I didn’t dwell on these visions of the future. They were just thoughts in my head. Yet, while it’s nice to dream, at some point you’ve got to come back down to earth. If you don’t take the one step in front of you, you can be sure you’ll never reach your ultimate destination, wherever that may be.

I dove into the task and began learning the specific capabilities of the board, as well as its limitations. With the first iteration of my software, which I dubbed WTARG (“W” for Williams, “TARG” for TARGA), you could grab a single image from either a live camera or a videotape. I added a few different palette reduction algorithms so you could try each and find the best palette for that image. More importantly, I added the ability to find the best palette for a group of images, since all the images of an animation needed to have a consistent look. There was no chroma key functionality in those early boards, so artists would have to erase the background manually. I added some tools to help them do that.
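
The group-palette feature boils down to pooling the pixels of every frame before choosing the colors. A hypothetical sketch of that step, not the actual WTARG code:

    from collections import Counter

    def build_group_palette(frames, size=256):
        """Pool the pixels of every frame in an animation and keep the `size`
        most common colors, so all frames share one consistent palette."""
        pooled = Counter(pixel for frame in frames for pixel in frame)
        return [color for color, _ in pooled.most_common(size)]

    # `frames` would be a list of frames, each a flat list of (r, g, b) tuples.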

This was a far cry from what I ultimately hoped for, which was a system where we could point a camera at live actors and instantly have an animation of their action running on our game hardware. But it was a start.


Two years and a pandemic later, fast-charging graphene batteries are hitting shelves

After years of research, products leveraging the amazing properties of graphene are finally available for batteries. What do we get from this new tech?
Emerging Tech | Digital Trends

Hitting the Books: How Amazon’s aggressive R&D push made it an e-commerce behemoth

Amazon is the Standard Oil of the 21st century. Its business operations and global reach dwarf those of virtually every other company on the planet — and exceed the GDP of more than a few countries — illustrating the vital role innovation plays in the modern economy. In his latest book, The Exponential Age: How Accelerating Technology is Transforming Business, Politics and Society, author Azeem Azhar examines how the ever-increasing pace of technological progress is impacting, influencing — and often rebuilding — our social, political and economic mores from the ground up.

The Exponential Age by Azeem Azhar
Diversion Books

Excerpted from The Exponential Age: How Accelerating Technology is Transforming Business, Politics and Society by Azeem Azhar. Copyright © 2021 Azeem Azhar. Printed with permission of the publisher, Diversion Books. All rights reserved.


In 2020, Amazon turned twenty-six years old. Over the previous quarter of a century, the company had transformed shopping. With retail revenues in excess of $213 billion, it was larger than Germany’s Schwarz Gruppe, America’s Costco, and every British retailer. Only America’s Walmart, with more than half a trillion dollars of sales, was bigger. But Amazon was, by this time, far and away the world’s largest online retailer. Its online business was about eight times larger than Walmart’s. Amazon was more than just an online shop, however. Its huge operations in areas such as cloud computing, logistics, media, and hardware added a further $172 billion in sales.

At the heart of Amazon’s success is an annual research and development budget that reached a staggering $36 billion in 2019, and which is used to develop everything from robots to smart home assistants. This sum leaves other companies — and many governments — behind. It is not far off the UK government’s annual budget for research and development. The entire US government’s federal R&D budget for 2018 was only $134 billion.

Amazon spent more on R&D in 2018 than the US National Institutes of Health. Roche, the global pharmaceutical company renowned for its investment in research, spent a mere $12 billion in R&D in 2018. Meanwhile Tesco, the largest retailer in Britain — with annual sales in excess of £50 billion (approximately $70 billion) — had a research lab whose budget was in the “six figures” in 2016.

Perhaps more remarkable is the rate at which Amazon grew this budget. Ten years earlier, Amazon’s research budget was $1.2 billion. Over the course of the next decade, the firm increased its annual R&D budget by about 44 percent every year. As the 2010s went on, Amazon doubled down on its investments in research. In the words of Werner Vogels, the firm’s chief technology officer, if they stopped innovating they “would be out of business in ten to fifteen years.”

In the process, Amazon created a chasm between the old world and the new. The approach of traditional business was to rely on models that succeeded yesterday. They were based on a strategy that tomorrow might be a little different, but not markedly so.

This kind of linear thinking, rooted in the assumption that change takes decades and not months, may have worked in the past—but not anymore. Amazon understood the nature of the Exponential Age. The pace of change was accelerating; the companies that could harness the technologies of the new era would take off. And those that couldn’t keep up would be undone at remarkable speed.

This divergence between the old and the new is one example of what I call the “exponential gap.” On the one hand, there are technologies that develop at an exponential pace—and the companies, institutions, and communities that adapt to or harness those developments. On the other, there are the ideas and norms of the old world. The companies, institutions, and communities that can only adapt at an incremental pace. These get left behind—and fast.

The emergence of this gap is a consequence of exponential technology. Until the early 2010s, most companies assumed the cost of their inputs would remain pretty similar from year to year, perhaps with a nudge for inflation. The raw materials might fluctuate based on commodity markets, but their planning processes, institutionalized in management orthodoxy, could manage such volatility. But in the Exponential Age, one primary input for a company is its ability to process information. One of the main costs to process that data is computation. And the cost of computation didn’t rise each year; it declined rapidly. The underlying dynamics of how companies operate had shifted.

In Chapter 1, we explored how Moore’s Law amounts to a halving of the underlying cost of computation every couple of years. It means that every ten years, the cost of the processing that can be done by a computer will decline by a factor of one hundred. But the implications of this process stretch far beyond our personal laptop use—and far beyond the interests of any one laptop manufacturer.
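
Taken at face value, the factor-of-one-hundred-per-decade figure corresponds to a halving time of roughly eighteen months rather than a strict two years; a quick back-of-envelope check (my arithmetic, not the book's):

    # How much the cost of computation falls over a decade if it halves
    # every `halving_years` years.
    def decade_decline(halving_years):
        return 2 ** (10 / halving_years)

    print(round(decade_decline(1.5)))  # ~102, i.e. about a hundredfold
    print(round(decade_decline(2.0)))  # 32, with a strict two-year halving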

In general, if an organization needs to do something that uses computation, and that task is too expensive today, it probably won’t be too expensive in a couple of years. For companies, this realization has deep significance. Firms that figured out that the effective price of computation was declining, even if the notional price of what they were buying was staying the same (or even rising), could plan, practice, and experiment with the near future in mind. Even if those futuristic activities were expensive now, they would become affordable soon enough. Organizations that understood this deflation, and planned for it, became well-positioned to take advantage of the Exponential Age.

If Amazon’s early recognition of this trend helped transform it into one of the most valuable companies in history, they were not alone. Many of the new digital giants—from Uber to Alibaba, Spotify to TikTok—took a similar path. And following in their footsteps were firms who understand how these processes apply in other sectors. The bosses at Tesla understood that the prices of electric vehicles might decline on an exponential curve, and launched the electric vehicle revolution. The founders of Impossible Foods understood how the expensive process of precision fermentation (which involves genetically modified microorganisms) would get cheaper and cheaper. Executives at space companies like Spire and Planet Labs understood this process would drive down the cost of putting satellites in orbit. Companies that didn’t adapt to exponential technology shifts, like much of the newspaper publishing industry, didn’t stand a chance.

We can visualize the gap by returning to our now-familiar exponential curve. As we’ve seen, individual technologies develop according to an S-curve, which begins by roughly following an exponential trajectory. And as we’ve seen, it starts off looking a bit humdrum. In those early days, exponential change is distinctly boring, and most people and organizations ignore it. At this point in the curve, the industry producing an exponential technology looks exciting to those in it, but like a backwater to everyone else. But at some point, the line of exponential change crosses that of linear change. Soon it reaches an inflection point. That shift in gear, which is both sudden and subtle, is hard to fathom. 

Because, for all the visibility of exponential change, most of the institutions that make up our society follow a linear trajectory. Codified laws and unspoken social norms; legacy companies and NGOs; political systems and intergovernmental bodies—all have only ever known how to adapt incrementally. Stability is an important force within institutions. In fact, it’s built into them.

The gap between our institutions’ capacity to change and our new technologies’ accelerating speed is the defining consequence of our shift into the Exponential Age. On the one side, you have the new behaviors, relationships, and structures that are enabled by exponentially improving technologies, and the products and services built from them. On the other, you have the norms that have evolved or been designed to suit the needs of earlier configurations of technology. The gap leads to extreme tension. In the Exponential Age, this divergence is ongoing—and it is everywhere.


Hitting the Books: Why that one uncle of yours continually refuses to believe in climate change

The holidays are fast approaching and you know what that means: pumpkin spice everything, seasonal cheer, and family gatherings — all while avoiding your QAnon adherent relatives like the plague. But when you do eventually get cornered by them, come prepared. 

In his latest book, How to Talk to a Science Denier, author Lee McIntyre examines the phenomenon of denialism, exploring the conspiracy theories that drive it, and explains how you can most effectively address your relatives' misplaced concerns over everything from mRNA vaccines to why the Earth isn't actually flat.

How to Talk to a Science Denier cover
The MIT Press

How to Talk to a Science Denier: Conversations with Flat Earthers, Climate Deniers, and Others Who Defy Reason, by Lee McIntyre, published by The MIT Press.


Belief in conspiracy theories is one of the most toxic forms of human reasoning. This is not to say that real conspiracies do not exist. Watergate, the tobacco companies’ collusion to obfuscate the link between cigarette smoking and cancer, and the George W. Bush–era NSA program to secretly spy on civilian Internet users are all examples of real-life conspiracies, which were discovered through evidence and exposed after exhaustive investigation.

By contrast, what makes conspiracy theory reasoning so odious is that whether or not there is any evidence, the theory is asserted as true, which puts it beyond all reach of being tested or refuted by scientists and other debunkers. The distinction, therefore, should be between actual conspiracies (for which there should be some evidence) and conspiracy theories (which customarily have no credible evidence). We might define a conspiracy theory as an “explanation that makes reference to hidden, malevolent forces seeking to advance some nefarious aim.” Crucially, we need to add that these tend to be “highly speculative [and] based on no evidence. They are pure conjecture, without any basis in reality.”

When we talk about the danger of conspiracy theories for scientific reasoning, our focus should therefore be on their nonempirical nature, which means that they are not even capable of being tested in the first place. What is wrong with conspiracy theories is not normally that they have already been refuted (though many have), but that thousands of gullible people will continue to believe them even when they have been debunked.

If you scratch a science denier, chances are you’ll find a conspiracy theorist. Sadly, conspiracy theories seem to be quite common in the general population as well. In a recent study, Eric Oliver and Thomas Wood found that 50 percent of Americans believed in at least one conspiracy theory.

This included the 9/11 truther and Obama birther conspiracies, but also the idea that the Food and Drug Administration (FDA) is deliberately withholding a cure for cancer, and that the Federal Reserve intentionally orchestrated the 2008 recession. (Notably, the JFK assassination conspiracy was so widely held that it was excluded from the study.)

Other common conspiracy theories — which run the range of popularity and outlandishness — are that “chemtrails” left by planes are part of a secret government mind-control spraying program, that the school shootings at Sandy Hook and Parkland were “false flag” operations, that the government is covering up the truth about UFOs, and of course the more “science-related” ones that the Earth is flat, that global warming is a hoax, that some corporations are intentionally creating toxic GMOs, and that COVID-19 is caused by 5G cell phone towers.

In its most basic form, a conspiracy theory is a non-evidentially justified belief that some tremendously unlikely thing is nonetheless true, but we just don’t realize it because there is a coordinated campaign run by powerful people to cover it up. Some have contended that conspiracy theories are especially prevalent in times of great societal upheaval. And, of course, this explains why conspiracy theories are not unique to modern times. As far back as the great fire of Rome in 64 AD, we saw conspiracy theories at work, when the citizens of Rome became suspicious over a weeklong blaze that consumed almost the entire city — while the emperor Nero was conveniently out of town. Rumors began to spread that Nero had started it in order to rebuild the city in his own design. While there was no evidence that this was true (nor for the legend that Nero sang while the city burned), Nero was apparently so upset by the accusation that he started his own conspiracy theory that it was in fact the Christians who were responsible, which led to the prevalence of burning them alive.

Here one understands immediately why conspiracy theories are anathema to scientific reasoning. In science, we test our beliefs against reality by looking for disconfirming evidence. If we find only evidence that fits our theory, then it might be true. But if we find any evidence that disconfirms our theory, it must be ruled out. With conspiracy theories, however, they don’t change their views even in the face of disconfirming evidence (nor do they seem to require much evidence, beyond gut instinct, that their views are true in the first place). Instead, conspiracy theorists tend to use the conspiracy itself as a way to explain any lack of evidence (because the clever conspirators must be hiding it) or the presence of evidence that disconfirms it (because the shills must be faking it). Thus, lack of evidence in favor of a conspiracy theory is in part explained by the conspiracy itself, which means that its adherents can count both evidence and lack of evidence in their favor.

Virtually all conspiracy theorists are what I call “cafeteria skeptics.” Although they profess to uphold the highest standards of reasoning, they do so inconsistently. Conspiracy theorists are famous for their double standard of evidence: they insist on an absurd standard of proof when it concerns something they do not want to believe, while accepting with scant to nonexistent evidence whatever they do want to believe. We have already seen the weakness of this type of selective reasoning with cherry-picking evidence. Add to this a predilection for the kind of paranoid suspicion that underlies most conspiracy-minded thinking, and we face an almost impenetrable wall of doubt. When a conspiracy theorist indulges their suspicions about the alleged dangers of vaccines, chemtrails, or fluoride — but then takes any contrary or debunking information as itself proof of a cover-up — they lock themselves in a hermetically sealed box of doubt that no amount of facts could ever get them out of. For all of their protests of skepticism, most conspiracy theorists are in fact quite gullible.

Belief in the flatness of the Earth is a great example. Time and again at FEIC 2018, I heard presenters say that any scientific evidence in favor of the curvature of the Earth had been faked. “There was no Moon landing; it happened on a Hollywood set.” “All the airline pilots and astronauts are in on the hoax.” “Those pictures from space are Photoshopped.” Not only did disconfirming evidence of these claims not cause the Flat Earthers to give up their beliefs, it was used as more evidence for the conspiracy! And of course to claim that the devil is behind the whole cover-up about Flat Earth: could there be a bigger conspiracy theory? Indeed, most Flat Earthers would admit that themselves. A similar chain of reasoning is often used in climate change denial. President Trump has long held that global warming is a “Chinese hoax” meant to undermine the competitiveness of American manufacturing.

Others have contended that climate scientists are fudging the data or that they are biased because they are profiting from the money and attention being paid to their work. Some would argue that the plot is even more nefarious — that climate change is being used as a ruse to justify more government regulation or takeover of the world economy. Whatever evidence is presented to debunk these claims is explained as part of a conspiracy: it was faked, biased, or at least incomplete, and the real truth is being covered up. No amount of evidence can ever convince a hardcore science denier because they distrust the people who are gathering the evidence. So what is the explanation? Why do some people (like science deniers) engage in conspiracy theory thinking while others do not?

Various psychological theories have been offered, involving factors such as inflated self-confidence, narcissism, or low self-esteem. A more popular consensus seems to be that conspiracy theories are a coping mechanism that some people use to deal with feelings of anxiety and loss of control in the face of large, upsetting events. The human brain does not like random events, because we cannot learn from and therefore cannot plan for them. When we feel helpless (due to lack of understanding, the scale of an event, its personal impact on us, or our social position), we may feel drawn to explanations that identify an enemy we can confront. This is not a rational process, and researchers who have studied conspiracy theories note that those who tend to “go with their gut” are the most likely to indulge in conspiracy-based thinking. This is why ignorance is highly correlated with belief in conspiracy theories. When we are less able to understand something on the basis of our analytical faculties, we may feel more threatened by it.

There is also the fact that many are attracted to the idea of “hidden knowledge,” because it serves their ego to think that they are one of the few people to understand something that others don’t know. In one of the most fascinating studies of conspiracy-based thinking, Roland Imhoff invented a fictitious conspiracy theory, then measured how many subjects would believe it, depending on the epistemological context within which it was presented. Imhoff’s conspiracy was a doozy: he claimed that there was a German manufacturer of smoke alarms that emitted high-pitched sounds that made people feel nauseous and depressed. He alleged that the manufacturer knew about the problem but refused to fix it. When subjects thought that this was secret knowledge, they were much more likely to believe it. When Imhoff presented it as common knowledge, people were less likely to think that it was true.

One can’t help here but think of the six hundred cognoscenti in that ballroom in Denver. Out of six billion people on the planet, they were the self-appointed elite of the elite: the few who knew the “truth” about the flatness of the Earth and were now called upon to wake the others.

What is the harm from conspiracy theories? Some may seem benign, but note that the most likely factor in predicting belief in a conspiracy theory is belief in another one. And not all of those will be harmless. What about the anti-vaxxer who thinks that there is a government cover-up of the data on thimerosal, whose child gives another child the measles? Or the belief that anthropogenic (human-caused) climate change is just a hoax, so our leaders in government feel justified in delay? As the clock ticks on averting disaster, the human consequences of the latter may end up being incalculable.


Hitting the Books: How autonomous EVs could help solve climate change

Climate change is far and away the greatest threat of the modern human era — a crisis that will only get worse the longer we dither — with American car culture as a major contributor to the nation’s greenhouse emissions. But carbon-neutralizing energ…
Engadget

Hitting the Books: The invisible threat that every ISS astronaut fears

Despite starry-eyed promises by the likes of SpaceX and Blue Origin, only a handful of humans will actually experience existence outside of Earth’s atmosphere within our lifetime. The rest of us are stuck learning about life in space second hand but…
Engadget RSS Feed

Hitting the Books: How to huck a human into low Earth orbit

Astronauts may get the glory for successful spaceflights but they’d never even get off the ground if not for the folks at Mission Control. In Shuttle, Houston: My Life in the Center Seat of Mission Control, Paul Dye vividly recounts his 20-year caree…
Engadget RSS Feed

Hitting the Books: What astronauts can learn from nuclear submariners

We’ve dreamt of colonizing the stars since our first tenuous steps across the moon, yet fifty years after the Apollo 11 mission, the prospect of living and working beyond the bounds of Earth remains tantalizingly out of reach. In his latest book, Spa…
Engadget RSS Feed

Hitting the Books: Did the advent of the first desktop computer lead to murder?

Welcome to Hitting the Books. With less than one in five Americans reading just for fun these days, we've done the hard work for you by scouring the internet for the most interesting, thought provoking books on science and technology we can find and…
Engadget RSS Feed

Hitting the Books: Teaching AI to sing slime mold serenades

Welcome to Hitting the Books. With less than one in five Americans reading just for fun these days, we've done the hard work for you by scouring the internet for the most interesting, thought provoking books on science and technology we can find and…
Engadget RSS Feed

Hitting the Books: The Second Kind of Impossible

Welcome, dear readers, to Engadget's new series, Hitting the Books. With less than one in five Americans reading just for fun these days, we've done the hard work for you by scouring the internet for the most interesting, thought provoking books on s…
Engadget RSS Feed

A nested tab design is hitting the Google Play Store, violating Google’s own design guidelines

A new design element was found several weeks ago in the Google Play Store that violated Google’s own Material Design guidelines. A nested navigation bar is suddenly appearing under a main tab of categories. As an example in the picture above, you can see Pop, Alternative, Rock, etc. under the main categories, Genre, Artist, Album, […]



TalkAndroid