Forge Friday Roundup - July 9, 2021


In today’s Roundup: teaching “same” and “different” to neural networks; unraveling fractal broccoli; randomized trials for AI; COVID adds downward pressure to US health stats; building ground-up algorithmic accountability; FDA backpedals on aducanumab indication; rural ambulances endangered; new puzzles from Skeleton Lake; taking stock of peer review in the age of preprints; the ethics of incidental imaging findings; recent heatwave surpasses modeled extremes; much more:


Romanesco broccoli (or cauliflower) showing fractal growth pattern. Image credit: Ivar Leidus via Wikipedia (CC BY-SA 4.0)
  • “It was a remarkable achievement for a girl who only began spelling competitively two years ago. Not only did she dissect word after word on spelling’s biggest stage, she had already set three Guinness world records for dribbling, bouncing and juggling basketballs. All before the ninth grade.” Please take a few minutes to bask vicariously in this moment of undiluted triumph and joy, courtesy of the multi-talented winner of this year’s Scripps National Spelling Bee, Zaila Avant-garde.
  • Science News’ Nikk Ogasa reports on how an investigation into the process that generates the eye-catching fractal patterns in Romanesco cauliflower has identified a set of genetic switches responsible for the patterns – switches that can be manipulated to produce fractal growth patterns in other plant species (H/T @CMichaelGibson).
  • Well, maybe shallow breaths for this one: North Carolina State University is the proud possessor of a titan arum, better known as the corpse flower due to its penetrating aroma of rotten meat, and the NCSU plant is entering a new bloom cycle. This plant blooms only rarely, and NCSU is marking the occasion with a guess-the-date contest, the reward for which is a “corpse flower inspired prize pack.”


  • “Our candidate completed a phone interview with Curious Thing. She first did a regular job interview and received an 8.5 out of 9 for English competency. In a second try, the automated interviewer asked the same questions, and she responded to each by reading the Wikipedia entry for psychometrics in German….Yet Curious Thing awarded her a 6 out of 9 for English competency. She completed the interview again and received the same score.” Ich glaub mich knutscht ein Elch! (Roughly: “I think I’m being kissed by a moose!”) MIT Technology Review’s Sheridan Wall and Hilke Schellmann report on a test of automated job interview software that revealed a, um, startling degree of linguistic flexibility on the part of the AI.
  • “The researchers found that a CNN [convolutional neural network] trained on many examples of these patterns could distinguish ‘same’ from ‘different’ with up to 75% accuracy when shown new examples from the SVRT image set. But modifying the shapes in two superficial ways — making them larger, or placing them farther apart from each other — made the CNNs’ accuracy go ‘down, down, down,’ Ricci said. The researchers concluded that the neural networks were still fixated on features, instead of learning the relational concept of ‘sameness.’” Quanta’s John Pavlus explores an arena of machine learning research devoted to recognizing and differentiating shapes – including why sophisticated AI systems find it so difficult to accomplish tasks that are virtually effortless for humans.
  • “Randomized controlled trials have not only been the foundation of the advancement of medicine, they have also prevented countless potential disasters — the release of drugs that could have killed us. Such trials could do the same for AI. And if we were to join AI’s knack to recognize correlations with the ability of randomized controlled trials to help us infer causation, we would stand a much better chance of developing both a more powerful and a more ethical AI.” An opinion article at Harvard Business Review by AI ethics expert Carissa Véliz makes a case for subjecting commercial or government AI systems to the kinds of scrutiny the FDA applies to medical evidence – including a demonstration of safety and efficacy in a randomized controlled trial.
    A go set, showing black and white round pieces on 17x17 grid board as part of a game in progress. Image credit: Goban1/Wikipedia
  • “The ‘general’ in artificial general intelligence is not characterized by the number of different problems it can solve, but by the ability to solve many types of problems. A general intelligence agent must be able to autonomously formulate its own representations….We cannot achieve general intelligence until we can remove the dependency on humans to structure problems. Reinforcement learning, as a selective process, cannot do it.” In a detailed essay at TechTalks, Herbert Roitblat critiques a publication by a team from Google’s DeepMind project that advances the thesis that maximizing reward is the lynchpin of developing artificial general intelligence (H/T @LofredM).
  • “I worry that, by adopting the trappings of reproducibility, poor-quality work can look as if it has engaged in best practices. The problem is that sloppy work is driven by a scientific culture that overemphasizes exciting findings. When funders and journals reward showy claims at the expense of rigorous methods and reproducible results, reforms to change practice could become self-defeating.” A Nature viewpoint article by former Arnold Foundation Vice President Stuart Buck considers recent initiatives aimed at fostering transparency and reproducibility in science, and warns against efforts that fall short on substantive rigor.
  • “Lawmakers, it is said, don’t understand technology well enough to regulate it. They are too old. They are out of touch. They have disinvested from staff and other experts that could help them understand it. And while all those criticisms may be true, why should we expect our lawmakers to become individual experts on every challenge facing society?” A new report from Data & Society (“Assembling Accountability: Algorithmic Impact Assessment for the Public Interest”) addresses the need for a broad-based, consensus-oriented approach to evaluating the societal impact of algorithmic technologies – one that goes beyond performative politics and embraces input from experts and the persons and groups most likely to be affected.
  • “…police departments have their own resources for tracking and surveilling residents, regardless of what Zencity does or doesn't permit on its own platform. Giving police the ability to monitor public discussions critical of policing is alarming to many privacy groups. Police across the US have used a variety of software over the years to scan social media, often scrutinizing groups tied to police reform and opposing surveillance.” At Wired, Sidney Fussell takes a close look at an AI application that law enforcement departments are using to sift social media on a large scale – and the accompanying worries about privacy and civil rights.


  • “You could say the trajectory of American health care before, during, and after the pandemic is like that of an individual vulnerable patient: It was sicker to begin with, hit hard by Covid-19, and will be dealing with the lingering effects for a long time.” An article by Vox’s Dylan Scott examines the ways that the COVID pandemic has further exacerbated existing challenges – some of them revealed in recent, pre-pandemic drops in average life expectancy – in the US healthcare system.
  • “Some research centres say that the urgency of the pandemic forced them to accelerate their procedures in ways that will carry over to future trials, regardless of whether changes to official guidelines stay in place…But some negative impacts could also linger. Blanke points to a survey showing that about 20% of cancer survivors are less likely to enrol in a clinical trial than they were before the pandemic.” At Nature, Heidi Ledford examines what happens now that the surge of COVID-related research is beginning to recede a bit – and the future implications for clinical research in a post-COVID world.
  • “During the pandemic, scientists around the globe switched gears to find the answers the world needed. They rapidly solved protein structures, tracked viral genomics, repurposed drugs and developed vaccines, apps and behaviour-change strategies. Our warming world will cause even more disruption, but the research response is too little, too removed and too theoretical. There needs to be a broader, open shift to apply science to local climate adaptation.” A Nature perspective by Alice C. Hill explores lessons that the COVID epidemic might have for efforts aimed at responding to the disruptions of climate change.
    Shoulders and head of a model skeleton posed with hand to chin as if in thought. Image credit: Mathew Schwartz/Unsplash
  • A perspective article by Platt, Simon, and Hernandez published in the New England Journal of Medicine explores the benefits and challenges of embedding pragmatic clinical research within health systems to gather data from “real-world” environments.
  • “Biogen and the FDA described the Aduhelm label update as a clarification meant to better reflect data from clinical trials. But changing the label so soon will be seen as the FDA yielding to outside criticism that the drug’s approval – just one month ago – was overly permissive.” STAT News’ Adam Feuerstein reports (log-in required) on the FDA’s startling reversal on the indication for the newly approved Alzheimer therapy aducanumab (marketed as Aduhelm).
  • “Roopkund’s strangeness unnerves even professionals. In the 1950s, one explorer described the site to an Indian radio station as a ‘ghastly scene that made us catch our breath.’ And for decades, many scholars have tried to figure out who the men and women at Roopkund were and when they died.” It sounds like a scenario from an outré potboiler: at intervals that stretch to many centuries, groups of travelers and pilgrims, some from as far away as the Mediterranean basin, convene at a high Himalayan lake in India near the Nepal border, and die en masse, leaving behind a mysterious collection of skeletons. National Geographic’s Kristin Romey reports on efforts by scientists and scholars to understand the puzzle of Lake Roopkund.
  • “In rural America, it’s increasingly difficult for ambulance services to respond to emergencies like Greyn’s. One factor is that emergency medical services are struggling to find young volunteers to replace retiring EMTs. Another is a growing financial crisis among rural volunteer EMS agencies: A third of them are at risk because they can’t cover their operating costs.” A story at Kaiser Health News by Aaron Bolton illuminates a growing problem for rural communities, many of which have seen local hospitals close amid waves of consolidation: the ambulance services that are ever more critical for transporting residents to urgent medical care are becoming more expensive to maintain, and the pool of volunteers who staff those services is aging – with too few new volunteers replacing them.
  • “It’s not just that numerous temperature records were broken, van Oldenborgh said. It’s that the observed temperatures were so far outside of historical records, breaking those records by as much as 5 degrees C in many places — and a full month before usual peak temperatures for the region. The observations were also several degrees higher than the upper temperature limits predicted by most climate simulations for the heat waves, even taking global warming into account.” A sobering article by Carolyn Gramling at Science News underscores the fact that the recent extreme temperatures experienced across parts of the Pacific Northwest were not only worse than previous records, they were in some cases worse than what experts had predicted was possible.
  • “…within the scientific community, it is the argument over the amyloid hypothesis that has set off some of the biggest fireworks and could have a sweeping impact on the future of Alzheimer’s treatment. There is consensus that the buildup of amyloid beta is a hallmark of Alzheimer’s, which robs people of their memory and ability to do everyday tasks. But to some, logic and science dictate that getting rid of the amyloid clumps is critical. To others, that notion is a costly distraction.” An article by the Washington Post’s Laurie McGinley examines how the recent (and controversial) FDA decision to approve the Alzheimer therapy aducanumab (Aduhelm) has sparked a new blaze of debate over the merits of the “amyloid hypothesis” of Alzheimer disease.
  • “Distance does not predict the spread of COVID-19 when students/staff/teachers are masked…Masking is adequate to prevent within-school COVID-19 transmission, with no difference between schools requiring greater than 3 feet of distance between students compared to those requiring less than 3 feet. Distance did not predict infection.” The ABC Science Collaborative, a joint effort led by the Duke and University of North Carolina medical schools that advises on school safety amid the COVID pandemic, has issued a new report that gives high marks to North Carolina schools for preventing COVID transmission. Among the report’s key findings is that while masking has proven effective at preventing disease spread, social distancing (in the presence of widespread masking) does not seem to have much effect.


  • “The real trouble with preprints — which is, funnily enough, also the real trouble with peer-reviewed research — is how those studies are promoted and written about on social media and by the press, experts told me.” An article by 538 science reporter Maggie Koerth takes a nuanced look at how the practice of publishing scientific preprints – papers that are made public prior to the completion of peer review and journal publication – is changing the dissemination of scientific findings as the pressures of the COVID era have sent the phenomenon into overdrive.
  • “…media policies have been a bitter source of conflict at hospitals over the past year, as physicians, nurses, and other health care workers around the country have been fired or disciplined for publicly speaking or posting about what they saw as dangerously inadequate Covid-19 safety precautions. These fights also reflect growing tension between health care workers, including physicians, and the increasingly large, profit-oriented companies that employ them.” At STAT News, Harris Meyer reports on the outcome of a legal broil that emerged during the COVID pandemic about what hospital and health system employees can say about their employers’ practices.
    Close up photograph of an old-fashioned vocal microphone. Image credit: Matt Botsford/Unsplash
  • “Black In X leaders tell Nature that they are proud of how their collective efforts have helped to amplify the voices of Black scientists, but there is still much work to be done to dismantle oppression in science — work that requires direct action by institutions. ‘The onus is not on us to fix racism in the academy,’ says Samantha Theresa Mensah, a materials chemist at the University of California, Los Angeles, and co-founder of #BlackInChem.” Nature’s Ariana Remmel looks back on a year in which “Black in X” campaigns attempted to challenge and raise awareness of systemic racism in academic fields.
  • “In the medium and long term, we need more than piecemeal tweaks employed in the moment as problems become identified. We need better active defence measures against propaganda and systematic, transparent overhauls of our current social media platforms. We also need new social media platforms and new companies — designed from the outset with democracy and human rights in mind — instead of continuing with a system in which the incumbents make only piecemeal changes while still maintaining an overwhelming focus on selling ads, whatever the cost to society.” In an essay published at Center for International Governance Innovation’s website, Samuel Woolley lays out some concrete steps – short, medium, and long-term – for combating the growing influence of digital propaganda and misinformation.
  • “Since Friday, at least six scientists have resigned positions as associate or section editors with Vaccines, including Florian Krammer, a virologist at the Icahn School of Medicine at Mount Sinai, and Katie Ewer, an immunologist at the Jenner Institute at the University of Oxford who was on the team that developed the Oxford-AstraZeneca COVID-19 vaccine. Their resignations were first reported by Retraction Watch.” Last week, Science’s Meredith Wadman reported on a gathering furor – one that included the resignations of editorial board members in protest – over the publication of a widely criticized study that made assertions about the risk/benefit ratio of COVID vaccination that experts considered fundamentally flawed. Since this story broke, Retraction Watch reports that the institution of one of the authors has severed its ties with the researcher.
  • “The idea of inoculating people against false or misleading information is simple. If you show people examples of misinformation, they will be better equipped to spot it and question it. Much like vaccines train your immune response against a virus, knowing more about misinformation can help you dismiss it when you see it.” At First Draft News, Laura Garcia and Tommy Shane offer a primer in “prebunking” – a method that aims to prevent the spread of misinformation up front instead of merely (and less effectually) responding to it.
  • Retraction Watch reports that the Elsevier cardiology imprint Atherosclerosis is taking quick action to address concerns about irregularities with a clutch of relatively frequently cited papers, all by the same group of authors.


  • “State and national LGBTQ advocates started sounding the alarm in June, when the language was introduced, saying that it will prevent LGBTQ people from accessing the health care they need. With this newly enacted language in place, a medical provider could refuse to prescribe PrEP to an LGBTQ patient looking to reduce their risk of contracting HIV, or refuse to provide gender-affirming care to trans and nonbinary patients, or puberty blockers to transgender minors.” Rolling Stone’s Hannah Murphy reports on the recent passage (as part of a budget amendment) of a measure that would permit Ohio healthcare workers to refuse to administer medical treatments on the basis of “moral, ethical, or religious beliefs.”
  • “Business leaders frequently proclaim that ‘people are our most important resource.’ Yet those who are resistant to permitting telework are not living by that principle. Instead, they’re doing what they feel comfortable with, even if it devastates employee morale, engagement and productivity, and seriously undercuts retention and recruitment, as well as harming diversity and inclusion. In the end, their behavior is a major threat to the bottom line.” A Scientific American article by Gleb Tsipursky argues that some of the corporate decision-making surrounding workers’ return (or not) to offices may be suffering from the effects of entrenched psychological biases.
  • “Medicare Part B pays for physician-administered drugs that are used to treat some of the most serious and debilitating illnesses, including cancer and Alzheimer’s. In 2018, the Medicare program and its beneficiaries spent roughly $35 billion on drugs paid through Part B; from 2009 through 2018, Part B drug spending grew at an average annual rate of 12 percent. The program incentives are perverse: They reward physicians for prescribing the costliest drugs and offer few brakes on the prices of drugs used to treat Medicare beneficiaries.” A Health Affairs blog post by Conti and colleagues proposes some ideas for reforming Medicare Part B in ways that will help ameliorate rising drug costs.
  • “Two related questions dominate the discussion: to what extent should neuroimaging researchers look for incidental findings, and what should be disclosed to participants when an incidental finding is discovered…. We argue that researchers have an obligation to look for and disclose incidental findings to participants only insofar as doing so is required by distributive justice.” A paper by Graham and colleagues published last week in the Journal of Law, Medicine & Ethics delves into ethical considerations governing how physician-researchers disclose to patients what are called “incidental findings” during the course of performing scans or procedures mandated by a research protocol (H/T @charlesweijer).