About the Institute

The Hybrid Vigor Institute is dedicated to rigorous critical thinking and the establishment of better methods for understanding and solving society’s most difficult problems. Our particular emphasis is on cross-sector and collaborative approaches; we seek out experts and stakeholders from a range of fields for their perspectives or to work together toward common goals.



hybridvigor.net houses the work of critical thinkers, researchers and practitioners who conduct cross-sector and cross-disciplinary explorations and collaborations.









Read Intervention by Denise Caruso, Executive Director of the Hybrid Vigor Institute. Silver Award Winner, 2007 Independent Publisher Book Awards; Best Business Books 2007, Strategy+Business magazine.

'21st Century Risk' Archive


by ~ September 29, 2008

A story in today’s New York Times discusses how the media has struggled to explain the financial crisis to audiences. Admittedly, many industry experts are dumbfounded by the events of the last few weeks. Where the media has faltered, my good friend and former colleague Bob Blakley has succeeded with his down-to-earth post on “Wall Street’s Governance and Risk Management Crisis.” Thanks, Bob!

I particularly liked Bob’s phrasing of the “collective margin call” on the banks. It indicates that part of what’s happened is a failure in coordination: banks have cash on hand as long as only a few percent of their patrons want to withdraw their cash. This echoes a theme of a post I wrote about the credit crunch back in June. Here’s an excerpt from that post:

Clearly, a great crime has been committed. An entire nation has been robbed. World markets are shaken. But who’s responsible? Nobody. And everybody. The insidious nature of this crime is that we all collaborated to commit it and without a master plan. Can such collective action crimes be avoided? Or is the commons forever doomed to be the scene of tragedy?

Bob’s comments on risk management are also strongly reminiscent of Denise’s work on risk management in the biotech industry. Bob writes:

Risk management failures created the current financial crisis, and risk management failures have also created the personal information disclosure crisis, and the malware crisis, and a bunch of other problems which are not yet crises. We do risk management poorly in all disciplines. We do it poorly for a bunch of reasons: executives don’t understand their own businesses well enough to understand their risks; risk managers don’t know how to talk to executives about risk; incentives favor creating long-term risks in order to accrue short-term profits; the list goes on and on.

Denise’s main assertion in her book, “Intervention: Confronting the Real Risks of Genetic Engineering and Life on a Biotech Planet,” is that the biotech industry is similarly awash in poorly managed risk. Genetic engineering is another impending crisis; once it reaches crisis levels, people will be just as dumbfounded to explain it.

As an avalanche of new laws and regulations hit Wall Street over the next few years, I fear that we’ll lose sight of the most important learning to take away from this disaster. Again, Bob Blakley explains:

A final thought.  The financial crisis exists because of a failure of risk management. There will be a temptation to fix the problem using compliance mandates. Compliance mandates, however, don’t fix risk management problems. All they do is prevent specific risk management failures from happening over and over again. Organizations whose risk management is weak will find new ways to fail – and these new ways will circumvent compliance regulations. The right way to fix a risk management problem is to do a better job of risk management.


by ~ April 18, 2008

I’ve just finished reading Denise Caruso’s book, Intervention: Confronting the Real Risks of Genetic Engineering and Life on a Biotech Planet. I absolutely love it! As the book’s subtitle suggests, Denise recounts the tragedy of how hubris in the biotech industry — compounded by sub-standard risk assessment methods used by government regulators — has blinded us to the potentially catastrophic consequences of releasing billions of living, reproducing, evolving man-made organisms into the environment, the long-term effects of which are completely unknown.

But Intervention delivers a much broader message, about how the human propensity for hamartia isn’t miraculously expunged by mathematics, statistics, or the scientific method.

In proving her point about assessing the risks of genetic engineering, Denise calls into question the seemingly unassailable position of science in our culture. The book suggests we desperately need “a new kind of science” (to borrow Stephen Wolfram’s phrase) — one that accounts for the nature of the beings (i.e., us) who are wielding its increasingly powerful tools. Try as we might, whatever models we create to describe reality inescapably say much more about human beings than they do about any objective reality. In the book, Denise exposes our lapses in rationality due to cognitive, social, and technological realities. Such lapses are everywhere in the areas I cover (technology, social trust, and privacy).

So while reading the book, I decided to present my views on these issues in a blog post. Admittedly, going into some depth on Denise’s book on the Hybrid Vigor blog (which is Denise’s creation) seems almost self-congratulatory. But I think the larger themes in Intervention are relevant to most of the really difficult problems we’re trying to solve globally today, and understanding these issues will help focus our discussion at Hybrid Vigor. Continue reading »


by ~ February 27, 2008

Last week, Melissa Lafsky cited some statistics on the rampant growth of click fraud and then punctuated the absurdity of the situation by questioning, rhetorically, whether everyone on the Internet is now a criminal for clicking with unlawful intent. Just who made it a fraud to click on a link, anyway? But to people whose immense fortunes are tied to sorting out honest clicks from false clicks, click fraud isn’t absurd at all. So after a flurry of comments about the piece, Lafsky clarified her position in a follow-on post.

Happily, Google’s got click interpretation down to a science, so we’re all off the hook (although the algorithm apparently still struggles with interpreting wit, sarcasm, irony, rhetoric, and French). So now I’m anxiously anticipating the beta of Google Intentions: an app for searching everyone’s click streams, categorized by intent!


by ~ February 27, 2008

The latest (Spring 2008) issue of Strategy+Business magazine is on the newsstand and on the web — and in it, my piece whacking cost-benefit analysis, the bane of innovation and sane regulatory policy. I’m already getting letters …

(Free) registration is required to read the article online.


by ~ February 20, 2008

Gerry Gebel of Burton Group wrote an excellent post last week called “Moving Beyond Command and Control.” It’s the kind of thing I’d like to have written. It’s the kind of post everyone who cares about Internet security should read.

Gerry’s referring to the prevailing style of computer security, in which an administrator creates IDs and manages access to the system. The phrase “command and control” comes from a militaristic style of management with centralized or hierarchical authority. There’s nothing inherently wrong with the command/control model; the issue is that it’s a horrible fit for Internet security, where authority is unavoidably distributed.

Here are some simple shibboleths to detect a person’s managerial orientation:

If you hear frequent repetition of the words identification card, identity assurance, encryption, rights management, access control, and policy …

BINGO! This person is a command and control disciple.

If instead you hear frequent repetition of words like reputation, reciprocity, empathy, signaling, collaborative action, recognition, shared experience, social interactions, ceremony, and connection …

Then they’re talking about social trust — and that person needs to SPEAK UP and start blogging about it!


by ~ February 18, 2008

Technologists have long admired the almighty algorithm: the piece of patentable code worth millions of dollars. While computer hardware has gone the way of commodity pricing, software and online services companies insist that consumers pay big bucks for use of their proprietary algorithms (when most consumers can’t even say “al-go-rhythm”), in the form of the software packages they buy, or use online for a fee.

But how much is your personal information worth? One woman, Raelyn Campbell, claims her information is worth $54 million. Campbell says she took her $2,000 computer for repairs at Best Buy more than six months ago and hasn’t seen it since. Best Buy offered to settle for about $2,000 to cover the lost hardware. But Campbell rightly points out that in losing her computer, Best Buy also lost her personal information, including account information for a variety of online sites.

Hopefully, Campbell stored only her own information on this computer, and not any HR information from her employer. Stolen or missing laptops are common types of data breaches, as attested to by the Privacy Rights Clearinghouse, and these cases are usually multi-million-dollar lawsuits. Curiously though, Campbell draws on a dry cleaning incident as precedent for her suit. In that case, a customer sued a dry cleaning establishment for losing a pair of pants (see the link to the Campbell story for more details). The $54 million lawsuit was eventually dismissed after costing the dry cleaner $100,000 in legal fees. Campbell, it seems, displays a keen sense of dramatic irony in attempting to take Best Buy to the cleaners for the same amount as the case of the purloined pants. But keep your shirt on — Campbell admits to using an inflated amount to get media attention. Nice move.

I’m sure that encryption vendors love this kind of story. And yes, encrypting our data is a good idea in theory. But it’s not particularly easy or convenient. It messes up indexes, so you can’t ever find stuff when you need to. Simply put, we’re a few ideas short of a solution to this problem.


by ~ February 15, 2008

The juxtaposition of two events in the last week exemplified the growing tension of social trust on the Internet. First, the OpenID Foundation announced the additions of Google, IBM, Microsoft, VeriSign, and Yahoo! to their board. A few days later, the New York Times reported on people’s frustrated attempts to delete their Facebook accounts.

It seems identity theft is officially passé: now you have to worry about “soft” identity theft by social sites that play keep-away with the information you provide. Thankfully, some users have reportedly succeeded in getting their accounts permanently excised from Facebook (for example, see this post on the 2,504 steps to closing your Facebook account).

But their Pyrrhic victories do little to stem the deluge of personally identifying information pouring into and being captured on the Internet.
For example, how do I delete my profile from Spock, when I didn’t even set it up in the first place? Can I instruct Google not to index information about me?

So, last week while technologists were building out the apparatus for connecting people’s information across sites, real people confronted an Internet that neither forgives nor forgets.

Of course, the OpenID folks are convinced that their approach—a decentralized, single sign-on system—will improve privacy by reducing the number of accounts people need. Control of one’s personal data is a tenet of the “user-centric identity” movement that OpenID represents. But OpenID is an identification system, not a trust system (in either the technical/cryptographic or the social sense), by the designers’ own admission. So while I’m encouraged to see an impressive list of tech companies working together on identification systems, it’s unfortunate that they’ve wholly missed the point. It’s not the ID system that needs fixing.

Sure, we’re all bugged at having to remember 57 passwords, but it’s a nuisance, not a betrayal of trust. The announcement I’d like to see is that the same list of companies is collaborating on an apparatus for improving social trust online. For those of you frustrated with the eternal stickiness of social sites, I recommend never using your actual identity—create a persona instead. Unfortunately, personas aren’t that easy to create and maintain at the moment, but it’s something we’re working on here at Hybrid Vigor.


by ~ February 8, 2008

The U.K.’s Parliamentary Office of Science and Technology (POST) functions something like the late lamented U.S. Office of Technology Assessment, killed off by Newt Gingrich back in the ’90s. They regularly publish brief but fairly comprehensive, interdisciplinary reports with cross-sector relevance on trends in science and technology.

POST recently published three POSTnotes entitled “Ecological Networks” [PDF], “Smart Metering of Electricity and Gas” [PDF] and “Autism” [PDF]. The first two POSTnotes for 2008 were on “smart” materials and systems, and synthetic biology.

You can subscribe to the POST reports yourself by sending an email to post@parliament.uk.

“Ecological Networks” considers the possible conservation benefits of ecological network implementation in the UK. Ecological networks are intended to maintain environmental processes and to help to conserve biodiversity where remnants of semi-natural habitat have become fragmented and isolated. Continue reading »


by ~ February 7, 2008

When Denise Caruso asked me to become a regular contributor to the Hybrid Vigor blog, I jumped at the opportunity. Since late 2006, I’d been developing ideas for building social trust and reducing fraud on the Internet by way of a blog I started at Burton Group.

That blog holds a lot of sway with Burton Group’s readership: mainly, IT professionals and software vendors. But after Denise brought some of our ideas to a wider audience with her article in the New York Times, it drew the attention of important new communities (see, for example, this interview with Emergent Chaos, this post from Jim Harper, and this one from Michael David Cobb Bowen).

I realized that any successful approach to building social trust online would require the attention of a broad-based, interdisciplinary community — a blending of ideas from social science, evolutionary biology, human factors, economics, mathematics and engineering. The technical design requires tempering by our best understanding of political science, psychology, philosophy and the law.
Continue reading »


by ~ October 24, 2007

The WELL, one of the oldest online communities still in existence, is hosting me as guest author for a two-week conversation in its ‘Inkwell’ book discussion topic about Intervention — and whatever topics come up as a result of talking about technology, innovation and risk. It’s been underway for several days now, and will continue until October 31st.

So far much of the conversation has been focused on deliberative processes for assessing risk, and we are just starting to wade into deeper waters with talk of the precautionary principle and whether or not Hillary could manage to re-start the Office of Technology Assessment without wrecking it with politics.

You don’t have to be a WELL member to read the conversation, but if you aren’t a member and want to start prodding me with some questions, just send an email to <inkwell@well.com> to have them added to the thread.

The host of the conversation is the redoubtable Jon Lebkowsky, a Texan whom I’ve known for many years from the technology world and who now writes a regular column for Worldchanging.com.