Abzu’s cookie policy: “Apologies” for our humanity.

At Abzu, we’re for human-centric tech. We’re not for control by external agents, so we’ll never ask for anything we don’t need.

When cookies first appeared as the progeny of “magic cookies”, they were seemingly innocuous packets of e-commerce data that stored a user’s partial transaction state on their computer. It wasn’t disclosed that you were playing a beneficial part in a much larger system, and you weren’t given a choice about accepting your role.
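(If you’re curious about the mechanics, they were, and still are, almost comically simple. Here’s a minimal sketch using Python’s standard library; the “cart” cookie and its value are invented for illustration, not quoted from any real shop.)

```python
# A minimal sketch of the original "partial transaction state" idea.
# The "cart" cookie and its value are invented for illustration.
from http.cookies import SimpleCookie

# Server side: tuck a scrap of the shopper's in-progress transaction
# into a Set-Cookie header for the browser to keep.
outgoing = SimpleCookie()
outgoing["cart"] = "sku42:qty1"
outgoing["cart"]["path"] = "/"
print(outgoing.output())          # Set-Cookie: cart=sku42:qty1; Path=/

# Browser side: every later request to that site quietly hands the
# packet back, whether or not you know it's in your pocket.
incoming = SimpleCookie()
incoming.load("cart=sku42:qty1")
print(incoming["cart"].value)     # sku42:qty1
```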

Imagine a stranger slipping a packet into your pocket. And then that stranger regularly slips their hand back into your pocket to check on it, because, unbeknownst to you, that packet has the power to gather and store all sorts of information about you.

For well over a year, some (ingenious) weirdos were doing this virtually unnoticed until the Financial Times reported on the phenomenon in 1996. Unnerving? Sure. Yummy? Nope.

Today those weirdos are a lot more obvious: I bump into them at least 100 times a day. What was supposed to transform the internet into a safer, more privacy-preserving place — making this activity conspicuous and consensual via cookie consent banners — is just exposing what an unpleasant place the internet can be.

There are no cookie consent banners on abzu.ai.

There are no cookie consent banners on our domain because we only use strictly necessary cookies and session cookies. Check out our privacy policy — it’s pretty straightforward.
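(And if you’re wondering what separates a session cookie from the tracking kind, the difference is mechanical: a session cookie carries no Expires or Max-Age attribute, so the browser throws it away when your session ends. A quick sketch follows; the cookie name is hypothetical, not a description of abzu.ai’s actual setup.)

```python
# Sketch of a session cookie: no "expires" and no "max-age", so it
# lives only until the browser session ends. "session_id" and its
# value are hypothetical, not abzu.ai's real configuration.
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session_id"] = "opaque-token"
cookie["session_id"]["secure"] = True     # sent over HTTPS only
cookie["session_id"]["httponly"] = True   # invisible to page scripts
print(cookie.output())
# Set-Cookie: session_id=opaque-token; Secure; HttpOnly
```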

Our cookie policy is simple: Start with a question, not with data.

Data is universally available. What is scarce is the foresight to synthesize information from your data. You’re constantly being mined and monitored across innumerable domains for every ilk of data, because starting with data is unscientifically easy. Asking the question first and building a model to test the hypothesis: now that’s the hard part.

In a vacuum of critical thinking by controllers — and until recently a vacuum of enforced policies — the equation reversed.

The direction of computation should have been: “What data do we need to answer our question?” But in an abundance of raw material (read: your personal data), controllers floated along a lazy river fed by tributaries of data collection and allowed the flow of data to define what to do next.
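The reversal is easy to see in miniature. Here’s a toy, question-first sketch in Python; the question, the fields, and the event are all hypothetical, an illustration of the principle rather than a real pipeline:

```python
# A toy sketch of "question first, data second". Every name here is
# hypothetical; this illustrates the principle, not a real pipeline.

QUESTION = "Does page-load time affect sign-up completion?"

# The question dictates the minimal fields we're allowed to collect.
REQUIRED_FIELDS = {"page_load_ms", "signed_up"}

def collect(event: dict) -> dict:
    """Keep only what the question needs; everything else stays unread."""
    return {k: v for k, v in event.items() if k in REQUIRED_FIELDS}

event = {
    "page_load_ms": 1840,
    "signed_up": False,
    "email": "x@example.com",     # not needed, never collected
    "location": "Copenhagen",     # not needed, never collected
}
print(collect(event))   # {'page_load_ms': 1840, 'signed_up': False}
```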

[Image: Star Trek’s Data rubbing his belly and patting his head. Caption: When data drives decisions.]

Data drove operations and decisions. When controllers let data tell them what to do next, data simply said it wanted more data.

But controllers forgot to consider that more data doesn’t imply correlation, let alone causation. It’s easy to assume or overestimate a link between variables. Keep adding personal information to the equation, and now it’s conclusion-by-conspiracy-wall, because no hypothesis and no critical thinking originated the function. And if it sounds like you’re talking crazy or jumping to conclusions? You probably just need more data.

I’ve been that controller: the marketing professional who rejected Occam’s razor and thoughtful planning. Instead I embraced unnecessary complexity and an overabundance of data simply because I could, and because users let me. Friendly interfaces and, frankly, conveniences in consumption journeys obscured the computational misdirection (data first, questions later) and the magnitude of what was going on.

Granted, some really interesting interrelated behaviors surfaced, because for all our impulsive and irrational behavior we are mostly predictable beings. But without a hypothesis to test, or any forethought, there is no promise that you’ll generalize correctly. And there are consequences to allowing companies such intimacy and accessibility into so many aspects of our lives — consequences to playing god with big systems.

Human-centric technology means giving up “God mode.”

We’ve been creating new tools for the same old urges. We’re still trying to play god, to take and hoard and monitor more than we should before we’ve divined a direction. Is anyone surprised that our inventions reflect our biases, avarice, and prejudices?

[Image: Hoban “Wash” Washburne from Serenity. Caption: “Likely crash and kill us all.”]

When we talk about making humane tech or human-centered AI, what we really mean is giving up “God mode”. We mean making decisions thoughtfully and responsibly.

Metering our activities doesn’t mean we throttle possibilities or become narrow-minded! It means that we become more of our best selves: communal rather than individual, explorers over conquerors, hunters instead of gatherers, creative in lieu of one-track-minded.

At Abzu, we’re for lean data and human-centric technology. We’re for self-organization. We’re not for control by external agents, which is why we’ll never ask for anything we don’t need.

We start with questions, and we answer through data, not by ineffectively digging through limitless data sets.

We listen to our users to get a sense of their needs. We conduct personalized interviews. We hand-hold, we white-glove, and we hug because we embrace our humanity. And we firmly believe we aren’t the crazy ones.
