stop saying what #ai can do and start saying what needs to get done

introduction:

to not connect when something only unsatisfactorily connects is one of the most important skills the good human brain exhibits.

the capacity to forget for a while is a virtue of the highest human thinking.

to create a future NOT based on the past is what separates humans from our tools. (even as, sometimes, we ourselves are turned abusively into aspects of the latter.)

post:

so: #lean and startup-land solve one problem, and one problem only. the problem they solve is not what the customer is hurting most about. it’s not the pain-point in the customer journey, whatever tech dogma claims.

no.

it’s the pain on the journey on which both client and supplier begin to ride together. it’s the pain-point that the two of them can agree on as being where they must meet:

  • not necessarily what i as client most need sorting, but what i can afford right now according to your price list.
  • not even what you as supplier are objectively best suited to resolving but, rather, what you as supplier find it easiest to convince me i need, once you’ve identified where i am most prepared to first stump up some dough. (and with this, any dough ffs. just so long as it’s some.)
  • and finally, most definitely, in no way do such processes and spaces guarantee that the world’s most complex (note: not complicated) and, therefore, pressing problems will find their solutions even attended to, never mind provided, by such startup ecosystems, business mindsets, and wider ways of working.

speculative tech like #ai — which has supposedly been on the verge of beating humans since at least the 1980s (and probably for decades before) — always needs to convince us it’s just about there. the funding, for one thing, demands it.

it still isn’t. here are my reasons why:

1. until an #ai (let’s use this loose term for now) is able to get so frustrated with an insoluble problem that it ends up getting blind drunk … only to wake up in a haze with a solution perfectly formed …

2. until an #ai awakes from a beautiful dream with a fabulous new idea, subconsciously imagineered to perfection overnight … only, in the second of that awakening, to forget totally the detail — or even the concept itself — for six further months … and then for everything to come flooding back in even more astonishing form, finally to be recorded, scoped and amazingly implemented …

3. until an #ai may choose to NOT make connections or identify patterns or see relationships or deliver on finalities … because something doesn’t fit quite as right as it one day might …

4. until an #ai has a leap of faith based on data it INVENTED OUT OF THE FUTURE … because creation is the nature of the human brain … even when utterly utterly mindlessly terrifying:

until all the above and other things happen … well … our dearly beloved proponents of #ai (thus loosely described) will continue to argue that we can do everything on the basis of what’s already been patterned. and that the apex of human achievement involves ONLY copying what others have already done — just better: that is, faster, more accurately, for longer without tiring — essentially, an alpha male’s orgasm … no?

🙂

and thus the competent delivery of #ai and similar technologies only ever needs to aspire to this.

the new #ai unleashed onto the world recently — #chatgpt — is praised for writing school and university essays indistinguishable from a human’s. it’s praised for giving advice, crackpot or otherwise. it — and similar tech — receives plaudits for faking flemish masters’ painting styles. i haven’t seen it, but i can surely imagine that songs in the key of sheeran have already flooded the social networks of this planet, devised at the hands of this sort of code.

we need to stop this.

we need to stop saying what #ai can do.

we need to start asking two questions:

1. if #ai can do what a human being can do, is it actually an achievement of historical record that a machine can do exactly what another human being can already do? that is: why WANT to indistinguishably copy a flemish master? why not always prefer to discover the master — or mistress — deep and unrevealed inside your very own being?

because whilst copying is cool, it’s only cool if it leads to synthesis and final originality. after all, in the act of production we may uncover the output of creation. and then understand, perhaps, that inside each of our persons there might live geniuses of magnificent shine:

(equally, of course, we might ask — as people rightly have — whether, if a machine so easily reproduces a human activity such as writing an essay for university or school, our civilisation is focussed on the right testing regimes in the first place. but that’s for a quite different post.)

2. it’s bad enough to find ourselves needing to ask whether the things our tech can duplicate in us are things we would — even in the absence of such tech — still consider it societally desirable to carry on doing ourselves. it’s much worse to say that a whole tech ecosystem — the one structured around #lean, customer journeys, pain-points and the overbearing need to invoice a digital service or product asap (in short, startup-land as we have known and loved it for so long) — fails to solve the vast majority of remaining deep and complex world problems because, actually, its practitioners and owners have designed it specifically so it wouldn’t.

there’s much more money in planning a drip-feed of ten years of product releases than in creating a system which might recognise the real value of immensely, horrifyingly human #nonconformism: that is, its thinking processes, and the disadvantage we now face as we become progressively less capable of this sort of activity within the corporate-sanctioned, diluting dynamics of teamwork.

because whilst we in corporate-land choose to inhibit creative #nonconformists via our tech architectures and business structures, the putins of the world have clearly been doing — longitudinally — quite the opposite.

haven’t they?

so let’s stop and duly think whether a tech that can repeat what the majority of us humans do every day is worth the thunder and lightning its proponents will continue to generate. yes: it’s useful in many respects, of course. but we must stop asking what our tech can do and start, instead, to focus our funding on OUTCOMES that need DELIVERY, not BOTTOM LINES that must be DELIVERED.

otherwise, we get what we get: and so we have.

an example? just one? here’s one … a western defensive and attacking capability to identify and forestall future ukraines and, maybe now, as we are finally allowed to voice these things … neo-terrorist pandemics alongside … before they even become a glint in evil organisations’ eyes.

summary:

let’s just stop, shall we?

let’s stop being bad techies.

let’s stop fetishising the machines whose investments have cost us so much, and whose future ROIs we have planned out to the millimetre.

let’s be human, instead.

let’s say:

  • we can make money out of making humans more human, not less significant
  • we can make money out of prioritising solutions that cure rather than perpetuate
  • we can choose to focus on complex and spread the word that this won’t mean complicated
  • and we can finally argue that the various startup ecosystems we have become sooo used to — as they provided the only furniture with which we have lived our comfortable lives — do have their seriously disadvantageous downsides: whilst simultaneously (of course) solving some of the accumulatively littler things fabulously, equally they have served to diminish the big things gravely — especially as it becomes increasingly apparent that this has been a deliberate dynamic all along …

because just because you’ve invented a system which does lots of relatively small things massively — that is, automatedly — doesn’t mean you are anywhere near capable of, or prepared for, delivering the resolution of massive things pointedly.

and, after all, this is a difference we are currently choosing not to even discuss.

no?
