On talking about #intuition

Introduction to this post:

Today I had a brief video-chat with someone positively predisposed to the idea of #intuition. He even saw it as bordering on the mystical. He was Indian. Indians love #intuition, it’s true. But #it-#tech Indians have caveats they all seem to share. This is something I have seen before: a real deep trust in human #intuition’s capabilities, but a real distrust in any chance of ever validating it usefully.

This man is also involved professionally in #it-#tech. When I gave him four examples of how not all #tech had chosen to diminish human beings in the field of non-traditional #datasets, he was still unconvinced.

The four templates we should look to when validating #intuition:

Example 1: the #film-#tech industry, from its beginnings over a hundred years ago, has chosen almost always to amplify and enhance existing human abilities: more voice with a microphone; keener vision with a camera; greater expressiveness with the language of the close-up. And in so doing it has made billions, perhaps trillions, across the paradigmatic century of its total cultural dominance.

Example 2: in my younger years video was not admissible evidence in the #criminaljustice system of my homeland. Now it is. What changed, to put into the hands of #lawenforcement and #justice’s stakeholders and subjects a tool that eliminates procedural waste so dramatically? We didn’t change any #justice system: we just introduced new tools to validate video evidence, so that the hidden knife in a real-life holdup could be proven to have been used via its validated electronic cousin.

Example 3: the detective who just knows that someone is lying in an interrogation may be wrong too, on occasion; but often they are all too accurate. Yet it then takes due process months, maybe years, to arrive at the same conclusion. What if we could validate such hunches, not prove them right but decide definitively (as the #video example above now allows us to do much more speedily) whether they are in truth MAYBE wrong but ALSO maybe right, so that this detective’s #hunch could bring about the most adequate conviction (or release)?

Example 4: I then suggested to my interlocutor that we should imagine the next 9/11 before it strikes us again. Here, I suggest we learn how to reverse- or forward-engineer bad human thought, so as to stop it in its tracks, with the most #creativecrimefighting you could conceive of:

crimehunch.com/terror

But not the “when” or “who” of what is already being planned out: in these cases, machine automation operates really competently on the basis of existent #lawenforcement and #nationalsecurity #it-#tech data-gathering processes …

Rather, I mean here the “what” and “how” of an awfully #creativecriminality. And I say this because 9/11 was a case where assiduous machines, used conscientiously and in all good faith by humans, were roundly beaten by horrible humans who used machines as extensions of themselves terrifyingly well: a case, therefore, of conventional #it-#tech choices and strategies simply failing to support existing habits of #creativecrimefighting (detectives can already be immensely creative in teasing out narratives that explain otherwise insoluble crimes), because those choices have never, since time immemorial, cared to foreground and upskill human #intuition.

What happened next and, maybe, why:

When I said to my interlocutor that these four examples surely served as robust precedents and templates for proceeding to validate #intuition and #crimehunch insights just as deeply, as well as to an equally efficient end … well, this was when he veered back to talking again of #intuition’s impenetrable workings. “Yeah,” he was saying, “intuiting is a great process … but don’t dare to untangle it.”


And it’s funny how those who work in an industry (that is, #it-#tech) where the richest of its members are incredibly wealthy on the back of their particular, and often mostly privately privileged, visions of how the future must become … well, how these same wealthy individuals then find themselves incapable of conceding that such a profoundly value-adding activity for them should have its own wider validation systems for us all. Why? Well. In order that EVERYONE who could care to might acquire a distributed delivery of similar levels of genius-like thinking: what I have in fact called the “predictable delivery of unpredictable thinking”.

platformgenesis.com

How I would, then, most like us to proceed:

I’d like us to create software, wearables, firmware and hardware environments where not only a select few can enjoy being geniuses, but where we all have the opportunity to be upskilled and enhanced into becoming value-adding, natively intuition-based thinkers and creators:

complexifylab.com | sverige2.earth/canvas


One small and hugely practical example:

Attached below is just one small application we might develop, using existing architectures (not the particular ones I think more appropriate for truly deep #intuitionvalidation, where we conflate admin/user in one #datasubject), with a proposed 100-day roadmap to demonstrate that the beautiful insight I had more than a year ago is actually, honestly, spot-on:

1. That #intuition, #arationality, #highleveldomainexpertise, #thinkingwithoutthinking, and #gutfeeling are potential #datasets as competent as #video suddenly became when we finally believed its validation to be a real deliverable.

2. That all the above all-very-human ways of processing special #datasets actually contain zero #emotion, and even less of the #emotive, when it’s their processes we’re dealing with. And that when they do EXPRESS themselves emotionally, it’s out of the utter frustration which the driver and #datasubject of such #intuitive processes suffers as a consequence of the fact that no one at all, but NO ONE, in #it-#tech cares to consider #intuition and its relatives as #datasets worthy of their software and platform attentions.

So out of frustration, I say … but never out of the intrinsic nature of such #intuitive patterns of collecting #data and extracting insights: patterns which people like that detective I described earlier sincerely believe in, when driving the most mission-critical #publicsafety operations of all.

secrecy.plus/fire


why don’t people who love advocating machine progress find it easy to advocate analogous processes of human progress?

it’s a long title, but it’s a big subject.

over the years, since i started proposing that we see intuitive thinking as a logical dataset we should spend a lot more money on capturing, verifying and validating in systemic, human-empowering, inside-out ways, i’ve spoken to a lot of technologists.

almost without exception (the one exception being, tbh, just this last wednesday, when i was at an aws-organised event in stockholm), ALL software engineers and imagineers of this kind have shown fabulous and rightful enthusiasm for the demonstrable machine progress we’ve historically witnessed since humanity first began extending itself via tools, and yet have been absolutely resistant, sometimes to the point of rudeness, to the idea that we might move human goalposts in equal measure: that is, decide to promote the 90 percent of the human brain most of us are still apparently unable to take advantage of.

one super-interesting aws person i spoke to on wednesday, on and off for most of the evening in fact, told me at one point that the human brain uses only around 40 watts to do all the amazing things that practically every example of it to have populated this rock since human history began has clearly been able to deliver. compare and contrast this with the megawatts needed to run a data centre which, even now, can only approach human creative capabilities.
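to put those two numbers side by side, here is a back-of-the-envelope sketch: the ~40 W figure is the one quoted in that conversation, while the 20 MW data-centre figure is a purely illustrative assumption of mine.

```python
# back-of-the-envelope comparison of brain vs data-centre power draw.
# the ~40 W brain figure is the one quoted in the conversation above;
# the 20 MW data-centre figure is an assumed, purely illustrative number.

brain_watts = 40
data_centre_watts = 20_000_000  # assume a 20 MW facility

ratio = data_centre_watts / brain_watts
print(f"one such data centre draws the power of roughly {ratio:,.0f} human brains")
# -> one such data centre draws the power of roughly 500,000 human brains
```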

but even on wednesday at the aws event, tbh, techie people were unable to show as deep an enthusiasm for progressing humans in the way i would advocate: not within a single lifetime, the only goalposts we have been encouraged to assume we can move, but intergenerationally, which is what i am increasingly proposing.

that is, actually create a tech philosophy which mimics what film and movie tech have done for over a hundred years: make humans more important to all industrial processes through dynamics of industrialisation, instead of ensuring we are less substantial and significant through procedures of automation, obviously designed to reduce our future-present relevance.

because when you hear proponents of generative ai, of any ai, the excitement is palpable: “look what they can now do: write school and university essays that look academically rigorous.”

or, the latest one, write code from just a verbal instruction.

what they don’t ask is whether it was a task which human beings should have been asked to do in the first place.

or, more pointedly, whether the human beings who did do it competently should have been remunerated to the extreme levels they historically have been, for carrying it out in ways that (privately speaking, admit it!) became so easy for them to charge exorbitantly for.

in my own auto-ethnographic case, i always got lower marks in my education than my brain indicated i deserved. my latest master’s was in international criminal justice, during the 2016-2017 academic year in the uk. i always assumed i was lazy. you see, i used a process which wasn’t academically orthodox: i’d create, through my brain’s tangential procedures, a brand new idea (new for me, anyway), and only then proceed to read the relevant literature … if, that is, it existed. back to front. altogether. and marked down, all the time.

and in the light of chatgpt’s achievements, i also begin to wonder: because this kind of tech, to me, is nothing more than incredibly deepened search engines. but weren’t the humans who did such jobs also “only” this? really, only this.

and so people who scored well in analogous manual activities were therefore good not at creating new worlds with their academia and coding and software development but, rather, capable of little more than grandiosely integrating essentially tech-informed, step-by-step approaches into the otherwise naturally ingenious and much more multi-layered human mind.

and so these kinds of students used procedures which were far more appropriate to it-tech environments, and thus scored highly in such education systems.

when truly we should have long ago considered such procedures an absolute anathema to all that COULD make human thought magnificent.

i mean … think of the aforementioned 90 percent of the brain whose employment we may still not manage to optimise. and then consider whether the creators of a software or tech platform would tolerate not using 90 percent of its monetising abilities.

really, it’s this: that is, my experience with technologists of all kinds who work in it-tech, where automation is absolute king. (and i say “king” advisedly.) they love telling the world how their latest robot, or whatever, will soon be indistinguishable from real human beings in how it looks, speaks, moves and interacts more widely. but why?

the technical achievement is clear. the monetisation opportunities of convincing solitary individuals they need robotic company in place of other human beings are also manifest. but the “why” of advocating machine progress so strongly while remaining, simultaneously, UTTERLY INCAPABLE of showing the same levels of enthusiasm for creating environments and thinking-spaces (as i have been suggesting for five or more years) that make intergenerational human advancement possible with the support, and NOT the domination, of tech … that is, as per what movies and films have delivered for humans for that hundred years or so, and NOT as per the relationship between human beings and vast swathes of it-land since, say, the 1950s … well, this is surely difficult for anyone to understand and explain. unless, of course, we really are talking womb envy: “i can’t bring a human being into the world as completely as a woman can, so instead i’ll make machines that do what humans do, only allegedly better.”

🙂

wdyt?

any truth in any of the above?

why do the boys in tech-land get so enthusiastic about the latest technologies that overtake apparently deep human capabilities — and yet reject so fervently and consistently the possibility that humans might also be able to use equal but repurposed tech to make humans more? but as humans?

on what REALLY floats my boat, though … repurposing tech to make humans … well … more so

introduction:

most #it #tech is primarily about identifying how to remove human beings from processes so the organisations which achieve these goals can more easily generate revenues from what remains: their machines. but it’s not the only way.

post:

what i love about #movie and #film #tech is its capacity to make humans bigger humans: essentially, much more human. the microphone, a bigger voice. the camera, a finer eye. even the stage, a more concise bearing of witness.

meantime, #it #tech has striven always to remove humans from the processes in question so that dominant interests may monetise more easily what remains.

take a look at this scene, whose technological and emotional achievements mainly involved the simple repurposing (devised and driven, i believe, by the film director alfred hitchcock) of existing filmic strategies, by combining two separate procedures into just one flow:

a simultaneous zoom and dolly movement, which served to challenge the physical world through physical means. no animation; no cgi. something real to bear witness to something utterly real.
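for the curious, a minimal sketch of the geometry behind the effect (my own illustration, not anything from the scene itself): under a simple pinhole-camera model, a subject’s on-screen size is roughly proportional to focal length divided by distance, so dollying and zooming in lockstep holds the subject’s framing while the background perspective stretches around them. the reference distance and focal length below are arbitrary example numbers.

```python
# minimal sketch of the dolly-zoom ("vertigo effect") geometry, assuming a
# simple pinhole-camera model: on-screen subject size is proportional to
# focal_length / distance, so holding that ratio constant holds the framing
# while the background perspective stretches. reference numbers are arbitrary.

def focal_length_for_constant_framing(distance_m: float,
                                      ref_distance_m: float = 5.0,
                                      ref_focal_mm: float = 50.0) -> float:
    """return the focal length (mm) that keeps the subject the same size
    on screen as the camera dollies out to distance_m."""
    return ref_focal_mm * (distance_m / ref_distance_m)

# as the camera pulls back from 5 m to 10 m, the lens zooms from 50 mm to
# 100 mm: the subject stays the same size while the space behind them yawns.
for d in (5.0, 7.5, 10.0):
    f = focal_length_for_constant_framing(d)
    print(f"distance {d:4.1f} m -> focal length {f:5.1f} mm")
```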

repurposing #tech is really what floats my boat: low risk; high value-add; countering current, usually lazy, hegemonic thinking in a sector … that is, choosing ultimately to deliver on what #innovation really should invoke, the cutting edge taken literally: “to cut deeply into …”

that is, what i mean is REAL cutting-edge thinking. as, for example, i’ve been proposing for a while re #ai here:

it’s beautiful, is this idea of repurposing. it’s beautiful because it foregrounds little details. and through their accumulation, kept over time in single planes of thought, such processes of repurposing may achieve the quantum leaps of faith i am asking us all to return to believing we human beings are fundamentally capable of; are fundamentally suited to:

our due value-add as human beings to this planet we should learn better to share with other species and with future times.

a philosophy of the best.

is all.