“coffee-shop cctv hacked to gain intel with military value”: EXACTLY why our security needs different tech philosophies

https://www.theguardian.com/world/2023/apr/11/russian-hackers-target-security-cameras-inside-ukraine-coffee-shops

and these freedoms for us all. not just the inhibiting hierarchies enjoyed by those who own — in more ways than one — this thing we know as tech. and therefore our democracies.

mil williams, stockholm sweden, 12th april 2023

there is so much #darkfigure being delivered by people in tech, and our law-enforcement and security agencies have given up on developing systems which could counter such #neocrime:

crimehunch.com/neocrime

the agencies rely heavily on machines plus humans — in that order — because their tech partners are interested in the monetisation virtues of this order of priorities:

sverige2.earth/complexify


meantime, the bad hackers use humans plus machines — in this order — to creatively imagine, imagineer, and only then engineer new and covert ways of committing crimes that remain as invisible as possible for as long as possible.

and it’s not even that our agencies are criminal in the main (though some take advantage of #darkfigure extensively to bend, or sometimes go so far as to break, the law), but their ongoing inability to recognise the importance of humans over machines is negligence of a sort:


and even more so, the illegitimate criminal power of the human+machine workflows and deals over the agencies’ machine+human combos is utterly ignored, in the absence of truthful criteria re innovation and procurement. and yet it would be so easy to begin a process of repurposing existing technologies, with integrity, to ensure people are made bigger by machines and not diminished.

i’m sorry. but it has to be observed, painful though the recognition, once accepted as a given, will be.

because this next one has to be taken on the chin, if we are to improve our capacity to fight creative criminality in a collective future-present:

9/11 came about because horribly creative humans used machines as tools to kill other humans; and because the agencies failed to prevent it from happening, their tech partners having consistently recommended machine+human workflows.

not because they really believed this was true; that is, that human intuition was of lesser value than the incremental thinking engendered by machines.

no.

rather, because machines+humans make more money, more easily, than workflows that consist of humans+machines … that is, humans expanded and enhanced by machines.

mil williams, stockholm sweden, 12th april 2023

crimehunch.com/terror

and so the only way we can prevent such horrors in the future — particularly the invisible ones such as the recent us defense leaks due to bad hackers and web actors, some of which date back to october 2022 and have only just now been discovered, as well as the cctv hacking reported in the guardian newspaper article above — is to begin to reverse the order and purpose of tech.

this — what follows now — is what i suggest and advocate most firmly: humans always first, enhanced and expanded by tools whose primary rationale is no longer to monetise a tech partner into obscene levels of technological wealth, whatever the wider human cost, but instead to deliver, without exception, on making a safer and more secure, more legitimate, more socially responsible, and more honest world all round.

and these freedoms for us all. not just the inhibiting hierarchies enjoyed by those who own — in more ways than one — this thing we know as tech. and therefore our democracies:

sverige2.earth/complexify


as a tool of state, this is not life (says the “he” that is “me” in #sweden … where life BECOMES a tool of state)

but never again shall i salivate the evil of the unnecessarily violent. as a last resort … this is how life sometimes must conduct itself. as a tool of habitual state … this is not.

mil williams, johan & nyström coffee shop, stockholm sweden, 9th april 2023

it’s not all plain sailing. but then that’s not what life’s about.

but if i manage to stay here, this — the end — won’t in fact be one. it will be the best beginning i’ve ever managed. i spent seven years between the uk and ireland, trying to engineer a relationship between ireland and the uk. i failed.

now i say it out loud: not with joy but acceptance. acceptance that i failed in everything institutionally and personally related.

but not ideas-wise. not in respect of my increasing capacity to uncover them: like a pig and his beloved truffles. for me, ideas are truffles, waiting to be found; and they say pigs bear a close resemblance to humans. physiologically, for sure. maybe in other respects i am still unaware of, too.

all i can say is if a pig is good enough for george clooney, why not associate myself with the same?

🙂

so why here — and now?

because in a very brief period of time i see a society like none i have experienced in my life. there are cruel people here: but the society as a wider whole is striving not to legislate or legitimate state cruelty. and this i am defo not accustomed to back in my homeland.

so if i have to contribute to a tech which scales up basic government and regional administrative instincts, i want it to be in a place where, more manually, these instincts are sound. meantime, the triumvirate of evil exists in the uk: the conservative attachment to russian wealth and trump’s idiocies all in one. and all by now an all too well-established nouveau establishment of the horrifyingly, casually cruel.

one thing many don’t realise, and i still don’t fully understand: a military society can be a liberating one too. it all depends on the purpose to which you militarise — and on the genders out of which you compose your military.

during my whole time in the uk i was oppressed by outliers of a military which, tbh, needed very few outliers anyway to operate and impose such oppression with the necessary precision. look at the state of the london metropolitan police right now just to appreciate how ugly the uk has allowed itself to become. and that’s the first line: just the police.

this is why here, and why now. and if it’s not possible now and here, it will be somewhere else similar, and sometime then.

but never again shall i salivate the evil of the unnecessarily violent. as a last resort … this is how life sometimes must conduct itself. as a tool of habitual state … this is not.

how to combine three brains to fight the fire of creative criminality with the fire of a newly creative crimefighting

introduction:

this post contains thoughts from a fortnight’s thinking processes more or less; plus the content of a synthesising presentation which is the sum of years of thought-experimenting on my part. i’ll start with the presentation, which is now where i want us to go:

fighting creatively criminal fire with a newly creative crimefighting

i created the slide below for a presentation i was asked to submit to a european digital agency pitching process, by the uk organisation public. the submission didn’t prosper. the slide, however, is very very good:


the easy answer is that obviously it benefits an industry. the challenging question is why this has been allowed to perpetuate itself as a reality. because real people and democratic citizens have surely perished as a result: maybe unnecessarily.

here is the presentation which public failed to accept for submission to the european digital process in october 2022, and from which the above slide is taken:

presentation submitted to public in october 2022 (pdf download)


where and how i now want us to come together and proceed to deliver on creative crimefighting and global security

the second presentation which follows below indicates my thinking today: no caveats; no red lines; no markers in the sand any more. if you can agree to engage with the process indicated here, there will be no conditions on my side any more.

well. maybe just one. only western allies interested in saving democracy will participate, and benefit both societally and financially from what i’m now proposing:

www.secrecy.plus/fire | full pdf download


following on from the above, then, here are thoughts i wrote down today — edited to be relevant now only to the above — on my iphone notes app. the app constitutes a regular go-to tool for my thought-experimenting:

on creating a bespoke procurement process for healthy intuition-validation development

step 1

pilot a bespoke procurement process we use for the next year.

we keep in mind the recent phd i’ve had partial access to, on the lessons of how such processes are gamed everywhere.

we set up structures to get it right from the start.

no off-the-peg sold as bespoke and at a premium, even when still only repurposed tech for the moment.

step 2

we share this procurement process speedily with other members of the inner intuition-validation core.

they use it: no choice.

but no choice then gives a quid pro quo: this means total freedom to then develop and contribute freely to the inner core ip in ways that best fit others’ cultures.

and also, looking ahead, to onward commercialise in the future in their zones of influence where they know what’s what, and exactly what will work.

and so then, a clear common interest and target: one we all know and agree on.

mil williams, 8th april 2023

historical thought and positions from late march 2023

finally, an earlier brainstorming from the same process as described in part two above, conducted back in late march of this year. this is now a historical document and position, and is included to provide a rigorous audit trail of why free thinking is so important to foment, trust and believe in, and actively encourage.

we have to create an outcome which means we know we can think unthinkable things far worse than any criminal ever will be able to, in order to prevent them. we need a clear set of ground rules, but these rules shouldn’t prevent the agents from thinking comfortably (as far as this is the right word) things they never dared to approach.

the problem isn’t putin or team jorge. or it is, but not what we see. it’s what they and others do that we don’t even sense. it’s the people who do worse and the events that hurt even more … these things we have no idea about.

if you like, yes, the persian proverb: the unknown unknowns. i want to make them visible. all of them. the what and how. that’s my focus.

trad tech discovers the who and when. but my tech discovers the what and how before they’re even a glint in criminals’ eyes.

so we combine both types of tech in one process that doesn’t require each culture to work with the other. side-by-side, yes. but in the same way, no. so we guarantee for each the purest state each needs.

my work and my life/love if you prefer will not only be located in sweden but driven from here too. that’s my commitment. and not reluctantly in any way whatsoever.

[…]

i have always needed to gather enough data. now i have, the decision surely is simple.

mil williams, 21st march 2023

on the kind of society i’d love to work towards

as a by the by, these days in tech they often talk of something called “zero trust”. they never even broach the concept of “total openness”.

why …?

mil williams, johan & nyström coffee shop, stockholm sweden, 7th april 2023

a) i accept you are good

ok.

you’re right.

but you sometimes act using fascist tools, without really realising you are.

not taking ownership is a fascist tool.

pretending you’re something you’re not: this is fascist.

so fascists, then, can be right sometimes? is that what you prefer me to conclude? (btw, i don’t think you are fascists at all. but you’re so used to the right an admin has to admin a system that you can’t see why i should sincerely object.

why it could be in good faith, too.)

b) the nub of the issue

you feel i should trust you absolutely as if i were a catholic and you were the church. i don’t want that relationship with anyone. look what it brought us.

i want a trust built on a right to get it. and that means information-sharing.

look.

if you believe i am less able to comprehend what you already all comprehend, why work with me in the first place? why want to work with people less able than you?

one reason. just one.

but evidenced … not on trust.

c) what i mean by “not on trust”

i don’t want to take such important things on trust. i’ve done things on trust and they’ve just not worked out. i did when i got married. i did in 2002 in open source; and then in late 2002 in my mother’s homeland, and in the uk re my father’s wretched establishment’s prejudices from 2003 to the current day.

i also foolishly and stupidly used the tool of trusting in others in 2004 in both cases then given: i) media-related in respect of the new labour government at the time; and ii) a horribly personal example, as well.

d) what i mean by an “open society”

we shouldn’t have to build a democracy and society on trust. an open society, yes. of course. but a society where a person does what they do without evidencing to another it’s cool … no … not that. it inevitably leads to corruption. it inevitably leads to abuse of all kinds of powers. in all contexts, public and private. it enables rape. it enables the police force we now have in london.


we need openness precisely so we DON’T need trust. let’s get rid of trust and your demand for it. why? simple: it’s a lazy euphemism for faith. and faith comes from a time before gutenberg. and gutenberg brought science to us all. and now it’s time we gave arationality its place, and by so doing facilitated openness to the very maximum.

now it is. it really is.


e) my preferred timeline

first, do away with faith.

then, do away with trust.

and then make of our world a magnificent, peer-to-peer society of an EVERYTHING that is UTTERLY egalitarian.

openness is beautiful. trust, meantime, is a tool to be turned against you by the powerful (at home and outwith, tbh). and faith is rarely more than what blinds you to what’s really out there. this being what faith always has been throughout human history: the bedrock of religions’ abuses. (not only that. good too, yes it’s true. but what i have seen in most of my life is that the good do good whenever they can, whilst the bad rise to the heights that serve almost inevitably TO CAN the striven good of the good. over and over.)

f) conclusion

anyways.

just that.

just this.

not much more to say today.

just as a by the by, though: these days in tech they often talk of something called “zero trust”. they never even broach the concept of “total openness”. why …?


and so to one final final thought, as i walk the streets of stockholm after posting: if i’m right in what i write here today, trust is a component of faith but not of openness. those of us who want open societies should, therefore, ensure we take note.


if you’d like to contact me, try email: we can start there … yeah?

milwilliams.sweden@outlook.com | positive@secrecy.plus

looking forward to chatting — and hopefully disagreeing!

“upskilling” human beings in the ways of the machine … again? i don’t THINK so

introduction

i just got a message from microsoft (linkedin) which asked me to consider and/or explain how what i was about to post (what you see below in the screenshots) related to my work or professional role.

why nudge in this way

is this a stealthy attempt to prevent the ambiguities of #arts-based thinking patterns from contaminating the baser #chatgpt-x instincts and what they scrape?

not so much personally as intellectually, i think it’s wrong — in a world which needs lateral and nonconformist thinking — to define, a priori, what a thinker who wishes to shape a better business should use as a primary discourse.

because this discourse may include how far we follow, or not, the traditional way of framing information: when we state what we will say, say it, and then summarise it, we fit the needs of machines and of people trained to think like them.

art should be used to communicate in any forum

truth is, when we choose a precise ambiguity (one forged out of the arts — not the confusions — of deep communication), where such ambiguity and the uncertainty it generates may in itself be a necessary part of the communication process’s context — and even content — what value is ever added by telling the speaker and/or writer they are ineffective?

in any case, the public will always have the final vote on this: and if you prefer to communicate in such ways and be not read, why not let it happen?

why choose this kind of nudge to upskill writers in the ways of the machine?

using automated machines to do so, too …!

so what do YOU think? what DO you?

me, what follows is what i want. what no one in tech wants to allow. because i’m not first to the starting-line: i’m last. they decided it didn’t suit their business models decades ago. i decided i didn’t agree. and i still don’t. and neither should you.

on making a systemically distributed intelligence and genius of all human beings … not just an elite

do you understand now?

background

i trained in spain as an editor in the early part of the 2000s. but as someone who always blogged better than he authored, i already had the custom of attributing my ideas through the tool and habit of hyperlinking.

attribution is really important: not because it inevitably takes us down a peg or two, although it does. more importantly, an outcome with shape to it is more valuable than an outcome, full stop.

the memex machine

i have realised this consciously ever since reading vannevar bush’s treatise on human thought: “as we may think”.

you can find it in “the atlantic” to this day:

https://www.theatlantic.com/magazine/archive/1945/07/as-we-may-think/303881/

in it, using the immediate post-war tech available, and any future tech extrapolated from the same, he suggested a machine called the #memex. it would enable human thought to the extent that we would be able to store not only outcomes — what those most interested in making money would consider a product or service to be scaled up massively — but also the trails of thought that previously (and, to my mind — and bush’s — also preciously) lead up to such outcomes.

because when a thought is a reel of thoughts not just a pic which snaps shut on time … this is when we have a human being rather than a machine.

the implications of attribution’s fact

the result of all the above? NOTHING we think up is independent of ANYTHING. as a consequence, it is tantamount — just about — to daylight (and more importantly, night-time) robbery to claim one has an exclusive right to make money out of any idea “you might EVER have”.

you don’t agree? then it isn’t. but then you’re a genius in the divisive sense most of society understands: that is, very few of us can or will be this figure; and most of us will never be anywhere near.

which is elitism to the max, in fact.

only i don’t agree with this. i don’t believe such elitist approaches accurately reflect the human brain and how it best works. and when leaders in different fields impress on us to believe that they do reflect the state of human thought, and then build business models on such belief systems, we get a type of tech-kryptonite thrust into the most vulnerable core of human experience.

my experience, meantime, as a language teacher, then facilitator, then enabler (the progression taking place out of what almost became a fanatical frustration with the rank incompetence of traditional learning paths) showed me ALL humans can radically improve their thought processes, given the right environment.

and where processes can be improved dramatically, surprising — even shocking — outcomes can be achieved. and so i saw it happen. and was a convert.

platform genesis

this is why both my son, from whom i am now deeply estranged, and myself worked together for a while on a platform we named “platform genesis”.

the idea is that all of us, all human beings, can find the right tools to uncover some aspect of genius in our souls, hearts, and grey cells. it’s not the preserve of intellect: it’s mainly the preserve of confidence in oneself — whether one has it or not. if one doesn’t, intellect cannot follow. if one does, a happy series of accidents — where a given — will lead to your genius in some way or other.

https://platformgenesis.com

my happy path to my genius

a few minutes after the photo that follows was taken, i met the muse of my life. it’s hard to admit that nothing i have thought up since that evening in dublin in 2016, on the banks of the river liffey, would now exist if we hadn’t met and spoken for three hours, as we shared a meal on the roof terrace of “the woollen mills”.

i had lost my fear of flying the day before, you see: the 15th of june of that same year, when i flew into dublin on ryanair for the first time, for precisely the meeting that would then take place on what was my birthday the following day. being the 16th. coincidentally called, in ireland mainly, “bloomsday”. (and so don’t you think it’s cool to have a birthday called “bloomsday”? the day you are born being that, i mean.

don’t you?)

and so i literally lost my fear of flying that june. actually, truly, in a second. a total flip. a click of the brain’s chip that’s never clicked back.

and therefore in my wider life, too: because i assure you, quite objectively, i have a better brain now at 60-something than i have ever had the whole of my life before. and it’s NOT because of intelligence that i am far more intelligent than i could once have imagined myself becoming. it’s because of the confidence that slowly seeped into and then infused my brain, my heart, and my soul … as an utterly natural consequence of meeting the muse of my life.

the meaning of this … for me

bringing together the threads of this article, then, i now conclude:

1. none of my ideas since are mine to exploit exclusively for my own benefit.

2. none of my ideas since are mine.

3. none of my ideas are yours.

4. none of my ideas are yours to exploit exclusively for your benefit.

5. all of the ideas which have emerged after this meeting of ours, and precisely because we met, belong to no one — and therefore belong, altogether, to a whole planet and creative ecosystem.

because it WAS a sort of magic: and magic, the genie of genius, once uncorked CANNOT be boxed into a business model for the benefit of the few.

conclusion

when we upturn a paradigm, we can choose to leave the business model — and therefore the hegemony — untouched. or we can choose to upturn two paradigms simultaneously: the “thing” itself (for want of a better phrase), and how we get it out there as fast as possible.

what’s the problem i’m looking to solve with the #gutenbergofintuitivethinking and the #intuitionvalidationengine? and #intuitionvalidation more broadly?

https://www.ivepics.com

it’s not #darkfigure and #neocrime (only). it’s not #wastefulmeetings (only).

it’s not #loopholes and other zemiological activities (only).

it’s not the inability of #specialisms to duly and safely communicate across their knowledge lines (only).

rather, it’s something which informs all these use-cases — and many many more.

one UNCOMMON denominator: a denominator that connects everything in human experience since the dawn of rhymes:

truth. a concept of an absolute and indivisible truth.
so we kick relativism finally into the long grass, where it may be admired and remembered but not treasured. nor missed. again.

mil williams, stockholm sweden, 5th april 2023

accept and underline, therefore, and deliver on and re-establish, as a result, that a VALIDATED truth may be something we can establish ABSOLUTELY.

not universal truth: i’m not claiming that. we can still be dependent on circumstance and context. but in each of these, an unattackable truth.

just this: impossible to break.

this i believe. deeply. firmly. and now till i die.

and if so, even if only gently, this then leads this article to its final statements.

all of the above says to me only one thing: i need to know what i owe my muse. because if any of the above can happen to any useful degree one day, it will happen one day only because of the impact she had on me, that day.

it’s consequently NOT a fixation i have for you at all, my dear muse, but an almost overpoweringly intellectual — as well as obviously emotional — demand to work out to what extent i was just a lever someone pulled.

not that YOU pulled it, either. i don’t think this for a minute. no.

but a tool or extension of someone or something out there … why is this so impossible to propose or contemplate? or prosper as an assertion to be justly considered?

do you understand me now?

do you?

on a “human-sensitive ai”

ai’s proponents and advocates — of the human-insensitive version of this set of technologies, i mean — have kind of decided on a necessary battlefield between #machines and #humans.

as a #teacher, #trainer and #facilitator over decades, this has never been my way. for me, knowledge isn’t how big yours might be but, rather, how well — how pointedly — you learn to use what you acquire over the years.

speaking well in a language doesn’t require more than 800 words. it’s true. ask #chatgpt-x. what makes the difference is the baggage we bring to each word; the connections; the semantics; the allusions and how we choose not to say exactly what’s expected.

back in 2019 i lost my middle son’s affections. i had to borrow money from him to keep my #startup going: it was to get the below project off the ground. i’ll never get him back, for this and one other, unrelated reason.

in the event, the organisation i submitted it to said it was unique (in a good way) and, simultaneously, that it didn’t advance science (in an opposing and bad sense, obviously). they informed me of this unofficially early one morning — that is, that all my hopes and dreams were dashed — as i stood on a train platform whilst a train came in at just that second.

the cctv would have seen me: the organisers themselves could also have seen — if they had wanted or cared to — the cctv of where i was and how i looked. it was obviously a terrible coincidence i resisted the temptation to take advantage of.

none of my three children now speak to me because of #startup-land. but the #philosophy — not the #tech — of the project attached deserves to speak to us, five years later.

let’s allow it to encourage us to be better #techies everywhere. change is inevitable, of course; but in #tech its nature never is. in such moments, in #tech we’re always choosing.

let’s choose wiser. please.

https://mils.page/ai

yeah?

on returning to our childhood states of creative enquiry … and to the max, maybe?

introduction:

maybe #ai can do a few things humans are paid to do. but that doesn’t mean that what we’re paid to do by businesses everywhere is where our real creativity as unpredictable humans is exhibited — or even widely fomented.

the proposition:

maybe #it-#tech’s architectures have for so long forced us — as the humans we are — into undervaluing, underplaying and underusing our properly creative sides that what #ai’s proponents determine are human creative capabilities are actually the dumbed-down instincts and impulses of what would otherwise be sincerely creative manifestations of human thinking: that is, were we given the architectures i suggest we make more widely available — for example, just to start with, a decent return to a secrecy-positive digital form of pencil & paper, so we DON’T consistently inhibit real creativity — we would be encouraged to return to our much more creatively childlike states of undeniably out-of-the-box enquiry …

augmentedintuition.com | a historical whitepaper advocating an augmented human intuition

in this sense, then, the real lessons of recent #gpt-x are quite different: not how great #ai is now delivering, but how fundamentally toxic to human creativity the privacy- and secrecy-destroying direction of ALL #it-#tech over the years has become. because this very same #tech did start out in its early days as hugely secrecy- and privacy-sensitive. one computer station; one hard-drive; no physical connections between yours and mine: digital pencil & paper indeed!

it’s only since we started laying down cables and access points for some, WITHOUT amending the radically inhibiting architecture of all-seeing admins overlording minimally-privileged users, that this state of affairs has come about: an #it-mediated, supremely marshalled & controlled human creativity.

no wonder #ai appears so often to be creative. our own human creativity has been fatally winged by #tech, to the extent that the god now erected as #ai has begun to make us entirely in HIS image, NOT to extend and enhance our own intrinsic and otherwise innate preferences.

summary:

it’s not, therefore, that #it-#tech has been making #ai more human: it’s that the people who run #bigtech have been choosing to shape humans out of their most essential humanity.

and so as humans who are increasingly less so, we become prostrate-ducks for their business pleasures and goals.

an alternative? #secrecypositive, yet #totalsurveillance-compliant software and hardware architectures: back, then, to recreating the creativity-expanding, enhancing and upskilling tools that a digital pencil & paper used to deliver:

secrecy.plus/spt-it | a return to a secrecy- and privacy-positive “digital pencil & paper”

a final thought:

in a sense, even from #yahoo and #google #search onwards, both the #internet and the #web were soon designed (it’s always a choice, this thing we call change: always inevitable, true, it’s a fact … but the “how” — its nature — is never inevitable) … so from #search onwards, it all — in hindsight — became an inspectorial, scraping set of tools to inhibit all human creative conditions absolutely.

the rationale? that #bigmoney needed consumers who thought they were creators, not creators who would create distributed and uncontrollable networks of creation under the radar.

and then with the advent of newer #ai tools, which serve primarily to deliver on the all-too-human capability to bullshit convincingly, #it and related are finally, openly, brazenly, shamelessly being turned on all human beings who don’t own the means of production.

we were given the keys to the kingdom, only to discover it was a #panopticon we would never escape from. because instead of becoming the guards, that is to say the watchers, we discovered — too late — we were forcefully assigned the roles of the watched:

thephilosopher.space | #NOTthepanopticon

and so not owning the means of production, with its currently hugely toxic concentrations of wealth and riches, means that 99.9 percent of us are increasingly zoned out of the minimum conditions a real human creativity needs to even begin to want to function in a duly creative manner at all.

that is to say, imho, practically everything we see in corporate workplaces which claims the tag of creativity is simple repurposing of the existing. no wonder the advocates of #ai are able gleefully to proclaim their offspring’s capabilities to act as substitutes for such “achievements”.

wouldn’t you with all that money at stake?

secrecy.plus/hmagi | #hmagi

why don’t people who love advocating machine progress find it easy to advocate analogous processes of human progress?

it’s a long title, but it’s a big subject.

over the years, since i started proposing that we see intuitive thinking as a logical dataset we should spend a lot more money on capturing, verifying and validating in systemic, human-empowering, inside-out ways, i’ve spoken to a lot of technologists.

without exception — except, tbh, just this last wednesday when i was at an aws-organised event in stockholm — ALL software engineers and imagineers of this kind have shown fabulous and rightful enthusiasm for the demonstrable machine progress we’ve historically witnessed since the start of humanity’s extension via tools. and yet they have been absolutely resistant, sometimes to the point of rudeness, to the idea that we might move human goalposts in equal measure: that is, decide to promote the 90 percent of the human brain most of us are still apparently unable to advantage ourselves of.

one super-interesting aws person i spoke to on wednesday, for most of the evening in fact, on and off, told me at one point that the human brain uses only around 40 watts to do all the amazing things human beings have so clearly delivered on since history began. compare and contrast this with the megawatts needed to run a data centre which, even now, can only approach human creative capabilities.
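the back-of-envelope arithmetic behind that comparison can be sketched as follows. the 40-watt brain figure comes from the conversation above; the 30 MW data-centre figure is purely an assumed, illustrative number, not a measured one:

```python
# illustrative energy comparison only; the 30 MW data-centre figure
# is an assumption made for the sake of the arithmetic.
brain_watts = 40                 # rough power draw of a human brain
data_centre_watts = 30_000_000   # assumed 30 MW for a large facility

brains_per_data_centre = data_centre_watts / brain_watts
print(f"one such data centre draws the power of ~{brains_per_data_centre:,.0f} human brains")
```

under those assumptions, a single such facility draws the power of three-quarters of a million brains, and still only approaches, in the author’s framing, what one of them does creatively.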

but even on wednesday at the aws event, tbh, techie people were unable to show as deep an enthusiasm for progressing humans in the way i advocate: not within a lifetime, the only goalposts we have been encouraged to assume we can move, but intergenerationally, which is what i am increasingly proposing.

that is, actually create a tech philosophy which mimics what film and movie tech have done for over a hundred years: make humans more important to all industrial processes through dynamics of industrialisation, instead of making us less substantial and significant through procedures of automation, obviously designed to reduce our future-present relevance.

because when you hear proponents of generative ai, of any ai, the excitement is palpable: “look what they can now do: write school and university essays that look academically rigorous.”

or write code with just a verbal instruction, is the latest one.

what they don’t ask is whether it was a task which human beings should have been asked to do in the first place.

or, more pointedly, a task for which the human beings who did do it competently should have been remunerated at the extreme levels they historically were: carrying it out in ways that, privately speaking (admit it!), became so easy for them to charge exorbitantly for.

in my own auto-ethnographic case, i always got lower marks in my education than my brains indicated i deserved. my latest master’s was in international criminal justice, during the 2016-2017 academic year in the uk. i always assumed i was lazy. you see, i used a process which wasn’t academically orthodox: i’d create, through my brain’s tangential procedures, a brand new idea (new for me, anyways), and only then proceed to read the relevant literature … if, that is, it existed. back to front, altogether. and marked down, all the time.

and in the light of chatgpt’s achievements, i also begin to wonder: because this kind of tech, to me, is nothing more than incredibly deepened search engines. but weren’t the humans who did such jobs also “only” this? really, only this.

and so people who scored well in analogous manual activities were therefore good not at creating new worlds with their academia and coding and software development but, rather, capable of little more than grandiosely integrating essentially tech-informed step-by-step approaches into the naturally ingenious and much more multi-layered human mind.

and so these kinds of students used procedures which were far more appropriate to it-tech environments, and thus scored highly in such education systems.

when truly we should have long ago considered such procedures an absolute anathema to all that COULD make human thought magnificent.

i mean … think of the aforementioned 90 percent of the brain whose employment we may still not be managing to optimise. and then consider a software or tech platform whose creators tolerated not using 90 percent of its monetising abilities.

really, it’s this: my experience with technologists of all kinds who work in it-tech, where automation is absolute king. (and i say “king” advisedly.) they love telling the world how their latest robot — or whatever — will soon be indistinguishable from real human beings in how it looks, speaks, moves and interacts more widely. but why?

the technical achievement is clear. the monetisation opportunities of convincing solitary individuals they need robotic company in place of other human beings are also manifest. but why should we be such advocates of machine progress and yet, simultaneously, be UTTERLY INCAPABLE of showing the same enthusiasm for creating environments and thinking-spaces — as i have been suggesting for five or more years — that make intergenerational human advancement possible with the support and NOT the domination of tech? that is, as per what movies and films have delivered for humans for that hundred years or so, and NOT as per the relationship between human beings and vast swathes of it-land since, say, the 1950s. this is surely difficult for anyone to understand and explain. unless, of course, we really are talking womb envy: “i can’t bring a human being into the world as completely as a woman can, so instead i’ll make machines that do what humans do, only allegedly better.”

🙂

wdyt?

any truth in any of the above?

why do the boys in tech-land get so enthusiastic about the latest technologies that overtake apparently deep human capabilities — and yet reject so fervently and consistently the possibility that humans might also be able to use equal but repurposed tech to make humans more? but as humans?

machines + humans or humans + machines … or …?

i once wrote the below:

crimehunch.com/terror

i think i upset a lot of people. i remember a more than hour-long conversation with faceless executives from a big us tech corporation i really value and would love one day to work with.

i say “faceless” neutrally, mind: they had no faces, just circles with initials; and were never introduced to me. six or seven plus the person who organised the video-chat. during lockdown, it was.

i asked them the above question: there was silence for around ten seconds. in the event, no one replied at all. the fear was palpable. the fear that someone would say something which someone else would report back, and forever mark a person’s career, without recourse to explanation.

or so i thought. on reflection, maybe i had gone too far. maybe it was wrong of me to suggest their machines weren’t up to the job of beating creatively criminal terrorists. maybe it was wrong of me to suggest we could do more to crimefight creatively: to make human beings capable of being as nonconformist for the good as the putins et al of recent years have so manifestly and enduringly been for the extreme ill.

here’s the thing: maybe i wasn’t wrong, but maybe i wasn’t right enough.

obviously, if the exercise were delivered to its full extent, whatever your answer the assembled would inevitably agree that both machines and hollywood scriptwriters (or their analogues: their skillsets, at least) would together be the best solution. but even here problems would exist — and i would go so far as to suggest, actually, real roadblocks.

people who operate by rules and regulations — conformists we all need that make the world function with justice and fairness — don’t find it easy to value the contribution of nonconformists who, more often than not, make their own hugely competent rules. and, then again, of course, vice versa. conformists don’t always float the boats of nonconformists as much as they should.

so merely alluding to the fact that we need to be as good as the supremely creative criminality out there, in our own forging of a singular combination of intuitive arationality with the best machines we can manufacture, is NOT the solution.

no.

the solution lies in ensuring the cultures of nonconformism and conformism may come together to facilitate this outcome of creative crimefighting and national security … this … just this … has to be the solution.

if we know our philosophy even minimally, a thesis — in this case, that crimefighting and national security need ever more traditional ai to deliver a fearsome capacity to pattern-recognise nonconformist evil out of existence, alongside people who press the operational buttons on the back of such insights — will get, from me, its antithesis: that is … we need just as much, if not more, what we might term the “human good” to battle the “human bad”.

and maybe the machines, too. alongside and in fabulous cahoots.

yes. and maybe, of course, the machines.

but what if we change the process? what if a synthesis, as in all good philosophy?

1. to find the nonconformist what and how — the next 9/11 before it arises — we use hollywood and analogous creativity to imagineer such events.

2. and to find the who and when of such newly uncovered neocrimes, we apply the terrifyingly useful pattern-recognition capabilities of the ever more traditional ai. so that its adepts, its supporters, its proponents … and those conformists who are more generally comfortable with such approaches … well … may simply be comfortable with this new paradigm i propose.

in this scenario, the suits and the flowery shirts work in consonance but never simultaneously. and so we square the circle of respect between the two parties, something that would otherwise always be difficult to engineer and sustain long-term.
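the two-step synthesis above can be sketched in miniature. everything here is hypothetical — the scenario names, the “signatures”, the event stream are all invented purely to illustrate the division of labour: humans author the imagineered what-and-how, and the machine contributes nothing more than routine pattern-matching over observed events to surface a possible who-and-when:

```python
# step 1 (humans): imagineered scenario "signatures" — hypothetical examples
# standing in for the hollywood-style creative output the text describes.
scenarios = {
    "coordinated cctv harvesting": {"cctv access", "geolocated uploads", "military-area wifi"},
    "supply-chain tampering": {"vendor breach", "unsigned firmware", "late-night pushes"},
}

# step 2 (machines): plain pattern recognition over an observed event stream.
def match_scenarios(observed_events, scenarios):
    """Return the name of every scenario whose full signature
    appears among the observed events."""
    observed = set(observed_events)
    return [name for name, signature in scenarios.items()
            if signature <= observed]  # <= is set-subset: all signals present

# usage: a hypothetical event stream that completes only the first signature.
events = ["cctv access", "geolocated uploads", "military-area wifi", "vpn login"]
print(match_scenarios(events, scenarios))
```

the design point is the separation itself: the machine never invents a scenario, and the humans never sift the event stream — consonance, but never simultaneity.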

wdyt?