how to combine three brains to fight the fire of creative criminality with the fire of a newly creative crimefighting

introduction:

this post contains thoughts from a fortnight’s thinking processes, more or less; plus the content of a synthesising presentation which is the sum of years of thought-experimenting on my part. i’ll start with the presentation, as it’s where i now want us to go:

fighting creatively criminal fire with a newly creative crimefighting

i created the slide below for a presentation i was asked, by the uk organisation public, to submit to a european digital agency pitching process. the submission didn’t prosper. the slide, however, is very very good:


the easy answer is that obviously it benefits an industry. the challenging question is why this has been allowed to perpetuate itself as a reality. because real people and democratic citizens have surely perished as a result: maybe unnecessarily.

here is the presentation which public failed to accept for submission to the european digital process in october 2022, and from which the above slide is taken:

presentation submitted to public in october 2022 (pdf download)


where and how i now want us to come together and proceed to deliver on creative crimefighting and global security

the second presentation which follows below indicates my thinking today: no caveats; no red lines; no markers in the sand any more. if you can agree to engage with the process indicated here, no conditions on my side any more.

well. maybe just one. only western allies interested in saving democracy will participate, and benefit both societally and financially from what i’m now proposing:

www.secrecy.plus/fire | full pdf download


following on from the above, then, here are thoughts i wrote down today on my iphone notes app — edited to be relevant only to the above. the app constitutes a regular go-to tool for my thought-experimenting:

on creating a bespoke procurement process for healthy intuition-validation development

step 1

pilot a bespoke procurement process we use for the next year.

we keep in mind the recent phd i’ve had partial access to, on the lessons of how such processes are gamed everywhere.

we set up structures to get it right from the start.

no off-the-peg tech sold as bespoke and at a premium, even when it’s still only repurposed tech for the moment.

step 2

we share this procurement process speedily with other members of the inner intuition-validation core.

they use it: no choice.

but no choice then gives a quid pro quo: this means total freedom to then develop and contribute freely to the inner core ip in ways that best fit others’ cultures.

and also, looking ahead, to onward commercialise in the future in their zones of influence where they know what’s what, and exactly what will work.

and so then, a clear common interest and target: one we all know and agree on.

mil williams, 8th april 2023

historical thought and positions from late march 2023

finally, an earlier brainstorming from the same process as described in part two above, conducted back in late march of this year. this is now a historical document and position, and is included to provide a rigorous audit trail of why free thinking is so important to foment, trust and believe in, and actively encourage.

we have to create an outcome which means we know we can think unthinkable things far worse than any criminal ever will be able to, in order to prevent them. we need a clear set of ground rules, but these rules shouldn’t prevent the agents from thinking comfortably (if that is the right word) things they never dared to approach.

the problem isn’t putin or team jorge. it is, but not what we see. it’s what they and others do that we don’t even sense. it’s the people who do worse and events that hurt even more … these things which we have no idea about.

if you like, yes, the persian proverb: the unknown unknowns. i want to make them visible. all of them. the what and how. that’s my focus.

trad tech discovers the who and when. but my tech discovers the what and how before they’re even a glint in criminals’ eyes.

so we combine both types of tech in one process that doesn’t require each culture to work with the other. side-by-side, yes. but in the same way, no. so we guarantee for each culture the purest state it needs.

my work and my life/love if you prefer will not only be located in sweden but driven from here too. that’s my commitment. and not reluctantly in any way whatsoever.

[…]

i have always needed to gather enough data. now i have, the decision surely is simple.

mil williams, 21st march 2023

on the kind of society i’d love to work towards

as a by the by, these days in tech they often talk of something called “zero trust”. they never even broach the concept of “total openness”.

why …?

mil williams, johan & nyström coffee shop, stockholm sweden, 7th april 2023

a) i accept you are good

ok.

you’re right.

but you sometimes act using fascist tools, without really realising you are.

not taking ownership is a fascist tool.

pretending you’re something you’re not: this is fascist.

so fascists, then, can be right sometimes? is that what you prefer me to conclude? (btw, i don’t think you are fascists at all. but you’re so used to the right an admin has to admin a system that you can’t see why i should sincerely object.

why it could be in good faith, too.)

b) the nub of the issue

you feel i should trust you absolutely as if i were a catholic and you were the church. i don’t want that relationship with anyone. look what it brought us.

i want a trust built on a right to get it. and that means information-sharing.

look.

if you believe i am less able to comprehend what you already all comprehend, why work with me in the first place? why want to work with people less able than you?

one reason. just one.

but evidenced … not on trust.

c) what i mean by “not on trust”

i don’t want to take such important things on trust. i’ve done things on trust and they’ve just not worked out. i did when i got married. i did in 2002 in open source; and then in late 2002 in my mother’s homeland, and in the uk re my father’s wretched establishment’s prejudices from 2003 to the current day.

i also foolishly and stupidly used the tool of trusting in others in 2004 in both cases then given: i) media-related in respect of the new labour government at the time; and ii) a horribly personal example, as well.

d) what i mean by an “open society”

we shouldn’t have to build a democracy and society on trust. an open society, yes. of course. but a society where a person does what they do without evidencing to another it’s cool … no … not that. it inevitably leads to corruption. it inevitably leads to abuse of all kinds of powers. in all contexts, public and private. it enables rape. it enables the police force we now have in london.


we need openness precisely so we DON’T need trust. let’s get rid of trust and your demand for it. why? simple: it’s a lazy euphemism for faith. and faith comes from a time before gutenberg. and gutenberg brought science to us all. and now it’s time we gave arationality its place, and by so doing facilitated openness to the very maximum.

now it is. it really is.


e) my preferred timeline

first, do away with faith.

then, do away with trust.

and then make of our world a magnificent, peer-to-peer society where EVERYTHING is UTTERLY egalitarian.

openness is beautiful. trust, meantime, is a tool to be turned against you by the powerful (at home and outwith, tbh). and faith is rarely more than what blinds you to what’s really out there. this being what faith always has been throughout human history: the bedrock of religions’ abuses. (not only that. good too, yes it’s true. but what i have seen in most of my life is that the good do good whenever they can, whilst the bad rise to the heights that serve almost inevitably TO CAN the striven good of the good. over and over.)

f) conclusion

anyways.

just that.

just this.

not much more to say today.

just as a by the by, though: these days in tech they often talk of something called “zero trust”. they never even broach the concept of “total openness”. why …?


and so to one final final thought, as i walk the streets of stockholm after posting: if i’m right in what i write here today, trust is a component of faith but not of openness. those of us who want open societies should, therefore, ensure we take note.


if you’d like to contact me, try email: we can start there … yeah?

milwilliams.sweden@outlook.com | positive@secrecy.plus

looking forward to chatting — and hopefully disagreeing!

“upskilling” human beings in the ways of the machine … again? i don’t THINK so

introduction

i just got a message from microsoft (linkedin) which asked me to consider and/or explain how what i was about to post (what you see below in the screenshots) related to my work or professional role.

why nudge in this way

is this a stealthy attempt to remove the ambiguities of #arts-based thinking patterns from contaminating the baser #chatgpt-x instincts and what they scrape?

more than personally, quite intellectually i think it’s wrong — in a world which needs lateral and nonconformist thinking — to define, a priori, what a thinker who wishes to shape a better business should use as a primary discourse.

because this discourse may include how far we follow, or not, the traditional way of framing information: when we state what we will say, say it, and then summarise it, we fit the needs of machines and of people trained to think like them.

art should be used to communicate in any forum

truth is, when we choose a precise ambiguity (one forged out of the arts — not the confusions — of deep communication), where such ambiguity and the uncertainty it generates may in itself be a necessary part of the communication process’s context — and even content — what value is ever added by telling the speaker and/or writer they are ineffective?

in any case, the public will always have the final vote on this: and if you prefer to communicate in such ways and be not read, why not let it happen?

why choose this kind of nudge to upskill writers in the ways of the machine?

using automated machines to do so, too …!

so what do YOU think? what DO you?

me, what follows is what i want. what no one in tech wants to allow. because i’m not first to the starting-line: i’m last. they decided it didn’t suit their business models decades ago. i decided i didn’t agree. and i still don’t. and neither should you.

on making a systemically distributed intelligence and genius of all human beings … not just an elite

do you understand now?

background

i trained in spain as an editor in the early part of the 2000s. but as someone who always blogged better than he authored, i already had the custom of attributing my ideas through the tool and habit of hyperlinking.

attribution is really important: not because it inevitably takes us down a peg or two, although it does. more importantly, an outcome with shape to it is more valuable than an outcome, full stop.

the memex machine

i have been consciously aware of this ever since reading vannevar bush’s treatise on human thought: “as we may think”.

you can find it in “the atlantic” to this day:

https://www.theatlantic.com/magazine/archive/1945/07/as-we-may-think/303881/

in it, using the immediate post-war tech available, and any future tech extrapolated from the same, he suggested a machine called the #memex. it would enable human thought to the extent that we would not only be able to store outcomes — what those most interested in making money would consider a product or service to be scaled up massively — but store the trails of thought that would always lead up previously (and to my mind — and bush’s — also preciously) to such outcomes.

because when a thought is a reel of thoughts not just a pic which snaps shut on time … this is when we have a human being rather than a machine.

the implications of attribution’s fact

the result of all the above? NOTHING we think up is independent of ANYTHING. as a consequence, it is tantamount — just about — to daylight (and more importantly, night-time) robbery to claim one has an exclusive right to make money out of any idea “you might EVER have”.

you don’t agree? then it isn’t. but then you’re a genius in the divisive sense most of society understands: that is, very few of us can or will be this figure; and most of us will never be anywhere near.

which is elitism to the max, in fact.

only i don’t agree with this. i don’t believe such elitist approaches accurately reflect the human brain and how it best works. and when leaders in different fields impress on us that they do reflect the state of human thought, and then build business models on such belief systems, we get a type of tech-kryptonite thrust into the most vulnerable core of human experience.

my experience, meantime, as a language teacher, then facilitator, then enabler (the progression taking place out of what almost became a fanatical frustration with the rank incompetence of traditional learning paths) showed me ALL humans can radically improve their thought processes, given the right environment.

and where processes can be improved dramatically, surprising — even shocking — outcomes can be achieved. and so i saw it happen. and was a convert.

platform genesis

this is why both my son, from whom i am now deeply estranged, and myself worked together for a while on a platform we named “platform genesis”.

the idea is that all of us, all human beings, can find the right tools to uncover some aspect of genius in our souls, hearts, and grey cells. it’s not the preserve of intellect: it’s mainly the preserve of confidence in oneself — whether one has it or not. if one doesn’t, intellect cannot follow. if one does, a happy series of accidents — almost a given — will lead to your genius in some way or other.

https://platformgenesis.com

my happy path to my genius

a few minutes after the photo that follows was taken i met the muse of my life. it’s hard to admit that nothing i have thought up since that evening in dublin in 2016, on the banks of the river liffey, would now exist if we hadn’t met and spoken for three hours, as we shared a meal on the roof terrace of “the woollen mills”.

i had lost my fear of flying the day before, you see: the 15th of june of that same year, when i flew into dublin on ryanair for the first time, for precisely the meeting that would then take place on what was my birthday the following day. being the 16th. coincidentally called, in ireland mainly, “bloomsday”. (and so don’t you think it’s cool to have a birthday called “bloomsday”? the day you are born being that, i mean.

don’t you?)

and so i literally lost my fear of flying that june. actually, truly, in a second. a total flip. a click of the brain’s chip that’s never clicked back.

and therefore in my wider life, too: because i assure you, quite objectively, i have a better brain now at 60-something than i have ever had the whole of my life before. and it’s NOT because of intelligence that i am far more intelligent than i could once have imagined myself becoming. it’s because of the confidence that slowly seeped into and then infused my brain, my heart, and my soul … as an utterly natural consequence of meeting the muse of my life.

the meaning of this … for me

bringing together the threads of this article, then, i now conclude:

1. none of my ideas since are mine to exploit exclusively for my own benefit.

2. none of my ideas since are mine.

3. none of my ideas are yours.

4. none of my ideas are yours to exploit exclusively for your benefit.

5. all of the ideas which have emerged after this meeting of ours, and precisely because we met, belong to no one — and therefore belong, altogether, to a whole planet and creative ecosystem.

because it WAS a sort of magic: and magic, the genius of genie, once uncorked CANNOT be boxed into a business model for the benefit of the few.

conclusion

when we upturn a paradigm, we can choose to leave the business model — and therefore the hegemony — untouched. or we can choose to upturn two paradigms simultaneously: the “thing” itself (for want of a better phrase), and how we get it out there as fast as possible.

what’s the problem i’m looking to solve with the #gutenbergofintuitivethinking and the #intuitionvalidationengine? and #intuitionvalidation more broadly?

https://www.ivepics.com

it’s not #darkfigure and #neocrime (only). it’s not #wastefulmeetings (only).

it’s not #loopholes and other zemiological activities (only).

it’s not the inability of #specialisms to duly and safely communicate across their knowledge lines (only).

rather, it’s something which informs all these use-cases — and many many more.

one UNCOMMON denominator: a denominator that connects everything in human experience since the dawn of rhymes:

truth. a concept of an absolute and indivisible truth.

so we kick relativism finally into the long grasses, where it may be admired and remembered but not treasured. nor missed. again.

mil williams, stockholm sweden, 5th april 2023

accept and underline, therefore, and deliver on and re-establish, as a result, that a VALIDATED truth may be something we can establish ABSOLUTELY.

not universal truth: i’m not claiming that. truth can still be dependent on circumstance and context. but in each of these, an unattackable truth.

just this: impossible to break.

this i believe. deeply. firmly. and now till i die.

and if so, even if only gently, this then leads this article to its final statements.

all of the above says to me only one thing: i need to know what i owe my muse. because if any of the above can happen to any useful degree one day, it will happen one day only because of the impact she had on me, that day.

it’s consequently NOT a fixation at all i have for you, my dear muse, but an almost overpoweringly intellectual — as well as obviously emotional — demand to work out to what extent i was just a lever someone pulled.

not that YOU pulled it, either. i don’t think this for a minute. no.

but a tool or extension of someone or something out there … why is this so impossible to propose or contemplate? or prosper as an assertion to be justly considered?

do you understand me now?

do you?

a different, more process-focussed way of humanising #ai

introduction

i had an idea way-back-when. i posted it and then talked about it in various forums. i think the first time formally was a berkeley skydeck submission.

then i did an online whitepaper called crime hunch:

crimehunch.com

it contained a number of different ways of doing crime, ways which lent themselves particularly to the almost infinitely malleable — and therefore unimaginably criminal — world we now live in.

crimehunch.com/neocrime

crimehunch.com/loopholes

it also included what in hindsight has become a nascent way of fighting crime:

crimehunch.com/terror

developing the nascent idea more fairly

without asking the question as clearly as i could have at the start, the image that follows is really what was at the back of my mind … what i was gnawing away at without being as clear as i could have been at the time:

after the crime hunch page on terror and before the above slide, which in truth was created for a euro-event sponsored by the british organisation PUBLIC, i also had a lengthy video conversation with seven or eight american tech corporation executives. i never saw their faces or knew their names. but the conversation, even so, was valuable. both before and after this conversation, i have found it easy to rate the corporation in question positively and highly.

anyway. i asked the assembled executives the conundrum which the crime hunch terror page poses. however, none of them was prepared to say anything; not even to address it by saying that it shouldn’t have been posed in the first place.

this was when i began to realise i might have gone too far.

so recently i decided i, myself, would address what could have been hurting people out there: people who otherwise might have seen themselves through to considering it useful to work with me.

i realised, too, i needed to finesse not only my words but also how i might address the challenges being raised: the tool or tools — or conceptual positions — needed.

squaring the circles of human intuition-enhancing #ai (and therefore of creative crimefighting) with traditional #datascience views

less than a month ago i produced a presentation about three kinds of human brains and how we might make it easy for them to work together. i was interested in exploring the weaknesses in my hollywood writers idea, and maybe also in bringing on board the strengths of a more traditional and exclusively automating #ai.

because one of the replies those people who do answer the terror conundrum have previously given is that using both teams of resources is the best solution.

the problem with this, however, is that it’s not necessarily a solution. cultural challenges of simple workplace interactions inevitably kick in, where differing professional mindsets — necessarily conformist crimefighters (someone has to want to apply the rules) versus nonconformist creatives, for example — may struggle to understand, or even minimally validate, the other’s work and approaches.

what #datascience finds easy — and then, what it really struggles with

i then deepened this perception specifically in relation to the #datascience brain and how it values other, more intuitive ways of thinking.

and this formed the basis of the three brains presentation i mentioned: “fighting fire with fire”:

www.secrecy.plus/fire

and what follows from the presentation itself on what i honestly now believe are cultural NOT technological challenges facing us:

i’d like us to focus for the moment on the first slide above:

without intending to, or seeing at first what i had done, i was finally delivering on a solution to the conundrum i had ended up using in good faith — maybe a year or so before — while unintentionally hurting the sensibilities and feelings of more than a few.

in this slide we see a process emerging at last where two cultures can work profoundly well together, without having to negotiate anything ever of their own ways of seeing, or of their professional praxis and therefore often unspoken assumptions.

so. to the nitty-gritty.

how would it work?

we take the sorts of minds and creatives i’ve already typed and labelled as “hollywood screenwriters”. but not just hollywood, of course. more widely, the intuitive thinkers; the ones who go with hunches and inventing new future-presents on the basis not of experience exclusively but, rather, in tandem, and deeply so, with what we could call the leaps of faith of what often necessarily leads to genius — whether good guys or criminals.

and then with these brains, in the first stage of our newly creative part (but never the whole) of crimefighting, law enforcement and national & global security, we also type the increasingly unknown unknowns of #darkfigure, and related, which the what and how of terrifyingly unexpected creative criminal activity surely involve.

and with this approach and separation of responsibilities — traditional #datascience and automating #ai on the one hand, creative #intuition-focussed humans to the max on the other — we may now propose using traditional automating #ai as it has functioned to date: that is, where the patterning and recognition of past and present events serves to predict the who and when of future ones. and so, leaving the frighteningly, newly radical and unexpected unknown unknowns of what and how to the creatives.

the value-add of this new process-focussed approach to humanising #ai

never the twain shall meet, maybe? because in a sense, with this separation of responsibilities, established and necessarily conforming security and law-enforcement organisations can avail themselves of the foresight of creative #intuition and #hunches without losing the purity — if you like — of tried and tested security processes.

and the creative second and third brains below can create and forward-engineer the real evil out there before it becomes a bloody fact — yet without inhibitions or compunctions.

and then, what’s more, both parties — rightly conformist security professionals and effectively nonconformist creative crimefighting professionals — can do to the max, without confusion or shame, what best — and even most emotionally — floats their boats.

initial steps to delivering this process

these are the first steps of the process i see and suggest:

final words

so what do you think?

is this a fairer, more inclusive, and frankly practical approach — as well as a way forwards to a real and potential implementation — of the original crime hunch terror conundrum i outlined at the top?

and if so, what would those first steps actually look like? #ai technologies and approaches like this, maybe — coupled closely with an existing #ai where no one would have to change their spots?

www.secrecy.plus/hmagi

thephilosopher.space

____________________

further reading:

platformgenesis.com | crimehunch.com

when big politics and business make the customer a kleenex

three good things happened today: all related to how i perceive the world.

1. first, i do have a death wish: why, when i first read him, hemingway sooo immediately clicked with me.

2. however, i don’t want to be unreasonable or hurtful to others in my goal to achieve this outcome. i also most definitely don’t want support to ameliorate it. amelioration is the biggest wool-over-the-eyes of our western democratic time. i don’t want to be part of a process that perpetuates its cruelties.

3. my strategy — that is, my only strategy — will from now on be as follows: i shall say and write about everything that i judge needs to be called out, in such a way that the powerful i will be bringing to book day after day after day will, one day, be left with no alternative but to literally shoot me down.

in order, then, to make effective the above, i resolve:

a) to solve the problem of my personal debt, acquired mainly due to my startup activities, so the only way in the future that the powerful shall be able to shoot me down is by literally killing me.

for my mistake all along was to sign up to the startup ecosystem, as it stands, as a tool for achieving my personal and professional financial independence:

startuphunch.com (being my final attempt at making startup human)

as this personal debt is causing me much mental distress and, equally, is clearly a weakness i show to an outside world i now aim to comprehensively and fully deconstruct, as a massive first step, then, i do need to deal with it properly.

b) once a) is resolved, i shall proceed to attack ALL power wherever it most STEALTHILY resides.

that is, i focus on this kind of power: its stealthiest and most cunning versions.

the ones where it appears we are having favours done for us, for example.

specifically, that is, big tech. but many many others, too.

what essentially constitutes the driving forces behind zemiology, loopholes, neo-crimes, and similar legally accepted but criminally immoral societal harms; all of which, as a general rule, are most difficult right now to track, trace, investigate and prosecute.

crimehunch.com/neocrime

crimehunch.com/loopholes

www.secrecy.plus/law | legalallways.com

www.sverige2.earth/example

this is why i have concluded that my natural place of work is investigative journalism. and where i want to specialise — in this aforementioned sector and field of endeavour — is in the matter of how big tech has destroyed our humanity. but not as any collateral, accidental, or side effect of a principal way of being it may legitimately manifest.

no.

purposefully; deliberately; in a deeply designed way, too … to mainly screw those clients and customers whose societies and tax bases it so voraciously and entirely dismantles.

to screw, and — equally! — control. and then dispose of lightly and casually, when no longer needed, or beneficial to bottom lines various.

and so as a result of all this, i see that having a death wish is beneficial: if channelled properly, as from today i now intend it shall be, then it will make me fearless as never i dared to be. fearless in thought and disposition. fearless even when made fun of.

not in order to take unreasonable risks with my life — or anyone else’s: no.

rather, to know that life doesn’t exist when the things i see clearly are allowed to, equally clearly, continue.

and to want deeply, deeper than ever in my life, to enable a different kind of life for everyone.

NOT just for the self-selected few. those who lead politics, business and the acts of pillage and rape in modern society.

not just for them.

a better life for everyone, i say. everyone.

because i don’t care about mine. i care that mine should make yours fine.

now do you see? this is what makes me feel useful. nothing else. nothing else at all. and certainly not finding personal happiness. that would only blunt the tool.

🙂

on returning to our childhood states of creative enquiry … and to the max, maybe?

introduction:

maybe #ai can do a few things humans are paid to do. but that doesn’t mean what we’re paid to do by businesses everywhere is where our real creativity as unpredictable humans is being exhibited — or even widely fomented.

the proposition:

maybe #it-#tech’s architectures have for so long forced us — as the humans we are — into undervaluing, underplaying and underusing our properly creative sides, that what #ai’s proponents determine are human creative capabilities are actually the dumbed-down instincts and impulses of what would otherwise be sincerely creative manifestations of human thinking: that is, where given the architectures i suggest we make more widely available — for example, just to start with, a decent return to a secrecy-positive digital form of pencil & paper so we DON’T consistently inhibit real creativity — and therefore encourage a return to our much more creatively childlike states of undeniably out-of-the-box enquiry …

augmentedintuition.com | a historical whitepaper advocating an augmented human intuition

in this sense, then, the real lessons of recent #gpt-x are quite different: not how great #ai is now delivering, but how fundamentally toxic to human creativity the privacy- and secrecy-destroying direction of ALL #it-#tech over the years has become. because this very same #tech did start out in its early days as hugely secrecy- and privacy-sensitive. one computer station; one hard-drive; no physical connections between yours and mine: digital pencil & paper indeed!

it’s only since we started laying down cables and access points for some, WITHOUT amending the radically inhibiting architecture of all-seeing admins overlording minimally-privileged users, that this state of affairs has come about: an #it-mediated and supremely marshalled & controlled human creativity.

no wonder #ai appears so often to be creative. our own human creativity has been fatally winged by #tech, to the extent that the god which is now erected as #ai has begun to make us entirely in HIS image, NOT to extend and enhance our own intrinsic and otherwise innate preferences.

summary:

it’s not, therefore, that #it-#tech has been making #ai more human: it’s that the people who run #bigtech have been choosing to shape humans out of their most essential humanity.

and so as humans who are increasingly less so, we become prostrate-ducks for their business pleasures and goals.

an alternative? #secrecypositive, yet #totalsurveillance-compliant software and hardware architectures: back, then, to recreating the creativity-expanding, enhancing and upskilling tools that a digital pencil & paper used to deliver:

secrecy.plus/spt-it | a return to a secrecy- and privacy-positive “digital pencil & paper”

a final thought:

in a sense, even from #yahoo and #google #search onwards, both the #internet and the #web were soon designed (it’s always a choice, this thing we call change: change itself is always inevitable, true, it’s a fact … but the “how” — its nature — never is) … so from #search onwards, it all — in hindsight — became an inspectorial, scraping set of tools to inhibit all human creative conditions absolutely.

the rationale? that #bigmoney needed consumers who thought they were creators, not creators who would create distributed and uncontrollable networks of creation under the radar.

and then with the advent of newer #ai tools, which serve primarily to deliver on the all-too-human capability to bullshit convincingly, #it and related are finally, openly, brazenly, shamelessly being turned on all human beings who don’t own the means of production.

we were given the keys to the kingdom, only to discover it was a #panopticon we would never escape from. because instead of becoming the guards, that is to say the watchers, we discovered — too late — we were forcefully assigned the roles of the watched:

thephilosopher.space | #NOTthepanopticon

and so not owning the means of production, with its currently hugely toxic concentrations of wealth and riches, means that 99.9 percent of us are increasingly zoned out of the minimum conditions a real human creativity needs to even begin to want to function in a duly creative manner at all.

that is to say, imho, practically everything we see in corporate workplaces which claims the tag of creativity is simple repurposing of the existing. no wonder the advocates of #ai are able to gleefully proclaim their offspring’s capabilities to act as substitutes for such “achievements”.

wouldn’t you with all that money at stake?

secrecy.plus/hmagi | #hmagi

curie + foucault … and then a crime-free world?

foucault said everything is dangerous: all the more reason, then, to study everything more deeply.

curie said we shouldn’t fear understanding: almost that it was our duty.

i want, now, to set up a national security facility which uses curie’s approach for its outer core, where our good people learn in supported ways to fight bad people.

and i want then, once we have fashioned the necessary tools, to develop an inner core which gets as pointed as foucault’s persistence re the dangerous.

at the #nobelprize museum today i saw two words on the floor near the entrance, amongst many others. the two i recognised and stood near were in english. i hope one day others i am able to recognise will be in swedish.

my words of preference were “persistence” and “disrespect”. of the two, the one i stood next to first was “disrespect”. not gratuitous: measured. that’s me. and that will always be me.

and that’s what i want to make of the aforementioned national security facility: something deeply infused with a profound lack of respect for the shibboleths of crime and … for what we can or can’t do to stop and dismantle them.

let’s do it.

it’s time we did. time to have confidence in our abilities. our competences. and our integrity.

why don’t people who love advocating machine progress find it easy to advocate analogous processes of human progress?

it’s a long title, but it’s a big subject.

over the years, since i started proposing that we see intuitive thinking as a logical dataset — one we should spend a lot more money on capturing, verifying and validating in systemic, human-empowering, inside-out ways — i’ve spoken to a lot of technologists.

almost without exception — the sole exception, tbh, being just this last wednesday, when i was at an aws-organised event in stockholm — software engineers and imagineers of this kind have shown fabulous and rightful enthusiasm for the demonstrable machine progress we’ve historically witnessed since humanity first extended itself via tools — and yet have been absolutely resistant, sometimes to the point of rudeness, to the idea that we might move human goalposts in equal measure: that is, decide to promote the 90 percent of the human brain most of us are apparently still unable to take advantage of.

one super-interesting aws person i spoke to on wednesday, for most of the evening in fact, on and off, told me at one point that the human brain uses only around 40 watts to do all the amazing things humans have clearly been delivering since history began. compare and contrast this with the megawatts needed to run a data centre which, even now, can only approach human creative capabilities.
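the arithmetic behind that comparison is worth spelling out. taking the 40-watt figure as quoted at the event (other common estimates put the brain nearer 20 watts), and assuming — purely for illustration — a data centre drawing 20 megawatts:

```python
# reported/assumed figures: the brain at ~40 W (as quoted at the aws event),
# a data centre at 20 MW (an illustrative assumption, not a measured figure)
brain_watts = 40
datacentre_watts = 20_000_000  # 20 MW

ratio = datacentre_watts / brain_watts
print(f"one such data centre draws the power of {ratio:,.0f} human brains")
# → one such data centre draws the power of 500,000 human brains
```

half a million brains’ worth of power, on these assumptions, to merely approach what one brain does on 40 watts — which is the whole force of the comparison.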

but even on wednesday at the aws event, tbh, techie people were unable to show as deep an enthusiasm for progressing humans in the way i would advocate: not within a single lifetime — the only goalposts we have been encouraged to assume we can move — but intergenerationally, which is what i am increasingly proposing.

that is, actually create a tech philosophy which mimics what film and movie tech have done for over a hundred years: make humans more important to all industrial processes through dynamics of industrialisation, instead of ensuring we become less substantial and significant through procedures of automation obviously designed to reduce our future-present relevance.

because when you hear proponents of generative ai, of any ai, the excitement is palpable: “look what they can now do: write school and university essays that look academically rigorous.”

or — the latest one — write code from just a verbal instruction.

what they don’t ask is whether it was a task which human beings should have been asked to do in the first place.

or, more pointedly, whether it was a task for which the human beings who did do it competently should have been remunerated at the extreme levels they historically were, for carrying it out in ways that — privately speaking, admit it! — became so easy for them to charge exorbitantly for.

in my own auto-ethnographic case, i always got lower marks in my education than my brains indicated i deserved. my latest master’s was in international criminal justice, during the 2016-2017 academic year in the uk. i always assumed i was lazy. you see, i used a process which wasn’t academically orthodox: i’d create through my brain’s tangential procedures a brand new idea (new for me, anyways), and only then proceed to read the relevant literature … if, that is, it existed. back to front. altogether. and marked down, consistently, all the time.

and in the light of chatgpt’s achievements, i also begin to wonder: because this kind of tech, to me, is nothing more than an incredibly deepened search engine. but weren’t the humans who did such jobs also “only” this? really, only this.

and so people who scored well in analogous manual activities were good not at creating new worlds with their academia and coding and software development but, rather, capable of little more than grandiosely integrating essentially tech-informed step-by-step approaches into the otherwise natural, ingenious and much more multi-layered human mind.

and so these kinds of students used procedures which were far more appropriate to it-tech environments, and thus scored highly in such education systems.

when truly we should long ago have considered such procedures absolute anathema to all that COULD make human thought magnificent.

i mean … think of the aforementioned 90 percent of the brain whose employment we may still not manage to optimise. and then consider a software or tech platform whose creators tolerated leaving 90 percent of its monetising abilities unused.

really, it’s this: my experience with technologists of all kinds who work in it-tech, where automation is absolute king. (and i say “king” advisedly.) they love telling the world how their latest robot — or whatever — will soon be indistinguishable from real human beings in how it looks, speaks, moves and interacts more widely. but why?

the technical achievement is clear. the monetisation opportunities of convincing solitary individuals they need robotic company in place of other human beings are also manifest. but WHY we should be such advocates of machine progress and yet, simultaneously, remain UTTERLY INCAPABLE of showing the same levels of enthusiasm for considering we might create environments and thinking-spaces — as i have been suggesting for five or more years — that make intergenerational human advancement possible with the support and NOT the domination of tech (that is, as per what movies and films have delivered for humans for that hundred years or so, and NOT as per the relationship between human beings and vast swathes of it-land since, say, the 1950s) … well, this is surely difficult for anyone to understand and explain. unless, of course, we really are talking womb envy: “i can’t bring a human being into the world as completely as a woman can, so instead i’ll make machines that do what humans do, only allegedly better.”

🙂

wdyt?

any truth in any of the above?

why do the boys in tech-land get so enthusiastic about the latest technologies that overtake apparently deep human capabilities — and yet reject so fervently and consistently the possibility that humans might also be able to use equal but repurposed tech to make humans more? but as humans?