Example workstreams and projects which may begin to form part of any work we collectively deliver
The first step to working on the projects under discussion
There is one condition we must all fulfil in order to work on these projects and workstreams in the future:
be aware — and practise this awareness daily — of #neoterrorismontheindividual. This means we realise completely and unreservedly that all our past and current decision-making processes and outcomes may have been the result of an embedded criminality and related zemiology, designed strategically to undermine — profoundly and covertly — our true capacity to act independently.
“Neo-Terrorism on the Individual” — an overview … but now as a defence tool, no longer a research proposal
The two documents linked to in the section below, which originally formed part of a #phd-level draft proposal of mine from a couple of years back, may now be more helpful as descriptors of what I, and maybe many other people, have been experiencing over these years.
It’s more popularly and more generally known as #gaslighting: but I think in certain societies we’ve been suffering from an immensely technified version of it.
This is why I have given it its own name: “Neo-Terrorism on the Individual”.
That is, a tech-driven longitudinal terrorism delivered efficiently on specific human and organisational targets and marks, in order to shape societies over the years in the direction of certain toxic sociopolitical and business interests.
In this sense, then, the two documents mentioned should perhaps now be seen less as a research idea and more as a manual of instructions: a way to begin to foment and ensure a growing awareness of the tech-driven tactics which certain criminal and zemiological actors may still be using — and broadly at that:
Noted: the above is as true of organisations and nation-states in terms of their collective natures and interests as it is in respect of individuals like you and me, being persons with allegedly direct responsibility for our behaviours and actions.
If we achieve this goal, what should we do next?
If we get sign-up and buy-in to what is effectively a CULTURE of working re all the #privacysensitive, #privacypositive, #secrecysensitive and #secrecypositive projects and workstreams I am proposing, then the organisational and agency law- and regulation-making which has to exist specifically for such projects and our own personal behaviours will be much easier to administer, inspect, ensure, and deliver on.
Why? Because CULTURE promotes the rule of laws which emerge organically from that same culture, and therefore makes it much easier for people to see those laws as their own: thus, compliance is achieved out of approval, not fear.
Meantime, LAWS ONLY, created by ruling classes (whether elected or de facto) which attempt to IMPOSE what is surely only their own culture, clearly outside the majority (the UK has been an example ever since I was born; Ireland has become one over the years as a result of its incestuous financial dependence on global tech), only lead to the corruption and illegitimacy that facilitate authoritarian behaviours and outcomes. Here, the same need for compliance — for society by definition needs its citizens to comply in some measure — is achieved primarily, and sometimes exclusively, through tools and discourses of fear.
Just because you smile when you impose your authoritarianism doesn’t make you any less an authoritarian.
Now … does it?
To summarise …
“For anyone, including myself, to be enabled to work on any and/or all of these projects — which for the moment I shall globally describe as the #gutenbergofintuitivethinking, or the printing-press of intuition — we have to accept that our human agency during our personal present-past, in respect of the decisions we took both privately and work-related, may have been fatally compromised by forces truly outwith our ken.
Not mystical or mysterious forces. No. Not this. Just human beings and organisations acting deliberately to longitudinally benefit, in planned and roadmapped ways, their hyper-focussed and zemiological self-interests, prejudicing a much more shared and collective present-past which could have been. And in fact still could be: one, that is, which benefits every human being, and which will be firmly based on all individuals’ sovereignties.”
So … quite simple, really. Accept the thesis of #neoterrorismontheindividual as a potential reality we have suffered from without perhaps realising it in all aspects of our lives to date. Nothing we did, however apparently deeply thought, was of our own doing.
And so our human agency became anything but human.
Wouldn’t it be a quite remarkable achievement if we could, as a first step to remaking our civilisation in the image of the root word “to civilise”, eliminate compassionately not surgically all such #neoterrorismontheindividual in, say, seven years?
And parallel to all that, begin to deliver all this:
as if we were talking, in fact, about creating software code in the shape of UN inalienable rights and charters, conventions and manifestos, and stuff with these kinds of discourses, as opposed to the more conventional laws and regulatory approaches parliaments and so forth generally prefer to come up with
from my iphone’s notes app this late morning / around midday:
introduction
yes
this is what we can embrace, if we choose to:
• one nation-state fully onboard
• one big tech partner, fully committed
• one local and regional web of finance, legislation, tax, accountability, delivery, and societal benefit: sweden
then once this is secured, we can discuss exporting
but not before
in respect of past deeds
not interested in the past in respect of those of us who deserve to be in CORE
am interested in collective future-present and deep partners who want a different future-present from the ones we’ve all been a part of in the past
this i repeat is also true for me, just as much as for anyone else or for any other org
good
on trust systems and their development
this means … we have to learn to trust each other, but always suspect everyone and everything
be childlike to the utmost if you like; but equally, not naive in the least
game-changing trust is built over time with tools no one has ever considered
this is why we need the brightest nonconformist brains committed to changing the world for the better: both gradually and overnight
that is, parallel processes
the value of cultural dissonance and internal respect amongst all parties
yes
true
everything is best when combined
not one or the other team
everything
cultural dissonance and cultural rub are the preconditions for both innovation and invention
but on the condition that the different types of seeing and doing also learn to value each other interchangeably and equally
generously
truly generously
so, as long as this caveat is upfront and conditions everything we all do, we will also need conformists at the base of everything we do
my work / life expectations and aspirations
personally, i want to live modestly
i want to think untrammelled, obviously
so this is why i need the modest life to ensure the untrammelled doesn’t leak into my behaviours
a modest life, therefore
decent food
healthy exercise
and a dollop of joy every so often
the fields of action and play
the battlegrounds are various:
• resistance: putin’s russia and everyone who approves of its actions
• fightback: putin’s russia and everyone who approves of its actions
• long-term, however, the focus MUST be local and regional: embedded global criminals at local and regional levels who use symbolic communication as per mafias everywhere, to evade justice as it currently stands, need to be dealt with
why? these are the real funding streams that enable putin and his ilk everywhere to not only have the cruel ambitions they have but the capability, the financial muscle, to deliver on them: local crime turfs spread out across the continents and connected via 21st century digital means
implications
thus:
in my judgement, law enforcement and trusted private security both need to be involved at the start, at least with the 100-day rapid app development programmes that use existing architectures
but they have so much knowhow, the aforementioned security and citizen-safety orgs and their people i mean, that they deserve to be in deep, also from the beginning, re the scoping of new architectures and ways of structuring tech
but i am always open to other opinions and views
always will be, now
now we begin to propose having these foundations
my emotional life
i’d like an emotional life, yes
someone with a view of life i can engage with and which allows her to engage with my work and play, both
and me with hers in equal, and absolutely peer-to-peer, measure
and it’s obviously part of the whole, but firm foundations to the project as we are discussing today will help me be much much more patient and much much less needy now
so all good
it’s ok
with the two pillars we need to fight neo-terrorism on the individual (noi), trust will grow very quickly
spain sits curiously: i separate what i feel about the country easily from what i feel about the personal, which obviously has existed from the start
so it’s ok in this respect
i could travel to and from and work with people from there, despite the fact that i also had really dreadful experiences with businesspeople there once upon a time
and i don’t know why it’s now ok. maybe there is a reason. maybe just time
maybe just the time that has elapsed
why sweden
for me, in my opinion, humbly expressed, sweden is objectively better as a collective built on individual rights than any other country i have ever known or lived in
whatever it is, the most important thing for me here in sweden is that i see people who strive to be good people every day. and even people with the power to effect change (eg in the uk there are also plenty of good people: none of them are powerful)
not all people here do this, of course. not all do good by any means, even in my limited personal experience
impossible that it should be so
maybe, even, not desirable: it wouldn’t be allowing for the human we sometimes imperfectly have to be
but enough do good to the best of their ability for the threshold to be far gooder than i have sensed intuitively at any other time in my life
anywhere else
and not just strive and then wave their hands foolishly when it doesn’t work:
• because you don’t fucking give up until it works here in sweden
• but you don’t get silly either. you wait until this moment arrives beautifully, and only then do you pounce supportively
it’s a series of behaviours i would love one day to emulate well myself
so again, here it’s true: people laugh a lot
and this is good
but sarcasm isn’t a national trait as far as i can see
inquisitiveness defo is
a thirst to uncover and discover
it’s refreshing
it suits my own deep ways of being and seeing
and maybe now much more possible, my ways of doing
a caveat or two re funding provenance
as long as we establish funding-stream provenance professionally and competently, i’m open to support from whom you judge trustworthy
even the countries i’ve mentioned in less glowing terms
yeah
and so i guess some covert part of the uk, which isn’t and never will be mi5 or have relationships with the unis that have bad-actor funding connections … even here we could propose some kind of engagement after the groundwork i’m sketching out today has been firmly put in place
the evidence of good faith would have to be overpowering, tho’. absolutely incontrovertible and irreproachable … and right now, no one in the uk is in a position to offer anyone this evidence of their ability to distinguish between political right and geopolitical wrong
who may form part of CORE
none of the parties alluded to above will be in CORE, for reasons that should be obvious (and if to you who are reading these words they’re not obvious, this pretty automagically precludes you from any participation at any level for a long time: certainly, until they do become obvious to you)
not that, then: not them inside CORE
this means, therefore, that none of the alluded to, i repeat, will have any CORE influence over how and what and when and stuff re product, service, platform architectures, and so on.
none will have the ability to impose their preferred approaches whereby innovation would become mere tweaking, and invention something we never even broach. ukraine can’t be won through a mentality of tweaks, after all (and if you believe it can, that’s why you’re automagically not going to be a part of CORE)
• such parties will only be enabled to participate — if we decide they deserve it — as right-at-the-end clients, in a covert marketplace if covert is needed
• and if not needed, a public marketplace of b2b and b2gov
• but no bespoke or consultative products, services or outcomes here
what CORE will consist of
this is my proposal, as it stands today:
• one committed nation-state: that is, yourselves
• your local and regional business, commercial, tax, legislative, delivery and sociocultural infrastructures as framework in perpetuity
• finally, where this is judged advisable and collaboratively intelligent, one big tech partner who wants to redo the world, including maybe what they did in other times which they’d now begin to question … (but then again, this will clearly be the same for most of the rest of us too, as already observed)
if it has to be eventually more oppenheimer than curie, that’s ok
i understand
but curie laid the foundations for oppenheimer, after all
and if the focus turns out to be more global boiling than directly fighting the kind of criminality i’ve been discussing, i’d still say that to ensure our researchers feel brave enough and protected enough to deliver the killer blows to the climate denial we all want them to deliver, they need to know and feel they will be permanently and efficiently protected to the max from new kinds of crime and zemiology, potentially conducted on their persons day in, day out
so even if it’s now to become more a climate change / global boiling focus, it needs to remain a crime and zemiology one robustly in parallel as well
what CORE will consider and deliver
the CORE needs to strategise the castle & moat as well as the thinking-spaces and their architectures
our secrecy-positive spaces will be needed to protect our global-boiling people and the outcomes we desire
this is what i propose be our strategy from now on in:
• we should focus on creating an impregnable theoretical, philosophical, practical and technological castle around the sweden / chosen-big-tech-partner / local & regional partnership before moving out to other areas of endeavour and action — even at the risk of not doing as much for those in need as we might
• why? because you just HAVE to know you utterly CANNOT be undermined by anyone, before you reach out a hand to others however deserving
re precedents, we can follow the manhattan project, apollo moonshot, and darpa internet templates if we like
but i think we can learn from modern silicon valley strategy too:
• a flexible PLATFORM is the best research tool in the right hands
• out of which specific applications can be delivered, just as japanese car manufacturers first did with elements of a car
• example: separate workstreams for each element (eg dashboard design & functionality) identified as key, and then slotted whenever discretely ready in terms of their own timelines into what became new versions of the cars
• therefore, manufacturing a car is no longer a new release every five years as in the olden days, but a process of regular, modular updating (see the sketch below)
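purely as a hedged illustration of that modular approach, and with entirely hypothetical module names, a platform of this kind might in code terms look something like the following sketch:

```python
from dataclasses import dataclass, field

@dataclass
class Module:
    """one independently developed workstream (eg a hypothetical 'dashboard' element)"""
    name: str
    version: str
    ready: bool = False  # each workstream signals readiness on its own timeline

@dataclass
class Platform:
    """the shared platform out of which specific applications are assembled"""
    modules: dict = field(default_factory=dict)

    def register(self, module: Module) -> None:
        # workstreams develop in parallel; registering is not the same as shipping
        self.modules[module.name] = module

    def release(self) -> list:
        # a release bundles whatever is discretely ready right now, rather than
        # waiting for a monolithic "new car every five years"
        return [f"{m.name} {m.version}" for m in self.modules.values() if m.ready]

platform = Platform()
platform.register(Module("dashboard", "2.1", ready=True))
platform.register(Module("engine-management", "0.9"))
print(platform.release())  # ['dashboard 2.1']
```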
the intuition validation engine, then …?
do we go back to platform genesis and the raw READ.ME of the intuition validation engine? i think we do …
• a library of tools
• as already determined, a PLATFORM in order to enable ACCESS freely, not to tie in users to one software / hardware constitution or another
• equal sovereignty for all objects, whether people, code, or machines
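as a minimal sketch only, with invented names and no claim to be the actual design, “equal sovereignty for all objects” might in code terms mean every object, whether person, code or machine, carrying exactly the same charter of rights:

```python
from dataclasses import dataclass
from enum import Enum

class Kind(Enum):
    PERSON = "person"
    CODE = "code"
    MACHINE = "machine"

# the "charter": one shared, inalienable set of rights (hypothetical names)
INALIENABLE_RIGHTS = frozenset({"consent", "privacy", "exit", "audit"})

@dataclass(frozen=True)
class SovereignObject:
    name: str
    kind: Kind
    rights: frozenset = INALIENABLE_RIGHTS  # identical for every kind of object

def equally_sovereign(a: SovereignObject, b: SovereignObject) -> bool:
    # sovereignty is defined by the charter, never by what kind of object it is
    return a.rights == b.rights

alice = SovereignObject("alice", Kind.PERSON)
parser = SovereignObject("intuition-parser", Kind.CODE)
camera = SovereignObject("street-camera-7", Kind.MACHINE)
assert equally_sovereign(alice, parser) and equally_sovereign(parser, camera)
```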
as if we were talking, in fact, about creating software code in the shape of UN inalienable rights and charters, conventions and manifestos, and stuff with these kinds of discourses, as opposed to the more conventional laws and regulatory approaches parliaments and so forth generally prefer to come up with
and some would say this would lead to vagueness
i radically disagree
i would term it as being the “precisely ambiguous”:
• that is, an arts-based approach to real-world problem-solving
• a structure, but not one which deeply determines the kinds of outcomes
• something, instead, that will remain relevant and useful for as long as we do this: JUST like UN charters
in order for it to exist like this, it just needs to be considered for longer before — finally! — finalising its directives
🙂
but we will know when it is finalised
how? because it will be our eureka moment: it will just feel gobsmackingly RIGHT!
my ex- has two indian friends she used to teach spanish to. they lived close to where we did: a married couple.
we were invited to theirs on occasion, and would go over enthusiastically of course, for a full evening repast with other guests we might or might not have met before. they were immensely gracious hosts, were her indian friends.
one time, we were introduced to what turned out to be a techie guy: an executive type, though.
yes … not a software engineer or anything like this.
i made it clear i’d been invited by apple, via the brother of the bebo founder, at a meetup in the wellcome foundation cafe in london some years before, to come onboard.
this time, the techie guy basically spun the story that all tech corps controlled the next ten years of tech … all tech corps. this wasn’t an apple thing, let’s be clear. this was all of them, including apple. (he did assert he knew the apple case from inside.)
so. big tech would rarely launch useful stuff, just for the good of the world. it would do so when a series of conditions were met.
for example:
• tech which, for them, already very much existed, but which was invisible and indeed unknown to the outside world, wouldn’t end up being revealed to anyone unless there was a sound bottom-line reason. they wouldn’t even float the concept publicly (that is, share the idea without saying they had already developed it …)
• neither did they ever seem keen to express the desire, or be driven by the need, to apply such apparently non-existent tech imaginatively for the whole species’ benefit, before, that is, its time arrived as per their aforementioned ten-year calendarisations of the related monetisation opportunities and timelines
remember google glass?
research the year it appeared: go on.
dr steve mann invented it and used his own from 1984, if my memory serves me right:
google then had to finally retire its own consumer version from sale because of “invasion of privacy” concerns from the wider market (and perhaps, also, the wider mass media): and this, even when the version sold had an unnecessarily large and obviously clumpy camera.
do you think they weren’t using it far more covertly way before they launched a consumer version?
do you think they stopped using their own privately covert version after the consumer version was boxed off and deactivated?
of course they used it way before, covertly and more, on everyone.
of course they wouldn’t stop using such a powerful surveillance — and counter-surveillance — tool.
like exxon in the 1970s hiding the research that predicted THEN to the tenth of a degree the global warming (not climate change, ffs) NOW incurred due directly to their fossil fuels:
well. big tech behaves in exactly the same way. it has massive solutions: it had them decades ago. its bottom-line doesn’t need them now, though.
and it certainly DOESN’T want to democratise genius, as i have argued increasingly our species needs us to aim at doing, if we want to survive the cataclysmic climate and other challenges encroaching more and more on our daily experiences of life:
so what do we do? if big tech refuses to change its ways 180 degrees — and it will refuse, i assure you — what do we do?
we do it ourselves!
we do it for the military and security, but also for a citizen force which uses sousveillance not to control the state but to work with it.
we create relevant software constitutions to achieve it. we use the genius resident deep down in every human being to deliver unpredictable thought, predictably.
and ultimately, we will eliminate ALL loopholes.
and we will eliminate a wider zemiology from every community.
and we will cut back the dried-out deadwood of our societies’ most creatively criminal poachers.
we will make the woods of every community — whether professional or geographical — good again: all of them.
that is, make the timbers of a civilised society no longer anything to be shivered about by anyone.
look:
in sweden you already invented a cctv which is useful but, at the same time, doesn’t need to store the images to deliver law-enforcement support.
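i don’t know how that particular swedish system is actually built; but as a hedged sketch of the general principle (on-device analysis that passes on only a minimal, image-free event record and never stores the frame itself), it might look roughly like this, with all names hypothetical:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass(frozen=True)
class Event:
    """the only thing ever transmitted or stored: a minimal, image-free record"""
    timestamp: str
    location: str
    event_type: str  # eg "person entered restricted zone"

def detect_incident(frame: bytes) -> Optional[str]:
    # placeholder for whatever on-device detector the real system uses;
    # returns a label when something reportable is seen, otherwise nothing
    return None

def process_frame(frame: bytes, location: str) -> Optional[Event]:
    incident = detect_incident(frame)
    # the frame is never written anywhere: it simply goes out of scope here
    if incident is None:
        return None
    return Event(
        timestamp=datetime.now(timezone.utc).isoformat(),
        location=location,
        event_type=incident,
    )
```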
it’s this kind of shameless thinking — shamelessly free! — that i hanker after, and now really really do need.
this is why from here: from sweden. exactly this.
yes …
and i appreciate, too, that everyone needs to participate.
but i am angry at big tech for giving up on the species.
and i know how capable it is of getting into projects in order to mess around with them for defensive reasons and purposes: to protect above all the interests of its blessed bottom-line over the interests of, for example, war-torn victims.
the fortnite founder event in salford i attended some years ago proved this, when i was informed by an attendee that basically my idea of #hmagi had been bought up from another bright mind and closed down years before:
mil williams, johan & nyström coffee shop, stockholm sweden, 9th april 2023
it’s not all plain sailing. but then that’s not what life’s about.
but if i manage to stay here, in the end this — the end — won’t be an end at all. it will be the best beginning i’ve ever managed. i spent seven years between the uk and ireland, trying to engineer a relationship between ireland and the uk. i failed.
now i say it out loud: not with joy but acceptance. acceptance that i failed in everything institutionally and personally related.
but not ideas-wise. not in respect of my increasing capacity to uncover them: like a pig and his beloved truffles. for me, ideas are truffles, waiting to be found; and they say that pigs bear many good resemblances to humans. physiologically, for sure. maybe in other respects i am still unaware of.
all i can say is if a pig is good enough for george clooney, why not associate myself with the same?
🙂
so why here — and now?
because in a very brief period of time i see a society like none i have experienced in my life. there are cruel people here: but the society as a wider whole is striving not to legislate or legitimate state cruelty. and this i am defo not accustomed to back in my homeland.
so if i have to contribute to a tech which scales up basic government and regional administrative instincts, i want it to be in a place where more manually these instincts are sound. meantime, the triumvirate of evil exists in the uk with the conservative attachment to russian wealth and trump’s idiocies all in one. and all by now as an all too well-established nouveau establishment of the horrifyingly, casually cruel.
one thing many don’t realise, and i still don’t fully understand: a military society can be a liberating one too. it all depends on the purpose to which you militarise — and on the genders out of which you compose your military.
during my whole time in the uk i was oppressed by outliers of a military which, tbh, needed very few outliers anyway to operate and impose such oppression with the necessary precision. look at the state of the london metropolitan police right now just to appreciate how ugly the uk has allowed itself to become. and that’s the first line: just the police.
this is why here, and why now. and if it’s not possible now and here, it will be somewhere else similar, and sometime then.
but never again shall i salivate the evil of the unnecessarily violent. as a last resort … this is how life sometimes must conduct itself. as a tool of habitual state … this is not.
this post contains thoughts from a fortnight’s thinking processes more or less; plus the content of a synthesising presentation which is the sum of years of thought-experimenting on my part. i’ll start with the presentation, which is now where i want us to go:
fighting creatively criminal fire with a newly creative crimefighting
i created the slide below for a presentation i was asked to submit to a european digital agency pitching process, by the uk organisation public. the submission didn’t prosper. the slide, however, is very very good:
the easy answer is that obviously it benefits an industry. the challenging question is why this has been allowed to perpetuate itself as a reality. because real people and democratic citizens have surely perished as a result: maybe unnecessarily.
here is the presentation which public failed to accept for submission to the european digital process back in october 2022, and from which the above slide is taken:
where and how i now want us to come together and proceed to deliver on creative crimefighting and global security
the second presentation which follows below indicates my thinking today: no caveats; no red lines; no markers in the sand any more. if you can agree to engage with the process indicated here, no conditions on my side any more.
well. maybe just one. only western allies interested in saving democracy will participate, and benefit both societally and financially from what i’m now proposing:
following on from the above, then, thoughts i wrote down today — edited so as to be relevant only to the above — on my iphone notes app. this constitutes a regular go-to tool for my thought-experimenting:
on creating a bespoke procurement process for healthy intuition-validation development
step 1
pilot a bespoke procurement process we use for the next year.
we keep in mind the recent phd i’ve had partial access to, on the lessons of how such processes are gamed everywhere.
we set up structures to get it right from the start.
no off-the-peg tech sold as bespoke and at a premium, even when it is still only repurposed tech for the moment.
step 2
we share this procurement process speedily with other members of the inner intuition-validation core.
they use it: no choice.
but no choice then gives a quid pro quo: this means total freedom to then develop and contribute freely to the inner core ip in ways that best fit others’ cultures.
and also, looking ahead, to onward commercialise in the future in their zones of influence where they know what’s what, and exactly what will work.
and so then, a clear common interest and target: one we all know and agree on.
mil williams, 8th april 2023
historical thought and positions from late march 2023
finally, an earlier brainstorming from the same process as described in part two above, conducted back in late march of this year. this is now a historical document and position, and is included to provide a rigorous audit trail of why free thinking is so important to foment, trust and believe in, and actively encourage.
we have to create an outcome which means we know how to think unthinkable things far worse than any criminal ever will be able to, in order to prevent them. we need a clear set of ground rules, but these rules shouldn’t prevent the agents from thinking comfortably (as far as this is the right word) things they never dared to approach.
the problem isn’t putin or team jorge. it is, but not what we see. it’s what they and others do that we don’t even sense. it’s the people who do worse and events that hurt even more … these things which we have no idea about.
if you like, yes, the persian proverb: the unknown unknowns. i want to make them visible. all of them. the what and how. that’s my focus.
trad tech discovers the who and when. but my tech discovers the what and how before they are even a glint in criminals’ eyes.
so we combine both types of tech in one process that doesn’t require each culture to work in the other’s way. side-by-side, yes. but in the same way, no. so we guarantee for each culture the purest state it needs.
my work and my life/love if you prefer will not only be located in sweden but driven from here too. that’s my commitment. and not reluctantly in any way whatsoever.
[…]
i have always needed to gather enough data. now i have, the decision surely is simple.
i just got a message from microsoft (linkedin) which asked me to consider and/or explain how what i was about to post (what you see below in the screenshots) related to my work or professional role.
why nudge in this way
is this a stealthy attempt to stop the ambiguities of #arts-based thinking patterns from contaminating the baser #chatgpt-x instincts and what they scrape?
more than personally, quite intellectually i think it’s wrong — in a world which needs lateral and nonconformist thinking — to define, a priori, what a thinker who wishes to shape a better business should use as a primary discourse.
because this discourse may include how much we do or don’t follow the traditional way of framing information: when we state what we will say, say it, and then summarise it, we fit the needs of machines and of people trained to think like them.
art should be used to communicate in any forum
truth is, when we choose a precise ambiguity (one forged out of the arts — not the confusions — of deep communication), where such ambiguity and the uncertainty it generates may in themselves be a necessary part of the communication process’s context — and even its content — what value is ever added by telling the speaker and/or writer they are ineffective?
in any case, the public will always have the final vote on this: and if you prefer to communicate in such ways and not be read, why not let it happen?
why choose this kind of nudge to upskill writers in the ways of the machine?
using automated machines to do so, too …!
so what do YOU think? what DO you?
me, what follows is what i want. what no one in tech wants to allow. because i’m not first to the starting-line: i’m last. they decided it didn’t suit their business models decades ago. i decided i didn’t agree. and i still don’t. and neither should you.
on making a systemically distributed intelligence and genius of all human beings … not just an elite
i had an idea way-back-when. i posted it and then talked about it in various forums. i think the first time formally was a berkeley skydeck submission.
then i did an online whitepaper called crime hunch:
it contained a number of different ways of doing crime, ways which lent themselves particularly to the almost infinitely malleable — and therefore unimaginably criminal — world we now live in.
without asking the question as clearly as i could have at the start, the image that follows is really what was at the back of my mind … what i was gnawing away at without being so clear as i could have been at the time:
after the crime hunch page on terror and before the above slide, which in truth was created for a euro-event sponsored by the british organisation PUBLIC, i also had a lengthy video conversation with seven or eight american tech corporation executives. i never saw their faces or knew their names. but the conversation, even so, was valuable. before and after this conversation, i have found it easy to rate positively and highly the corporation in question.
anyway. i put to the assembled executives the conundrum which the crime hunch terror page poses. however, none of them was prepared to say anything; not even to address it by saying it shouldn’t have been posed in the first place.
this was when i began to realise i might have gone too far.
so recently i decided i, myself, would address what could have been hurting people out there: people who otherwise might have come round to considering it useful to work with me.
i realised, too, i needed to finesse not only my words but also how i might address the challenges being raised: the tool or tools — or conceptual positions — needed.
squaring the circles of human intuition-enhancing #ai (and therefore of creative crimefighting) with traditional #datascience views
less than a month ago i produced a presentation about three kinds of human brains and how we might make it easy for them to work together. i was interested in exploring the weaknesses in my hollywood writers idea, and maybe bringing onboard as well the strengths of a more traditional and exclusively automating #ai.
because one of the replies those people who do answer the terror conundrum have previously given is that using both teams of resources is the best solution.
the problem with this however is that it’s not necessarily a solution. we have cultural challenges of simple workplace interactions which inevitably kick in, where differing professional mindsets — necessarily conformist crimefighters (someone has to want to apply the rules) versus nonconformist creatives, for example — may struggle to understand, or even minimally validate, the other’s work and approaches.
what #datascience finds easy — and then, what it really struggles with
i then deepened this perception specifically in relation to the #datascience brain and how it values other, more intuitive ways of thinking.
and this formed the basis of the three brains presentation i mentioned: “fighting fire with fire”:
and what follows comes from the presentation itself, on what i honestly now believe are cultural NOT technological challenges facing us:
i’d like us to focus for the moment on the first slide above:
without intending to, or seeing at first what i had done, i was finally delivering a solution to the conundrum i had — maybe a year or so before — ended up using in good faith but which, at the same time, had unintentionally hurt the sensibilities and feelings of more than a few.
in this slide we see a process emerging at last where two cultures can work profoundly well together, without having to negotiate anything ever of their own ways of seeing, or of their professional praxis and therefore often unspoken assumptions.
so. to the nitty-gritty.
how would it work?
we take the sorts of minds and creatives i’ve already typed and labelled as “hollywood screenwriters”. but not just hollywood, of course. more widely, the intuitive thinkers; the ones who go with hunches and invent new future-presents on the basis not of experience exclusively but, rather, in tandem, and deeply so, with what we could call the leaps of faith which often necessarily lead to genius — whether in good guys or criminals.
and then, with these brains, in the first stage of our newly creative part (but never the whole) of crimefighting, law enforcement and national & global security, we also type the increasingly unknown unknowns of the #darkfigure and related, which the what and how of terrifyingly unexpected creative criminal activity surely involve.
and with this approach and separation of responsibilities — traditional #datascience and automating #ai on the one hand, creative #intuition-focussed humans to the max on the other — we may now propose using traditional automating #ai as it has functioned to date: that is, where the patterning and recognition of past and present events serves to predict the who and when of future ones. and so we leave the frighteningly new, radical and unexpected unknown unknowns of what and how to the creatives.
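purely as an illustrative sketch, with hypothetical names and placeholder logic, one way of wiring those two streams together so that neither culture ever has to adopt the other’s methods might be:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Lead:
    source: str       # "pattern-ai" or "creative-intuition"
    question: str     # "who/when" or "what/how"
    description: str

def pattern_ai_stream(historic_cases: list) -> list:
    # the conformist stream: conventional pattern recognition over past and
    # present events, predicting the who and when of future ones (placeholder)
    return [Lead("pattern-ai", "who/when", f"risk pattern in: {case}")
            for case in historic_cases]

def creative_intuition_stream(scenarios: list) -> list:
    # the nonconformist stream: "hollywood screenwriter" hypotheses about the
    # what and how of unknown unknowns, captured as-is and never filtered
    return [Lead("creative-intuition", "what/how", scenario)
            for scenario in scenarios]

def shared_review_queue(historic_cases: list, scenarios: list) -> list:
    # the only point of contact: both streams land, unedited, in one queue
    # for security professionals to action; neither stream rewrites the other
    return pattern_ai_stream(historic_cases) + creative_intuition_stream(scenarios)
```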
the value-add of this new process-focussed approach to humanising #ai
never the twain shall meet, maybe? because in a sense, with this separation of responsibilities, established and necessarily conforming security and law-enforcement organisations can advantage themselves of the foresight of creative #intuition and #hunches without losing the purity — if you like — of tried and tested security processes.
and the creative second and third brains below can create and forward-engineer the real evil out there before it becomes a bloody fact — yet without inhibitions or compunctions.
and then, what’s more, both parties — rightly conformist security professionals and effectively nonconformist creative crimefighting professionals — can do to the max, without confusion or shame, what best — and even most emotionally — floats their boats.
initial steps to delivering this process
these are the first steps of the process i see and suggest:
final words
so what do you think?
is this a fairer, more inclusive, and frankly more practical approach to — as well as a way forwards to a real and potential implementation of — the original crime hunch terror conundrum i outlined at the top?
and if so, what would those first steps actually look like? #ai technologies and approaches like this, maybe — coupled closely with an existing #ai where no one would have to change their spots?
over the years, since i started proposing that we see intuitive thinking as a logical dataset we should spend a lot more money on capturing, verifying and validating in systemic, human-empowering, inside-out ways, i’ve spoken to a lot of technologists.
without exception — except tbh just this last wednesday when i was at an aws-organised event in stockholm — ALL software engineers and imagineers of this kind have shown fabulous and rightful enthusiasm for the demonstrable machine progress we’ve historically witnessed since the start of humanity’s extension via tools — and yet have been absolutely resistant, sometimes to the point of rudeness, to the idea that we may move human goalposts in equal measure: that is, decide to promote the 90 percent of the human brain most of us are still apparently unable to advantage ourselves of.
one super-interesting aws person i spoke to on wednesday, for most of the evening in fact, on and off, told me at one point that the human brain only uses around 40 watts to do all the amazing things which practically every example of the same to have populated this rock since human history began has clearly been able to deliver on. compare and contrast this with the megawatts needed to run a data centre, able even now only to approach human creative capabilities.
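taking the 40-watt figure i was given at face value, and assuming, purely for the sake of comparison, a mid-sized data centre drawing somewhere around 10 megawatts, the back-of-envelope arithmetic alone is striking:

```python
brain_watts = 40                # the figure quoted to me at the aws event
data_centre_watts = 10_000_000  # assumption: a mid-sized data centre of ~10 MW

print(f"{data_centre_watts / brain_watts:,.0f} brains' worth of power")  # 250,000
```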
but even on wednesday at the aws event, tbh, techie people were unable to show as deep an enthusiasm for progressing humans in the way i would advocate: not within a lifetime as we have been encouraged to assume are the only goalposts we can move, but intergenerationally, which is what i am increasingly proposing.
that is, actually create a tech philosophy which mimics what film and movie tech have done for over a hundred years: make humans more important to all industrial processes through dynamics of industrialisation, instead of ensuring we are less substantial and significant through procedures of automation, obviously designed to reduce our future-present relevance.
because when you hear proponents of generative ai, of any ai, the excitement is palpable: “look what they can now do: write school and university essays that look academically rigorous.”
or write code with just a verbal instruction, is the latest one.
what they don’t ask is whether it was a task which human beings should have been asked to do in the first place.
or, more pointedly, whether the human beings who did do it competently should have been remunerated at the extreme levels they historically were, for carrying out a task which — privately speaking, admit it! — became so easy for them to charge exorbitantly for.
in my own auto-ethnographic case, i always got lower marks in my education than my brains indicated i deserved. my latest master’s was in international criminal justice: during the 2016-2017 academic year in the uk. i always assumed i was lazy. you see, i used a process which wasn’t academically orthodox: i’d create through my brain’s tangential procedures a brand new idea (new for me, anyways), and only then proceed to read the relevant literature … if, that is, it existed. back to front, altogether. and marked down, completely, all the time.
and in the light of chatgpt’s achievements, i also begin to wonder: because this kind of tech, to me, is nothing more than incredibly deepened search engines. but weren’t the humans who did such jobs also “only” this? really, only this.
and so people who scored well in analogous manual activities were therefore good not at creating new worlds with their academia and coding and software development but, rather, capable of little more than grandiosely integrating essentially tech-informed step-by-step approaches into the otherwise naturally ingenious and much more multi-layered human mind.
and so these kinds of students used procedures which were far more appropriate to it-tech environments, and thus scored highly in such education systems.
when truly we should have long ago considered such procedures an absolute anathema to all that COULD make human thought magnificent.
i mean … think of the aforementioned 90 percent of the brain whose employment we may still not manage to optimise. and then consider a software or tech platform whose creators tolerate not using 90 percent of its monetising abilities.
really, it’s this: that is, my experience with technologists of all kinds who work in it-tech, where automation is absolute king. (and i say “king” advisedly.) they love telling the world how their latest robot — or whatever — will soon be indistinguishable from real human beings in how it looks, speaks, moves and interacts more widely. but why?
the technical achievement is clear. the monetisation opportunities of convincing solitary individuals they need robotic company in place of other human beings are also manifest. but the “why” we should be such advocates of machine progress and yet, simultaneously, UTTERLY INCAPABLE of showing the same levels of enthusiasm for considering we might create environments and thinking-spaces — as i have been suggesting for five or more years — that make intergenerational human advancement possible with the support and NOT domination of tech (that is, as per what movies and films have delivered for humans for that hundred years or so, and NOT as per the relationship between human beings and vast swathes of it-land since, say, the 1950s) … well, this is surely difficult for anyone to understand and explain. unless, of course, we really are talking womb envy: “i can’t bring a human being into the world as completely as a woman can, so instead i’ll make machines that do what humans do, only allegedly better.”
🙂
wdyt?
any truth in any of the above?
why do the boys in tech-land get so enthusiastic about the latest technologies that overtake apparently deep human capabilities — and yet reject so fervently and consistently the possibility that humans might also be able to use equal but repurposed tech to make humans more? but as humans?