i just got a message from microsoft (linkedin) which asked me to consider and/or explain how what i was about to post (what you see below in the screenshots) related to my work or professional role.
why nudge in this way
is this a stealthy attempt to stop the ambiguities of #arts-based thinking patterns from contaminating the baser #chatgpt-x instincts and what they scrape?
more than personally, quite intellectually i think it’s wrong — in a world which needs lateral and nonconformist thinking — to define, a priori, what a thinker who wishes to shape a better business should use as a primary discourse.
because this discourse may include how far we follow, or not, the traditional way of framing information: when we state what we will say, say it, and then summarise it, we fit the needs of machines and of people trained to think like them.
art should be used to communicate in any forum
truth is, when we choose a precise ambiguity (one forged out of the arts, not the confusions, of deep communication), that ambiguity and the uncertainty it generates may themselves be a necessary part of the communication process’s context, and even its content. so what value is ever added by telling the speaker and/or writer they are ineffective?
in any case, the public will always have the final vote on this: and if you prefer to communicate in such ways and not be read, why not let it happen?
why choose this kind of nudge to upskill writers in the ways of the machine?
using automated machines to do so, too …!
so what do YOU think? what DO you do?
me, what follows is what i want. what no one in tech wants to allow. because i’m not first to the starting-line: i’m last. they decided it didn’t suit their business models decades ago. i decided i didn’t agree. and i still don’t. and neither should you.
on making a systemically distributed intelligence and genius of all human beings … not just an elite
i had an idea way-back-when. i posted it and then talked about it in various forums. i think the first time formally was a berkeley skydeck submission.
then i did an online whitepaper called crime hunch:
it contained a number of different ways of committing crime: ways which lent themselves particularly to the almost infinitely malleable, and therefore unimaginably criminal, world we now live in.
without my asking the question as clearly as i could have at the start, the image that follows is really what was at the back of my mind … what i was gnawing away at without yet being able to put it as clearly as i would have liked:
after the crime hunch page on terror, and before the above slide (which in truth was created for a euro-event sponsored by the british organisation PUBLIC), i also had a lengthy video conversation with seven or eight american tech-corporation executives. i never saw their faces or knew their names, but the conversation was valuable even so. both before and after it, i have found it easy to rate the corporation in question positively and highly.
anyway. i asked those assembled the conundrum which the crime hunch terror page poses. however, none of them was prepared to say anything; not even to say it shouldn’t have been posed in the first place.
this was when i began to realise i might have gone too far.
so recently i decided i, myself, would address what could have been hurting people out there: people who might otherwise have come to consider it useful to work with me.
i realised, too, i needed to finesse not only my words but also how i might address the challenges being raised: the tool or tools — or conceptual positions — needed.
squaring the circles of human intuition-enhancing #ai (and therefore of creative crimefighting) with traditional #datascience views
less than a month ago i produced a presentation about three kinds of human brains and how we might make it easy for them to work together. i was interested in exploring the weaknesses in my hollywood-writers idea, and maybe also in bringing on board the strengths of a more traditional and exclusively automating #ai.
because one of the replies previously given by those people who do answer the terror conundrum is that using both sets of resources together is the best solution.
the problem with this, however, is that it’s not necessarily a solution. the cultural challenges of everyday workplace interaction inevitably kick in, where differing professional mindsets (necessarily conformist crimefighters, since someone has to want to apply the rules, versus nonconformist creatives, for example) may struggle to understand, or even minimally validate, each other’s work and approaches.
what #datascience finds easy — and then, what it really struggles with
i then deepened this perception specifically in relation to the #datascience brain and how it values other, more intuitive ways of thinking.
and this formed the basis of the three brains presentation i mentioned: “fighting fire with fire”:
and what follows is from the presentation itself, on what i honestly now believe are cultural, NOT technological, challenges facing us:
i’d like us to focus for the moment on the first slide above:
without intending to, or seeing at first what i had done, i was finally delivering on a solution to the conundrum i had ended up using in good faith, maybe a year or so before, but which had unintentionally hurt the sensibilities and feelings of more than a few.
in this slide we see a process emerging at last where two cultures can work profoundly well together, without ever having to negotiate away their own ways of seeing, or their professional praxis and its often unspoken assumptions.
so. to the nitty-gritty.
how would it work?
we take the sorts of minds and creatives i’ve already typed and labelled as “hollywood screenwriters”. but not just hollywood, of course. more widely, the intuitive thinkers; the ones who go with hunches, inventing new future-presents not exclusively on the basis of experience but, rather, in deep tandem with what we could call the leaps of faith which so often lead to genius, whether in good guys or criminals.
and then, with these brains, in the first stage of this newly creative part (but never the whole) of crimefighting, law enforcement and national & global security, we also type the increasingly unknown unknowns of #darkfigure and related: the what and how which terrifyingly unexpected creative criminal activity surely involves.
and with this approach and separation of responsibilities (traditional #datascience and automating #ai on the one hand, creative #intuition-focussed humans to the max on the other), we may now propose using traditional automating #ai as it has functioned to date: that is, where the patterning and recognition of past and present events serves to predict the who and when of future ones. this leaves the frighteningly, newly radical and unexpected unknown unknowns of what and how to the creatives.
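a minimal code sketch of the separation of responsibilities just described, purely as illustration: every class, function and data item here is my own hypothetical invention, not an existing system or dataset. one track patterns past events to suggest who and when; the other carries the creatives’ hunches about what and how; neither rewrites the other’s output.

```python
from dataclasses import dataclass

# hypothetical sketch: two tracks, combined without either one
# having to negotiate away its own way of working.

@dataclass
class Lead:
    """output of the traditional #datascience / automating #ai track."""
    who: str
    when: str
    source: str

@dataclass
class Scenario:
    """output of the creative, intuition-led human track."""
    what: str
    how: str
    source: str = "creative"

def pattern_track(history):
    """patterning and recognition of past events predicts
    the who and when of future ones."""
    return [Lead(who=e["actor"], when="likely to recur", source="datascience")
            for e in history if e.get("repeat_offender")]

def creative_track(hunches):
    """intuitive humans forward-engineer the unknown unknowns,
    the what and how, before they become fact."""
    return [Scenario(what=w, how=h) for (w, h) in hunches]

def combine(leads, scenarios):
    """the two outputs are simply paired in one briefing;
    nothing in either track is rewritten by the other."""
    return {"who_when": leads, "what_how": scenarios}

# invented example data, for shape only
history = [{"actor": "group A", "repeat_offender": True},
           {"actor": "group B", "repeat_offender": False}]
hunches = [("synthetic-identity fraud at scale",
            "via leaked biometric templates")]

briefing = combine(pattern_track(history), creative_track(hunches))
print(briefing["who_when"][0].who)  # → group A
```

the design point the sketch tries to make is only that the interface between the two cultures is a data hand-off, not a negotiation of mindsets.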
the value-add of this new process-focussed approach to humanising #ai
never the twain shall meet, maybe? because in a sense, with this separation of responsibilities, established and necessarily conforming security and law-enforcement organisations can avail themselves of the foresight of creative #intuition and #hunches without losing the purity, if you like, of tried and tested security processes.
and the creative second and third brains below can create and forward-engineer the real evil out there before it becomes a bloody fact — yet without inhibitions or compunctions.
and then, what’s more, both parties — rightly conformist security professionals and effectively nonconformist creative crimefighting professionals — can do to the max, without confusion or shame, what best — and even most emotionally — floats their boats.
initial steps to delivering this process
these are the first steps of the process i see and suggest:
final words
so what do you think?
is this a fairer, more inclusive, and frankly more practical approach to the original crime hunch terror conundrum i outlined at the top, as well as a way forward to its real and potential implementation?
and if so, what would those first steps actually look like? #ai technologies and approaches like this, maybe — coupled closely with an existing #ai where no one would have to change their spots?
ai’s proponents and advocates — of the human-insensitive version of this set of technologies, i mean — have kind of decided on a necessary battlefield between #machines and #humans.
as a #teacher, #trainer and #facilitator over the decades, this has never been my way. for me, knowledge isn’t about how big yours might be but, rather, about how well, how pointedly, you learn to use what you acquire over the years.
speaking well in a language doesn’t require more than 800 words. it’s true. ask #chatgpt-x. what makes the difference is the baggage we bring to each word; the connections; the semantics; the allusions and how we choose not to say exactly what’s expected.
back in 2019 i lost my middle son’s affections. i had to borrow money from him to keep my #startup going: money to get the below project off the ground. i’ll never get him back, for this and one other, unrelated reason.
in the event, the organisation i submitted to said it was unique (in a good way) and, simultaneously, that it didn’t advance science (in an opposing and obviously bad sense). they informed me of this unofficially, early one morning, that all my hopes and dreams were dashed, as i stood on a train platform whilst a train came in at just that second.
the cctv would have seen me; the organisers themselves could also have seen, if they had wanted or cared to, the cctv of where i was and how i looked. it was, obviously, a terrible coincidence whose temptation i resisted taking advantage of.
none of my three children now speak to me because of #startup-land. but the #philosophy — not the #tech — of the project attached deserves to speak to us, five years later.
let’s allow it to encourage us to be better #techies everywhere. change is inevitable, of course; but in #tech its nature never is. in such moments, in #tech we’re always choosing.