someone asked me this morning why i write. i didn’t answer them.
maybe it was an example of new knowledge for me. my dissertation supervisor, a very brainy person, told me once that we should treasure those moments when we didn’t know how to answer someone: they were examples of new knowledge.
certainly for ourselves, and then again maybe for others too: a wider humanity. in either case, to be valued above almost any other lived experience. because the experience manifests itself in all our endeavours: a common denominator which is neither low nor common, tbh. in work; in academia; at school; in relationships; in a love at first sight … everything, i tell you.
why write? not to be read. never. to write in order to be read is to almost surgically remove the very condition good and faithful writing demands to remain faithful and good.
freedom. that’s why i write. to be free. to remain free. to sustain a wider freedom. to ensure liberty remains a goal of all human beings.
you see … to be read is nice but dangerous. to be read is to enter into a dialogue. and in such dialogue we inevitably compromise, fudge, lose our trains of thought, forget the purpose of reflection — and, then, indeed, its power.
that’s not me. and after sixty years of trying to be a writer who is read, i realise it mustn’t be me. because my virtue is that i don’t enter into dialogue before i have my ideas.
actually, that’s not true. by writing, i speak to myself. and this, for me, is key: because it’s truer than true that without this mode of speaking with my being i am never able to know, until i follow the described procedure, what that being thinks.
so if i have to enter into a dialogue with the person who asked me this morning about why i write … well … i write to be free and find out what it is to be me.
is all.
enough?
i give no more.
except a video i just made and then a poem i just wrote this morning at breakfast in stockholm city, sweden.
i’ve been thinking a lot about redemption, ever since a messenger and intermediary said to me in 2016 that my problem was guilt.
she was, on due reflection, wrong. guilt is good, if what provokes it can be assuaged in a competent and compassionate manner.
what i still suffer from is an absence of process, in the secular society i cherish, for redemption.
my supposition:
let’s presuppose the following: that religion served a real, positive purpose which, in its relative absence now in many of our societies, has not been replaced by other processes as compassionate. i say compassionate with circumspection, of course: religion itself has effected many horrible historical — and even current — events in humanity’s journey.
an example, then, of the redemption i mention?
well. here we are!
discretion is a very humane aspect of criminal justice systems, when used in the spirit of the law and its kindly interpretation.
i studied international criminal justice in 2017 at master’s level, and on one occasion stumbled across the following anecdote in the academic literature i was reading: italy, well known for the misuse of family power and structure, may also invoke the good of the family, leading to a better criminal justice praxis there.
most crime in all criminal justice systems is committed by young men between the ages of 18 and 26. after that age, almost automagically, its incidence tails off. some suggest there may even exist physiological reasons for this: that young male brains get hard-wired to begin settling at around the upper age band quoted.
either way, we have a criminal justice reality: young men who commit crime are also victims of crime, in the sense that they are the most vulnerable group to enter criminality, and get very little proactive support to stay out of the criminal justice system. more often, in fact, they get targeted — maybe targeted into it — via the prejudice and presumption of very many damning and defining societal forces.
in italy, then, this was the example: a law-enforcement officer heard of a young man of around 17 who had just about committed his first crime; certainly an infraction. the officer knew of the family and, instead of “inducting” the youth directly onto a path which would later head irreversibly towards criminality, went behind the back of the youngster and straight to his parents.
he laid out the situation gently and non-threateningly, explaining that the family could help. here, we could argue, was good discretion operating to the max: even, that it shouldn’t have been necessary to use discretion to keep the young man from being typed so young as criminality’s cannon fodder. it is even conceivable that the officer’s own kpi-structure and law-enforcement praxis could consist primarily of keeping people out of the system — enabling and allowing them to redeem any initial acts so that criminality became something they themselves wanted to veer from — instead of counting up the number of criminals captured and banged away.
proposal:
on the basis of the above, and in relation to things i’ve already published on a new concept of criminal justice which i’ve termed natural justice, i’d like to propose that we revisit the renewed need for a societal infrastructure of redemption.
in the absence of father confessors, that is with many people now finding them unsatisfactory to their needs (where they work, no change needed, of course!), we should create serious halfway houses between, at one extreme, the criminality and zero good of #darkfigure and #neocrime as i understand them (the 20 to 40 percent of crime and related loopholes invisible to criminal justice) and, at the other, religiously delivered confession and relief, which for secular societies no longer functions easily.
thinking more philosophically, it’s possible that the behaviours acted out as described in this post, which may then duly and rightfully lead to criminal prosecution, are encouraged because we need to be redeemed — to feel it, i mean. and unless in secular society you enter the criminal justice system, a societal-level redemption is not within reach. if we provided other ways which had nothing to do with criminal justice stigma, perhaps — too! — fewer would wish to be criminals.
i’ve often felt, as a by-the-by and in analogous way, that open-source and social-networked online communities have become so popular and active because in such spaces — the really competent and well-run ones i mean — we find the reality (or even just simulacrum, but at least this) of a democratic discourse that real democracy increasingly lacks.
what’s clear is that there exist basic human instincts and impulses, and they must always act in pairs.
doing democracy is one; where, nowadays, is the reward for its practice which doesn’t invoke the relationship of an abused partner?
and so doing ill is another; where, nowadays, is the redemption which doesn’t involve punishment and disgrace?
i’m beginning to see a way forwards for my ideas on intuition validation in the context of inspectorial it-tech architectures.
the latter are great at who and when; they’re not fit for purpose — 9/11 showed this clearly — when we’re talking about new kinds of what and how. this, in my view, is because they inevitably inhibit the capacity we otherwise had in pencil & paper days to think profoundly and fearlessly before we showed anything to the outside world. now we simply don’t know who is watching, so not everything we might think even gets thought.
i want us to make the unthinkable as thinkable as possible, in order to prevent the supremely — that is, creatively — bad people on this rock from turning their thoughts into real-world events.
attached, some thoughts from my digital note-taking which i’ve delivered this morning.
meantime, here are the slides of one of my recent roadmaps for setting up a company or organisation designed to begin to shape how we might make some of these ideas much more tangible.
over the years, since i started proposing that we see intuitive thinking as a logical dataset we should spend a lot more money on capturing, verifying and validating in systemic, human-empowering, inside-out ways, i’ve spoken to a lot of technologists.
without exception — except tbh just this last wednesday when i was at an aws-organised event in stockholm — ALL software engineers and imagineers of this kind have shown fabulous and rightful enthusiasm for the demonstrable machine progress we’ve historically witnessed since the start of humanity’s extension via tools — and yet have been absolutely resistant, sometimes to the point of rudeness, to the idea that we may move human goalposts in equal measure: that is, decide to promote the 90 percent of the human brain most of us are still apparently unable to advantage ourselves of.
one super-interesting aws person i spoke to on wednesday, for most of the evening in fact, on and off, told me at one point that the human brain uses only around 40 watts to do all the amazing things practically every example of it which has populated this rock since human history began has clearly been able to deliver. compare and contrast this with the megawatts needed to run a data centre, which even now is only able to approach human creative capabilities.
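as a back-of-the-envelope sketch of that gap (the 40-watt brain figure is as quoted to me; the data-centre draw is purely my illustrative assumption, not a measured number):

```python
# rough arithmetic on the energy gap described above.
# brain_watts is as quoted in conversation; datacentre_watts is an
# assumed ~20 MW for a mid-sized facility, purely for illustration.
brain_watts = 40
datacentre_watts = 20_000_000

ratio = datacentre_watts / brain_watts
print(f"one data centre draws the power of ~{ratio:,.0f} human brains")
# -> one data centre draws the power of ~500,000 human brains
```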
but even on wednesday at the aws event, tbh, techie people were unable to show as deep an enthusiasm for progressing humans in the way i would advocate: not within a lifetime as we have been encouraged to assume are the only goalposts we can move, but intergenerationally, which is what i am increasingly proposing.
that is, actually create a tech philosophy which mimics what film and movie tech have done for over a hundred years: make humans more important to all industrial processes through dynamics of industrialisation, instead of ensuring we are less substantial and significant through procedures of automation, obviously designed to reduce our future-present relevance.
because when you hear proponents of generative ai, of any ai, the excitement is palpable: “look what they can now do: write school and university essays that look academically rigorous.”
or write code with just a verbal instruction, is the latest one.
what they don’t ask is whether it was a task which human beings should have been asked to do in the first place.
or, more pointedly, whether it was a task for which the human beings who did do it competently should have been remunerated at the extreme levels they historically were, for carrying it out in ways that — privately speaking, admit it! — became so easy for them to charge exorbitantly for.
in my own auto-ethnographic case, i always got lower marks in my education than my brains indicated i deserved. my latest master’s was in international criminal justice: during the 2016-2017 academic year in the uk. i always assumed i was lazy. you see, i used a process which wasn’t academically orthodox: i’d create, through my brain’s tangential procedures, a brand new idea (new for me, anyways), and only then proceed to read the relevant literature … if, that is, it existed. back to front. altogether. and marked down, all the time, for it.
and in the light of chatgpt’s achievements, i also begin to wonder: because this kind of tech, to me, is nothing more than an incredibly deepened search engine. but weren’t the humans who did such jobs also “only” this? really, only this.
and so people who scored well in analogous manual activities were therefore good not at creating new worlds with their academia and coding and software development but, rather, capable of little more than grandiosely integrating essentially tech-informed, step-by-step approaches into the otherwise naturally ingenious and much more multi-layered human mind.
and so these kinds of students used procedures which were far more appropriate to it-tech environments, and thus scored highly in such education systems.
when truly we should have long ago considered such procedures an absolute anathema to all that COULD make human thought magnificent.
i mean … think of the aforementioned 90 percent of the brain whose employment we may still not manage to optimise. and then consider whether a software or tech platform’s creators would tolerate not using 90 percent of its monetising abilities.
really, it’s this: that is, my experience with technologists of all kinds who work in it-tech, where automation is absolute king. (and i say “king” advisedly.) they love telling the world how their latest robot — or whatever — will soon be indistinguishable from real human beings in how it looks, speaks, moves and interacts more widely. but why?
the technical achievement is clear. the monetisation opportunities of convincing solitary individuals they need robotic company in place of other human beings are also manifest. but why we should be such advocates of machine progress and yet, simultaneously, UTTERLY INCAPABLE of showing the same levels of enthusiasm for considering we might create environments and thinking-spaces — as i have been suggesting for five or more years — that make intergenerational human advancement possible with the support and NOT the domination of tech (that is, as per what movies and films have delivered for humans for that hundred years or so, and NOT as per the relationship between human beings and vast swathes of it-land since, say, the 1950s) … well, this is surely difficult for anyone to understand and explain. unless, of course, we really are talking womb envy: “i can’t bring a human being into the world as completely as a woman can, so instead i’ll make machines that do what humans do, only allegedly better.”
🙂
wdyt?
any truth in any of the above?
why do the boys in tech-land get so enthusiastic about the latest technologies that overtake apparently deep human capabilities — and yet reject so fervently and consistently the possibility that humans might also be able to use equal but repurposed tech to make humans more? but as humans?
i realised this not long ago. i don’t want to work with this company or that. i want to work with people who also, simultaneously, may work for one company or organisation or another.
when the institution overrides the individual from the start is when, even if all the starting-signs are cool, the individuals will one day — especially when huge amounts of money all of a sudden become likely — be inevitably and overwhelmingly overridden by their institutional framework.
i don’t intend for us to start as we don’t mean to go on.
so first i want to meet people. i want an organisational structure which generates a hybrid of #holacracy. and i want brains to show themselves to be the most important factor and matter of all, where the overarching #gutenbergofintuitivethinking and #intuitionvalidationengine projects are concerned.
because if you choose people first, and the people are right for you, then the institutions automagically will remain so too.
at least … whilst your people of choice remain at the institution in question. and they will do in general, for sure — if the institution remains worth staying at. for in the question of #intuitionvalidation there is no building-block more significant than the human as individual.
i think i upset a lot of people. i remember a more than hour-long conversation with faceless executives from a big us tech corporation i really value and would love one day to work with.
i say “faceless” neutrally, mind: they had no faces, just circles with initials; and were never introduced to me. six or seven plus the person who organised the video-chat. during lockdown, it was.
i asked them the above question: there was silence for around ten seconds. in the event, no one replied at all. the fear was palpable. the fear that someone would say something which someone else would report back, and forever mark a person’s career, without recourse to explanation.
or so i thought. on reflection, maybe i had gone too far. maybe it was wrong for me to suggest their machines weren’t up to the job of beating creatively criminal terrorists. maybe it was wrong for me to suggest we could do more to creatively crimefight: to make human beings capable of being as nonconformist for the good as the putins et al of recent years have manifestly, longitudinally, been for the extreme ill.
here’s the thing: maybe i wasn’t wrong, but maybe i wasn’t right enough.
obviously, if the exercise were delivered to its full extent, whatever your answer the assembled would inevitably agree that both machines and hollywood scriptwriters (or their analogues: their skillsets, at least) would be the best solution. but even here problems would exist — and i would go so far as to suggest, actually, real roadblocks.
people who operate by rules and regulations — conformists we all need that make the world function with justice and fairness — don’t find it easy to value the contribution of nonconformists who, more often than not, make their own hugely competent rules. and, then again, of course, vice versa. conformists don’t always float the boats of nonconformists as much as they should.
so to allude to the fact that we need to be as good as the supremely creative criminality out there in our own forging of a singular combination of intuitive arationality with the best machines we can manufacture is NOT the solution.
no.
the solution lies in ensuring the cultures of nonconformism and conformism may come together to facilitate this outcome of creative crimefighting and national security … this … just this … has to be the solution.
if we minimally know our philosophy, a thesis — being that crimefighting and national security need ever more traditional ai to deliver a fearsome capacity to pattern-recognise nonconformist evil out of existence, alongside people who press the operational buttons on the back of such insights — will get, from me, its antithesis: that is … we need just as much, if not more, what we might term the “human good” to battle the “human bad”.
and maybe the machines, too. alongside and in fabulous cahoots.
yes. and maybe, of course, the machines.
but what if we change the process? what if a synthesis? as in all good philosophy?
1. to find the nonconformist what and how — the next 9/11 before it arises — we use hollywood and analogous creativity to imagineer such events.
2. and to find the who and when of such newly uncovered neocrimes, we apply the obviously, terrifyingly useful pattern-recognition capabilities of the ever more traditional ai. so that its adepts, its supporters, its proponents … and those conformists who, more generally, are comfortable with such approaches … well … may simply be comfortable with this new paradigm i propose.
in this scenario, the suits and the flowery shirts work in consonance but never simultaneously. and so we square the circle of respect between the two parties, which would otherwise always be difficult to engineer and sustain long-term. (a minimal sketch of the two phases follows below.)
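here’s that sketch — purely illustrative: every function name, data shape and signal below is a hypothetical stand-in of my own for the two-phase idea, not any existing system:

```python
# purely illustrative sketch of the two-phase paradigm above.
# every name, data shape and signal here is a hypothetical
# assumption of mine, not any existing system.

from dataclasses import dataclass

@dataclass
class Scenario:
    what: str  # the imagined neocrime itself
    how: str   # the imagined method of delivery

def imagineer_scenarios() -> list[Scenario]:
    # phase 1: the "flowery shirts" (hollywood-style creatives)
    # invent candidate whats and hows before any machine runs.
    return [
        Scenario(what="coordinated infrastructure attack",
                 how="simultaneous insider access at multiple sites"),
    ]

def pattern_match(scenario: Scenario, signals: list[str]) -> bool:
    # phase 2: conventional pattern-recognition searches signal
    # streams for the who and when of a scenario already written.
    keyword = scenario.how.split()[0]
    return any(keyword in s for s in signals)

# the phases run in consonance but never simultaneously:
scenarios = imagineer_scenarios()  # the creative phase completes first
signals = ["simultaneous badge-ins flagged at three sites"]
for sc in scenarios:               # only then does the machine phase begin
    if pattern_match(sc, signals):
        print(f"follow up on: {sc.what}")
```

the point of the separation is exactly the one made above: the creatives never have to sit in the machine’s room, and vice versa.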
i am wondering this morning if the reason some companies are choosing no longer to embrace #hybridwork is that it may make it easier for their staff to make the #gigeconomy and #sideproject-activities function efficiently and compatibly — but no longer with clear and deep #manager-oversight. the option to buy into new #ip generated by such processes then, of course, wholly escapes them.
previously, the #gigeconomy was a control mechanism of quite some power as far as salaries and conditions were concerned. but now #hybrid and #homeworking may actually turn this balance upside down. and some companies may not have the relevant cultures to be able to embrace the advantages that might make it all work for all.
either way, i think the idea of a #nonbinary #distributedprivilege to further the interests of a more efficient socioeconomic network of common future-present activity would serve to further the interests, also, of #diversity and #creativity in our western liberal democracies.
attached, a (big!) little something i was working on yesterday afternoon.
a slide-deck of 33 slides, down to about six.
content itself still to do, mind … but you can already see where my #roadmap is heading.
and i’ll be taking it with me to #sweden next week: a country which has striven so long to truly, actually, sincerely deliver a society of #distributedprivilege.
what i’ve been aiming at all this time, i now realise, with ALL my projects around #intuitionvalidation.
and it makes me happy to begin slowly to know how to express it.
to not connect when something connects only unsatisfactorily is a really important skillset, and one which the good human brain exhibits.
the capacity to forget for a bit is a human virtue of the highest thinking.
to create a future NOT based out of the past is what separates humans from our tools. (even as sometimes we ourselves are turned abusively into aspects of the latter.)
post:
so: #lean and startup-land solve one problem, and one problem always. the problem they solve is not what the customer is hurting most about. it’s not the pain in the customer journey, as tech dogma claims.
no.
it’s the pain on the journey on which both client and supplier begin to ride together. it’s the pain-point that the two of them can agree on as being where they must meet:
not what i as client most need sorting necessarily, but what i can afford right now according to your price list.
not even what you as supplier are most objectively suited to resolving but, rather, what you as supplier find it easiest to convince me is what i need, once you’ve identified where it is i am most prepared to first stump up some dough. (and with this, any dough ffs. just so long as some.)
and finally, most definitely, in no way do such processes and spaces guarantee at all clearly that the world’s most complex (note: not complicated) and, therefore, pressing problems will find their solutions even attended to, never mind provided, by such startup ecosystems, business mindsets, and wider ways of working deeply.
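to make that selection dynamic concrete, a toy model — every number, name and weighting below is my own illustrative assumption, not data:

```python
# toy model of the claim above: what gets solved is not the client's
# biggest hurt, but the overlap of "what the client will pay for now"
# and "what the supplier finds easiest to sell". all values invented.

pains = [
    # (pain point, hurt to client, budget available now, ease of selling)
    ("deep complex problem",     10, 1, 1),
    ("moderate workflow niggle",  5, 6, 7),
    ("cosmetic dashboard tweak",  2, 9, 9),
]

# startup-land's pick: maximise budget * ease, ignoring hurt entirely
chosen = max(pains, key=lambda p: p[2] * p[3])
# the need-first pick: maximise hurt
needed = max(pains, key=lambda p: p[1])

print("gets solved:", chosen[0])  # -> cosmetic dashboard tweak
print("most needed:", needed[0])  # -> deep complex problem
```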
speculative tech like #ai — supposedly beating humans, incompletely, since at least the 1980s (and probably for decades before) — always needs to convince us it’s just about there. the funding, for one thing, demands it:
1. until an #ai (let’s use this loose term for now) is able to get so frustrated with an insoluble problem that it ends up getting blind drunk … only to wake up in a haze to a solution perfectly formed …
2. until an #ai awakes from a beautiful dream with a fabulous new idea subconsciously imagineered to perfection overnight … for it then, in the second of that awakening, to forget totally the detail — or even the concept itself — for six further months … and then for everything to come flooding back in even more astonishing form, to be finally recorded, scoped and amazingly implemented …
3. until an #ai may choose to NOT make connections or identify patterns or see relationships or deliver on finalities … because something doesn’t fit quite as right as it one day might …
4. until an #ai makes a leap of faith based on data it INVENTED OUT OF THE FUTURE … because creation is the nature of the human brain … even when utterly, utterly, mindlessly terrifying:
until all the above and other things happen … well … our dearly beloved proponents of #ai (thus loosely described) will continue to argue that we can do everything on the basis of what’s already been patterned. and that the apex of human achievement involves ONLY copying what others have already done — just better: that is, faster, more accurately, for longer without tiring — essentially, an alpha male’s orgasm … no?
🙂
and thus the competent delivery of #ai and similar technologies only ever needs to aspire to this.
the new #ai unleashed onto the world recently — #chatgpt — is praised for writing school and university essays indistinguishable from human ones. it’s praised for giving advice, crackpot-like or otherwise. it — and similar tech — receives plaudits for faking flemish masters’ painting styles. i haven’t seen, but surely imagine, that songs in the key of sheeran have already flooded the social networks of this planet, devised at the hands of this sort of code.
we need to stop this.
we need to stop saying what #ai can do.
we need to start asking two questions:
1. if #ai can do what a human being can do, is it actually an achievement of historical record and delivery that another human being can do exactly the same — the very process the #ai in question is also able to reproduce? that is: why WANT to indistinguishably copy a flemish master? why not always prefer to discover the master — or mistress — deep and unrevealed inside your very own being?
because whilst copying is cool, it’s only cool if it leads to synthesis and final originality. after all, in the act of production we may uncover the output of creation. and then understand, perhaps, that there might dwell well inside our persons — each of us — geniuses of magnificent shine.
(equally, of course, we might ask — as rightly people have — if a machine so easily reproduces a human activity such as writing an essay for university or school, whether our civilisation is focussed on the right testing regimes in the first place. but that’s for a quite different post.)
2. it’s bad enough to find ourselves needing to ask whether the things our tech can duplicate in us are things we would — even in the absence of such tech — still consider societally desirable to carry on doing ourselves. it’s much worse to say a whole tech ecosystem — that which is structured around #lean, customer journeys, pain-points and the overbearing need to invoice a digital service or product asap (in short, startup land as we have known and loved it for so long) — doesn’t solve the vast majority of remaining deep and complex world problems because, actually, its practitioners and owners have designed it specifically so it wouldn’t.
there’s much more money in planning a drip-feed of ten years of product releases than in creating a system which might identify the real dangers of immensely, horrifyingly human #nonconformism: that is, its thinking processes and the disadvantage we now face, as we have become progressively incapacitated from doing this sort of activity within the corporate-sanctioned dynamics of diluting teamwork.
because whilst we in corporate-land choose to inhibit creative #nonconformists via our tech architectures and business structures, the putins of the world have clearly been doing — longitudinally — quite the opposite.
haven’t they?
so let’s not only stop to duly consider whether a tech that can repeat what the majority of us humans do every day is worth the thunder and lightning its proponents will continue to generate. yes: it’s useful in many respects, of course. but let’s also stop asking what our tech can do and start, instead, to focus our funding on OUTCOMES that need DELIVERY, not BOTTOM LINES that must be DELIVERED.
otherwise, we get what we get: and so we have.
an example? just one? here’s one … a western defensive and attacking capability to identify and forestall future ukraines … and maybe now, as we are finally allowed to voice these things, neo-terrorist pandemics too … before they even become a glint in evil organisations’ eyes:
let’s stop fetishising the machines whose investments have cost us so much, and whose future ROIs we have planned out to the millimetre.
let’s be human, instead.
let’s say:
we can make money out of making humans more human, not less significant
we can make money out of prioritising solutions that cure rather than perpetuate
we can choose to focus on complex and spread the word that this won’t mean complicated
and we can finally argue that the various startup ecosystems we have become sooo used to — as they provided the only furniture with which we have lived our comfortable lives — do have their seriously disadvantageous downsides: whilst simultaneously (of course) solving some of the accumulatively littler things fabulously, equally they have served to diminish the big things gravely — especially as it becomes increasingly apparent that this has been a deliberate dynamic all along …
because just because you’ve invented a system which does lots of relatively small things massively — that is, automatedly — doesn’t mean you are anywhere near capable of, or prepared for, delivering on the resolution of massive things pointedly.
and, after all, this is a difference we are currently choosing not to even discuss.