three good things happened today: all related to how i perceive the world.
1. first, i do have a death wish: this is why, when i first read him, hemingway sooo immediately clicked with me.
2. however, i don’t want to be unreasonable or hurtful to others in my goal to achieve this outcome. i also most definitely don’t want support to ameliorate it. amelioration is the biggest wool-over-the-eyes of our western democratic time. i don’t want to be part of a process that perpetuates its cruelties.
3. my strategy — that is, my only strategy — will from now on be as follows: i shall say and write about everything that i judge needs to be called out, in such a way that the powerful i will be bringing to book, day after day after day, will one day have no alternative but to literally shoot me down.
in order, then, to make effective the above, i resolve:
a) to solve the problem of my personal debt, acquired mainly due to my startup activities, so the only way in the future that the powerful shall be able to shoot me down is by literally killing me.
for my mistake all along was to sign up to the startup ecosystem, as it stands, as a tool for achieving my personal and professional financial independence:
• startuphunch.com (being my final attempt at making startup human)
as this personal debt is causing me much mental distress and, equally, is clearly a weakness i show to an outside world i now aim to comprehensively and fully deconstruct, i do need, as a massive first step, to deal with it properly.
b) once a) is resolved, i shall proceed to attack ALL power wherever it most STEALTHILY resides.
that is, i focus on this kind of power: its stealthiest and most cunning versions.
the ones where it appears we are having favours done for us, for example.
specifically, that is, big tech. but many many others, too.
what essentially constitutes the driving forces behind zemiology, loopholes, neo-crimes, and similar legally accepted but criminally immoral societal harm; all of which, as a general rule, is most difficult right now to track, trace, investigate and prosecute.
this is why i have concluded that my natural place of work is investigative journalism. and where i want to specialise — in this aforementioned sector and field of endeavour — is in the matter of how big tech has destroyed our humanity. but not as any collateral, accidental, or side effect of a principal way of being it may legitimately manifest.
no.
purposefully; deliberately; in a deeply designed way, too … to mainly screw those clients and customers whose societies and tax bases it so voraciously and entirely dismantles.
to screw, and — equally! — control. and then dispose of lightly and casually, when no longer needed, or beneficial to bottom lines various.
and so as a result of all this, i see that having a death wish is beneficial: if channelled properly, as from today i now intend it shall be, it will make me fearless as i never dared to be. fearless in thought and disposition. fearless even when made fun of.
not in order to take unreasonable risks with my life — or anyone else’s: no.
rather, to know that life doesn’t exist when the things i see clearly are allowed to, equally clearly, continue.
and to want deeply, deeper than ever in my life, to enable a different kind of life for everyone.
NOT just for the self-selected few. those who lead politics, business and the acts of pillage and rape in modern society.
not just for them.
a better life for everyone, i say. everyone.
because i don’t care about mine. i care that mine should make yours fine.
now do you see? this is what makes me feel useful. nothing else. nothing else at all. and certainly not finding personal happiness. that would only blunt the tool.
maybe #ai can do a few things humans are paid to do. but that doesn’t mean that what we’re paid to do by businesses everywhere is where our real creativity as unpredictable humans is exhibited — or even widely fomented.
the proposition:
maybe #it-#tech’s architectures have for so long forced us — as the humans we are — into undervaluing, underplaying and underusing our properly creative sides that what #ai’s proponents determine are human creative capabilities are actually the dumbed-down instincts and impulses of what would otherwise be sincerely creative manifestations of human thinking. that is: were we to make more widely available the architectures i suggest — for example, just to start with, a decent return to a secrecy-positive digital form of pencil & paper, so we DON’T consistently inhibit real creativity — we would encourage a return to our much more creatively childlike states of undeniably out-of-the-box enquiry …
augmentedintuition.com | a historical whitepaper advocating an augmented human intuition
in this sense, then, the real lessons of recent #gpt-x are quite different: not how great #ai is now delivering, but how fundamentally toxic to human creativity the privacy- and secrecy-destroying direction of ALL #it-#tech over the years has become. because this very same #tech did start out in its early days as hugely secrecy- and privacy-sensitive. one computer station; one hard-drive; no physical connections between yours and mine: digital pencil & paper indeed!
it’s only since we started laying down cables and access points for some, WITHOUT amending the radically inhibiting architecture of all-seeing admins overlording minimally-privileged users, that this state of affairs has come about: an #it-mediated and supremely marshalled & controlled human creativity.
no wonder #ai appears so often to be creative. our own human creativity has been fatally winged by #tech, to the extent that the god now erected as #ai has begun to make us entirely in HIS image, rather than extend and enhance our own intrinsic and otherwise innate preferences.
summary:
it’s not, therefore, that #it-#tech has been making #ai more human: it’s that the people who run #bigtech have been choosing to shape humans out of their most essential humanity.
and so as humans who are increasingly less so, we become prostrate-ducks for their business pleasures and goals.
an alternative? #secrecypositive, yet #totalsurveillance-compliant software and hardware architectures: back, then, to recreating the creativity-expanding, enhancing and upskilling tools that a digital pencil & paper used to deliver:
secrecy.plus/spt-it | a return to a secrecy- and privacy-positive “digital pencil & paper”
a final thought:
in a sense, even from #yahoo and #google #search onwards, both the #internet and the #web were soon designed (it’s always a choice, this thing we call change: change is always inevitable, true, it’s a fact … but the “how” — its nature — is never inevitable) … so from #search onwards, it all — in hindsight — became an inspectorial, scraping set of tools to inhibit all human creative conditions absolutely.
the rationale? well, that #bigmoney needed consumers who thought they were creators, not creators who would create distributed and uncontrollable networks of creation under the radar.
and then with the advent of newer #ai tools, which serve primarily to deliver on the all-too-human capability to bullshit convincingly, #it and related are finally, openly, brazenly, shamelessly being turned on all human beings who don’t own the means of production.
we were given the keys to the kingdom, only to discover it was a #panopticon we would never escape from. because instead of becoming the guards, that is to say the watchers, we discovered — too late — we were forcefully assigned the roles of the watched:
and so not owning the means of production, with its currently hugely toxic concentrations of wealth and riches, means that 99.9 percent of us are increasingly zoned out of the minimum conditions a real human creativity needs to even begin to want to function in a duly creative manner at all.
that is to say, imho, practically everything we see in corporate workplaces which claims the tag of creativity is simple repurposing of the existing. no wonder the advocates of #ai are able to gleefully proclaim their offspring’s capabilities to act as substitutes for such “achievements”.
i realised this not long ago. i don’t want to work with this company or that. i want to work with people who also, simultaneously, may work for one company or organisation or another.
when the institution overrides the individual from the start, then — even if all the starting-signs are cool — the individuals will one day, especially when huge amounts of money all of a sudden become likely, be inevitably and overwhelmingly overridden by their institutional framework.
i don’t intend for us to start as i don’t mean us to go on.
so first i want to meet people. i want an organisational structure which generates a hybrid of #holacracy. and i want brains to show themselves to be the most important factor of all, where the overarching #gutenbergofintuitivethinking and #intuitionvalidationengine projects are concerned.
because if you choose people first, and the people are right for you, then the institutions automagically will remain so too.
at least … whilst your people of choice remain at the institution in question. and they will do in general, for sure — if the institution remains worth staying at. for in the question of #intuitionvalidation there is no building-block more significant than the human as individual.
i think i upset a lot of people. i remember a more than hour-long conversation with faceless executives from a big us tech corporation i really value and would love one day to work with.
i say “faceless” neutrally, mind: they had no faces, just circles with initials; and were never introduced to me. six or seven plus the person who organised the video-chat. during lockdown, it was.
i asked them the above question: there was silence for around ten seconds. in the event, no one replied at all. the fear was palpable. the fear that someone would say something which someone else would report back, and forever mark a person’s career, without recourse to explanation.
or so i thought. on reflection, maybe i had gone too far. maybe it was wrong for me to suggest their machines weren’t up to the job of beating creatively criminal terrorists. maybe it was wrong for me to suggest we could do more to creatively crimefight: to make human beings capable of being as nonconformist to the good as the putins et al of recent years have manifestly been longitudinally to the extreme ill.
here’s the thing: maybe i wasn’t wrong, but maybe i wasn’t right enough.
obviously, if the exercise was delivered to its full extent, whatever your answer the assembled would inevitably agree that both machines and hollywood scriptwriters (or their analogues: their skillsets, at least) would be the best solution. but even here problems would exist — and i would go so far as to suggest, actually, real roadblocks.
people who operate by rules and regulations — conformists we all need that make the world function with justice and fairness — don’t find it easy to value the contribution of nonconformists who, more often than not, make their own hugely competent rules. and, then again, of course, vice versa. conformists don’t always float the boats of nonconformists as much as they should.
so to allude to the fact that we need to be as good as the supremely creative criminality out there in our own forging of a singular combination of intuitive arationality with the best machines we can manufacture is NOT the solution.
no.
the solution lies in ensuring the cultures of nonconformism and conformism may come together to facilitate this outcome of creative crimefighting and national security … this … just this … has to be the solution.
if we minimally know our philosophy, a thesis — being that crimefighting and national security need ever more traditional ai to deliver a fearsome capacity to pattern-recognise nonconformist evil out of existence, alongside people who press the operational buttons on the back of such insights — will get, from me, its antithesis: that is … we need just as much, if not more, what we might term the “human good” to battle the “human bad”.
and maybe the machines, too. alongside and in fabulous cahoots.
yes. and maybe, of course, the machines.
but what if we change the process? what if a synthesis? as all good philosophy?
1. to find the nonconformist what and how — the next 9/11 before it arises — we use hollywood and analogous creativity to imagineer such events.
2. and to find the who and when of such newly uncovered neocrimes, we apply the obviously terrifyingly useful pattern-recognition capabilities of the ever more traditional ai — so that their adepts, their supporters, their proponents … and those conformists who more generally are comfortable with such approaches … well … may simply be comfortable with this new paradigm i propose.
in this scenario, the suits and the flowery shirts work in consonance but never simultaneously. and so we square the circle of respect between the two parties — something which, long-term, would always be difficult to sustainably engineer and forge permanently.
to not connect when something only unsatisfactorily connects is a really important skill the good human brain exhibits.
the capacity to forget for a bit is a human virtue of the highest thinking.
to create a future NOT based out of the past is what separates humans from our tools. (even as sometimes we ourselves are turned abusively into aspects of the latter.)
post:
so: #lean and startup-land solve one problem and one problem only. the problem they solve is not what the customer is hurting most about. it’s not the pain in the customer journey, as tech dogma claims.
no.
it’s the pain on the journey on which both client and supplier begin to ride together. it’s the pain-point that the two of them can agree on as being where they must meet:
not what i as client most need sorting necessarily, but what i can afford right now according to your price list.
not even what you as supplier are most objectively suited to resolving but, rather, what you as supplier find it easiest to convince me is what i need, once you’ve identified where it is i am most prepared to first stump up some dough. (and with this, any dough ffs. just so long as some.)
and finally, most definitely, in no way do such processes and spaces guarantee at all clearly that the world’s most complex (note: not complicated) and, therefore, pressing problems will find their solutions even attended to, never mind provided, by such startup ecosystems, business mindsets, and wider ways of working deeply.
speculative tech like #ai — which has supposedly been on the point of beating humans since at least the 1980s (and probably for decades before) — always needs to convince us it’s just about there. the funding, for one thing, demands it:
1. until an #ai (let’s use this loose term for now) is able to get so frustrated with an insoluble problem that it ends up getting blind drunk … only to wake up in a haze to a solution perfectly formed …
2. until an #ai awakes from a beautiful dream with a fabulous new idea subconsciously imagineered to perfection overnight … for it then, in the second of that awakening, to forget totally the detail — or even the concept itself — for six further months … and then for everything to come flooding back in even more astonishing form, to be finally recorded, scoped and amazingly implemented …
3. until an #ai may choose to NOT make connections or identify patterns or see relationships or deliver on finalities … because something doesn’t fit quite as right as it one day might …
4. until an #ai has a leap of faith based on data it INVENTED OUT OF THE FUTURE … because creation is the nature of the human brain … even when utterly utterly mindlessly terrifying:
until all the above and other things happen … well … our dearly beloved proponents of #ai (thus loosely described) will continue to argue that we can do everything on the basis of what’s already been patterned. and that the apex of human achievement involves ONLY copying what others have already done — just better: that is, faster, more accurately, for longer without tiring — essentially, an alpha male’s orgasm … no?
🙂
and thus the competent delivery of #ai and similar technologies only ever needs to aspire to this.
the new #ai unleashed onto the world recently — #chatgpt — is praised for writing school and university essays indistinguishable from a human’s. it’s praised for giving advice, crackpot-like or otherwise. it — and similar tech — receives plaudits for faking flemish masters’ painting styles. i haven’t seen them, but i surely imagine that songs in the key of sheeran have already flooded the social networks of this planet, devised at the hands of this sort of code.
we need to stop this.
we need to stop saying what #ai can do.
we need to start asking two questions:
1. if #ai can do what a human being can do, is it actually an achievement of historical record and delivery that it can do exactly the same as what another human being already does? that is: why WANT to indistinguishably copy a flemish master? why not always prefer to discover the master — or mistress — deep and unrevealed inside your very own being?
because whilst copying is cool, it’s only cool if it leads to synthesis and final originality. after all, in the act of production we may uncover the output of creation. and then understand, perhaps, that there might inhabit well inside our persons — each of us — geniuses of magnificent shine:
(equally, of course, we might ask — as rightly people have — if a machine so easily reproduces a human activity such as writing an essay for university or school, whether our civilisation is focussed on the right testing regimes in the first place. but that’s for a quite different post.)
2. it’s bad enough to find ourselves needing to ask whether the things our tech can duplicate in us are things we would — in the absence of such tech — even consider societally desirable to carry on doing. it’s much worse to say that a whole tech ecosystem — the one structured around #lean, customer journeys, pain-points and the overbearing need to invoice a digital service or product asap (in short, startup land as we have known and loved it for so long) — doesn’t solve the vast majority of remaining deep and complex world problems because, actually, its practitioners and owners have designed it specifically so it wouldn’t.
there’s much more money in planning a drip-feed of ten years of product releases than in creating a system which might identify the real dangers of immensely, horrifyingly human #nonconformism: that is, its thinking processes and the disadvantage we now face, as we have become progressively incapacitated to do this sort of activity in the corporate-sanctioned dynamics of diluting teamwork.
because whilst we in corporate-land choose to inhibit creative #nonconformists via our tech architectures and business structures, the putins of the world have clearly been doing — longitudinally — quite the opposite.
haven’t they?
so let’s stop and duly think whether a tech that can repeat what the majority of us humans do every day is worth the thunder and lightning its proponents will continue to generate. yes: it’s useful in many respects, of course. but we must stop asking what our tech can do and start, instead, to focus our funding on OUTCOMES that need DELIVERY, not BOTTOM LINES that must be DELIVERED.
otherwise, we get what we get: and so we have.
an example? just one? here’s one … a western defensive and attacking capability to identify and forestall future ukraines, and maybe now, as we are allowed to voice these things finally … alongside neo-terrorist pandemics … before they even become a glint in evil organisations’ eyes:
let’s stop fetishising the machines whose investments have cost us so much, and whose future ROIs we have planned out to the millimetre.
let’s be human, instead.
let’s say:
we can make money out of making humans more human, not less significant
we can make money out of prioritising solutions that cure rather than perpetuate
we can choose to focus on complex and spread the word that this won’t mean complicated
and we can finally argue that the startup ecosystems various that we have become sooo used to — as they provided the only furniture with which we have lived our comfortable lives — do have their seriously disadvantageous downsides: whilst simultaneously (of course) solving some of the accumulatively littler things fabulously, equally they have served to diminish the big things gravely — especially as it becomes increasingly apparent that this has been a deliberate dynamic all along …
because just because you’ve invented a system which does lots of relatively small things massively — that is, automatedly — doesn’t mean you are anywhere near capable of, or prepared for, delivering on the resolution of massive things pointedly.
and, after all, this is a difference we are currently choosing not to even discuss.