on returning to our childhood states of creative enquiry … and to the max, maybe?

introduction:

maybe #ai can do a few things humans are paid to do. but that doesn’t mean that what businesses everywhere pay us to do is where our real creativity as unpredictable humans is exhibited — or even widely fostered.

the proposition:

maybe #it-#tech’s architectures have for so long forced us — as the humans we are — into undervaluing, underplaying and underusing our properly creative sides that what #ai’s proponents call human creative capabilities are actually the dumbed-down instincts and impulses of what would otherwise be sincerely creative manifestations of human thinking. that is: were we to make the architectures i suggest more widely available — for example, just to start with, a decent return to a secrecy-positive digital form of pencil & paper, so we DON’T consistently inhibit real creativity — we might encourage a return to our much more creatively childlike states of undeniably out-of-the-box enquiry …

augmentedintuition.com | a historical whitepaper advocating an augmented human intuition

in this sense, then, the real lessons of recent #gpt-x are quite different: not how great #ai is now delivering, but how fundamentally toxic to human creativity the privacy- and secrecy-destroying direction of ALL #it-#tech over the years has become. because this very same #tech did start out in its early days as hugely secrecy- and privacy-sensitive. one computer station; one hard-drive; no physical connections between yours and mine: digital pencil & paper indeed!

it’s only since we started laying down cables and access points for some, WITHOUT amending the radically inhibiting architecture of all-seeing admins overlording minimally-privileged users, that this state of affairs has come about: an #it-mediated and supremely marshalled & controlled human creativity.

no wonder #ai appears so often to be creative. our own human creativity has been fatally winged by #tech, to the extent that the god now erected as #ai has begun to make us entirely in HIS image, NOT to extend and enhance our own intrinsic and otherwise innate preferences.

summary:

it’s not, therefore, that #it-#tech has been making #ai more human: it’s that the people who run #bigtech have been choosing to shape humans out of their most essential humanity.

and so, as humans who are increasingly less so, we become prostrate ducks for their business pleasures and goals.

an alternative? #secrecypositive, yet #totalsurveillance-compliant software and hardware architectures: back, then, to recreating the creativity-expanding, enhancing and upskilling tools that a digital pencil & paper used to deliver:

secrecy.plus/spt-it | a return to a secrecy- and privacy-positive “digital pencil & paper”

a final thought:

in a sense, even from #yahoo and #google #search onwards, both the #internet and the #web were soon designed (it’s always a choice, this thing we call change: change itself is inevitable, true — but the “how”, its nature, never is) … so from #search onwards, it all — in hindsight — became an inspectorial, scraping set of tools to inhibit all human creative conditions absolutely.

the rationale? that #bigmoney needed consumers who thought they were creators, not creators who would create distributed and uncontrollable networks of creation under the radar.

and then with the advent of newer #ai tools, which serve primarily to deliver on the all-too-human capability to bullshit convincingly, #it and related are finally, openly, brazenly, shamelessly being turned on all human beings who don’t own the means of production.

we were given the keys to the kingdom, only to discover it was a #panopticon we would never escape from. because instead of becoming the guards, that is to say the watchers, we discovered — too late — we were forcefully assigned the roles of the watched:

thephilosopher.space | #NOTthepanopticon

and so not owning the means of production, with its currently hugely toxic concentrations of wealth, means that 99.9 percent of us are increasingly zoned out of the minimum conditions a real human creativity needs to even begin to function in a duly creative manner at all.

that is to say, imho, practically everything we see in corporate workplaces which claims the tag of creativity is simple repurposing of the existing. no wonder the advocates of #ai are able to gleefully proclaim their offspring’s capabilities to act as substitutes for such “achievements”.

wouldn’t you with all that money at stake?

secrecy.plus/hmagi | #hmagi

why don’t people who love advocating machine progress find it easy to advocate analogous processes of human progress?

it’s a long title, but it’s a big subject.

over the years, since i started proposing we see intuitive thinking as a logical dataset we should spend a lot more money on capturing, verifying and validating in systemic, human-empowering, inside-out ways, i’ve spoken to a lot of technologists.

with one exception — tbh, just this last wednesday, when i was at an aws-organised event in stockholm — ALL software engineers and imagineers of this kind have shown fabulous and rightful enthusiasm for the demonstrable machine progress we’ve historically witnessed since humanity first extended itself via tools — and yet have been absolutely resistant, sometimes to the point of rudeness, to the idea that we might move human goalposts in equal measure: that is, decide to promote the 90 percent of the human brain most of us are still apparently unable to take advantage of.

one super-interesting aws person i spoke to on wednesday, for most of the evening in fact, on and off, told me at one point that the human brain uses only around 40 watts to do all the amazing things humans have clearly been delivering since history began. compare and contrast this with the megawatts needed to run a data centre which, even now, can only approach human creative capabilities.
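that 40-watt figure invites a quick back-of-the-envelope sum. a minimal sketch, assuming an illustrative 20 MW facility (the megawatt figure is my own assumption for the arithmetic, not something from that conversation):

```python
# back-of-the-envelope: human brain wattage vs data-centre wattage
brain_watts = 40          # the figure quoted in conversation (approximate)
datacentre_watts = 20e6   # an assumed, illustrative 20 MW facility
brains_per_datacentre = datacentre_watts / brain_watts
print(f"one such data centre draws the power of ~{brains_per_datacentre:,.0f} human brains")
# -> one such data centre draws the power of ~500,000 human brains
```

half a million brains’ worth of power, on these assumed numbers, to merely approach what one brain does on forty watts.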

but even on wednesday at the aws event, tbh, techie people were unable to show as deep an enthusiasm for progressing humans in the way i would advocate: not within a single lifetime, which we have been encouraged to assume is the only goalpost we can move, but intergenerationally, which is what i am increasingly proposing.

that is, actually create a tech philosophy which mimics what film and movie tech have done for over a hundred years: make humans more important to all industrial processes through dynamics of industrialisation, instead of ensuring we are less substantial and significant through procedures of automation, obviously designed to reduce our future-present relevance.

because when you hear proponents of generative ai, of any ai, the excitement is palpable: “look what they can now do: write school and university essays that look academically rigorous.”

or, the latest one, write code from just a verbal instruction.

what they don’t ask is whether it was a task which human beings should have been asked to do in the first place.

or, more pointedly, a task for which the human beings who did it competently should have been remunerated at the extreme levels they historically have been — for carrying it out in ways that (privately speaking, admit it!) became so easy for them to charge exorbitantly for.

in my own auto-ethnographic case, i always got lower marks in my education than my brains indicated i deserved. my latest master’s was in international criminal justice, during the 2016-2017 academic year in the uk. i always assumed i was lazy. you see, i used a process which wasn’t academically orthodox: i’d create, through my brain’s tangential procedures, a brand new idea (new for me, anyways), and only then proceed to read the relevant literature … if, that is, it existed. back to front, altogether. and marked down, consistently, all the time.

and in the light of chatgpt’s achievements, i also begin to wonder: because this kind of tech, to me, is nothing more than incredibly deepened search engines. but weren’t the humans who did such jobs also “only” this? really, only this.

and so people who scored well in analogous manual activities were therefore good not at creating new worlds with their academia and coding and software development but, rather, capable of little more than grandiosely integrating essentially tech-informed, step-by-step approaches into the otherwise naturally ingenious and much more multi-layered human mind.

and so these kinds of students used procedures which were far more appropriate to it-tech environments, and thus scored highly in such education systems.

when truly we should long ago have considered such procedures absolute anathema to all that COULD make human thought magnificent.

i mean … think of the aforementioned 90 percent of the brain whose employment we may still not be managing to optimise. and then consider a software or tech platform whose creators tolerated not using 90 percent of its monetising abilities.

really, it’s this: that is, my experience with technologists of all kinds who work in it-tech, where automation is absolute king. (and i say “king” advisedly.) they love telling the world how their latest robot — or whatever — will soon be indistinguishable from real human beings in how it looks, speaks, moves and interacts more widely. but why?

the technical achievement is clear. the monetisation opportunities of convincing solitary individuals they need robotic company in place of other human beings are also manifest. but the “why” we should be such advocates of machine progress and yet, simultaneously, be UTTERLY INCAPABLE of showing the same enthusiasm for creating environments and thinking-spaces — as i have been suggesting for five or more years — that make intergenerational human advancement possible with the support and NOT the domination of tech (that is, as per what movies and films have delivered for humans for that hundred years or so, and NOT as per the relationship between human beings and vast swathes of it-land since, say, the 1950s) … well, this is surely difficult for anyone to understand or explain. unless, of course, we really are talking womb envy: “i can’t bring a human being into the world as completely as a woman can, so instead i’ll make machines that do what humans do, only allegedly better.”

🙂

wdyt?

any truth in any of the above?

why do the boys in tech-land get so enthusiastic about the latest technologies that overtake apparently deep human capabilities — and yet reject so fervently and consistently the possibility that humans might also be able to use equal but repurposed tech to make humans more? but as humans?

choose people to work with, not their institutions

https://www.linkedin.com/posts/jewel-pena-53257213_activity-7041962988248952832-m7Pq

i realised this not long ago. i don’t want to work with this company or that. i want to work with people who also, simultaneously, may work for one company or organisation or another.

when the institution overrides the individual from the start is when — even if all the starting-signs are cool — the individuals will one day, especially when huge amounts of money all of a sudden become likely, be inevitably and overwhelmingly overridden by their institutional framework.

i don’t intend for us to start as we don’t mean to go on.

so first i want to meet people. i want an organisational structure which generates a hybrid of #holacracy. and i want brains to prove themselves the most important factor of all, where the overarching #gutenbergofintuitivethinking and #intuitionvalidationengine projects are concerned.

www.ivepics.com

because if you choose people first, and the people are right for you, then the institutions automagically will remain so too.

at least … whilst your people of choice remain at the institution in question. and they will, in general, for sure — if the institution remains worth staying at. for in the question of #intuitionvalidation there is no building-block more significant than the human as individual.

thephilosopher.space/space-1-0-the-philosopher-space

platformgenesis.com

www.secrecy.plus

omiwan.com

mils.page/presentations | #milspage #presentations

mils.page/intuition-day | #milspage #intuitionday

mils.page/distributed-privilege | #distributedprivilege

stop saying what #ai can do and start saying what needs to get done

introduction:

choosing not to connect when something connects only unsatisfactorily is a really important skill the good human brain exhibits.

the capacity to forget for a bit is a human virtue of the highest thinking.

to create a future NOT based out of the past is what separates humans from our tools. (even as sometimes we ourselves are turned abusively into aspects of the latter.)

post:

so: #lean and startup-land solve one problem, and one problem only. the problem they solve is not what the customer is hurting most about. it’s not the pain in the customer journey, as tech dogma claims.

no.

it’s the pain on the journey on which both client and supplier begin to ride together. it’s the pain-point that the two of them can agree on as being where they must meet:

  • not what i as client most need sorting necessarily, but what i can afford right now according to your price list.
  • not even what you as supplier are most objectively suited to resolving but, rather, what you as supplier find it easiest to convince me is what i need, once you’ve identified where it is i am most prepared to first stump up some dough. (and with this, any dough ffs. just so long as some.)
  • and finally, most definitely, in no way do such processes and spaces clearly guarantee that the world’s most complex (note: not complicated) and, therefore, pressing problems will find their solutions even attended to, never mind provided, by such startup ecosystems, business mindsets, and wider ways of working deeply.

speculative tech like #ai — it’s supposedly been on the verge of beating humans since at least the 1980s (and probably for decades before) — always needs to convince us it’s just about there. the funding, for one thing, demands it.

it still isn’t. here are my reasons why:

1. until an #ai (let’s use this loose term for now) is able to get so frustrated with an insoluble problem that it ends up getting blind drunk … only to wake up in a haze to a solution perfectly formed …

2. until an #ai awakes from a beautiful dream with a fabulous new idea subconsciously imagineered to perfection overnight … for it then, in the second of that awakening, to forget totally the detail — or even the concept itself — for six further months … and then for everything to come flooding back in even more astonishing form, to be finally recorded, scoped and amazingly implemented …

3. until an #ai may choose to NOT make connections or identify patterns or see relationships or deliver on finalities … because something doesn’t fit quite as right as it one day might …

4. until an #ai has a leap of faith based on data it INVENTED OUT OF THE FUTURE … because creation is the nature of the human brain … even when utterly utterly mindlessly terrifying:

until all the above and other such things happen … well … our dearly beloved proponents of #ai (thus loosely described) will continue to argue that we can do everything on the basis of what’s already been patterned, and that the apex of human achievement involves ONLY copying what others have already done — just better: that is, faster, more accurately, for longer without tiring — essentially, an alpha male’s orgasm … no?

🙂

and thus the competent delivery of #ai and similar technologies only ever needs to aspire to this.

the new #ai recently unleashed onto the world — #chatgpt — is praised for writing school and university essays indistinguishable from humans’. it’s praised for giving advice, crackpot-like or otherwise. it — and similar tech — receives plaudits for faking flemish masters’ painting styles. i haven’t seen, but surely imagine, that songs in the key of sheeran have already flooded the social networks of this planet, devised at the hands of this sort of code.

we need to stop this.

we need to stop saying what #ai can do.

we need to start asking two questions:

1. if #ai can do what a human being can do, is it actually an achievement of historical record that a human being can do exactly the same thing the #ai in question is able to do? that is: why WANT to copy a flemish master indistinguishably? why not always prefer to discover the master — or mistress — deep and unrevealed inside your very own being?

because whilst copying is cool, it’s only cool if it leads to synthesis and final originality. after all, in the act of production we may uncover the output of creation. and then understand, perhaps, that there might live, well inside our persons — each of us — geniuses of magnificent shine:

(equally, of course, we might ask — as rightly people have — if a machine so easily reproduces a human activity such as writing an essay for university or school, whether our civilisation is focussed on the right testing regimes in the first place. but that’s for a quite different post.)

2. it’s bad enough to find ourselves needing to ask whether what our tech can duplicate in us involves things we would — in the absence of such tech — even consider societally desirable to carry on doing. it’s much worse to say that a whole tech ecosystem — that which is structured around #lean, customer journeys, pain-points and the overbearing need to invoice a digital service or product asap (in short, startup-land as we have known and loved it for so long) — doesn’t solve the vast majority of remaining deep and complex world problems because, actually, its practitioners and owners have designed it specifically so it wouldn’t.

there’s much more money in planning a drip-feed of ten years of product releases than in creating a system which might identify the real dangers around immensely, horrifyingly human #nonconformism: that is, its thinking processes, and the disadvantage we now face as we have become progressively incapacitated for this sort of activity within the corporate-sanctioned dynamics of diluting teamwork.

because whilst we in corporate-land choose to inhibit creative #nonconformists via our tech architectures and business structures, the putins of the world have clearly been doing — longitudinally — quite the opposite.

haven’t they?

so let’s not only stop to duly think whether a tech that can repeat what the majority of us humans do every day is worth the thunder and lightning its proponents will continue to generate. yes: it’s useful in many respects, of course. but we must stop asking what our tech can do and start, instead, to focus our funding on OUTCOMES that need DELIVERY, not BOTTOM LINES that must be DELIVERED.

otherwise, we get what we get: and so we have.

an example? just one? here’s one … a western defensive and attacking capability to identify and forestall future ukraines — and maybe now, as we are finally allowed to voice these things, neo-terrorist pandemics too — before they even become a glint in evil organisations’ eyes.

summary:

let’s just stop, shall we?

let’s stop being bad techies.

let’s stop fetishising the machines whose investments have cost us so much, and whose future ROIs we have planned out to the millimetre.

let’s be human, instead.

let’s say:

  • we can make money out of making humans more human, not less significant
  • we can make money out of prioritising solutions that cure rather than perpetuate
  • we can choose to focus on complex and spread the word that this won’t mean complicated
  • and we can finally argue that the various startup ecosystems we have become sooo used to — as they provided the only furniture with which we have lived our comfortable lives — do have their seriously disadvantageous downsides: whilst simultaneously (of course) solving some of the accumulatively littler things fabulously, they have equally served to diminish the big things gravely — especially as it becomes increasingly apparent that this has been a deliberate dynamic all along …

because inventing a system which does lots of relatively small things massively — that is, automatedly — doesn’t mean you are anywhere near capable of, or prepared for, delivering on the resolution of massive things pointedly.

and, after all, this is a difference we are currently choosing not to even discuss.

no?

on what REALLY floats my boat, though … repurposing tech to make humans … well … more so

introduction:

most #it #tech is primarily about identifying how to remove human beings from processes so the organisations which achieve these goals can more easily generate revenues from what remains: their machines. but it’s not the only way.

post:

what i love about #movie and #film #tech is its capacity to make humans bigger humans: essentially, much more human. the microphone, a bigger voice. the camera, a finer eye. even the stage, a more concise bearing of witness.

meantime, #it #tech has striven always to remove humans from the processes in question so that dominant interests may monetise more easily what remains.

take a look at this scene, whose technological and emotional achievements mainly involved the simple repurposing — devised and driven, i believe, by the film director alfred hitchcock — of existing filmic strategies, via the combining of two separate procedures into just one flow:

a simultaneous zoom and dolly movement, which served to challenge the physical world through physical means. no animation; no cgi. something real bearing witness to something utterly real.
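for the technically curious, the geometry of that combined move can be sketched in a few lines. a minimal sketch, assuming a simple pinhole-camera model: to keep a subject the same apparent size while the lens widens, the camera must dolly to a distance proportional to 1/tan(fov/2). the function name and all the figures here are mine, for illustration only:

```python
import math

def dolly_distance(subject_width, fov_deg):
    """Distance (same units as subject_width) at which a subject of the
    given width exactly fills a pinhole camera's horizontal field of view."""
    return subject_width / (2 * math.tan(math.radians(fov_deg) / 2))

# widen the field of view while dollying in: the subject's on-screen size
# stays constant while the background perspective visibly stretches
subject = 2.0  # a hypothetical 2 m wide subject
for fov in (20, 30, 45, 60):
    d = dolly_distance(subject, fov)
    print(f"fov {fov:>2}° -> camera at {d:.2f} m")
```

as the field of view triples from 20° to 60°, the camera glides from roughly 5.7 m to 1.7 m from the subject: one physical move, two procedures, and the unsettling warp the scene is famous for.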

repurposing #tech is really what floats my boat: low risk; high value-add; countering current, usually lazy, hegemonic thinking in a sector … that is, choosing ultimately to deliver on the ancient greek version of what #innovation really should invoke: “to cut deeply into …”

that is, i am intending to mean, REAL cutting-edge thinking. as, for example, i’ve been proposing for a while re #ai here:

it’s beautiful, is this idea of repurposing. it’s beautiful because it foregrounds little details. and through their accumulation, kept over time in single planes of thought, such processes of repurposing may achieve the quantum leaps of faith i am asking us all to return to believing we human beings are fundamentally capable of; are fundamentally suited to:

our due value-add as human beings to this planet we should learn better to share with other species and with future times.

a philosophy of the best.

is all.