why don’t people who love advocating machine progress find it easy to advocate analogous processes of human progress?

it’s a long title, but it’s a big subject.

over the years, since i first started proposing that we see intuitive thinking as a logical dataset, one we should spend a lot more money capturing, verifying and validating in systemic, human-empowering, inside-out ways, i've spoken to a lot of technologists.

without exception (except, tbh, just this last wednesday, when i was at an aws-organised event in stockholm) ALL software engineers and imagineers of this kind have shown fabulous and rightful enthusiasm for the demonstrable machine progress we've historically witnessed since humanity first extended itself via tools. and yet they have been absolutely resistant, sometimes to the point of rudeness, to the idea that we might move human goalposts in equal measure: that is, decide to promote the 90 percent of the human brain most of us are still apparently unable to take advantage of.

one super-interesting aws person i spoke to on wednesday, for most of the evening in fact, on and off, told me at one point that the human brain uses only around 40 watts to do all the amazing things that practically every example of it, since human history began, has clearly been able to deliver. compare and contrast this with the megawatts needed to run a data centre which, even now, can only approach human creative capabilities.
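to put that contrast in rough numbers: a minimal back-of-the-envelope sketch, where the 40-watt figure is the one quoted to me in that conversation and the 10 MW data-centre figure is purely an illustrative assumption ("megawatts" covers a wide range):

```python
# back-of-the-envelope: brain power draw vs a hypothetical data centre.
# both figures are illustrative assumptions, not measurements.
BRAIN_WATTS = 40.0          # the figure quoted in conversation
DATA_CENTRE_WATTS = 10e6    # an assumed 10 MW facility

ratio = DATA_CENTRE_WATTS / BRAIN_WATTS
print(f"one such data centre draws as much power as {ratio:,.0f} human brains")
# → one such data centre draws as much power as 250,000 human brains
```

whatever the exact facility size, the gap is four to five orders of magnitude. that, i think, was the aws person's point.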

but even on wednesday at the aws event, tbh, techie people were unable to show as deep an enthusiasm for progressing humans in the way i would advocate: not within a single lifetime (the only goalposts we've been encouraged to assume we can move) but intergenerationally, which is what i am increasingly proposing.

that is, actually create a tech philosophy which mimics what film and movie tech have done for over a hundred years: make humans more important to all industrial processes through dynamics of industrialisation, instead of making us less substantial and significant through procedures of automation, obviously designed to reduce our future-present relevance.

because when you hear proponents of generative ai, of any ai, the excitement is palpable: “look what they can now do: write school and university essays that look academically rigorous.”

or, the latest one: write code from just a verbal instruction.

what they don’t ask is whether it was a task which human beings should have been asked to do in the first place.

or, more pointedly, whether the human beings who did do it competently should have been remunerated at the extreme levels they historically were, for carrying out tasks which (privately speaking, admit it!) became so easy for them to charge exorbitantly for.

in my own auto-ethnographic case, i always got lower marks in my education than my brains indicated i deserved. my latest master's was in international criminal justice, during the 2016-2017 academic year in the uk. i always assumed i was lazy. you see, i used a process which wasn't academically orthodox: i'd create, through my brain's tangential procedures, a brand new idea (new for me, anyways), and only then proceed to read the relevant literature … if, that is, it existed. back to front, altogether. and marked down for it, all the time.

and in the light of chatgpt's achievements, i also begin to wonder: because this kind of tech, to me, is nothing more than an incredibly deepened search engine. but weren't the humans who did such jobs also "only" this? really, only this.

and so people who scored well in analogous manual activities were therefore good not at creating new worlds with their academia and coding and software development but, rather, capable of little more than grandiosely integrating essentially tech-informed, step-by-step approaches into the otherwise natural, ingenious and much more multi-layered human mind.

and so these kinds of students used procedures which were far more appropriate to it-tech environments, and thus scored highly in such education systems.

when truly we should long ago have considered such procedures absolute anathema to all that COULD make human thought magnificent.

i mean … think of the aforementioned 90 percent of the brain whose employment we may still not manage to optimise. and then consider a software or tech platform whose creators tolerated not using 90 percent of its monetising abilities.

really, it's this: my experience with technologists of all kinds who work in it-tech, where automation is absolute king. (and i say "king" advisedly.) they love telling the world how their latest robot, or whatever, will soon be indistinguishable from real human beings in how it looks, speaks, moves and interacts more widely. but why?

the technical achievement is clear. the monetisation opportunities of convincing solitary individuals they need robotic company in place of other human beings are also manifest. but the "why" is harder. why be such advocates of machine progress and yet, simultaneously, UTTERLY INCAPABLE of showing the same levels of enthusiasm for creating environments and thinking-spaces (as i have been suggesting for five or more years) that make intergenerational human advancement possible with the support, and NOT the domination, of tech? that is, as per what movies and films have delivered for humans for the past hundred years or so, and NOT as per the relationship between human beings and vast swathes of it-land since, say, the 1950s. this is surely difficult for anyone to understand and explain. unless, of course, we really are talking womb envy: "i can't bring a human being into the world as completely as a woman can, so instead i'll make machines that do what humans do, only allegedly better."

🙂

wdyt?

any truth in any of the above?

why do the boys in tech-land get so enthusiastic about the latest technologies that overtake apparently deep human capabilities, and yet reject so fervently and consistently the possibility that humans might also use equal but repurposed tech to make humans more … but as humans?

choose people to work with, not their institutions

https://www.linkedin.com/posts/jewel-pena-53257213_activity-7041962988248952832-m7Pq

i realised this not long ago. i don’t want to work with this company or that. i want to work with people who also, simultaneously, may work for one company or organisation or another.

when the institution overrides the individual from the start, then even if all the starting-signs are cool, the individuals will one day (especially when huge amounts of money all of a sudden become likely) be inevitably and overwhelmingly overridden by their institutional framework.

i don't want us to start in a way we don't mean to go on.

so first i want to meet people. i want an organisational structure which generates a hybrid of #holacracy. and i want brains to show themselves the most important factor and matter of all, where the overarching #gutenbergofintuitivethinking and #intuitionvalidationengine projects are concerned.

www.ivepics.com

because if you choose people first, and the people are right for you, then the institutions will automagically remain right too.

at least … whilst your people of choice remain at the institution in question. and they will do in general, for sure, if the institution remains worth staying at. for in the question of #intuitionvalidation there is no building-block more significant than the human as individual.

thephilosopher.space/space-1-0-the-philosopher-space

platformgenesis.com

www.secrecy.plus

omiwan.com

mils.page/presentations | #milspage #presentations

mils.page/intuition-day | #milspage #intuitionday

mils.page/distributed-privilege | #distributedprivilege

machines + humans or humans + machines … or …?

i once wrote the below:

crimehunch.com/terror

i think i upset a lot of people. i remember a more than hour-long conversation with faceless executives from a big us tech corporation i really value and would love one day to work with.

i say "faceless" neutrally, mind: they had no faces, just circles with initials; and were never introduced to me. six or seven of them, plus the person who organised the video-chat. during lockdown, it was.

i asked them the above question: there was silence for around ten seconds. in the event, no one replied at all. the fear was palpable. the fear that someone would say something which someone else would report back, and forever mark a person’s career, without recourse to explanation.

or so i thought. on reflection, maybe i had gone too far. maybe it was wrong of me to suggest their machines weren't up to the job of beating creatively criminal terrorists. maybe it was wrong of me to suggest we could crimefight more creatively: make human beings as capable of nonconforming for the good as the putins et al of recent years have manifestly, longitudinally, nonconformed for the extreme ill.

here's the thing: maybe i wasn't wrong, but maybe i wasn't right enough.

obviously, if the exercise were delivered to its full extent, whatever your answer, the assembled would inevitably agree that both machines and hollywood scriptwriters (or their analogues: their skillsets, at least) together would be the best solution. but even here problems would exist, and i would go so far as to suggest, actually, real roadblocks.

people who operate by rules and regulations (the conformists we all need to make the world function with justice and fairness) don't find it easy to value the contribution of nonconformists who, more often than not, make their own hugely competent rules. and then, of course, vice versa: conformists don't always float the boats of nonconformists as much as they should.

so alluding to the fact that we need to be as good as the supremely creative criminality out there, in our own forging of a singular combination of intuitive arationality with the best machines we can manufacture, is NOT by itself the solution.

no.

the solution lies in ensuring the cultures of nonconformism and conformism may come together to facilitate this outcome of creative crimefighting and national security … this … just this … has to be the solution.

if we minimally know our philosophy, a thesis (being that crimefighting and national security need ever more traditional ai to deliver a fearsome capacity to pattern-recognise nonconformist evil out of existence, alongside people who press the operational buttons on the back of such insights) will get, from me, its antithesis: that is … we need, just as much if not more, what we might term the "human good" to battle the "human bad".

and maybe the machines, too. alongside and in fabulous cahoots.

yes. and maybe, of course, the machines.

but what if we change the process? what if a synthesis? as all good philosophy?

1. to find the nonconformist what and how — the next 9/11 before it arises — we use hollywood and analogous creativity to imagineer such events.

2. and to find the who and when of such newly uncovered neocrimes, we apply the obviously, terrifyingly useful pattern-recognition capabilities of ever more traditional ai: so that its adepts, its supporters, its proponents, and those conformists who are more generally comfortable with such approaches, may simply be comfortable with this new paradigm i propose too.

in this scenario, the suits and the flowery shirts work in consonance but never simultaneously. and so we square the circle of respect between the two parties: something which, long-term, would otherwise always be difficult to engineer sustainably.

wdyt?

#distributedprivilege: a #nonbinary way of making #wealth

i am wondering this morning if the reason some companies are choosing no longer to embrace #hybridwork is that it may make it easier for their staff to make #gigeconomy and #sideproject activities function efficiently and compatibly, but no longer with clear and deep #manager oversight. the option to buy into new #ip generated by such processes then, of course, wholly escapes them.

previous to this, the #gigeconomy was a control mechanism of quite some power as far as salaries and conditions were concerned. but now #hybrid and #homeworking may actually turn this balance upside down. and some companies may not have the relevant cultures to be able to embrace the advantages that might make it all work for all.

either way, i think the idea of a #nonbinary #distributedprivilege, furthering the interests of a more efficient socioeconomic network of common future-present activity, would serve also to further the interests of #diversity and #creativity in our western liberal democracies.

wdyt?

is there something worth pursuing here?

“a society of distributed privilege”

attached, a (big!) little something i worked on yesterday afternoon.

a slide-deck of 33 slides down to about six.

content itself still to do, mind … but you can already see where my #roadmap is heading.

and i’ll be taking it with me to #sweden next week: a country which has striven so long to truly, actually, sincerely deliver a society of #distributedprivilege.

this, i now realise, is what i've been aiming at all this time, with ALL my projects around #intuitionvalidation.

and it makes me happy to begin slowly to know how to express it.

mils.page/intuition-day

app.theintuition.space

stop saying what #ai can do and start saying what needs to get done

introduction:

to not connect when something only unsatisfactorily connects is a really important skill the good human brain exhibits.

the capacity to forget for a bit is a human virtue of the highest thinking.

to create a future NOT based out of the past is what separates humans from our tools. (even as sometimes we ourselves are turned abusively into aspects of the latter.)

post:

so: #lean and startup-land solve one problem, and one problem only. and the problem they solve is not what the customer is hurting most about. it's not the pain in the customer journey, as tech dogma claims.

no.

it’s the pain on the journey on which both client and supplier begin to ride together. it’s the pain-point that the two of them can agree on as being where they must meet:

  • not what i as client most need sorting necessarily, but what i can afford right now according to your price list.
  • not even what you as supplier are most objectively suited to resolving but, rather, what you as supplier find it easiest to convince me is what i need, once you’ve identified where it is i am most prepared to first stump up some dough. (and with this, any dough ffs. just so long as some.)
  • and finally, most definitely, in no way do such processes and spaces clearly guarantee that the world's most complex (note: not complicated) and, therefore, pressing problems will find their solutions even attended to, never mind provided, by such startup ecosystems, business mindsets, and wider ways of working.

speculative tech like #ai (which has supposedly been on the verge of beating humans since at least the 1980s, and probably for decades before) always needs to convince us it's just about there. the funding, for one thing, demands it:

it still isn’t. here are my reasons why:

1. until an #ai (let's use this loose term for now) is able to get so frustrated with an insoluble problem that it ends up getting blind drunk … only to wake up in a haze to a solution perfectly formed …

2. until an #ai awakes from a beautiful dream with a fabulous new idea subconsciously imagineered to perfection overnight … for it then, in the second of that awakening, to forget totally the detail — or even the concept itself — for six further months … and then for everything to come flooding back in even more astonishing form, to be finally recorded, scoped and amazingly implemented …

3. until an #ai may choose to NOT make connections or identify patterns or see relationships or deliver on finalities … because something doesn’t fit quite as right as it one day might …

4. until an #ai has a leap of faith based on data it INVENTED OUT OF THE FUTURE … because creation is the nature of the human brain … even when utterly utterly mindlessly terrifying:

until all the above and other things happen … well … our dearly beloved proponents of #ai (thus loosely described) will continue to argue that we can do everything on the basis of what's already been patterned. and that the apex of human achievement involves ONLY copying what others have already done, just better: that is, faster, more accurately, for longer without tiring … essentially, an alpha male's orgasm … no?

🙂

and thus the competent delivery of #ai and similar technologies only ever needs to aspire to this.

the new #ai unleashed onto the world recently, #chatgpt, is praised for writing school and university essays indistinguishable from human ones. it's praised for giving advice, crackpot-like or otherwise. it, and similar tech, receives plaudits for faking flemish masters' painting styles. i haven't seen them, but i can surely imagine that songs in the key of sheeran have already flooded the social networks of this planet, devised at the hands of this sort of code.

we need to stop this.

we need to stop saying what #ai can do.

we need to start asking two questions:

1. if #ai can do what a human being can do, is it actually an achievement of historical record that another human being can do exactly the same as the #ai in question? that is: why WANT to indistinguishably copy a flemish master? why not always prefer to discover the master, or mistress, deep and unrevealed inside your very own being?

because whilst copying is cool, it's only cool if it leads to synthesis and final originality. after all, in the act of production we may uncover the output of creation. and then understand, perhaps, that there might inhabit, well inside our persons (each of us), geniuses of magnificent shine:

(equally, of course, we might ask — as rightly people have — if a machine so easily reproduces a human activity such as writing an essay for university or school, whether our civilisation is focussed on the right testing regimes in the first place. but that’s for a quite different post.)

2. it's bad enough to find ourselves needing to ask whether the things our tech can duplicate are things which, even in the absence of such tech, we should consider it societally desirable to carry on doing ourselves. it's much worse to say that a whole tech ecosystem (that which is structured around #lean, customer journeys, pain-points and the overbearing need to invoice a digital service or product asap: in short, startup land as we have known and loved it for so long) doesn't solve the vast majority of remaining deep and complex world problems because, actually, its practitioners and owners have designed it specifically so it wouldn't.

there's much more money in planning a drip-feed of ten years of product releases than in creating a system which might identify the real dangers of immensely, horrifyingly human #nonconformism: that is, its thinking processes, and the disadvantage we now face as we have become progressively incapacitated for this sort of activity by the corporate-sanctioned dynamics of diluting teamwork.

because whilst we in corporate-land choose to inhibit creative #nonconformists via our tech architectures and business structures, the putins of the world have clearly been doing — longitudinally — quite the opposite.

haven’t they?

so let's stop and duly think whether a tech that can repeat what the majority of us humans do every day is worth the thunder and lightning its proponents will continue to generate. yes: it's useful in many respects, of course. but we must stop asking what our tech can do and start, instead, to focus our funding on OUTCOMES that need DELIVERY, not BOTTOM LINES that must be DELIVERED.

otherwise, we get what we get: and so we have.

an example? just one? here's one … a western defensive and attacking capability to identify and forestall future ukraines, and maybe now, as we are finally allowed to voice these things, neo-terrorist pandemics alongside … before they even become a glint in evil organisations' eyes:

summary:

let’s just stop, shall we?

let’s stop being bad techies.

let’s stop fetishising the machines whose investments have cost us so much, and whose future ROIs we have planned out to the millimetre.

let’s be human, instead.

let’s say:

  • we can make money out of making humans more human, not less significant
  • we can make money out of prioritising solutions that cure rather than perpetuate
  • we can choose to focus on complex and spread the word that this won’t mean complicated
  • and we can finally argue that the various startup ecosystems we have become sooo used to (as they provided the only furniture with which we have lived our comfortable lives) do have their seriously disadvantageous downsides: whilst simultaneously (of course) solving some of the accumulatively littler things fabulously, they have equally served to diminish the big things gravely … especially as it becomes increasingly apparent that this has been a deliberate dynamic all along …

because just because you've invented a system which does lots of relatively small things massively (that is, automatedly) doesn't mean you are anywhere near capable of, or prepared for, delivering on the resolution of massive things pointedly.

and, after all, this is a difference we are currently choosing not to even discuss.

no?

on what REALLY floats my boat, though … repurposing tech to make humans … well … more so

introduction:

most #it #tech is primarily about identifying how to remove human beings from processes so the organisations which achieve these goals can more easily generate revenues from what remains: their machines. but it’s not the only way.

post:

what i love about #movie and #film #tech is its capacity to make humans bigger humans: essentially, much more human. the microphone, a bigger voice. the camera, a finer eye. even the stage, a more concise bearing of witness.

meantime, #it #tech has striven always to remove humans from the processes in question so that dominant interests may monetise more easily what remains.

take a look at this scene, whose technological and emotional achievements mainly involved the simple repurposing (devised and driven, i believe, by the film director alfred hitchcock) of existing filmic strategies, via the combining of two separate procedures into just one flow:

a simultaneous zoom and dolly movement, which served to challenge the physical world through physical means. no animation; no cgi. something real, bearing witness to something utterly real.
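the mechanics of that repurposing can even be sketched numerically: under a simple pinhole-camera model, the subject's size on the sensor is proportional to focal length divided by subject distance, so a dolly move can be cancelled by a compensating zoom (which is why the subject holds still while the background warps). a minimal sketch, where the starting values are illustrative, not from any real shoot:

```python
# dolly-zoom ("vertigo effect") sketch, pinhole-camera model:
# the subject's image size is proportional to f / d, so keeping
# f / d constant keeps the subject's framing constant while the
# camera physically moves. illustrative numbers only.

def compensating_focal_length(f0_mm: float, d0_m: float, d_m: float) -> float:
    """focal length needed at distance d_m to match the framing
    produced by f0_mm at distance d0_m (i.e. hold f/d constant)."""
    return f0_mm * d_m / d0_m

# start at 50 mm, 4 m from the subject; dolly back to 8 m:
print(compensating_focal_length(50.0, 4.0, 8.0))  # → 100.0 (mm)
```

so as the camera pulls away, the lens zooms in proportionally, and the two "separate procedures" fuse into the single flow the scene depends on.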

repurposing #tech is really what floats my boat: low risk; high value-add; countering the current, usually lazy, hegemonic thinking in a sector … that is, choosing ultimately to deliver on the ancient greek version of what #innovation really should invoke: "to cut deeply into …"

that is, i am intending to mean, REAL cutting-edge thinking. as, for example, i’ve been proposing for a while re #ai here:

it’s beautiful, is this idea of repurposing. it’s beautiful because it foregrounds little details. and through their accumulation, kept over time in single planes of thought, such processes of repurposing may achieve the quantum leaps of faith i am asking us all to return to believing we human beings are fundamentally capable of; are fundamentally suited to:

our due value-add as human beings to this planet we should learn better to share with other species and with future times.

a philosophy of the best.

is all.