when big politics and business make the customer a kleenex

three good things happened today: all related to how i perceive the world.

1. first, i do have a death wish: which is why, when i first read him, hemingway sooo immediately clicked with me.

2. however, i don’t want to be unreasonable or hurtful to others in pursuing this outcome. i also most definitely don’t want support to ameliorate it. amelioration is the biggest wool-over-the-eyes of our western democratic time. i don’t want to be part of a process that perpetuates its cruelties.

3. my strategy — that is, my only strategy — will from now on be as follows: i shall say and write about everything that i judge needs to be called out, in such a way that the powerful i will be bringing to book day after day after day will, one day, have no alternative but to literally shoot me down.

in order, then, to make effective the above, i resolve:

a) to solve the problem of my personal debt, acquired mainly due to my startup activities, so that the only way in the future the powerful shall be able to shoot me down is by literally killing me.

for my mistake all along was to sign up to the startup ecosystem, as it stands, as a tool for achieving my personal and professional financial independence:

startuphunch.com (being my final attempt at making startup human)

this personal debt is causing me much mental distress and, equally, is clearly a weakness i show to an outside world i now aim to comprehensively and fully deconstruct. as a massive first step, then, i do need to deal with it properly.

b) once a) is resolved, i shall proceed to attack ALL power wherever it most STEALTHILY resides.

that is, i focus on this kind of power: the stealthiest and most cunning versions of it.

the ones where it appears we are having favours done for us, for example.

specifically, that is, big tech. but many many others, too.

what essentially constitutes the driving forces behind the societal harm zemiology studies: loopholes, neo-crimes, and similar legally accepted but criminally immoral damage; all of which, as a general rule, is most difficult right now to track, trace, investigate and prosecute.

crimehunch.com/neocrime

crimehunch.com/loopholes

www.secrecy.plus/law | legalallways.com

www.sverige2.earth/example

this is why i have concluded that my natural place of work is investigative journalism. and where i want to specialise — in this aforementioned sector and field of endeavour — is in the matter of how big tech has destroyed our humanity. but not as any collateral, accidental, or side effect of a principal way of being it may legitimately manifest.

no.

purposefully; deliberately; in a deeply designed way, too … to mainly screw those clients and customers whose societies and tax bases it so voraciously and entirely dismantles.

to screw, and — equally! — control. and then dispose of lightly and casually, when no longer needed, or beneficial to bottom lines various.

and so as a result of all this, i see that having a death wish is beneficial: if channelled properly, as from today i now intend it shall be, then it will make me fearless as i never dared to be. fearless in thought and disposition. fearless even when made fun of.

not in order to take unreasonable risks with my life — or anyone else’s: no.

rather, to know that life doesn’t exist when the things i see clearly are allowed to, equally clearly, continue.

and to want deeply, deeper than ever in my life, to enable a different kind of life for everyone.

NOT just for the self-selected few. those who lead politics, business and the acts of pillage and rape in modern society.

not just for them.

a better life for everyone, i say. everyone.

because i don’t care about mine. i care that mine should make yours fine.

now do you see? this is what makes me feel useful. nothing else. nothing else at all. and certainly not finding personal happiness. that would only blunt the tool.

🙂

on a “human-sensitive ai”

ai’s proponents and advocates — of the human-insensitive version of this set of technologies, i mean — have kind of decided that a battlefield between #machines and #humans is necessary.

as a #teacher, #trainer and #facilitator over decades, this has never been my way. for me, knowledge isn’t about how big yours might be but, rather, how well — how pointedly — you learn to use what you acquire over the years.

speaking well in a language doesn’t require more than 800 words. it’s true. ask #chatgpt-x. what makes the difference is the baggage we bring to each word; the connections; the semantics; the allusions and how we choose not to say exactly what’s expected.

back in 2019 i lost my middle son’s affections. i had to borrow money from him to keep my #startup going: it was to get the below project off the ground. i’ll never get him back — for this and one other, unrelated reason.

in the event, the organisation i submitted it to said it was unique (in a good way) and, simultaneously, that it didn’t advance science (in an opposing and bad sense, obviously). they informed me of this unofficially one morning early on — that is, that all my hopes and dreams were dashed — as i stood on a train platform just as a train came in.

the cctv would have seen me: the organisers themselves could also have seen — if they had wanted or cared to — the cctv of where i was and how i looked. it was obviously a terrible coincidence i resisted the temptation to take advantage of.

none of my three children now speak to me because of #startup-land. but the #philosophy — not the #tech — of the project attached deserves to speak to us, five years later.

let’s allow it to encourage us to be better #techies everywhere. change is inevitable, of course; but in #tech its nature never is. in such moments, in #tech we’re always choosing.

let’s choose wiser. please.

https://mils.page/ai

yeah?

on returning to our childhood states of creative enquiry … and to the max, maybe?

introduction:

maybe #ai can do a few things humans are paid to do. but that doesn’t mean what we’re paid to do by businesses everywhere is where our real creativity as unpredictable humans is being exhibited — or even widely fomented.

the proposition:

maybe #it-#tech’s architectures have for so long forced us — as the humans we are — into undervaluing, underplaying and underusing our properly creative sides that what #ai’s proponents determine are human creative capabilities are actually the dumbed-down instincts and impulses of what would otherwise be sincerely creative manifestations of human thinking. that is, were we given the architectures i suggest we make more widely available — for example, just to start with, a decent return to a secrecy-positive digital form of pencil & paper, so we DON’T consistently inhibit real creativity — we would encourage a return to our much more creatively childlike states of undeniably out-of-the-box enquiry …

augmentedintuition.com | a historical whitepaper advocating an augmented human intuition

in this sense, then, the real lessons of recent #gpt-x are quite different: not how great #ai is now delivering, but how fundamentally toxic to human creativity the privacy- and secrecy-destroying direction of ALL #it-#tech over the years has become. because this very same #tech did start out in its early days as hugely secrecy- and privacy-sensitive. one computer station; one hard-drive; no physical connections between yours and mine: digital pencil & paper indeed!

it’s only since we started laying down cables and access points for some, WITHOUT amending the radically inhibiting architecture of all-seeing admins overlording minimally-privileged users, that this state of affairs has come about: an #it-mediated and supremely marshalled & controlled human creativity.

no wonder #ai appears so often to be creative. our own human creativity has been fatally winged by #tech, to the extent that the god now erected as #ai has begun to make us entirely in HIS image, NOT to extend and enhance our own intrinsic and otherwise innate preferences.

summary:

it’s not, therefore, that #it-#tech has been making #ai more human: it’s that the people who run #bigtech have been choosing to shape humans out of their most essential humanity.

and so as humans who are increasingly less so, we become prostrate-ducks for their business pleasures and goals.

an alternative? #secrecypositive, yet #totalsurveillance-compliant software and hardware architectures: back, then, to recreating the creativity-expanding, enhancing and upskilling tools that a digital pencil & paper used to deliver:

secrecy.plus/spt-it | a return to a secrecy- and privacy-positive “digital pencil & paper”

a final thought:

in a sense, even from #yahoo and #google #search onwards, both the #internet and the #web were soon designed (it’s always a choice, this thing we call change: always inevitable, true, it’s a fact … but the “how” — its nature — is never inevitable) … so from #search onwards, it all — in hindsight — became an inspectorial, scraping set of tools to inhibit all human creative conditions absolutely.

the rationale? well, that #bigmoney needed consumers who thought they were creators, not creators who would build distributed and uncontrollable networks of creation under the radar.

and then with the advent of newer #ai tools, which serve primarily to deliver on the all-too-human capability to bullshit convincingly, #it and related are finally, openly, brazenly, shamelessly being turned on all human beings who don’t own the means of production.

we were given the keys to the kingdom, only to discover it was a #panopticon we would never escape from. because instead of becoming the guards, that is to say the watchers, we discovered — too late — that we had been forcibly assigned the roles of the watched:

thephilosopher.space | #NOTthepanopticon

and so not owning the means of production, with its currently hugely toxic concentrations of wealth and riches, means that 99.9 percent of us are increasingly zoned out of the minimum conditions a real human creativity needs to even begin to want to function in a duly creative manner at all.

that is to say, imho, practically everything we see in corporate workplaces which claims the tag of creativity is simple repurposing of the existing. no wonder the advocates of #ai are able to gleefully proclaim their offspring’s capabilities to act as substitutes for such “achievements”.

wouldn’t you with all that money at stake?

secrecy.plus/hmagi | #hmagi

what happens when society is secular but redemption remains a desire and real human need?

introduction:

i’ve been thinking a lot about redemption, ever since a messenger and intermediary said to me in 2016 that my problem was guilt.

she was, on due reflection, wrong. guilt is good, if what provokes it can be assuaged in a competent and compassionate manner.

what i still suffer from is an absence of process, in the secular society i cherish, for redemption.

my supposition:

let’s presuppose the following: let’s say that religion served a real, positive purpose which, in religion’s relative absence from many of our societies now, has not been taken up by other processes as compassionately. i say compassionately with circumspection, of course. religion itself has effected many horrible historical — and even current — events in humanity’s journey.

an example, then, of the redemption i mention?

well. here we are!

discretion is a very humane aspect of criminal justice systems, when used in the spirit of the law and its kindly interpretation.

i studied international criminal justice in 2017 at master’s level and on one occasion stumbled across the following anecdote in the academic literature i was reading: italy, well known for the misuse of family power and structure, may also invoke the good of the family, leading to a better criminal justice praxis there.

most crime in all criminal justice systems is committed by young men between the ages of 18 and 26. after that age, almost automagically, its incidence tails off. some suggest there may even exist physiological reasons for this: that young male brains get hard-wired to begin settling at around the upper age band quoted.

either way, we have a criminal justice reality: young men who commit crime are also victims of crime, in the sense that they are the most vulnerable group to enter criminality, and get very little proactive support to stay out of the criminal justice system. more often, in fact, they get targeted — maybe targeted into it — by the prejudice and presumption of very many damning and defining societal forces.

in italy, then, this was the example: a law-enforcement officer heard of a young man of around 17 who had only just committed his first crime; certainly a first infraction. the officer knew of the family, and instead of “inducting” the youth directly onto a path which later would head irreversibly towards criminality, he went behind the back of the youngster and straight to his parents.

he explained the situation gently and non-threateningly, suggesting that the family could help. here, we could argue, was good discretion operating to the max: even, that it shouldn’t have been necessary to use discretion at all to keep the young man from being typed so young as criminality’s cannon fodder. maybe it’s even conceivable that an officer’s own kpi-structure and law-enforcement praxis could consist primarily of keeping people out of the system — enabling and allowing them to redeem any initial acts so that criminality became something they themselves wanted to veer away from — instead of counting up the number of criminals captured and banged away.

proposal:

on the basis of the above, and in relation to things i’ve already published on a new concept of criminal justice which i’ve termed natural justice, i’d like to propose that we revisit the renewed need for a societal infrastructure of redemption.

in the absence of father confessors (that is, given that many people now find them unsatisfactory to their needs; where they do work, no change needed, of course!), we should create serious halfway houses between, at one extreme, the criminality and zero good of #darkfigure and #neocrime as i understand them (the 20 to 40 percent of crime and related loopholes that is invisible to criminal justice) and, at the other, religiously delivered confession and relief, which for secular societies no longer functions easily.

yes: a natural justice, after all.

final observations:

thinking more philosophically, it’s possible that the behaviours acted out as described in this post, which may then duly and rightfully lead to criminal prosecution, are encouraged because we need to be redeemed — to feel it, i mean. and unless in secular society you enter the criminal justice system, a societal-level redemption is not within reach. if we provided other ways which had nothing to do with criminal justice stigma, perhaps — too! — fewer would wish to be criminals.

i’ve often felt, as a by-the-by and in an analogous way, that open-source and social-networked online communities have become so popular and active because in such spaces — the really competent and well-run ones, i mean — we find the reality (or even just the simulacrum, but at least this) of a democratic discourse that real democracy increasingly lacks.

what’s clear is that there exist basic human instincts and impulses, and they must always act in pairs.

doing democracy is one; where, nowadays, is the reward for its practice that doesn’t invoke the relationship of an abused partner?

and so doing ill is another; where, nowadays, is the redemption which doesn’t involve punishment and disgrace?

curie + foucault … and then a crime-free world?

foucault said everything is dangerous: all the more reason, then, to study everything more deeply.

curie said we shouldn’t fear understanding: almost that it was our duty.

i want, now, to set up a national security facility which uses curie’s approach for its outer core, where our good people learn in supported ways to fight bad people.

and i want then, once we have fashioned the necessary tools, to develop an inner core which gets as pointed as foucault’s persistence re the dangerous.

at the #nobelprize museum today i saw two words on the floor near the entrance, amongst many others. the two i recognised and stood near were in english. i hope one day others i am able to recognise will be in swedish.

my words of preference were “persistence” and “disrespect”. of the two, the one i stood next to first was “disrespect”. not gratuitous: measured. that’s me. and that will always be me.

and that’s what i want to make of the aforementioned national security facility: something deeply infused with a profound lack of respect for the shibboleths of crime and … for what we can or can’t do to stop and dismantle them.

let’s do it.

it’s time we did. time to have confidence in our abilities. our competences. and our integrity.

why don’t people who love advocating machine progress find it easy to advocate analogous processes of human progress?

it’s a long title, but it’s a big subject.

over the years, since i started proposing that we see intuitive thinking as a logical dataset we should spend a lot more money on capturing, verifying and validating in systemic, human-empowering, inside-out ways, i’ve spoken to a lot of technologists.

almost without exception — the exception being, tbh, just this last wednesday when i was at an aws-organised event in stockholm — ALL software engineers and imagineers of this kind have shown fabulous and rightful enthusiasm for the demonstrable machine progress we’ve historically witnessed since the start of humanity’s extension via tools — and yet have been absolutely resistant, sometimes to the point of rudeness, to the idea that we might move human goalposts in equal measure: that is, decide to promote the 90 percent of the human brain most of us are still apparently unable to take advantage of.

one super-interesting aws person i spoke to on wednesday, for most of the evening in fact, on and off, told me at one point that the human brain uses only around 40 watts to do all the amazing things practically every human brain to have populated this rock since history began has clearly been able to deliver on. compare and contrast this with the megawatts needed to run a data centre which, even now, is only able to approach human creative capabilities.

but even on wednesday at the aws event, tbh, techie people were unable to show as deep an enthusiasm for progressing humans in the way i would advocate: not within a single lifetime (the only goalposts we have been encouraged to assume we can move), but intergenerationally, which is what i am increasingly proposing.

that is, actually create a tech philosophy which mimics what film and movie tech have done for over a hundred years: make humans more important to all industrial processes through dynamics of industrialisation, instead of ensuring we are less substantial and significant through procedures of automation, obviously designed to reduce our future-present relevance.

because when you hear proponents of generative ai, of any ai, the excitement is palpable: “look what they can now do: write school and university essays that look academically rigorous.”

or write code with just a verbal instruction, is the latest one.

what they don’t ask is whether it was a task which human beings should have been asked to do in the first place.

or, more pointedly, whether it was a task for which the human beings who did do it competently should have been remunerated at the extreme levels they historically have been, for carrying it out in ways that — privately speaking, admit it! — became so easy for them to charge exorbitantly for.

in my own auto-ethnographic case, i always got lower marks in my education than my brain indicated i deserved. my latest master’s was in international criminal justice: during the 2016-2017 academic year in the uk. i always assumed i was lazy. you see, i used a process which wasn’t academically orthodox: i’d create, through my brain’s tangential procedures, a brand new idea (new for me, anyways), and only then proceed to read the relevant literature … if, that is, it existed. back to front. altogether. and marked down, completely, all the time.

and in the light of chatgpt’s achievements, i also begin to wonder: because this kind of tech, to me, is nothing more than incredibly deepened search engines. but weren’t the humans who did such jobs also “only” this? really, only this.

and so people who scored well in analogous manual activities were therefore good not at creating new worlds with their academia and coding and software development but, rather, capable of little more than grandiosely integrating essentially tech-informed step-by-step approaches into the otherwise naturally ingenious and much more multi-layered human mind.

and so these kinds of students used procedures which were far more appropriate to it-tech environments, and thus scored highly in such education systems.

when truly we should have long ago considered such procedures an absolute anathema to all that COULD make human thought magnificent.

i mean … think of the aforementioned 90 percent of the brain whose employment we may still not manage to optimise. and then consider a software or tech platform whose creators tolerate not using 90 percent of its monetising abilities.

really, it’s this: my experience with technologists of all kinds who work in it-tech, where automation is absolute king. (and i say “king” advisedly.) they love telling the world how their latest robot — or whatever — will soon be indistinguishable from real human beings in how it looks, speaks, moves and interacts more widely. but why?

the technical achievement is clear. the monetisation opportunities of convincing solitary individuals they need robotic company in place of other human beings are also manifest. but why we should be such advocates of machine progress and yet, simultaneously, be UTTERLY INCAPABLE of showing the same levels of enthusiasm for considering we might create environments and thinking-spaces — as i have been suggesting for five or more years — that make intergenerational human advancement possible with the support and NOT domination of tech (that is, as per what movies and films have delivered for humans for that hundred years or so, and NOT as per the relationship between human beings and vast swathes of it-land since, say, the 1950s) … well, this is surely difficult for anyone to understand and explain. unless, of course, we really are talking womb envy: “i can’t bring a human being into the world as completely as a woman can, so instead i’ll make machines that do what humans do, only allegedly better.”

🙂

wdyt?

any truth in any of the above?

why do the boys in tech-land get so enthusiastic about the latest technologies that overtake apparently deep human capabilities — and yet reject so fervently and consistently the possibility that humans might also be able to use equal but repurposed tech to make humans more? but as humans?

choose people to work with, not their institutions

https://www.linkedin.com/posts/jewel-pena-53257213_activity-7041962988248952832-m7Pq

i realised this not long ago. i don’t want to work with this company or that. i want to work with people who also, simultaneously, may work for one company or organisation or another.

when the institution overrides the individual from the start, then, even if all the starting-signs are cool, the individuals will one day — especially when huge amounts of money all of a sudden become likely — be inevitably and overwhelmingly overridden by their institutional framework.

i don’t intend for us to start in a way i don’t mean us to go on.

so first i want to meet people. i want an organisational structure which generates a hybrid of #holacracy. and i want brains to show themselves to be the most important factor and matter of all, where the overarching #gutenbergofintuitivethinking and #intuitionvalidationengine projects are concerned.

www.ivepics.com

because if you choose people first, and the people are right for you, then the institutions automagically will remain so too.

at least … whilst your people of choice remain at the institution in question. and they will do in general, for sure — if the institution remains worth staying at. for in the question of #intuitionvalidation there is no building-block more significant than the human as individual.

thephilosopher.space/space-1-0-the-philosopher-space

platformgenesis.com

www.secrecy.plus

omiwan.com

mils.page/presentations | #milspage #presentations

mils.page/intuition-day | #milspage #intuitionday

mils.page/distributed-privilege | #distributedprivilege

machines + humans or humans + machines … or …?

i once wrote the below:

crimehunch.com/terror

i think i upset a lot of people. i remember a more than hour-long conversation with faceless executives from a big us tech corporation i really value and would love one day to work with.

i say “faceless” neutrally, mind: they had no faces, just circles with initials; and were never introduced to me. six or seven plus the person who organised the video-chat. during lockdown, it was.

i asked them the above question: there was silence for around ten seconds. in the event, no one replied at all. the fear was palpable. the fear that someone would say something which someone else would report back, and forever mark a person’s career, without recourse to explanation.

or so i thought. on reflection, maybe i had gone too far. maybe it was wrong for me to suggest their machines weren’t up to the job of beating creatively criminal terrorists. maybe it was wrong for me to suggest we could do more to creatively crimefight: to make human beings capable of being as nonconformist for the good as the putins et al of recent years have manifestly, longitudinally, been for the extreme ill.

here’s the thing: maybe i wasn’t wrong, but maybe i wasn’t right enough.

obviously, if the exercise were delivered to its full extent, whatever your answer the assembled would inevitably agree that both machines and hollywood scriptwriters (or their analogues: their skillsets, at least) would be the best solution. but even here problems would exist — and i would go so far as to suggest, actually, real roadblocks.

people who operate by rules and regulations — conformists we all need, who make the world function with justice and fairness — don’t find it easy to value the contribution of nonconformists who, more often than not, make their own hugely competent rules. and, then again, of course, vice versa. conformists don’t always float the boats of nonconformists as much as they should.

so to allude to the fact that we need to be as good as the supremely creative criminality out there in our own forging of a singular combination of intuitive arationality with the best machines we can manufacture is NOT the solution.

no.

the solution lies in ensuring the cultures of nonconformism and conformism may come together to facilitate this outcome of creative crimefighting and national security … this … just this … has to be the solution.

if we minimally know our philosophy, a thesis — being that crimefighting and national security need ever more traditional ai to deliver a fearsome capacity to pattern-recognise nonconformist evil out of existence, alongside people who press the operational buttons on the back of such insights — will get, from me, its antithesis: that is … we need just as much, if not more, what we might term the “human good” to battle the “human bad”.

and maybe the machines, too. alongside and in fabulous cahoots.

yes. and maybe, of course, the machines.

but what if we change the process? what if a synthesis? as all good philosophy?

1. to find the nonconformist what and how — the next 9/11 before it arises — we use hollywood and analogous creativity to imagineer such events.

2. and to find the who and when of such newly uncovered neocrimes, we apply the obviously terrifyingly useful pattern-recognition capabilities of the ever more traditional ai. so that their adepts, their supporters, their proponents … and those conformists who more generally are comfortable with such approaches … well … can simply be comfortable with this new paradigm i propose.

in this scenario, the suits and the flowery shirts work in consonance but never simultaneously. and so we square the circle of respect between the two parties, which long-term would always be difficult to sustainably engineer and forge permanently.

wdyt?

#distributedprivilege: a #nonbinary way of making #wealth

i am wondering this morning if the reason some companies are choosing no longer to embrace #hybridwork is that it may make it easier for their staff to make the #gigeconomy and #sideproject-activities function efficiently and compatibly — but no longer with clear and deep #manager-oversight. the option to buy into new #ip generated by such processes then, of course, wholly escapes them.

previous to this, the #gigeconomy was a control mechanism of quite some power as far as salaries and conditions were concerned. but now #hybrid and #homeworking may actually turn this balance upside down. and some companies may not have the relevant cultures to be able to embrace the advantages that might make it all work for all.

either way, i think the idea of a #nonbinary #distributedprivilege, furthering a more efficient socioeconomic network of common future-present activity, would also serve the interests of #diversity and #creativity in our western liberal democracies.

wdyt?

is there something worth pursuing here?