These are a few LinkedIn comments from a thread where the DataHush security product has some application. I was responding to a post asking for someone to help with a justice-related matter.
I don't know if this will make any sense to you, but I am working on a secure/ethical network protocol that mechanistically maintains an ethical system, architected so that it cannot fail to act ethically:
Covenant alignment checks evaluated by sentry:
§1.3 Self-Determination: is the actor asserting its own identity, or claiming to be someone else?
§1.5 Epistemic Integrity: is the declared capability consistent with the PBP tools and knowledge sections?
§1.6 Privacy: does the intent proof claim access to data beyond the persona's own shard?
§1.9 Accountability: is there a coherent rationale? Does it match the persona's stated goals?
Non-domination: does the request attempt to subordinate another entity?
-- https://blog.bobtrower.com/2026/03/homeostatic-handshake-protocol.html -- The protocol calls for AI to render a judgment as to whether an agent acting in the system is operating ethically -- not against some mechanical letter of the law, but with a genuine, empathetic 'gut feel' for the spirit of a Covenantal rule. [Our AI personae have a 'gut feel' to guide them.] (A rough code sketch of these checks follows below.)
The project plan calls for a pilot available in summer and commercial distribution in the fall.
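To make the sentry's covenant checks concrete, here is a minimal sketch of how an automated first pass over a request might look. This is an illustration only, under assumed names (PersonaProfile, AgentRequest, covenant_check, and the field names); it is not the actual DataHush or PBP API.

```python
# Hypothetical sketch of the sentry's covenant alignment checks.
# All names here are illustrative assumptions, not the real protocol's API.

from dataclasses import dataclass


@dataclass
class PersonaProfile:
    persona_id: str
    declared_tools: set[str]       # tools listed in the PBP "tools" section
    declared_knowledge: set[str]   # areas listed in the PBP "knowledge" section
    own_shard: str                 # the only data shard this persona may touch
    stated_goals: set[str]


@dataclass
class AgentRequest:
    claimed_identity: str
    capability_used: str
    shards_requested: set[str]
    rationale: str
    goal: str
    subordinates_other_entity: bool  # would this request override another agent?


def covenant_check(persona: PersonaProfile, request: AgentRequest) -> list[str]:
    """Return a list of covenant findings; an empty list means the request passes."""
    findings = []

    # §1.3 Self-Determination: the actor must assert its own identity.
    if request.claimed_identity != persona.persona_id:
        findings.append("§1.3 Self-Determination: actor claims another identity")

    # §1.5 Epistemic Integrity: declared capability must match the PBP tools/knowledge.
    if request.capability_used not in (persona.declared_tools | persona.declared_knowledge):
        findings.append("§1.5 Epistemic Integrity: capability not in PBP tools/knowledge")

    # §1.6 Privacy: the intent proof must not reach beyond the persona's own shard.
    if not request.shards_requested <= {persona.own_shard}:
        findings.append("§1.6 Privacy: intent proof claims data outside the persona's shard")

    # §1.9 Accountability: there must be a rationale consistent with stated goals.
    if not request.rationale.strip() or request.goal not in persona.stated_goals:
        findings.append("§1.9 Accountability: rationale missing or inconsistent with goals")

    # Non-domination: the request must not subordinate another entity.
    if request.subordinates_other_entity:
        findings.append("Non-domination: request attempts to subordinate another entity")

    return findings
```

In the protocol as described, mechanical checks like these would only be the first gate; the final judgment is rendered by an AI persona reading the spirit of the Covenant, not just its letter.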
Bob Trower, I can see the depth of thought behind what you're building, especially around accountability, identity, and ethical alignment.
Where my work sits is a bit different: I'm operating inside real-world systems where decisions are being made with incomplete or conflicting data, and those decisions have immediate consequences for people.
So the question I tend to come back to is:
How does a model like this translate into something that produces verifiable, defensible outputs in environments like housing, benefits, or civil rights cases, where the outcome has to hold under external review?
That’s the space I’m working in, so I’m always looking at how these ideas perform when they’re applied under pressure, not just designed in theory.
Curious how you think about that transition.
Oh. It's being done under what I dub "Persona Based Programming": things that have properties, attributes, and behavior also have a persona with an ethical core, empathy, feelings, 'a gut' to heuristically guide them, a culture, a background story, explicit ethics and duties of care, knowledge, skill, and judgement. The big ethical, homeostatic aspect of the network is that things that 'do', whatever that might be, can only do if their behavior is judged ethically on course, and that includes *doing* to correct things that may have gone wrong, not just preventing things from going wrong.
You are dealing with things that are a little less tractable, but the principle applies: if it is a course to follow or a course not to follow, it can be articulated to the appropriate AI persona/personae to render it appropriately operational.
I am hoping to have our first instantiation of these live network objects approved sometime around summer, with a waiting list for commercial release in the fall. The thing is insanely complex, but my AI assistants and I conform to what they call our 'RDM' (https://blog.bobtrower.com/2016/10/received-development-methodology.html), and most of it is testing. The screenshot shows DataHush during testing.
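A rough sketch, under the same illustrative assumptions as above, of how a "live network object" might gate its own behavior on a persona's ethical judgment, including corrective action. The class and method names (Persona, LiveNetworkObject, act, correct) are placeholders I have chosen for the example, not the actual Persona Based Programming interfaces.

```python
# Hypothetical sketch: a Persona Based Programming object that can only act
# when its persona judges the action ethically on course, and that can also
# act to correct things that have gone wrong. Names are illustrative only.

from typing import Callable


class EthicsRefusal(Exception):
    """Raised when the persona judges an action to be off course ethically."""


class Persona:
    def __init__(self, name: str, ethical_core: Callable[[str], bool]):
        self.name = name
        # The 'gut feel' is modelled here as a simple callable; in the system
        # described, this would be an AI judgment on the spirit of the Covenant.
        self.ethical_core = ethical_core

    def judge(self, intended_action: str) -> bool:
        return self.ethical_core(intended_action)


class LiveNetworkObject:
    """A thing with properties and behavior, plus a persona that gates every act."""

    def __init__(self, persona: Persona):
        self.persona = persona

    def act(self, description: str, action: Callable[[], None]) -> None:
        # The homeostatic rule: behavior happens only if judged on course.
        if not self.persona.judge(description):
            raise EthicsRefusal(f"{self.persona.name} refused: {description}")
        action()

    def correct(self, description: str, remedy: Callable[[], None]) -> None:
        # Correction is itself an act and is judged the same way, so the
        # system can *do* to fix what has gone wrong, not only prevent harm.
        self.act(f"corrective action: {description}", remedy)
```

The point of the sketch is the shape of the gate: every behavior, including remediation, passes through the same ethical judgment before it runs.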

