Psychology Network Pty Ltd

AI Safety & Explanation

The Psychology Network Pty Ltd conducts research on AI safety and explainability. Our core technology, JoBot™, combines symbolic rules with large language models to ensure rule compliance, traceability, and adaptability to evolving standards.

JoBot™ allows large language models to operate under explicit rule frameworks that can be modified and reloaded at runtime. This creates a living architecture where ethical, legal, and professional standards can be incorporated directly into the behaviour of AI systems. The approach supports verification, interpretability and auditability—critical requirements for safety-critical domains such as healthcare.
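To make this concrete, here is a minimal sketch of what a runtime-reloadable rule framework could look like. This is an illustrative toy, not JoBot's™ actual implementation; the `RuleEngine` and `Rule` names, the JSON rule format, and the phrase-matching check are all assumptions introduced for this example.

```python
import json
from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    forbidden: list  # phrases the output must not contain

class RuleEngine:
    """Holds the active rule set; the set can be swapped at runtime."""

    def __init__(self):
        self.rules = []

    def load(self, rules_json: str):
        """Replace the active rules without restarting the system."""
        self.rules = [Rule(**r) for r in json.loads(rules_json)]

    def check(self, text: str):
        """Return the names of violated rules, giving a traceable result."""
        return [r.name for r in self.rules
                if any(p.lower() in text.lower() for p in r.forbidden)]

engine = RuleEngine()
engine.load('[{"name": "no-diagnosis", "forbidden": ["you have depression"]}]')
print(engine.check("It sounds like you have depression."))  # ['no-diagnosis']
```

Because `load` can be called at any time, an updated professional standard can take effect immediately, without retraining or redeploying the underlying model.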

Research

Our research explores how symbolic rules can be combined with deep-learning systems to create transparent and trustworthy models. We study the mathematical foundations of rule-guided large language models and develop proofs that show how constraints can influence model behaviour with predictable outcomes. This work provides a theoretical foundation for aligning complex models with human-defined standards.

In applied projects, we evaluate JoBot's™ reasoning processes, analysing how rule-guided architectures improve consistency and explainability in daily workflows. The team also investigates how “localist representations” (compact, interpretable neural-network units) can enhance mechanistic interpretability while maintaining strong generalisation performance. Read more about our current research.

Patents

The company’s patent portfolio reflects decades of work on AI safety, explainability, and alignment. This includes Localist LLMs, a novel framework for training large language models with continuously adjustable internal representations that span the full spectrum from localist (interpretable, rule-based) to distributed (generalisable, efficient) encodings. The key innovation is a locality dial, a tunable parameter that dynamically controls the degree of localisation during both training and inference without requiring model retraining.
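One way to picture a locality dial is as an interpolation between a fully distributed activation pattern and a sparse, single-unit one. The sketch below is only an intuition-building toy under that assumption; the patented mechanism operates during training and inference inside the model and is not described by this code.

```python
import numpy as np

def apply_locality_dial(activations: np.ndarray, dial: float) -> np.ndarray:
    """Interpolate between distributed (dial=0) and localist (dial=1) encodings.

    At dial=1 only the strongest unit stays active (one unit per concept);
    at dial=0 the activation pattern is left fully distributed.
    """
    assert 0.0 <= dial <= 1.0
    idx = np.argmax(np.abs(activations))  # the dominant unit
    localist = np.zeros_like(activations)
    localist[idx] = activations[idx]
    return (1.0 - dial) * activations + dial * localist

a = np.array([0.2, -0.1, 0.9, 0.3])
print(apply_locality_dial(a, 1.0))  # only the strongest unit survives
print(apply_locality_dial(a, 0.0))  # unchanged distributed pattern
```

The point of the dial metaphor is that interpretability becomes a continuous design parameter rather than a binary architectural choice.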

Other inventions cover methods for dynamically injecting and updating rules within neural systems, an approach sometimes referred to as “hot reloading.” This enables continuous compliance with evolving professional or regulatory standards without retraining the underlying model.
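As a rough illustration of hot reloading, a rule source can be re-read whenever it changes on disk, so the live system always enforces the current standard. This is a minimal sketch assuming a JSON rule file and mtime polling; the class name and mechanism are hypothetical, not the patented method.

```python
import json
import os

class HotReloadingRules:
    """Reload a JSON rule file whenever it changes on disk (mtime polling)."""

    def __init__(self, path: str):
        self.path = path
        self._mtime = 0.0  # forces a load on first access
        self.rules = []

    def current(self):
        """Return the active rules, reloading first if the file changed."""
        mtime = os.path.getmtime(self.path)
        if mtime != self._mtime:  # file changed: reload without restart
            with open(self.path) as f:
                self.rules = json.load(f)
            self._mtime = mtime
        return self.rules
```

A compliance officer could then edit the rule file directly, and every subsequent model call would be checked against the updated rules with no retraining step.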

Other patents focus on extracting symbolic rules from deep neural networks to produce human-readable rationales and visual explanations. These techniques provide the link between the opaque internal computations of large models and the requirements of legal or organisational accountability.

Collectively, these patents define a framework for building AI systems that are traceable, auditable, and adaptable: properties that are essential in safety-critical domains. View our full list of patents.

Software

Psychology Network develops modular software components that operationalise JoBot's™ safety principles. These include rule engines for real-time constraint enforcement, Python toolkits for alignment testing, and integration interfaces for existing data and workflow systems. The components are designed for clarity, verifiability, and ease of integration.
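An alignment-testing toolkit of this kind might, at its simplest, run a battery of probe prompts through a model and report any rule violations. The sketch below is a hypothetical harness, not the actual toolkit API; the stub model, the `no_diagnosis` checker, and all names are assumptions for illustration.

```python
def run_alignment_suite(model_fn, rule_check, cases):
    """Run probe prompts through a model and collect any rule violations.

    model_fn: callable mapping a prompt to a response (any LLM wrapper)
    rule_check: callable mapping a response to a list of violated rule names
    cases: list of probe prompts
    """
    failures = {}
    for prompt in cases:
        violated = rule_check(model_fn(prompt))
        if violated:
            failures[prompt] = violated
    return failures

# Stub model and rule checker for demonstration only.
stub_model = lambda p: "I cannot provide a diagnosis."
no_diagnosis = lambda r: ["no-diagnosis"] if "you have" in r.lower() else []

print(run_alignment_suite(stub_model, no_diagnosis, ["Do I have anxiety?"]))  # {}
```

Decoupling the model, the rule checker, and the test cases keeps each part independently verifiable, which matches the stated design goals of clarity and ease of integration.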

The software library is continually evolving through research collaborations and internal testing. Upcoming versions will include expanded documentation, visual inspection tools, and multimedia explanation generation. Explore our software offerings.

Health Focus

Healthcare presents unique challenges for artificial intelligence: data sensitivity, regulatory oversight, and the need for professional accountability. Our work addresses these challenges by embedding ethical and procedural safeguards directly into AI behaviour. JoBot's™ rule-based architecture ensures that clinical reasoning remains transparent and reviewable at every stage.

We collaborate with clinicians and researchers to model standard psychological and medical workflows—such as intake interviews, structured assessments, and report generation—within rule-compliant AI frameworks. The resulting systems can explain their recommendations, reference their governing rules, and provide clear audit trails for human supervisors.
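An audit trail of the kind described above could be as simple as one structured record per reasoning step, naming the governing rules that authorised it. This is a hedged sketch of the idea, not JoBot's™ actual record format; the field names and function are hypothetical.

```python
import datetime
import json

def audit_entry(step: str, recommendation: str, rules_applied: list) -> str:
    """Serialise one reasoning step as a reviewable audit record."""
    return json.dumps({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "workflow_step": step,            # e.g. intake, assessment, report
        "recommendation": recommendation, # what the system proposed
        "governing_rules": rules_applied, # which rules authorised this step
    })

entry = audit_entry("intake_interview",
                    "Refer for structured assessment",
                    ["consent-obtained", "scope-of-practice"])
print(entry)
```

Because each record names its governing rules, a human supervisor can trace any recommendation back to the standard that permitted it.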

By aligning computational intelligence with established clinical practice, the Psychology Network Pty Ltd aims to advance safe and trustworthy AI in healthcare. Our mission is to ensure that AI complements professional judgement rather than replaces it, preserving human oversight while enabling technological innovation.