In designing an ambitious Government as a Platform approach, we need to consider what sort of future we want to have. This is an ambitious and optimistic future-state story about regulation in the digital age, used at FWD50 2021 to explore Government as a Platform (GaaP) frameworks. Such stories provide a light on the hill for us to walk towards, rather than stumbling about in the darkness 🙂 We can then implement GaaP in a way that meets immediate needs while also enabling better futures and mitigating worse ones.
For the purpose of this story, my name is Marv, and I work on digital and social policies for Canada, with a specific focus on ensuring equitable access to opportunities for the people currently furthest from opportunity.
Story: the year is 2032 (each part of the story is followed by the GaaP components it relies on)
Digital Era Regulation
Last year we publicly co-designed and put in place new requirements for employers to ensure employees get proactive and equitable access to education and training for the digital economy. Implementation of the requirements has been pretty smooth, because public testing of the machine-readable draft rules happened before they were ratified, which flushed out some unexpected implementation issues. So when the regulation was ratified by Parliament, all employers saw the rules automatically integrated into their HR systems, and we have immediate, real-time monitoring of compliance and of the realisation of policy intent. We saw an immediate take-up of more digital training across the board, with good overall representation of communities furthest from opportunity.
GaaP components:
- Systems for co-design
- Regulation as Code
- Real-time and automated compliance
- Policy intent measurement and monitoring
- Monitoring of employers' coverage and usage patterns of the Regulation as Code
- Interoperability layer for any employer to digitally report or be polled on compliance measures
- Public test suites
- AI/ML
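To make the Regulation as Code idea concrete: machine-readable rules can be published together with a public test suite that any employer's HR system can run before (and after) ratification. A minimal sketch in Python, where the rule, the 20-hour threshold and the `Employee` fields are all invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Employee:
    """Minimal illustrative record; real HR schemas would be far richer."""
    training_hours_offered: float
    employer_funded: bool

def meets_training_requirement(e: Employee) -> bool:
    """Hypothetical rule: each employee must be proactively offered at
    least 20 hours of employer-funded training per year (invented figure)."""
    return e.employer_funded and e.training_hours_offered >= 20

# Public test suite: published alongside the draft rules so implementers
# can validate their HR system integrations before ratification.
assert meets_training_requirement(Employee(25, True))
assert not meets_training_requirement(Employee(25, False))  # not employer funded
assert not meets_training_requirement(Employee(10, True))   # too few hours
```

Real rules-as-code engines (OpenFisca-style systems, for example) express rules declaratively rather than as bare functions, but the publicly testable contract is the same idea.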
But in the last few months, something changed. The initial policy measures trended as expected, but a few weeks ago our all-of-government regulatory analysis AI found a new trend. Although the statistics on people receiving digital training were pretty equitable, the benefits of that training were deeply inequitable: some groups saw correlated improvements in job prospects, salaries and conditions, while some groups actually went backwards after the training. The same pattern was also identified through statistically significant dips in the national quality of life measures.
GaaP components:
- Standardised quality of life measures, independent of programs
- Escalation capability (thresholds, patterns, trends, gaps, etc.)
- Anonymised data analysis
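An escalation capability like the one that caught this trend can, in its simplest form, be a threshold check over anonymised, group-level outcome data. The group names, deltas and the 0.05 threshold below are all invented for illustration:

```python
def flag_inequitable_outcomes(deltas_by_group: dict, threshold: float = 0.05) -> list:
    """Escalate any group whose post-training outcome delta trails the
    population average by more than `threshold` (all figures illustrative)."""
    average = sum(deltas_by_group.values()) / len(deltas_by_group)
    return sorted(g for g, d in deltas_by_group.items() if average - d > threshold)

# e.g. change in a composite job-prospects index after training,
# computed over anonymised records
deltas = {"group_a": 0.12, "group_b": 0.10, "group_c": -0.04}
flag_inequitable_outcomes(deltas)  # escalates ["group_c"], which went backwards
```

A production system would use proper statistical tests rather than a fixed threshold, but the shape is the same: standardised measures in, escalations out.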
So clearly the data was showing us a problem, but data never shows you the solution 🙂 Within a few days we set up a cross-disciplinary, cross-sector team, including a representative pool of actual employees, and started to investigate. We also set up a simple public consultation, inviting people to share what they feel and think about the program and the issue we had identified, and to contribute ideas to the mix. We engaged First Nations and Indigenous cultural experts to bring a different knowledge system to the work, to see if there were other, better ways to support people to get the skills they need for a digital world. We used the public policy modelling tools to invite contributions and alternative policy proposals to meet the digital skills objectives, while also validating that the problem still needed to be solved. Along the way we discovered that another department had developed a new policy that conflicted with our legislation.
GaaP components:
- Behavioural science tools
- Public feedback tools (bug/opportunity reporting)
- All-of-government system changes log
- Public policy modelling tools
- Open government models
- Public access to real data (where anonymised) or representative synthetic data for collaborative innovation
Our “tiger team” collected and generated hundreds of alternative policy settings, and we used AI to model potential options, looking at the system as a whole and prioritising the models that scored best on overall quality of life measures of success (social, cultural, economic, environmental, etc.). Our engagement with employees and employers surfaced a discovery: some employees receiving training were being automatically added to career pools, gaining new opportunities and advancement, whilst others already in specialist pools were not being added, due to existing operational policy rules about being in multiple pools. This gave us the opportunity to fix a policy setting that had become an unintended barrier to inclusion in the workforce reforms. We also worked with Indigenous and First Nations communities to test options against their protocols, success measures and laws as code, to ensure no unintended cultural conflict, and we ran several war games to actively find loopholes and weaknesses in the proposals. Political parties, including the incumbent Government, of course participated in these activities, both openly and internally, from different philosophical perspectives.
GaaP components:
- All of the above, plus synthetic data sets and clear human measures of success
- Pessimistic future states to mitigate against, measure against and scenario-test
- AI modelling tools
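Prioritising modelled options on overall quality of life measures could, at its very simplest, be a weighted composite score across the social, cultural, economic and environmental dimensions. The weights and option scores below are purely illustrative assumptions; in practice the weights themselves would be contested and co-designed, and a whole-of-system model would be far richer than a linear composite:

```python
def score_option(measures: dict, weights: dict) -> float:
    """Weighted composite of quality of life measures; the weights are a
    political and ethical choice, not a technical one."""
    return sum(weights[k] * measures[k] for k in weights)

weights = {"social": 0.3, "cultural": 0.2, "economic": 0.3, "environmental": 0.2}
options = {  # modelled outcomes per policy option (invented numbers)
    "status_quo":  {"social": 0.5, "cultural": 0.5, "economic": 0.6, "environmental": 0.5},
    "pool_reform": {"social": 0.7, "cultural": 0.6, "economic": 0.6, "environmental": 0.5},
}
best = max(options, key=lambda name: score_option(options[name], weights))
```

Here the hypothetical pool-reform option comes out ahead because it scores higher on the social and cultural measures.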
Agile Policy Delivery
The best-tested option, the one with the demonstrably greatest chance of 1) meeting the policy intent and 2) addressing the legislative and policy conflicts and barriers we had found, was put to Parliament. Parliament debated it and proposed amendments that could be tested in real time across the chamber, before a version very close to the proposed option was ratified. The code version of the rules was publicly available from that same moment, and the changes were automatically adopted across sectors.
GaaP components:
- All of the above
- Public API access and a reference implementation of the Regulation as Code
- Public policy modelling tools
- Voice to text
- AI helpers to model proposed changes in real time
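Testing amendments in real time becomes tractable when rules are code: an amendment produces a new rule version that can be re-run against the same public scenarios before ratification, so the chamber sees exactly which cases change outcome. A minimal sketch, with an invented threshold amendment (the 20- and 30-hour figures are hypothetical):

```python
def make_rule(min_hours: float):
    """Each amendment that changes a threshold yields a new rule version
    (the specific thresholds here are invented for illustration)."""
    def rule(hours_offered: float, employer_funded: bool) -> bool:
        return employer_funded and hours_offered >= min_hours
    return rule

current = make_rule(20)
amended = make_rule(30)  # a hypothetical amendment raising the bar

# Re-run the same public scenarios against both versions; members can
# compare the two outcome columns side by side during debate.
scenarios = [(25, True), (35, True), (25, False)]
impact = [(s, current(*s), amended(*s)) for s in scenarios]
```

Under this amendment the first scenario flips from compliant to non-compliant, while the other two are unchanged.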