[RE-DRAFT] Improving Safety in Deep Learning Development

Where WA members debate how to improve the world, one resolution at a time.

[RE-DRAFT] Improving Safety in Deep Learning Development

Postby Haymarket Riot » Tue Apr 30, 2024 5:30 am

Hi, this is my first General Assembly draft. I'd appreciate general feedback as well as any advice for future resolutions.
Edit: My first GA proposal! https://www.nationstates.net/page=UN_vi ... 1715605149
Title: Improving Safety in Deep Learning Development

Category: Regulation

Area of Effect: Safety

The World Assembly,

Appreciating that deep learning artificial intelligence automates data processing, making previously difficult or impossible forms of data analysis practical and efficient,

Applauding the impact these technologies are already having, and will continue to have, in driving increased productivity across myriad industries and government programs,

Noting that deep learning systems are a ‘black box’ technology, i.e., it is very difficult for humans to investigate the origins of algorithmic parameters and outputs,

Concerned at the mounting evidence that, as deep learning systems are implemented, preconceived biases in the human inputting, labeling, and pre-processing of data inevitably lead to discrimination wherever artificial intelligence is socially applied, in ways that cannot be fully discerned or predicted,

Appalled at the lack of consideration in the artificial intelligence industry for mitigating unforeseen impacts of deep learning systems,

Desiring to prevent discriminatory and unforeseen outcomes from impacting the safety of communities where deep learning is used, whether by governments or by corporations,

  1. Defines the following for the purposes of this resolution:
    1. ‘Deep learning system’: A machine learning system composed of neural network(s) with at least one hidden layer.
    2. ‘Deep learning developer’: A human involved in development of a deep learning system, whether through data management, data processing, exploratory data analysis, coding, or training of the deep learning system.
    3. ‘Developing entity’: Any corporation, government, or individual seeking to label data for, train, and/or release a deep learning system that interacts with the public.
    4. ‘Institutional review board’: A group convened by a developing entity, composed of individuals from both inside and outside that entity, who are qualified to make ethical decisions regarding a specific deep learning system.
  2. Enacts the following:
    1. Member states that possess the technology for deep learning system development, and intend to pursue such development, shall develop an appropriate comprehensive training and evaluation program for deep learning developer licensure, which may include classes, workshops, and/or seminars.
    2. Deep learning developers shall be required to be licensed under nation-specific laws, and obtaining licensure shall require comprehensive training and evaluation in avoiding discrimination and unintended outcomes in deep learning.
    3. Prior to development, developing entities shall submit to their nation’s government a project summary. The government shall then decide whether and at which stages an institutional review board is required to convene to oversee the project, and the quantity and variety of members the board must comprise.
    4. All deep learning systems actively operating or being developed at the time of this resolution's passage must also have a project summary submitted for them by their respective developing entities within six months of this resolution's passage, and may also be subject to an institutional review board as previously described.
  3. Implements the following standards for institutional review board oversight:
    1. An institutional review board may oversee any or all of the following steps in deep learning system development: data processing, algorithmic development, and post-release review.
    2. Concerns raised by an institutional review board must be adequately addressed within six months, or any further deployment, use, or development of that deep learning system shall be suspended by that nation’s government until the concerns are addressed.
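
(OOC: For anyone unfamiliar with the jargon, here is a rough sketch of the smallest thing definition 1.1 above would capture: a network with exactly one hidden layer. This is a toy illustration in Python/NumPy, not part of the draft; the layer sizes and inputs are arbitrary.)

[code]
import numpy as np

# A minimal "deep learning system" under definition 1.1: one hidden layer.
# The weights are random here; a real system would learn them from data.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 5)), np.zeros(5)  # input (3 features) -> hidden (5 units)
W2, b2 = rng.normal(size=(5, 1)), np.zeros(1)  # hidden -> output

def forward(x):
    hidden = np.maximum(0, x @ W1 + b1)  # the hidden layer: learned, not human-authored
    return hidden @ W2 + b2              # output score

print(forward(np.array([0.2, -1.0, 0.7])))
[/code]

The preamble's 'black box' concern is about W1 and W2: they fully determine the output, yet their learned values carry no human-readable meaning.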


Title: Improving Safety in Deep Learning Development

Category: Regulation

Area of Effect: Safety

The World Assembly,

Appreciating that deep learning artificial intelligence automates data processing in a manner that improves the efficiency of previously difficult or impossible data analysis,

Applauding the impact these technologies are already having and will continue to have on driving the increased productivity of myriad industries and government programs,

Noting that deep learning systems comprise ‘black box’ technology, wherein it is made very difficult for humans to investigate the origins of algorithmic parameters and outputs,

Concerned at the mounting evidence that as deep learning systems are implemented, preconceived biases in human inputting, labeling, and pre-processing of data inevitably lead to discrimination where artificial intelligence is socially applied in ways that cannot be fully discerned or predicted,

Appalled at the lack of consideration in the artificial intelligence industry for mitigating unforeseen impacts of deep learning systems,

Desiring to prevent discriminatory and unforeseen outcomes from impacting the safety of communities where deep learning is used, whether by governments or by corporations,

  1. Defines the following for the purposes of this resolution:
    1. ‘Deep learning system’: A machine learning system composed of multi-layered neural network(s).
    2. ‘Deep learning developer’: A human involved in development of a deep learning system, whether through data management, data processing, exploratory data analysis, coding, or training of the deep learning system.
    3. ‘Deep learning stakeholders’: Members of any community directly impacted by a specific deep learning system.
    4. ‘Survey group’: A sample of deep learning stakeholders, proportionally weighted to the relative populations of, and relative impacts of the deep learning system on minority and marginalized groups within stakeholder communities.
  2. Requires the following:
    1. Deep learning developers shall be required to be licensed under nation-specific laws, and obtaining licensure shall require training and evaluation in avoiding discrimination in deep learning.
    2. Prior to development, deep learning developers shall submit to their nation’s government a list of deep learning stakeholders to be considered, which that government must approve before proceeding with development.
    3. During deep learning development, a survey group of deep learning stakeholders shall be collected. This shall occur at three stages of development:
      1. Data processing: all data input into the deep learning system must be described quantitatively and qualitatively to the survey group before it is trained.
      2. Algorithmic development: during the process of training the deep learning system pre-release, preliminary results must be described to the survey group.
      3. Post-release analysis: every five years post-release, a survey group must be informed about post-release results in comparison to preliminary results, and any new factors that have arisen must be communicated.
    4. Concerns raised by the survey group must be adequately addressed within six months, or any further deployment, use, or development of the deep learning system shall be suspended by that nation’s government until the concerns are addressed.

Title: Improving Safety in Deep Learning Development

Category: Regulation

Area of Effect: Safety

The World Assembly,

Appreciating that deep learning artificial intelligence automates data processing in a manner that improves the efficiency of previously difficult or impossible data analysis,

Applauding the impact these technologies are already having and will continue to have on driving the increased productivity of myriad industries and government programs,

Noting that deep learning systems are a ‘black box’ technology, i.e., it is very difficult for humans to investigate the origins of algorithmic parameters and outputs,

Concerned at the mounting evidence that as deep learning systems are implemented, preconceived biases in human inputting, labeling, and pre-processing of data inevitably lead to discrimination where artificial intelligence is socially applied in ways that cannot be fully discerned or predicted,

Appalled at the lack of consideration in the artificial intelligence industry for mitigating unforeseen impacts of deep learning systems,

Desiring to prevent discriminatory and unforeseen outcomes from impacting the safety of communities where deep learning is used, whether by governments or by corporations,

  1. Defines the following for the purposes of this resolution:
    1. ‘Deep learning system’: A machine learning system composed of multi-layered neural network(s).
    2. ‘Deep learning developer’: A human involved in development of a deep learning system, whether through data management, data processing, exploratory data analysis, coding, or training of the deep learning system.
    3. ‘Developing entity’: Any corporation, government, or individual seeking to train and/or release a deep learning system that interacts with the public.
    4. ‘Institutional review board’: A group internal to a developing entity composed of deep learning developers, leaders in communities impacted by the specific deep learning system, and at least one government official, the purpose of which is to make ethical recommendations for training, use, and deployment of deep learning systems.
  2. Enacts the following:
    1. Deep learning developers shall be required to be licensed under nation-specific laws, and obtaining licensure shall require training and evaluation in avoiding discrimination in deep learning.
    2. Prior to development, developing entities shall submit to their nation’s government a deep learning system project summary. The government shall then decide whether and at which stages an institutional review board is required to convene to oversee the project, and the quantity and variety of members the board must comprise.
    3. An institutional review board may oversee any or all of the following steps in deep learning system development: data processing, algorithmic development, and post-release review.
    4. Concerns raised by an institutional review board must be adequately addressed (as determined by the board) within six months, or any further deployment, use, or development of that deep learning system shall be suspended by that nation’s government until the concerns are addressed.

Title: Improving Safety in Deep Learning Development

Category: Regulation

Area of Effect: Safety

The World Assembly,

Appreciating that deep learning artificial intelligence automates data processing in a manner that improves the efficiency of previously difficult or impossible data analysis,

Applauding the impact these technologies are already having and will continue to have on driving the increased productivity of myriad industries and government programs,

Noting that deep learning systems are a ‘black box’ technology, i.e., it is very difficult for humans to investigate the origins of algorithmic parameters and outputs,

Concerned at the mounting evidence that as deep learning systems are implemented, preconceived biases in human inputting, labeling, and pre-processing of data inevitably lead to discrimination where artificial intelligence is socially applied in ways that cannot be fully discerned or predicted,

Appalled at the lack of consideration in the artificial intelligence industry for mitigating unforeseen impacts of deep learning systems,

Desiring to prevent discriminatory and unforeseen outcomes from impacting the safety of communities where deep learning is used, whether by governments or by corporations,

  1. Defines the following for the purposes of this resolution:
    1. ‘Deep learning system’: A machine learning system composed of multi-layered neural network(s).
    2. ‘Deep learning developer’: A human involved in development of a deep learning system, whether through data management, data processing, exploratory data analysis, coding, or training of the deep learning system.
    3. ‘Developing entity’: Any corporation, government, or individual seeking to label data for, train, and/or release a deep learning system that interacts with the public.
    4. ‘Institutional review board’: A group internal to a developing entity composed of deep learning developers, leaders in communities impacted by the specific deep learning system, and at least one government official, the purpose of which is to make ethical recommendations for training, use, and deployment of deep learning systems.
  2. Enacts the following:
    1. Deep learning developers shall be required to be licensed under nation-specific laws, and obtaining licensure shall require comprehensive training and evaluation in avoiding discrimination and unintended outcomes in deep learning.
    2. Prior to development, developing entities shall submit to their nation’s government a deep learning system project summary. The government shall then decide whether and at which stages an institutional review board is required to convene to oversee the project, and the quantity and variety of members the board must comprise.
      1. This requirement shall not apply to non-research educational use of deep learning systems.
    3. All deep learning systems actively operating or being developed at the time of this resolution's passage must also have a project summary submitted for them by their respective developing entities within six months of this resolution's passage, and may also be subject to an institutional review board as previously described.
    4. An institutional review board may oversee any or all of the following steps in deep learning system development: data processing, algorithmic development, and post-release review.
    5. Concerns raised by an institutional review board must be adequately addressed within six months, or any further deployment, use, or development of that deep learning system shall be suspended by that nation’s government until the concerns are addressed.

Title: Improving Safety in Deep Learning Development

Category: Regulation

Area of Effect: Safety

The World Assembly,

Appreciating that deep learning artificial intelligence automates data processing in a manner that improves the efficiency of previously difficult or impossible data analysis,

Applauding the impact these technologies are already having and will continue to have on driving the increased productivity of myriad industries and government programs,

Noting that deep learning systems are a ‘black box’ technology, i.e., it is very difficult for humans to investigate the origins of algorithmic parameters and outputs,

Concerned at the mounting evidence that as deep learning systems are implemented, preconceived biases in human inputting, labeling, and pre-processing of data inevitably lead to discrimination where artificial intelligence is socially applied in ways that cannot be fully discerned or predicted,

Appalled at the lack of consideration in the artificial intelligence industry for mitigating unforeseen impacts of deep learning systems,

Desiring to prevent discriminatory and unforeseen outcomes from impacting the safety of communities where deep learning is used, whether by governments or by corporations,

  1. Defines the following for the purposes of this resolution:
    1. ‘Deep learning system’: A machine learning system composed of multi-layered neural network(s).
    2. ‘Deep learning developer’: A human involved in development of a deep learning system, whether through data management, data processing, exploratory data analysis, coding, or training of the deep learning system.
    3. ‘Developing entity’: Any corporation, government, or individual seeking to label data for, train, and/or release a deep learning system that interacts with the public.
    4. ‘Institutional review board’: A group internal to a developing entity composed of individuals from both inside and outside the developing entity who are qualified to make ethical decisions regarding a specific deep learning system.
  2. Enacts the following:
    1. If member states have technology to achieve deep learning system development, and intend to pursue such development, they shall be required to develop an appropriate training and evaluation program for deep learning developer licensure.
    2. Deep learning developers shall be required to be licensed under nation-specific laws, and obtaining licensure shall require comprehensive training and evaluation in avoiding discrimination and unintended outcomes in deep learning.
    3. Prior to development, developing entities shall submit to their nation’s government a project summary. The government shall then decide whether and at which stages an institutional review board is required to convene to oversee the project, and the quantity and variety of members the board must comprise.
    4. All deep learning systems actively operating or being developed at the time of this resolution's passage must also have a project summary submitted for them by their respective developing entities within six months of this resolution's passage, and may also be subject to an institutional review board as previously described.
  3. Implements the following standards for institutional review board oversight:
    1. An institutional review board may oversee any or all of the following steps in deep learning system development: data processing, algorithmic development, and post-release review.
    2. Concerns raised by an institutional review board must be adequately addressed within six months, or any further deployment, use, or development of that deep learning system shall be suspended by that nation’s government until the concerns are addressed.


Postby Comfed » Tue Apr 30, 2024 5:19 pm

Some of this resolution is hard to understand due to the presence of a lot of computer jargon - "multi-layered neural network" is probably your worst offender here - but from what I do understand, this seems like an awful lot of red tape to impose on people who create a certain kind of software.


Postby Kostane » Tue Apr 30, 2024 5:24 pm

Article 2.c.i seems impossible, considering that it requires a "quantitative..." explanation of possible AGI machines that would be very difficult to explain to the general population ("Members of any community directly impacted..."). I also question how the stakeholder group would look in relation to the large-scale release of programs that affect global populations, and/or populations outside of WA purview.


Postby The Overmind » Tue Apr 30, 2024 5:44 pm

Comfed wrote:Some of this resolution is hard to understand due to the presence of a lot of computer jargon - "multi-layered neural network" is probably your worst offender here - but from what I do understand, this seems like an awful lot of red tape to impose on people who create a certain kind of software.

This would be a rather difficult topic to write on without using jargon. To the extent that it is reasonable, this can maybe be addressed with a block of (further) selected definitions.


Postby Haymarket Riot » Tue Apr 30, 2024 5:52 pm

Comfed wrote:Some of this resolution is hard to understand due to the presence of a lot of computer jargon - "multi-layered neural network" is probably your worst offender here - but from what I do understand, this seems like an awful lot of red tape to impose on people who create a certain kind of software.

This resolution was always going to be a bit of a compromise between jargon and understandability. That comes with the field of machine learning, as many of these techniques are only describable through highly abstract language. Getting into more detail would confuse things, and simplifying further would make regulation in this space overly broad.

Deep learning algorithms impact a wide enough variety of societal features that this red tape is necessary, because otherwise corporations simply do not do this work themselves. For example, consider the algorithms behind self-driving cars. Ideally, you would want average citizens to have input on how they are designed and how they make, for example, ethical considerations while driving. Yet, often, these test groups are limited in scope, and are hired by the corporations.

Alternatively, you might have an algorithm that takes aggregated data and makes decisions about consumer car insurance rates based on benign driving behavior that only vaguely correlates with higher accident rates. Or an algorithm for crime detection that is more likely to pick out suspects of a certain race/gender based on confounding variables or labelling bias on the part of the human data labeler. These are situations where there is currently zero oversight: corporations and governments are essentially free to do whatever they like, as long as they can buy or obtain the aggregate data and code the model. These are also all real examples of how these tools are used and fail to pass muster.

There are other, more transparent machine learning models where bias can be teased out more easily (regression models, for example). A side effect of this proposal would be to make clear that deep learning algorithms are tools to be used only when the type of data analysis required makes them absolutely necessary.
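
(OOC: a minimal sketch of that transparency contrast, assuming scikit-learn and purely synthetic data. With a regression-style model, a labeling bias shows up as a coefficient you can read directly; a multi-layer black box offers no such handle. The variable names and numbers are invented for illustration.)

[code]
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
risk = rng.normal(size=n)           # genuine risk factor
group = rng.integers(0, 2, size=n)  # protected attribute (e.g., demographic group)
# Biased historical labels: labelers flagged group 1 more often at equal risk.
y = (risk + 0.8 * group + rng.normal(scale=0.5, size=n)) > 0.5

X = np.column_stack([risk, group])
model = LogisticRegression().fit(X, y)
print(dict(zip(["risk", "group"], model.coef_[0])))
# The large positive "group" coefficient exposes the inherited labeling bias;
# a deep network trained on the same labels hides it across thousands of weights.
[/code]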

I am open to suggestions for a less stringent protocol. But this seems to me to be on par with how, for example, review of medical drugs occurs. These algorithms often have a much greater impact on safety than individual drugs do, and yet there is less oversight.
Kostane wrote:Article 2.c.i seems impossible, considering that it requires a "quantitative..." explanation of possible AGI machines that would be very difficult to explain to the general population ("Members of any community directly impacted..."). I also question how the stakeholder group would look in relation to the large-scale release of programs that affect global populations, and/or populations outside of WA purview.

I agree that it at times may be difficult to communicate ideas effectively to the general population. However, this is no reason not to have stakeholder input, especially when much of this quantitative analysis amounts to essentially just a graph or series of graphs that can be explained at a middle school algebra level of comprehension. I'm open to having a clause about "reasonably qualified" stakeholders to mitigate any concerns in that regard.

As for populations outside of WA purview, the WA is not permitted to legislate on that. I considered putting in clauses about international law as it pertains to this, but I think individual nations can be trusted to regulate the application of AI that originates from outside their borders.


Postby The Overmind » Tue Apr 30, 2024 6:01 pm

Support in principle, but I think this has a long way to go.

Haymarket Riot wrote:‘Survey group’: A sample of deep learning stakeholders, proportionally weighted to the relative populations of, and relative impacts of, the deep learning system on, minority and marginalized groups within stakeholder communities.

This definition scans very poorly, to the point of incoherence. I would significantly restructure it.

Haymarket Riot wrote:Prior to development, deep learning developers shall submit to their nation’s government a list of deep learning stakeholders to be considered, which that government must approve before proceeding with development.

I'm afraid I don't know what this clause is trying to accomplish or prevent. Why would a nation's government reject a deep learning [project's]* development based on who the stakeholders are?

*Also, your sentence doesn't make it clear what is being developed. You define "deep learning system" but don't use it here, for instance.

Haymarket Riot wrote:c. During deep learning development, a survey group of deep learning stakeholders shall be collected. This shall occur at three stages of development:
  1. Data processing: all data input into the deep learning system must be described quantitatively and qualitatively to the survey group before it is trained.
  2. Algorithmic development: during the process of training the deep learning system pre-release, preliminary results must be described to the survey group.
  3. Post-release analysis: every five years post-release, a survey group must be informed about post-release results in comparison to preliminary results, and any new factors that have arisen must be communicated.
d. Concerns raised by the survey group must be adequately addressed within six months, or any further deployment, use, or development of the deep learning system shall be suspended by that nation’s government until the concerns are addressed.

This does not seem like an efficient way to discourage bias in deep learning algorithms. Such a "survey group" would need specialized, and potentially quite extensive, training to properly evaluate if a "deep learning system" was producing a biased result or being trained on biased data. It is, in fact, a big part of the job of data scientists to be cognizant of and avoid these issues. Would it not be better to have something similar to the IRB in the US, where you have at least one expert, and at least one community member involved in approving and overseeing a protocol of a given project?

Additionally, as is the case with the IRB, there should be clearly defined levels of intervention and oversight depending on the risk level of the project. A company developing a "deep learning system" for, say, a video game is engaging in a very low-risk development project and should not face the same oversight or red tape as a hospital developing a "deep learning system" for screening psychiatric illness in patients. You don't have to spell out these levels of intervention and oversight or the level of risk that triggers them, but there should be flexibility built into the proposal for situations where minimal, if any, intervention is warranted (when the risks are virtually nonexistent, or irrelevant despite the risk of bias).


Postby Haymarket Riot » Tue Apr 30, 2024 6:10 pm

The Overmind wrote:Support in principle, but I think this has a long way to go. [...]

I really appreciate this feedback, the last bit especially. I agree that a survey group is probably not the best way to go; it's just what most easily came to mind. I also appreciate the room your suggestion leaves for more tailoring, something I was struggling to do. I'll revise and return with draft two shortly, hopefully with less gobbledygook. :p

edit: draft two up


Postby Tigrisia » Wed May 01, 2024 11:18 am

While we believe that such a regulation is absolutely necessary, we see room for improvement.

First of all, we know that there are far more techniques than deep learning for creating significantly advanced machine learning systems that might have a problem with bias or lack of explainability. While most systems are based on deep learning, focusing only on them would mean creating legislation with significant loopholes.

What we see as far more important than the technology used to create the machine learning application is the way the application is used. One does not, for example, need to certify a deep learning application that solely optimizes the utilization of a datacenter. Models that automatically classify sentient species and have a significant impact on them, for example in insurance policies, credit rating, or hiring processes, should be well-regulated.

We also believe that the current draft lacks a crucial component: the duties of the organizations creating the data sets. For an individual developer, reviewing by hand the data sets collected by other entities is not feasible; hence, the creators of those sets should take care that they follow ethical standards. This includes, but is not limited to, compliance with existing law and ensuring that the datasets are unbiased when it comes to age, perceived gender, or race.

We also note the absence of a clause concerning legacy models.

Haymarket Riot wrote:‘Deep learning developer’: A human involved in development of a deep learning system, whether through data management, data processing, exploratory data analysis, coding, or training of the deep learning system.


Does that mean that people who, for example, label images, a job that even nearly untrained persons can do, need a government-approved license?

Haymarket Riot wrote:‘Developing entity’: Any corporation, government, or individual seeking to train and/or release a deep learning system that interacts with the public.


Does that mean that a system produced and used by the military to filter out targets from civilians (OOC: what Israel has under the name "Lavender", see here: https://www.972mag.com/lavender-ai-israeli-army-gaza/) would not fall under the scope of this regulation, as it is never "interacting with the public"? We see this as very dangerous, as entities could then develop and use AI systems that are unsafe as long as they don't "interact" with the public.

Haymarket Riot wrote: 'Institutional review board’: A group internal to a developing entity composed of deep learning developers, leaders in communities impacted by the specific deep learning system, and at least one government official, the purpose of which is to make ethical recommendations for training, use, and deployment of deep learning systems.


While involvement of the general public is a good idea, the sheer range of affected communities is too broad to hear all stakeholders. (OOC: A system like ChatGPT (or any other LLM) impacts nearly every community in some way. Hence, one would need hundreds or thousands of people in such an "institutional review board".) Only choosing some communities would lead to a significant bias, which may have devastating unintended consequences.

Haymarket Riot wrote:Deep learning developers shall be required to be licensed under nation-specific laws, and obtaining licensure shall require training and evaluation in avoiding discrimination in deep learning.


While being important, we believe that "avoiding discrimination" is too centered on one safety aspect of the side effects of deep learning systems. We therefore recommend broadening the requirement so that each person working in computer science and associated fields must take mandatory courses on ethics in their respective fields, including, but not limited to, discrimination. (OOC: I am a CS student and we actually had an ethics course (which was not mandatory) where we discussed these subjects. I personally feel that making these courses mandatory would improve technology for everyone.)

Haymarket Riot wrote:Prior to development, developing entities shall submit to their nation’s government a deep learning system project summary. The government shall then decide whether and at which stages an institutional review board is required to convene to oversee the project, and the quantity and variety of members the board must comprise.


We believe that these measures are a bit too strict and would significantly hinder the development of new models, especially by small or medium sized companies or single individuals.

Haymarket Riot wrote:Concerns raised by an institutional review board must be adequately addressed within six months, or any further deployment, use, or development of that deep learning system shall be suspended by that nation’s government until the concerns are addressed.


While stopping the deployment of the model is a good idea, we believe that completely stopping the development of further versions of the model would be counter-productive, as this means that all the existing progress in creating such a model would be lost. We would therefore like to know the reasoning behind including in the draft the clause that mandates stopping the development of such models.


For the delegation of the Federal Republic of Tigrisia at the World Assembly
Vice-Ambassador Claus Sato
Interim Head of Mission on behalf of Ambassador Thomas Salazar


Postby Haymarket Riot » Wed May 01, 2024 1:50 pm

Tigrisia wrote:While we believe that such a regulation is absolutely necessary, we see room for improvement.

First of all, we know that there are far more techniques than deep learning for creating significantly advanced machine learning systems that might have a problem with bias or lack of explainability. While most systems are based on deep learning, focusing only on them would mean creating legislation with significant loopholes.

Perhaps, but as has already been discussed, jargon and overbroadness become an issue at that point. Unless you can provide a list, I'm not certain I can target more types of machine learning models. The black box problem, in conjunction with that bias, is what makes deep learning models specifically the most dangerous.
Tigrisia wrote:What we see as far more important than the technology used to create the machine learning application is the way the application is used. One does not, for example, need to certify a deep learning application that solely optimizes the utilization of a datacenter. Models that automatically classify sentient species and have a significant impact on them, for example in insurance policies, credit rating, or hiring processes, should be well-regulated.

Hence why plans are to be submitted to government regulators, to provide this sort of review to determine if an IRB is necessary.
Tigrisia wrote:We also believe that the current draft lacks a crucial component: the duties of the organizations creating the data sets. For an individual developer, reviewing by hand the data sets collected by other entities is not feasible; hence, the creators of those sets should take care that they follow ethical standards. This includes, but is not limited to, compliance with existing law and ensuring that the datasets are unbiased when it comes to age, perceived gender, or race.

This would be much too broad, and runs the risk of undermining and overburdening public-access, small-scale (e.g., undergrad classes, hobbyist researchers), or massively aggregated data projects, especially those involving citizen contributors. My view is that this only becomes an issue once the data is actively used in a machine learning system, and is thus primarily the responsibility of the companies using the aggregate data.

Tigrisia wrote:We also note the absence of a clause concerning legacy models.

If you mean machine learning models that are not deep learning, they do not fall within the scope of this proposal, as they do not suffer from the black box problem. Regulation on these models need not be so stringent.

Tigrisia wrote:Does that mean that people who, for example, label images, a job that even nearly untrained persons can do, need a government-approved license?

Only if it is going into a deep learning model directly.

Tigrisia wrote:Does that mean that a system produced and used by the military to filter out targets from civilians (OOC: what Israel has under the name "Lavender", see here: https://www.972mag.com/lavender-ai-israeli-army-gaza/) would not fall under the scope of this regulation, as it is never "interacting with the public"? We see this as very dangerous, as entities could then develop and use AI systems that are unsafe as long as they don't "interact" with the public.

Identifying civilians vs combatants is interacting with the public by its nature. Civilians are the public.

Tigrisia wrote:While involvement of the general public is a good idea, the sheer range of affected communities is too broad to hear all stakeholders. (OOC: A system like ChatGPT (or any other LLM) impacts nearly every community in some way. Hence, one would need hundreds or thousands of people in such an "institutional review board".) Only choosing some communities would lead to a significant bias, which may have devastating unintended consequences.

Leaving it up to the public is not in this version of the draft; I intentionally leave selection of IRB composition up to national discretion for this reason. I'm not certain how it reads otherwise; could you point me to where I was unclear about this?

Tigrisia wrote:While being important, we believe that "avoiding discrimination" is too centered on one safety aspect of the side effects of deep learning systems. We therefore recommend broadening the requirement so that each person working in computer science and associated fields must take mandatory courses on ethics in their respective fields, including, but not limited to, discrimination. (OOC: I am a CS student and we actually had an ethics course (which was not mandatory) where we discussed these subjects. I personally feel that making these courses mandatory would improve technology for everyone.)

How would you recommend broadening this requirement in a manner that would not require people to take a college course? That in my view would be overreach. Additionally, I have left up specific licensure requirements to national governments so they can best make decisions for industries within their respective nations.

Tigrisia wrote:We believe that these measures are a bit too strict and would significantly hinder the development of new models, especially by small or medium sized companies or single individuals.

How so? Small and medium sized companies have to submit regulatory forms all the time; submitting one form for a project summary (which most good companies will already have on hand) is minuscule in terms of workload. As I have discussed previously, not all projects would be required to have an IRB, so I'm not certain why this would be overreach.

Tigrisia wrote:While stopping the deployment of the model is a good idea, we believe that completely stopping the development of further versions of the model would be counter-productive, as this means that all the existing progress in creating such a model would be lost. We would therefore like to know the reasoning behind including in the draft the clause that mandates stopping the development of such models.

This would be resolved by addressing IRB concerns. One remedy to this is to scrap problematic parts of a project but continue with the other aspects, or to begin a new project with a derived model. This is sufficiently narrow, in other words.


Postby Tigrisia » Thu May 02, 2024 8:08 am

Haymarket Riot wrote:Perhaps, but as has already been discussed, jargon and overbroadness become an issue at that point. Unless you can provide a list, I'm not certain I can target more types of machine learning models. The black box problem, in conjunction with that bias, is what makes deep learning models specifically the most dangerous.


Creating an extensive list is in many cases a bad idea. Such lists often lead to laws being revised at a high pace while still lagging significantly behind certain developments. I recommend a risk-based approach instead of a technology-based one. That means it is not the technology behind the AI that constitutes the reason it needs to be monitored, but the risk associated with the usage of the technology. (OOC: I recommend looking at the EU AI Act as an inspiration for what I mean by that.)

Haymarket Riot wrote:Hence why plans are to be submitted to government regulators, to provide this sort of review to determine if an IRB is necessary.


Meaning if a student wants to develop something with deep learning as part of their bachelor thesis, they first need to file a request with government regulators? Come on, that leads to a lot of paperwork and accomplishes nothing.

Haymarket Riot wrote:This would be much too broad, and runs the risk of undermining and overburdening public-access, small-scale (e.g., undergrad classes, hobbyist researchers), or massively aggregated data projects, especially those involving citizen contributors. My view is that this only becomes an issue once the data is actively used in a machine learning system, and is thus primarily the responsibility of the companies using the aggregate data.


OOC: The data sets I am talking about are made specifically for deep learning systems. Take the LAION-5B dataset, for example, which was made to be utilized in AI systems. LAION, on their own, do not create models but just collect and prepare data used for AI models (meaning you get a large text file that tells you where all your images are and what they contain, and you throw it into your algorithm). It contains 5 billion images, and it is impossible to verify these images in an appropriate timeframe without either massive automatisation (making the verification much less accurate) or a large number of employees, hindering SMEs or universities from innovating.
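
OOC, continued: to put rough numbers on "impossible to verify in an appropriate timeframe", here is a back-of-the-envelope sketch. The reviewer speed and headcount are made-up assumptions, and the sampling bound is the standard "rule of three":

[code]
# Full manual review of a LAION-5B-scale dataset (all numbers illustrative).
dataset_size = 5_000_000_000
seconds_per_item = 2                       # optimistic manual review speed
reviewers = 100
working_seconds_per_year = 3600 * 8 * 250  # 8h days, 250 days/year
years = dataset_size * seconds_per_item / reviewers / working_seconds_per_year
print(f"Full manual review: ~{years:.0f} years of wall time")  # ~14 years

# A random-sample audit is feasible but only bounds the defect rate.
# Rule of three: zero defects in n samples -> true rate < 3/n at 95% confidence.
sample_size = 30_000
bound = 3 / sample_size
print(f"Zero defects in {sample_size:,} samples -> rate < {bound:.4%}")
print(f"...which still allows ~{bound * dataset_size:,.0f} bad items")  # ~500,000
[/code]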

Haymarket Riot wrote:If you mean machine learning models that are not deep learning, they do not fall within the scope of this proposal, as they do not suffer from the black box problem. Regulation on these models need not be so stringent.


Legacy models are deep learning models that are already in use but would fall under the scope of this regulation if they were to be created afterwards.

Haymarket Riot wrote:Identifying civilians vs combatants is interacting with the public by its nature. Civilians are the public.


Bad example, I agree. Then, let's say an AI monitoring an electricity grid and applying measures if necessary. Not really "interacting."

Haymarket Riot wrote:Leaving it up to the public is not in this version of the draft; I intentionally leave selection of IRB composition up to national discretion for this reason. I'm not certain how it reads otherwise; could you point me to where I was unclear about this?


OOC: The difficult thing is the following: which communities are affected by your AI system? And if everyone might be affected (systems like ChatGPT), how can you filter out the ones that are important for your project? I mean, most systems are globally available and may be misused to fuel hatred in some country you know nothing about, aiding in a genocide (which is what actually happened in Myanmar with Facebook's "algorithm", which is also an AI: https://www.amnesty.org/en/latest/news/ ... ew-report/). Hence, how can such a committee meaningfully consider all these things while keeping the monitoring process as streamlined as possible?

Haymarket Riot wrote:How would you recommend broadening this requirement in a manner that would not require people to take a college course? That in my view would be overreach. Additionally, I have left up specific licensure requirements to national governments so they can best make decisions for industries within their respective nations.


Nobody said anything about "college courses", but the draft says that developers are required to be trained. Hence, if you want comprehensive training, people would need to take some kind of course. If you don't like the wording "courses", it could be replaced by "comprehensive training", but in the end people would still need to take some kind of course and receive a certificate.

Haymarket Riot wrote:This would be resolved by addressing IRB concerns. One remedy to this is to scrap problematic parts of a project but continue with the other aspects, or to begin a new project with a derived model. This is sufficiently narrow, in other words.


It depends on what "addressing" means. If "addressing" means "we are working on it, we won't publish", that's fine. If it means "we have a solution", it might not be. If you want to bring a new car onto the market but the responsible authority says "no, you need to address those issues first", you have all the time you want to bring a new version to the table. I want the same for AI models.


Postby Haymarket Riot » Thu May 02, 2024 8:29 am

Tigrisia wrote:Creating an extensive list is in many cases a bad idea. Such lists often lead to laws being revised at a high pace while still lagging significantly behind certain developments. I recommend a risk-based approach instead of a technology-based one. That means it is not the technology behind the AI that constitutes the reason it needs to be monitored, but the risk associated with the usage of the technology. (OOC: I recommend looking at the EU AI Act as an inspiration for what I mean by that.)

There is already a risk-based approach built into this proposal via the IRB. I am not interested currently in regulating other forms of AI for reasons I have already described explicitly.

Tigrisia wrote:Meaning if a student wants to develop something with deep learning as part of their bachelor thesis, they first need to file a request with government regulators? Come on, that leads to a lot of paperwork and accomplishes nothing.

I'm happy to include an exemption for educational purposes, but please also note that this only regulates public-facing deep learning systems. Many ML projects are doable without data involving the public.

Tigrisia wrote:OOC: The data sets I am talking about are made specifically for deep learning systems. Take the LAION-5B dataset, for example, which was made to be utilized in AI systems. LAION, on their own, do not create models but just collect and prepare data used for AI models (meaning you get a large text file that tells you where all your images are and what they contain, and you throw it into your algorithm). It contains 5 billion images, and it is impossible to verify these images in an appropriate timeframe without either massive automatisation (making the verification much less accurate) or a large number of employees, hindering SMEs or universities from innovating.

I believe they would then count as developing entities, if the datasets are made specifically for deep learning systems. I can rewrite my definition of a developing entity to clarify that.

Tigrisia wrote:Legacy models are deep learning models that are already in use but would fall under the scope of this regulation if they were to be created afterwards.

I'm happy to write language about regulating review of prior-developed AIs into this proposal.

Tigrisia wrote:Bad example, I agree. Then, let's say an AI monitoring an electricity grid and applying measures if necessary. Not really "interacting."

That's still interacting with data about specific consumers, and thus interacts with the public.

Tigrisia wrote:OOC: The difficult thing is the following: which communities are affected by your AI system? And if everyone might be affected (systems like ChatGPT), how can you filter out the ones that are important for your project? I mean, most systems are globally available and may be misused to fuel hatred in some country you know nothing about, aiding in a genocide (which is what actually happened in Myanmar with Facebook's "algorithm", which is also an AI: https://www.amnesty.org/en/latest/news/ ... ew-report/). Hence, how can such a committee meaningfully consider all these things while keeping the monitoring process as streamlined as possible?

Regulations in that regard can be nation-specific or treaty-specific. I'm not interested in encroaching on national sovereignty more than is absolutely necessary, and in my view regulating specific systems appropriately as you have described is not in the purview of this proposal. It is possible for nations to ban access to certain platforms, or to ban data exchange with other nations, or with specific companies in other nations, etc. This proposal is meant to regulate development, not trade law.

Tigrisia wrote:Nobody said anything about "college courses", but the draft says that developers are required to be trained. Hence, if you want comprehensive training, people would need to take some kind of course. If you don't like the wording "courses", it could be replaced by "comprehensive training", but in the end people would still need to take some kind of course and receive a certificate.

That works for me.

Tigrisia wrote:It depends on what "addressing" means. If "addressing" means "we are working on it, we won't publish", that's fine. If it means "we have a solution", it might not be. If you want to bring a new car onto the market but the responsible authority says "no, you need to address those issues first", you have all the time you want to bring a new version to the table. I want the same for AI models.

I agree it does depend. That ambiguity is built in on purpose so that a wide variety of situations, from minor data issues to massive overreaches against consumer privacy, can be addressed by one kind of body, the IRB, which can vary in composition and scope appropriately.

Draft 3 incoming.
Edit: it's up


Postby Tigrisia » Fri May 03, 2024 4:54 am

Haymarket Riot wrote:I am not interested currently in regulating other forms of AI for reasons I have already described explicitly.


What about, let's say, Evolutionary Algorithms that are even able to optimize deep learning systems? See here: https://www.science.org/content/article ... all-itself

Something that should definitely be regulated.

Haymarket Riot wrote:I'm happy to include an exemption for educational purposes, but please also note that this only regulates public-facing deep learning systems. Many ML projects are doable without data involving the public.


We recommend an exception for solely scientific purposes; however, if such a model is afterwards used in "production scenarios", it shall go through a certification system.

Haymarket Riot wrote:I believe they would then count as developing entities, if the datasets are made specifically for deep learning systems. I can rewrite my definition of a developing entity to clarify that.


We would appreciate that.

Haymarket Riot wrote:I'm happy to write language about regulating review of prior-developed AIs into this proposal.


That would be great. I also recommend adding regular reviews, at least for certain types of models used in high-risk applications, so that issues that could not be foreseen during the model's development can be addressed later.

Haymarket Riot wrote:That's still interacting with data about specific consumers, and thus interacts with the public.


Not necessarily. (OOC: At university, we once developed a dummy system for such an application based on weather and similar data.) These models would most likely work on aggregated or non-personal data, such as weather readings, the total power consumption in a certain area at a certain point in time, the date, and certain events (OOC: the day of the Super Bowl is always very stressful for people in the energy sector, as everybody watches at the same time). When things go wrong in the systems that manage the power supply, the consequences can be devastating.
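(OOC: To make the aggregated-data point concrete, here is a minimal Python sketch of such a load forecaster on synthetic citywide data. Every feature name, coefficient, and model choice is invented for illustration; it is a sketch of the idea, not of any real system.)

# Toy load forecaster using only aggregated, non-personal features:
# weather, calendar, and a broadcast-event flag. No individual's data.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 1000

temperature = rng.normal(15, 8, n)      # outdoor temperature in C
hour = rng.integers(0, 24, n)           # hour of day
weekday = rng.integers(0, 2, n)         # 1 = working day
big_event = rng.binomial(1, 0.02, n)    # e.g. a championship broadcast

# Synthetic "true" citywide load in MW: heating/cooling demand, an
# evening peak, and a spike when everybody watches the same broadcast.
load = (500
        + 4.0 * np.abs(temperature - 18)
        + 60.0 * np.exp(-((hour - 19) ** 2) / 8.0)
        + 80.0 * big_event
        + rng.normal(0, 10, n))

X = np.column_stack([temperature, hour, weekday, big_event])
model = GradientBoostingRegressor().fit(X[:800], load[:800])
print("held-out R^2:", model.score(X[800:], load[800:]))

Nothing in there touches any individual household, yet a bad model of this kind can still black out a city.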

Haymarket Riot wrote:Regulations in that regard can be nation-specific or treaty-specific. I'm not interested in encroaching on national sovereignty more than is absolutely necessary, and in my view regulating specific systems as you have described is not within the purview of this proposal. Nations can ban access to certain platforms, or ban data exchange with other nations or with specific companies in other nations, and so on. This proposal is meant to regulate development, not trade.


No, we are not talking about trade law here. We are just talking about the implications a bad AI model can have at the other end of the multiverse.

Haymarket Riot wrote:That works for me.


It would be great if you could formulate this.

For the delegation of the Federal Republic of Tigrisia at the World Assembly,
Vice Ambassador Claus Sato
Chargé d'affaires ad interim

User avatar
Haymarket Riot
Secretary
 
Posts: 39
Founded: Aug 29, 2023
Scandinavian Liberal Paradise

Postby Haymarket Riot » Fri May 03, 2024 6:29 am

Tigrisia wrote:What about, let's say, evolutionary algorithms, which are even able to optimize deep learning systems? See here: https://www.science.org/content/article ... all-itself

Something that should definitely be regulated.

I'm aware of genetic algorithms, and they aren't inherently black box. If they are involved in the optimization of a deep learning system, they can then be regulated under the conditions already set by the proposal.
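(OOC: For concreteness, a toy Python sketch of what "an evolutionary algorithm optimizing a deep learning system" can look like: a tiny genetic search over a network's hyperparameters. The genome layout, fitness function, and all numbers are invented for illustration; it's a sketch under those assumptions, not a reference implementation.)

# Toy genetic algorithm tuning a small neural network's hyperparameters.
# The MLP is the "deep learning system"; the evolutionary loop merely
# searches over (hidden_units, learning_rate) genomes around it.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + rng.normal(0, 0.1, 300)

def fitness(genome):
    """Held-out R^2 of a network built from a (hidden_units, lr) genome."""
    hidden, lr = genome
    net = MLPRegressor(hidden_layer_sizes=(int(hidden),),
                       learning_rate_init=float(lr),
                       max_iter=500, random_state=0)
    net.fit(X[:200], y[:200])
    return net.score(X[200:], y[200:])

def mutate(genome):
    """Randomly perturb one parent genome."""
    hidden, lr = genome
    return (max(2, hidden + int(rng.integers(-4, 5))),
            float(np.clip(lr * rng.uniform(0.5, 2.0), 1e-4, 1e-1)))

population = [(int(rng.integers(4, 33)), float(rng.uniform(1e-3, 1e-1)))
              for _ in range(6)]

for generation in range(5):
    ranked = sorted(population, key=fitness, reverse=True)  # evaluate
    parents = ranked[:2]                                    # select
    children = [mutate(parents[int(rng.integers(0, 2))])    # vary
                for _ in range(len(population) - 2)]
    population = parents + children                         # elitism

print("best (hidden_units, learning_rate):", max(population, key=fitness))

The outer loop here is transparent; the network it tunes is not, which is why the network, not the search, is what the proposal regulates.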

We recommend an exception for solely scientific purposes; however, if such a model is afterwards used in "production scenarios", it shall go through a certification system.

That's a step too far. Science can be biased in much the same way as industry, and scientific research often requires similar government approval and certification before it can proceed. If a model is published in a paper, that has real effects on industry, which often takes its cues on new technologies and applications from public research. A blanket exemption also leaves room for a loophole that already exists elsewhere: industry can simply fund scientists at research institutions to build its AI, and determine which models would be most opaque to regulators before reaching any stage of review. That is a nightmare scenario.

That would be great. I also recommend adding regular reviews, at least for certain types of models used in high-risk applications, so that issues that could not be foreseen during the model's development can be addressed later.

Regular reviews can already be required by the government where necessary under the IRB provisions.

Not necessarily. (OOC: At university, we once developed a dummy system for such an application based on weather and similar data.) These models would most likely work on aggregated or non-personal data, such as weather readings, the total power consumption in a certain area at a certain point in time, the date, and certain events (OOC: the day of the Super Bowl is always very stressful for people in the energy sector, as everybody watches at the same time). When things go wrong in the systems that manage the power supply, the consequences can be devastating.

Aggregated public data is still public data, in my view. If it directly measures human impacts on the world on a social scale, to me that is public.

No, we are not talking about trade law here. We are just talking about the implications a bad AI model can have at the other end of the multiverse.

My apologies; I believed you were referring to international trade law because of your reference to international communities affected by one nation's AI development. In the case you describe, the answer under my current draft is fairly simple: that's for regulators in the government to decide, not a singular company that may book faces, whatever that means. IRBs and governments are often imperfect because they are composed of fallible humans, and ensuring perfection is accordingly not a goal of the World Assembly. If certain types of experts or community members are clearly required, a government should reasonably be able to identify them according to national policies.
Last edited by Haymarket Riot on Sun May 05, 2024 9:43 am, edited 3 times in total.
Mayor of Ridgefield||Diplomatic Officer of the Augustin Alliance
IC: President Jolene Josephine Jefferson of Haymarket Riot
Formerly: Lieutenant in the Black Hawks, Delegate of Pacifica, Prime Director of Anteria
Author of SC 228 "Commend August"
I'm a chick.
"Love is wise, hatred is foolish" - Bertrand Russell

User avatar
Haymarket Riot
Secretary
 
Posts: 39
Founded: Aug 29, 2023
Scandinavian Liberal Paradise

Postby Haymarket Riot » Tue May 07, 2024 7:40 pm

Edit: Draft four is up (with various edits suggested by colleagues).

As a side note, I ended up deciding that governments themselves can regulate how universities implement these policies (especially where research is already regulated), so the burden on university studies need not be significant if that is a concern, though governments may of course regulate this as they please. As a result, I have removed the clause exempting educational deep learning development from the project-summary requirement.
Mayor of Ridgefield||Diplomatic Officer of the Augustin Alliance
IC: President Jolene Josephine Jefferson of Haymarket Riot
Formerly: Lieutenant in the Black Hawks, Delegate of Pacifica, Prime Director of Anteria
Author of SC 228 "Commend August"
I'm a chick.
"Love is wise, hatred is foolish" - Bertrand Russell

User avatar
The Ice States
GA Secretariat
 
Posts: 3026
Founded: Jun 23, 2022
Compulsory Consumerist State

Postby The Ice States » Thu May 09, 2024 3:30 pm

I think the idea is fine, but it would be helpful to introduce more detail; for example, what does it mean for training to be "comprehensive"?
Factbooks · 46x World Assembly Author · Festering Snakepit Wiki · WACampaign · GA Stat Effects Data

Posts in the WA forums are Ooc and unofficial, absent indication otherwise.
Please check out my roleplay thread The Battle of Glass Tears!
WA 101 Guides to GA authorship, campaigning, and more.

User avatar
Haymarket Riot
Secretary
 
Posts: 39
Founded: Aug 29, 2023
Scandinavian Liberal Paradise

Postby Haymarket Riot » Thu May 09, 2024 3:59 pm

The Ice States wrote:I think the idea is fine, but it would be helpful to introduce more detail; for example, what does it mean for training to be "comprehensive"?

Thank you! I'll have to meditate on where I can introduce more specificity into my proposal without obfuscating what's clearly already difficult-to-parse terminology.
Mayor of Ridgefield||Diplomatic Officer of the Augustin Alliance
IC: President Jolene Josephine Jefferson of Haymarket Riot
Formerly: Lieutenant in the Black Hawks, Delegate of Pacifica, Prime Director of Anteria
Author of SC 228 "Commend August"
I'm a chick.
"Love is wise, hatred is foolish" - Bertrand Russell

User avatar
Tigrisia
Envoy
 
Posts: 276
Founded: Dec 22, 2023
Democratic Socialists

Postby Tigrisia » Fri May 10, 2024 7:01 am

Haymarket Riot wrote:I'm aware of genetic algorithms, and they aren't inherently black box. If they are involved in the optimization of a deep learning system, they can then be regulated under the conditions already set by the proposal.


But these systems are also used in highly sensitive areas, and they are not inherently white box either; hence, we still recommend including them.

Haymarket Riot wrote:‘Deep learning system’: A machine learning system composed of multi-layered neural network(s).


Does this just refer to hidden layers or any layer at all?

For the delegation of the Federal Republic of Tigrisia at the World Assembly,
Prof. Dr. Phoenix Kim
Science Attachée
Last edited by Tigrisia on Fri May 10, 2024 8:48 am, edited 2 times in total.

User avatar
Haymarket Riot
Secretary
 
Posts: 39
Founded: Aug 29, 2023
Scandinavian Liberal Paradise

Postby Haymarket Riot » Fri May 10, 2024 7:07 am

Tigrisia wrote:But these systems are also used in highly sensitive areas, and they are not inherently white box either; hence, we still recommend including them.

We disagree about proposal scope, I think. Arguably, no type of ML is inherently white box if this is the level of scrutiny you are intent on applying.

Does this just refer to hidden layers or any layer at all?

I'll reword this as 'neural networks with hidden layers'. Good catch.
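(OOC: For anyone unsure what the reworded definition captures, here is a minimal numpy sketch of the smallest structure it covers: input, one hidden layer, output. Sizes and weights are arbitrary; a model with no hidden layer, like plain logistic regression, would fall outside the definition.)

# Smallest structure the definition covers: one hidden layer between
# input and output. The hidden activations are exactly the part humans
# never directly observe -- the "black box" the draft worries about.
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=3)        # 3 input features

W1 = rng.normal(size=(5, 3))  # input -> hidden layer (5 units)
b1 = np.zeros(5)
W2 = rng.normal(size=(1, 5))  # hidden layer -> output
b2 = np.zeros(1)

hidden = np.tanh(W1 @ x + b1) # hidden-layer activations
output = W2 @ hidden + b2
print(output)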
Mayor of Ridgefield||Diplomatic Officer of the Augustin Alliance
IC: President Jolene Josephine Jefferson of Haymarket Riot
Formerly: Lieutenant in the Black Hawks, Delegate of Pacifica, Prime Director of Anteria
Author of SC 228 "Commend August"
I'm a chick.
"Love is wise, hatred is foolish" - Bertrand Russell

User avatar
Tigrisia
Envoy
 
Posts: 276
Founded: Dec 22, 2023
Democratic Socialists

Postby Tigrisia » Fri May 10, 2024 8:48 am

Haymarket Riot wrote:We disagree about proposal scope, I think. Arguably, no type of ML is inherently white box if this is the level of scrutiny you are intent on applying.


We do. While you want to regulate only deep learning, we want to regulate all types of artificial intelligence that carry a certain risk of misuse (OOC: I want to take, more or less, the approach of the EU AI Act).

For the delegation of the Federal Republic of Tigrisia at the World Assembly,
Prof. Dr. Phoenix Kim
Science Attachée

User avatar
Haymarket Riot
Secretary
 
Posts: 39
Founded: Aug 29, 2023
Scandinavian Liberal Paradise

Postby Haymarket Riot » Sat May 11, 2024 6:40 am

Alright! I think I'll let this rest for a few days longer if anyone has final comments, but it's close to submittable quality in my view. Thank you everyone for your input, it means a lot to me.
Mayor of Ridgefield||Diplomatic Officer of the Augustin Alliance
IC: President Jolene Josephine Jefferson of Haymarket Riot
Formerly: Lieutenant in the Black Hawks, Delegate of Pacifica, Prime Director of Anteria
Author of SC 228 "Commend August"
I'm a chick.
"Love is wise, hatred is foolish" - Bertrand Russell

User avatar
Haymarket Riot
Secretary
 
Posts: 39
Founded: Aug 29, 2023
Scandinavian Liberal Paradise

Postby Haymarket Riot » Mon May 13, 2024 6:01 am

https://www.nationstates.net/page=UN_view_proposal/id=haymarket_riot_1715605149

I have submitted my proposal.

Mayor of Ridgefield||Diplomatic Officer of the Augustin Alliance
IC: President Jolene Josephine Jefferson of Haymarket Riot
Formerly: Lieutenant in the Black Hawks, Delegate of Pacifica, Prime Director of Anteria
Author of SC 228 "Commend August"
I'm a chick.
"Love is wise, hatred is foolish" - Bertrand Russell

User avatar
The Overmind
Diplomat
 
Posts: 946
Founded: Dec 12, 2022
Authoritarian Democracy

Postby The Overmind » Mon May 13, 2024 9:29 am

Haymarket Riot wrote:https://www.nationstates.net/page=UN_view_proposal/id=haymarket_riot_1715605149

I have submitted my proposal.

I recommend waiting for more feedback before doing this; only a few people have laid eyes on the draft so far. For a seasoned author, a month between drafting and submitting is sometimes quoted as a rule of thumb, but in reality, working through this process properly, so that you maximize your chances at vote, takes a few weeks to a few months depending on complexity and engagement level.
Free Palestine

Trans men are men | Trans women are women | Sex is non-binary
Assigned sex isn't biological sex | Trans rights are human rights


Neuroscientist | Formerly Heavens Reach | He/Him/His

User avatar
Haymarket Riot
Secretary
 
Posts: 39
Founded: Aug 29, 2023
Scandinavian Liberal Paradise

Postby Haymarket Riot » Tue May 14, 2024 5:00 am

The Overmind wrote:
Haymarket Riot wrote:https://www.nationstates.net/page=UN_view_proposal/id=haymarket_riot_1715605149

I have submitted my proposal.

I recommend waiting for more feedback before doing this; only a few people have laid eyes on the draft so far. For a seasoned author, a month between drafting and submitting is sometimes quoted as a rule of thumb, but in reality, working through this process properly, so that you maximize your chances at vote, takes a few weeks to a few months depending on complexity and engagement level.

Hi, sorry I didn't see this sooner. In the future, I'll definitely be more mindful of that!
Mayor of Ridgefield||Diplomatic Officer of the Augustin Alliance
IC: President Jolene Josephine Jefferson of Haymarket Riot
Formerly: Lieutenant in the Black Hawks, Delegate of Pacifica, Prime Director of Anteria
Author of SC 228 "Commend August"
I'm a chick.
"Love is wise, hatred is foolish" - Bertrand Russell

User avatar
Haymarket Riot
Secretary
 
Posts: 39
Founded: Aug 29, 2023
Scandinavian Liberal Paradise

Postby Haymarket Riot » Thu May 16, 2024 2:37 pm

Hi, I've decided to follow the advice in this thread and take a little more time to tinker with this proposal and incorporate additional feedback. Thank you all for your patience.
Mayor of Ridgefield||Diplomatic Officer of the Augustin Alliance
IC: President Jolene Josephine Jefferson of Haymarket Riot
Formerly: Lieutenant in the Black Hawks, Delegate of Pacifica, Prime Director of Anteria
Author of SC 228 "Commend August"
I'm a chick.
"Love is wise, hatred is foolish" - Bertrand Russell
