Standard for the Protection of Workers’ Rights in the Implementation of Artificial Intelligence in Enterprises
Artificial intelligence is rapidly transforming the world of work, bringing both new opportunities and risks. The international community recognizes that a fair future of labour relations requires proactive measures for the responsible implementation of AI.
The mission of this standard is to ensure a human-centered and socially fair approach to digital transformation.
The International Labour Organization (ILO) emphasizes that the potential of artificial intelligence must be realized only with reliable guarantees for workers, to prevent the deepening of inequality.
Several countries are already taking steps in this direction – for example, the EU is introducing rules on algorithmic management of labour (prohibition of dismissals solely by algorithm, the right to human oversight of decisions, etc.), while Spain has adopted a law on algorithmic transparency for workers.
This standard, initiated by the Council of Trade Unions of Central Asian countries, builds on the best international practices (ILO, EU, OECD, and national cases of participating countries) and adapts them to the regional context to ensure decent work in the age of AI.
Legal status and scope of application. The standard is intended for use as the basis for legislative initiatives or the development of regulatory acts adopted at the level of trade unions and employers. Its provisions apply to all sectors and organizations within the Central Asian countries when implementing artificial intelligence systems that affect working conditions, employment, or workers’ rights. The standard sets minimum guarantees, which may be clarified and expanded by national legislation, collective agreements, general and sectoral agreements.
Terms and definitions: For the purposes of this standard, the following basic terms are used:
- Artificial Intelligence (AI) — the ability of a technical system to perform cognitive functions characteristic of humans.
- Artificial Intelligence Systems (AIS) — software systems and algorithms (including machine learning, neural networks, expert systems, etc.) capable of automatically performing tasks or making decisions that may affect the labor process, working conditions, and workers’ rights.
- Worker — a person in an employment relationship with an employer (including under an employment contract, agreement, or actual employment in platform/gig work).
- Employer — an organization or individual hiring workers and implementing AI technologies in their activities.
- Workers’ representatives — bodies or individuals authorized to represent the interests of the workforce (trade unions, elected trustees, etc.).
- AI implementation — the development, acquisition, or use by an employer of any artificial intelligence systems that may affect workers — including automation of production operations, algorithmic personnel management, decisions on hiring, evaluation, reward, or dismissal made using AI, monitoring of workers using AI systems, etc.
1. General principles for the fair and ethical implementation of AI in the workplace
1.1. Human-centered approach and social justice. The implementation of AI must be carried out in the interests of people. Technologies should work for people, not against them—improving working conditions and safety, and promoting decent work and social justice. Any use of AI in production must comply with the principle that the rights and dignity of the worker take priority over economic benefit or technical expediency.
1.2. Respect for human and labor rights. AI systems must be developed and used with due regard for the fundamental rights of workers. The use of AI systems that violates labor laws, international labor standards, or ILO conventions is prohibited. In particular, AI must not be used to undermine workers’ rights to freedom of association and collective bargaining (for example, by monitoring or interfering with trade union activity). Algorithms must not discriminate against workers on the basis of gender, age, race, nationality, language, religion, political beliefs, trade union membership, health, or any other characteristics protected by law. Employers are obliged to ensure that the use of AI does not undermine workers’ legally guaranteed rights, worsen their working conditions, or pose risks to their safety or well-being.
1.3. Transparency and accountability. Automated systems in the field of work cannot be a “black box” — their functioning must be as transparent and understandable as possible for those affected by them. Workers and their representatives have the right to know what decisions AI makes and on the basis of which data and rules. The employer is responsible for the consequences of using AI and must take measures to prevent erroneous or unlawful algorithmic decisions. Mechanisms for verification, audit, and accountability must be in place: if AI makes a decision affecting a worker, there must remain the possibility to check the accuracy and validity of that decision and, if necessary, to challenge it (see the section on transparency and control below).
1.4. The principle of complementation, not substitution (together, not instead). The primary ethical goal of AI implementation is to improve the quality of work, not to replace workers. AI should be applied to automate routine, hazardous, or physically demanding tasks so as to free up employees’ time for more creative and complex activities, skill development, and similar purposes. It is unacceptable to use AI solely for staff reductions or labour cost savings — on the contrary, digitalisation should benefit both employers and workers. Work organisation during the introduction of new technologies must be structured to ensure that no one is left behind in the process of digital transformation (the principle of a just transition). All workers whose tasks change due to automation must be given the opportunity to find their place in the new production system, through retraining or new roles.
1.5. Safety, health and personal integrity. The use of AI should not worsen occupational safety and health. Algorithms that manage production or distribute tasks must take into account workload limits, rest periods, and other occupational safety requirements.
Process automation must not lead to work intensification that threatens workers’ health. Likewise, AI must not be used for total digital surveillance that violates the right to privacy: technologies of covert monitoring or the collection of worker data outside the scope of work processes are unacceptable. In particular, analyzing personal information unrelated to work (such as private correspondence, biometric data without consent, emotional state, or personal beliefs) is prohibited.
The confidentiality of workers’ personal data during any automated processing must be strictly maintained in accordance with national legislation and international standards.
1.6. Social dialogue and cooperation between the parties. Fair implementation of AI requires active participation of workers and their representatives at all stages — from planning to system operation. The principle of partnership and consideration of the collective’s opinion is key. Only through dialogue between employers and trade unions can digitalisation be ensured to serve the interests of all parties. In the context of digital transformation, the role of trade unions as defenders of workers’ rights does not weaken but strengthens. This standard is based on the understanding that social partnership is the best tool for preventing negative consequences of AI implementation and for maximising its benefits for both the economy and workers.
2. Mechanisms for taking into account the opinions of workers and their representatives in the implementation of AI
2.1. Mandatory consultations with workers. The employer is obliged to inform workers and their representatives about the intention to introduce a new AI system that affects working conditions and to conduct consultations before its implementation, in accordance with the procedure established in this standard.
In organisations where there is a trade union or works council, the employer must negotiate the conditions and procedures for the introduction of new technologies in the workplace.
2.2. Joint committees and working groups on AI. To systematically take into account workers’ opinions, it is recommended to establish permanent joint committees on digital technologies. Such a body should be composed on a parity basis of representatives of the employer (AI specialists, IT department, managers, HR staff) and representatives of the workers (delegated by the trade union or the workers’ assembly).
The tasks of the committee include: preliminary assessment of AI systems planned for implementation, analysis of their impact on staff, and preparation of proposals for the safe and fair integration of the necessary technologies. The committee must have access to information about the functioning of algorithms and the right to request explanations from developers. The existence of such a mechanism will institutionalise workers’ voice in digital transformation and turn it from a formal, one-off action into an ongoing practice.
2.3. Taking feedback into account during system operation and improvement. It is important to recognise that the introduction of AI is not a one-time act — after a system is launched, feedback from workers must continue to be collected. The employer should provide channels (for example, regular surveys, suggestion boxes, or meetings) to gather proposals and comments from workers regarding the functioning of AI systems (accuracy of decisions, convenience, identified problems).
The feedback received must be discussed with workers’ representatives and taken into account when refining algorithms or related business processes. For example, if a shift-scheduling algorithm causes complaints about inconvenient timetables, the employer should work together with workers to adjust the scheduling rules. Such adaptability, based on staff input, ensures that AI genuinely contributes to improving working conditions rather than causing harm.
2.4. Right to objection and suspension of implementation. If workers’ representatives conclude that a planned AI system poses significant risks to workers (such as mass layoffs, violations of rights, or safety threats), they have the right to submit a reasoned objection to the employer. In such a case, implementation must be suspended to allow for additional consultations, assessments, or negotiations. The employer is obliged to consider the objections, provide justification on disputed issues, and, where possible, adjust the implementation project or propose compensatory protective measures for workers. If no compromise is reached during the dialogue, the matter may be referred to labour dispute bodies, the labour inspectorate, or other institutions provided by law, in order to resolve the situation in accordance with legislation.
3. Standards of transparency and explainability of AI decisions
3.1. Notification about the use of AI and the data applied. The employer is obliged to inform workers in advance (and, separately, their representative body) about any use of AI that affects their work. The workforce must know: in which processes AI is applied, for what purposes, what decisions it makes or recommends, and what data about workers are collected and analyzed for this purpose.
This information must be provided in an accessible form and in clear language. For example, if an algorithm for shift allocation is introduced, the employer must explain to workers which criteria (qualification, seniority, preferences, performance, etc.) are taken into account by the system when drawing up the schedule. It is unacceptable for employees to be unaware that they are being assessed or monitored by a machine. Moreover, the employer has no right to conceal the fact of AI use: even in cases where the automated decision is only auxiliary (a recommendation for a manager), the worker must be informed of the algorithmic element’s presence.
3.2. Explaining the logic of algorithms to workers’ representatives. Workers’ representatives have the right to more detailed information about AI systems that affect staff. At the request of the trade union or works council, the employer is obliged to provide information on the algorithms—at a level sufficient to assess their correctness and consequences. In particular, the core logic, parameters, and criteria embedded in the algorithm, as well as the rules by which AI assigns tasks or evaluates workers, must be disclosed. If machine learning systems are used, the employer should explain on which data sets the model was trained and which optimization metrics are applied.
Trade secrets or code protection cannot serve as an absolute barrier: a balance must be found between safeguarding intellectual property and workers’ right to know how decisions affecting them are made.
3.3. Right to explanation and reasons for AI-based decisions. Every worker who is subject to a decision involving AI (whether refusal of employment, assignment of a lower bonus, change of schedule, reprimand, transfer, dismissal, etc.) has the right to request from the employer an explanation of the reasons and factors underlying that decision. The employer is obliged to provide, within a reasonable timeframe, a clear justification that must explicitly state: the role played by AI (for example, “the performance evaluation algorithm indicated a decline in indicators”), which data and criteria were considered, and how these led to the corresponding decision.
3.4. Right to human review. In cases where a worker disagrees with a decision made (or prepared) by AI, they have the right to demand a review of the decision with human involvement. In other words, there must be a procedure for appealing automated decisions: the worker submits a request, and a competent official (manager, committee) is obliged to personally examine the situation, hear the worker, and issue a final decision without blindly relying on the machine’s conclusions. Such a review must be independent and objective. This is particularly important for serious decisions—for example, dismissal, demotion, or deprivation of a significant part of income. In line with best international practices, workers have the right to challenge a decision based on an algorithm and to demand human verification in cases where it substantially affects their labour rights (such as termination of an employment contract or other impacts on the worker’s status). The employer is obliged to ensure that a sufficient number of authorised staff are available to perform this oversight function and to review appeals regarding AI decisions. The final decision must involve a human who has the appropriate authority to amend or overturn the original automated decision.
3.5. Limitations on data collection and use. The principle of transparency also includes clear rules on data processing. Workers have the right to know what personal data about them is collected for the operation of algorithms, who uses it and for what purposes, and how long it is stored. The employer is obliged to limit data collection to only the necessary information related to employment and must not collect excessive information. It is prohibited, without strong justification, to collect or analyse sensitive data (such as health, private life, political views, etc.)—these categories of information are protected by law and may only be processed with the worker’s consent or with a specific legal authorisation.
All data used in AI systems must be processed in accordance with personal data protection laws. Workers have the right to correct their data—if an employee discovers that outdated or incorrect information about them is being used in the system (for example, an error in personal details or performance evaluation results), they may request the employer to update the information. The employer, in turn, should, where technically possible, provide the worker or their representative with a mechanism for making corrections to personal data used in the algorithms. This is essential to ensure the accuracy and fairness of AI decisions.
4. Protection against dismissals and deterioration of working conditions during automation
This section describes the key decisions that cannot be made solely on the basis of an algorithm. In practice, however, the list of such decisions may be defined by a joint agreement between the employer and the trade union—for example, as an annex to a collective agreement.
4.1. Prohibition of automatic dismissals without human involvement. Dismissal of a worker cannot be carried out solely on the basis of an algorithmic decision. A direct ban is introduced on situations where a computer program independently “decides” to dismiss an employee without assessment and confirmation by a responsible official. Any decision to terminate an employment contract must be made by a human being (the employer or an authorised manager), with individual consideration of all circumstances. This ensures protection of workers from impersonal automatic HR decisions.
This standard extends such a guarantee to all workers in all sectors. If, as a result of AI implementation, the employer faces the need to reduce the workforce or staff numbers, this must take place only within the legally established procedures (advance notice within the required period, offering alternative work, severance pay, etc.) and with proper consultation with trade unions.
4.2. Protection against the deterioration of working conditions. The introduction of AI must not lead to regression in labour rights and conditions. The employer has no right to worsen employees’ working conditions by citing automation. In particular:
- No reduction in pay or loss of benefits is allowed on the grounds that part of the work has been taken over by a machine. If an employee’s duties have decreased due to the introduction of AI, this is not a reason to cut salaries or withhold bonuses — on the contrary, the freed-up time should be used for mastering new tasks or upskilling. Workload standards and quotas must not be raised unreasonably.
- The presence of AI assistants or robots must not result in higher workload demands than those specified in the employment contract or established norms prior to automation, without appropriate additional pay or compensation. (For example, if a route-planning algorithm is introduced that speeds up courier work, this does not mean that delivery quotas can automatically be doubled — changes to labour norms must be discussed with employees.)
- Preservation of social guarantees: digitalisation must not erode existing benefits and protections (additional leave, reduced working hours for certain categories, rest breaks, etc.). For instance, if a company had an additional rest break for monotonous manual work, and after introducing a robot some operations are automated, this must not automatically mean the cancellation of the break without analysing the actual workload and agreeing with the workforce.
4.3. Principle of non-regression. The implementation of new technologies must not worsen the situation of workers compared to the previous state. This principle aligns with the doctrine of non-regression in labour law. If, as a result of AI introduction, there arises an objective possibility to improve conditions (for example, reduce routine duties), the employer should seek to direct the benefit toward improving job quality, rather than impoverishing the content of work. Savings achieved through automation should, as a matter of fairness, be partially redistributed in favour of people — by reducing work intensity, cutting overtime, increasing pay, or through other forms. With proper, controlled implementation of AI, it is possible to avoid serious negative disruptions and reduce routine tasks without the need for mass dismissals.
4.4. Obligation to seek alternative employment. If the introduction of AI does indeed lead to redundancy in certain jobs (for example, when automation of a specific operation eliminates the need for some positions), the employer is obliged to make efforts to place the affected workers in other positions. Instead of direct dismissal, alternative employment within the organisation (if vacancies exist) or retraining for a new role must be offered. Dismissal is permissible only as a last resort, when all possibilities to preserve employment have been exhausted. Even in such cases, employers are encouraged to cooperate with trade unions and state employment services to help redundant workers find new jobs — through retraining programmes, counselling, and similar measures. No one should be left alone to face technological progress.
4.5. Prohibition of unjustified dismissals under the pretext of AI. This standard introduces a ban on the practice where an employer uses automation as a formal pretext for staff reductions that are in fact unrelated to the actual implementation of technologies. For example, a declaration that “AI will replace everyone” cannot serve as a cover for getting rid of unwanted workers. Any downsizing justified by digitalisation must have a documented basis: the actual introduction of a specific technology that has led to workforce optimisation. At the same time, the trade union or other worker representatives have the right to request justification from the employer: which functions are no longer necessary, by which technology they have been replaced, and to ensure that the decision is not discriminatory or unlawful. In cases where abuses are identified (e.g., fictitious automation that does not actually reduce work but is used as a pretext to dismiss people), workers may challenge such dismissals as lacking a legal basis.
5. Right to retraining and retention of employment
5.1. The right of every worker to training when AI is introduced. Every worker whose job functions are affected by the introduction of new technologies has the right to free training to adapt to the changes. The employer is obliged to organise appropriate programmes for retraining, upskilling, or mastering new skills necessary to work under conditions of digitalisation. Training must be provided at the employer’s expense (or with the support of government programmes, if available) and, where possible, during working hours or with retention of average earnings. The goal is to give workers the knowledge and skills that will allow them to effectively use new AI systems in their work, or to master new job responsibilities if the structure of jobs changes. For example, if a worker previously performed assembly operations manually, and now the production process is automated with robotics, they should be offered training in robot adjustment and monitoring skills. Training should precede or accompany the introduction of AI so that employees are prepared for the changes by the time they occur.
5.2. Retraining for new positions. In cases where automation leads to the disappearance of certain professions or a significant reduction of jobs in a specific field, the employer, together with state employment agencies and trade unions, must ensure retraining programmes for workers into other in-demand professions. In other words, a worker who has lost their previous job due to the introduction of AI has the right to be trained in a new profession and to apply for a vacancy (preferably one with conditions and pay not worse than before). The aim is to maximise the preservation of employment levels. The worker is not simply dismissed, but transferred (after training) to a new job, possibly within the same company or industry.
5.3. Employment guarantees during training. While a worker is undergoing a retraining programme, their job position—or at least a guarantee of employment—must be preserved. An employer introducing automation is obliged, wherever possible, not to dismiss workers immediately, but to first offer them the opportunity to take upskilling or retraining courses. During the training period, the worker retains their position (or a new one is reserved for them) and, if possible, their average earnings (either fully or partially, depending on agreements). This approach prevents situations where a worker loses their job and only afterwards studies something new without any employment guarantee. Ideally, the process should follow this path: old functions are reduced → the worker is retrained → the worker is provided with an updated position. If training is lengthy or it is impossible to keep the job position, it is recommended to pay a stipend or retraining allowance.
5.4. Consideration of different worker groups. Training and retraining programmes must take into account workforce diversity. It is especially important to provide equal learning opportunities for vulnerable groups: older workers, who may find it harder to master new technologies; workers with disabilities, who require special conditions; and low-skilled employees, for whom the transition to the new digital economy is particularly challenging. Employers, together with the state, must develop adapted training courses and use methods such as mentoring and coaching, so that no worker is left without support in acquiring digital skills. Conditions must be created to allow the workforce to evolve together with technology.
5.5. Support from the state and trade unions in training. Retraining personnel in the context of a technological leap requires coordinated efforts by employers, trade unions, and government bodies. It is recommended to conclude tripartite agreements (at national and sectoral levels) on the implementation of advanced training programmes designed for the needs of the digital economy. The state can provide financial support for such programmes (e.g., subsidies for training, tax incentives for employers who train staff), while trade unions can participate in developing curricula, monitoring their quality, and disseminating information on training opportunities to workers. Enterprises can pool resources to create training centres for digital competences. Such cooperation ensures a strategic approach: instead of one-off courses, a system will be built that allows the workforce’s skills to be continuously updated.
5.6. Guarantees in case of refusal to undergo training. The right to retraining implies voluntariness: a worker may refuse the proposed training or a new position. However, such refusal must not be used by the employer as grounds for immediate dismissal. For example, if an employee is close to retirement and does not wish to learn new technologies, alternative options should be considered – transfer to a position where retraining is not required; early retirement on preferential terms; or, as a last resort, dismissal due to staff redundancy with all payments provided by law and additional compensation (increased severance pay). In other words, even if a worker refuses training, a humane approach must be taken, and the best possible solution for their future must be sought.
6. Features of collective bargaining in the context of digital transformation
6.1. Issues related to AI as a subject of collective bargaining. The introduction of AI systems and the consequences of labour digitalisation are classified by this standard as important issues to be discussed within the framework of social partnership. Employers and trade unions must include the topic of AI use on the agenda of collective bargaining at all levels – from enterprise-level (collective agreements) to sectoral and national levels (general agreements, sectoral agreements).
The list of issues for negotiation may include: conditions and procedures for introducing new technologies, staff training, redistribution of jobs, employment guarantees, algorithm oversight, protection of employee data, compensatory measures in the event of redundancies, and others. In fact, everything set out in this standard may and should be specified in collective agreements. For the successful introduction of AI, it is essential to establish agreements between employers and employees so that both sides clearly understand their rights and obligations.
6.2. Adaptation of traditional collective agreements. Trade union organisations are advised to review and supplement standard collective agreements in light of digitalisation. Many existing agreements may have been developed before the mass introduction of IT, never fundamentally revised, and therefore contain no special provisions on AI. Now, sections or annexes should be added addressing: informing the union committee about new technologies; joint analysis of their impact; workers’ rights in the context of AI introduction; training and retraining; job preservation, etc. Such provisions will give contractual and legal force to the principles outlined in this standard and make them part of the employer’s specific obligations towards workers.
6.3. Bargaining on work intensification and distribution of benefits. Another subject of collective bargaining should be productivity and the distribution of the results of automation. Possible topics for negotiation include: reducing the workweek without lowering pay as productivity increases; establishing bonuses or a share for workers in the economic benefits of technology adoption; and measures to prevent excessive workloads. The goal is to share the fruits of digitalisation fairly, improving the position of workers as well as increasing profits. In addition, it is important to agree on acceptable norms of electronic monitoring: trade unions may limit the range of indicators that the employer is entitled to track with AI (for example, not constantly monitoring an employee’s location via GPS outside of work tasks, not recording audio/video except for safety purposes, etc.). All these issues should be settled collectively to prevent businesses from unilaterally imposing inconvenient practices.
6.4. Strengthening the role of trade unions and increasing competence. In the digital economy, trade unions must strengthen their capacity for effective representation. Employers are encouraged to recognise and support the role of trade unions as partners in dialogue on technology. Only with sufficient knowledge can worker representatives participate on equal terms in negotiations on digitalisation. Training of trade union leaders, negotiators, and lawyers in digital matters should begin immediately and continue on an ongoing basis.
6.5. Regional and international cooperation of trade unions. For trade unions in Central Asian countries, it is advisable to coordinate efforts to exchange best practices in AI regulation. It may be possible to establish a platform (committee) of trade unions on digitalisation issues, where common approaches and recommendations can be developed and then incorporated into national legislation.
By uniting their voices, regional trade unions can influence the adoption of global standards (in the ILO, UN), as well as negotiate agreements with transnational corporations operating in the region on standards for the use of AI. International solidarity will help build a common front for protecting workers in the digital era.
Our standard seeks to ensure that social dialogue not only does not weaken, but becomes a key factor in the successful and fair introduction of AI.
7. Rules for informing and prior discussion of AI implementation
7.1. Advance notification of the intention to introduce AI. The employer is obliged to give advance notice (a notice period of at least 2–3 months before the system is put into operation is recommended) to workers’ representatives about plans to introduce a new AI system that could affect the workforce. The notification must be in writing (paper or digital, as agreed by the parties), officially sent to the trade union committee, and include key information: what the technology is, its purpose, the expected effect for the organisation and personnel, and the planned implementation timeline. At the same time, the employer should communicate the main information to all workers (e.g., through an internal mailing or at a meeting). The purpose of advance notification is to give workers a chance to prepare and, most importantly, to initiate discussion and consultation before decisions become irreversible.
7.2. Content of information in the notification. In the notification, the employer must provide the most complete possible information about the AI implementation project. The recommended list includes:
- System description: what specific AI or software product is planned (e.g., “video analytics system for monitoring compliance with safety rules,” “algorithm for evaluating sales staff efficiency based on sales,” etc.).
- Purpose of implementation: what results are expected (increased productivity, reduced errors, resource savings, etc.).
- Scope of application: which departments or categories of staff will be affected, and what decisions or processes will be automated.
- Impact on workers: a preliminary assessment of how employees’ work will change — whether some tasks will become easier, whether new skills training will be required, whether staff reductions are planned, whether schedules or pay systems will change, etc.
- Worker protection measures: what steps the employer is ready to take to minimise negative effects (e.g., training programmes, a guarantee against layoffs for a certain period, pilot testing, etc.).
- Timeline: the planned schedule for implementation (dates for testing, pilot operation, full launch).
- Responsible persons: who on the employer’s side oversees the project and whom workers can contact for clarifications.
The more detailed and honest the information provided, the more constructive the subsequent dialogue will be. If the information is insufficient, workers’ representatives have the right to request additional data from the employer, and the employer is obliged to provide it (subject to reasonable limits of commercial confidentiality).
7.3. Consultations before implementation. After notification, the employer must hold consultations with workers’ representatives before putting the AI system into operation. Consultations mean that employers listen to the opinions and proposals of trade unions/works councils, answer their questions, and discuss possible adjustments to the plan. The goal is to reach agreements (where possible) regarding the conditions and procedures of implementation. In practice, such preliminary consultation may take the form of a series of meetings or working groups (for example, with the participation of IT specialists, management, and the union). The consultation protocol records the positions of the parties and any agreements reached. If the union makes proposals (for example, to conduct pilot testing in a limited area, to accompany implementation with a “no layoffs” agreement, or to provide additional guarantees), the employer is obliged to review them and give a reasoned response. The time spent by workers’ representatives participating in consultations is recommended to be counted as paid working time so that there are no obstacles to participation.
7.4. Pilot (trial) implementation and its discussion. In cases of large-scale changes, it is advisable to provide for a pilot period — first launching the AI system in test mode in a limited area or for a limited time (without applying strict sanctions to workers based on its decisions). It is recommended to include in collective agreements or workplace practice a provision that, when introducing new technologies, a trial operation is carried out first, its results are discussed with workers, and only then is a decision made on full-scale implementation. At the pilot stage, unions and workers can provide valuable feedback: identifying unforeseen problems, algorithmic errors, and redundant functions, or, conversely, confirming which functions are genuinely useful. This discussion should lead either to adjusted implementation (taking feedback into account) or, if the system shows serious shortcomings, to its rejection or replacement with another. The pilot approach reduces risks and helps build trust: staff see that the technology has been tested and adjusted with their input.
7.5. Risk assessment prior to implementation. As part of preparations for implementation, the employer, together with workers’ representatives, must carry out an assessment of potential risks for staff (as part of the overall risk management system when introducing AI). Such an assessment includes analysis of: which jobs may be cut; whether new safety risks may arise; whether there is a risk of discrimination against certain categories (for example, if the algorithm evaluates workers over a certain age less favorably); or whether the system may create excessive pressure on workers (for example, staff ranking with public disclosure of ratings). Based on this assessment, a report is prepared indicating the identified risks and the proposed mitigation measures. The report is presented to workers’ representatives. Ideally, on the basis of this report, the parties agree on specific steps (or include them in the collective agreement).
7.6. Regulation of hiring and testing with AI. If the employer plans to use AI in recruitment (resume screening, automated interviews, tests), they must inform the union in advance. Where possible, the relevant algorithms should be coordinated with workers’ representatives, especially to prevent discrimination. For example, the union may request anonymized test results for independent evaluation to check whether the system systematically excludes representatives of a particular gender or age group. Transparency for candidates should also be addressed—whether they are informed of the algorithm’s result, given an opportunity to retake the test, or appeal for a live interview. Although the hiring process is not directly regulated by collective agreements (in the case of external candidates), the union may propose ethical AI standards in recruitment that the employer voluntarily adopts.
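As an illustration of the kind of independent evaluation described above, a union analyst working with anonymized screening results could apply the widely used "four-fifths rule": a group's selection rate should be at least 80% of the highest group's rate. The data layout, group names, and numbers below are hypothetical assumptions for the sketch and are not part of this standard:

```python
# Illustrative sketch: checking an automated screening tool for disparate
# impact with the "four-fifths rule". All figures are hypothetical.

def selection_rates(results):
    """results: dict mapping group name -> (passed, total applicants)."""
    return {group: passed / total for group, (passed, total) in results.items()}

def four_fifths_check(results, threshold=0.8):
    """Return groups whose selection rate is below `threshold` (80% by
    default) of the best-performing group's rate, with the ratio."""
    rates = selection_rates(results)
    best = max(rates.values())
    return {group: rate / best
            for group, rate in rates.items()
            if rate / best < threshold}

# Hypothetical anonymized results from an automated resume screen
results = {
    "under_40": (60, 100),    # 60% selection rate
    "40_and_over": (30, 100), # 30% selection rate
}

flagged = four_fifths_check(results)
print(flagged)  # -> {'40_and_over': 0.5}: ratio 0.5 < 0.8, warrants review
```

A flagged ratio does not by itself prove discrimination, but it gives worker representatives a concrete, reproducible basis for requesting explanations or corrections from the employer.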
7.7. Informing and obtaining consent from the worker at the individual level. In addition to collective notification, it is important to respect the rights of each individual worker. If a worker becomes subject to AI-driven decisions (for example, the introduction of performance monitoring through an algorithm), it is recommended to obtain their acknowledgment (with a signature or electronically) confirming familiarity with the new rules. The worker must clearly understand what new requirements or criteria apply to them. In some cases, particularly when personal data collection or video surveillance is involved, the worker’s consent may be required (if mandated by personal data protection laws). The standard requires that no significant algorithmic measures be applied to a person covertly. Transparency at the individual level is key to building trust in technology.
8. Mechanisms for monitoring, oversight, and reporting on the use of AI
8.1. Internal control and employer responsibility. An employer introducing AI bears an ongoing obligation to monitor the system’s impact on workers and to comply with the provisions of this standard. For this purpose, the enterprise must designate responsible persons to oversee the operation of AI systems from the perspective of labor rights—such as an AI ethics officer, or by expanding the functions of the occupational safety department or human resources. The task is to regularly verify that algorithms function correctly, do not violate rights, and do not create dangerous situations. All automated decisions affecting people must be recorded and subject to oversight.
8.2. Independent audit of algorithms. It is recommended to conduct periodic independent audits of AI systems in use, to assess their impact on workers. Audits may be carried out either by internal departments not connected to the system developers, or with the involvement of external experts or consultants. The goal of the audit is to identify potential problems: systematic bias in decision-making, classification errors, data leaks, or non-compliance with corporate policies or the law. The results of the audit must be shared with worker representatives. According to best practices, the employer should not only conduct such assessments but also publish the main findings or report to workers on how the algorithms function and what measures are taken to improve them.
8.3. Joint supervision with worker participation. Effective monitoring requires the participation of workers in oversight. Ideally, a joint committee should regularly review reports on the operation of AI systems, be able to request the data necessary for verification, and issue recommendations. Worker representatives have the right to demand an unscheduled audit of an algorithm if signals of malfunctions or unfair decisions emerge. For example, a union may initiate a review of a bonus-calculation algorithm if many workers complain about unclear deductions. The employer is obliged to respond to such requests and work together with workers to resolve the situation. If the review reveals that the algorithm was malfunctioning (for instance, collecting data incorrectly or introducing hidden bias against a particular group), the employer must immediately take corrective measures. Joint monitoring helps prevent conflicts and allows deficiencies to be addressed promptly before they result in serious harm to workers’ rights.
8.4. Regular reporting and auditing. It is recommended that collective agreements establish the employer’s obligation to provide the union or other worker representatives with an annual report on the use of AI systems in the company. The report should reflect: which AI systems were used during the period; which workers or processes they affected; whether any incidents or complaints arose and how they were addressed; what training measures were conducted; and which technology updates are planned. Such a report can be discussed at a joint meeting of management and workers. Every few years (e.g., every three years), a full review of corporate AI policy should be conducted to determine whether new rules are needed or existing standards require updating in light of new technologies, operational experience, or legislative changes. Trade unions must be involved on equal terms with the employer in updating agreements and rules to match current realities.
8.5. Government control. It is proposed that enterprises introducing algorithmic labor management be subject to supervision by the labor inspectorate or an equivalent authority. Labor inspectors should be granted the authority to verify compliance with this standard: to request information about AI systems, check whether consultations were held, ensure no unlawful dismissals were carried out through automation, confirm that training obligations are met, and verify respect for workers’ rights, among other things. Governments may need to build up expertise in digital technologies. In addition, a mandatory reporting system could be introduced for large employers on the use of AI—for example, submitting annual reports to a designated authority with information on the algorithms used, their purposes, the number of workers affected, and the measures taken to protect them. Such data would help track trends and, if necessary, prompt regulatory changes.
8.6. Worker complaint mechanism. Workers must have a simple and effective way to file a complaint or report concerns regarding the operation of an AI system if they believe it is acting unfairly or unlawfully. The company should establish an internal procedure—for example, by including AI-related issues within the competence of the labor dispute committee or by appointing a responsible officer to handle such submissions. A worker’s complaint must be reviewed in a timely manner, with the involvement of IT specialists and a union representative if necessary. Based on the outcome, the worker should receive a reasoned response, outlining the measures taken (such as corrections, explanations of the system’s logic, etc.). It is also important that filing a complaint cannot result in negative consequences for the claimant—protection from retaliation for criticizing AI is guaranteed. Management should encourage staff to report problems rather than suppress them. If the internal review does not satisfy the worker, they have the right to appeal further to the labor inspectorate or the courts, as with any other issue related to labor rights.
8.7. Review or discontinuation of a problematic system. If monitoring reveals that an AI system systematically violates the provisions of this standard or produces undesirable effects (for example, persistent discrimination or widespread employee dissatisfaction), the employer is obliged to suspend or limit the use of such a system until corrections are made. In particularly serious cases, the technology must be fully abandoned. The principle is clear: no innovation can justify violations of workers’ rights. While businesses are free to experiment with new tools, if an experiment fails in terms of social responsibility, it must be acknowledged as a failure and terminated. Trade unions have the right to demand the shutdown of a system they consider to be irreparably harmful to labor relations; in case of disputes, the matter may be referred to government authorities or arbitration to weigh the production benefits against the harm to staff. As an option, legislation may grant labor inspectorates the authority to order an employer to cease using a specific AI system until deficiencies are remedied, if violations are proven. In this way, the filter of public interest takes precedence over technical innovation.
9. Final Provisions
9.1. Adaptation of the standard to national legislation. This draft standard is of a recommendatory nature for the countries of the Central Asian region. It is intended that its provisions be used by trade unions and social partners in developing specific regulatory acts—laws, general and sectoral agreements, collective agreements. When implementing, it is necessary to take into account the specifics of each country’s legal system (identifying responsible bodies, control procedures, etc.). However, the basic principles of the standard are universal and must be preserved: in any adaptation it is important not to weaken the level of protection provided by this document, but, on the contrary, to make it more concrete in practice.
9.2. Monitoring implementation and review. The trade unions that initiated this standard will monitor its implementation in practice. It is recommended that, two years after its initial application, a joint review of its effectiveness be conducted. The standard may be revised and supplemented based on the results of this review, in order to reflect new technological trends and enforcement experience. Updating the standard should also involve all stakeholders—trade unions, employers’ associations, government experts, and academics.
9.3. Promotion of the principles of the standard. Trade unions in the region are encouraged to conduct active information campaigns among workers and employers to explain the provisions of this standard. They should also seek to include these topics in the agendas of government bodies—ministries of labor, parliaments—through seminars, roundtables, and publications. This standard is intended to be the first step on this path—a kind of “social shield” guaranteeing that the digital economy will be an economy with a human face.
9.4. Compliance with international standards. The provisions of the standard are consistent with global principles such as the ILO Recommendations on Fair Digital Transformation of Work, the draft EU AI Act, and the OECD Principles on AI. The implementation of this standard will help the countries of the region fulfill their obligations under international labor conventions (on the right to organize, the right to collective bargaining, occupational safety and health, non-discrimination, etc.) in the new conditions of the digital age.
9.5. Conclusion.
The introduction of artificial intelligence must be accompanied by social intelligence—that is, a well-thought-out policy that takes the human factor into account. This draft standard forms the basis for such a policy. Its adoption and practical implementation will help prevent negative scenarios (mass unemployment, deterioration of working conditions, digital inequality) and ensure that technology becomes a source of progress for all—for both the economy and workers.
As the ILO has emphasized, “we must shape a future in which technology advances social justice”, and it depends on our joint efforts—of workers, employers, and governments—whether AI becomes a tool for improving life or a source of new problems. Trade unions in Central Asia are determined to work towards ensuring the first outcome prevails. The time has come to turn declarations into concrete norms—and this standard is a step in that direction.
Sources and Notes: The provisions of the standard are supported by data and recommendations from the International Labour Organization (ILO), the European Union, the Organisation for Economic Co-operation and Development (OECD), as well as examples of national regulation of AI in the workplace:
- ILO and EESC (2025) – the need for a human-centered approach to AI so that it serves social justice and does not generate new inequalities.
- ILO (Director-General Gilbert F. Houngbo, 2023) – a call to shape AI in such a way that it advances social justice, supports workers with skills and social protection, safeguards their rights, and ensures social dialogue in the digital transition.
- Conference of Central Asian Trade Unions (2025) – recognition of AI’s impact on the world of work and the need to adapt trade union agreements to the new realities of labor relations; emphasis on strengthening the role of social partnership in digital transformation.
- European Parliament (Platform Work Directive, 2024) – prohibits dismissing workers solely based on algorithmic decisions and requires human oversight of important decisions.
- Spain (Royal Decree-Law 9/2021) – introduces the right of workers and their representatives to information about and rules governing algorithms that affect working conditions, setting a precedent for mandatory algorithmic transparency.
- U.S. Department of Labor (Guidance “AI and Worker Wellbeing,” 2024) – recommends that employers regularly involve workers in AI discussions, ensure good-faith negotiations with unions, provide staff training, avoid relying on AI without human oversight, allow appeals of automated decisions, conduct independent algorithm audits, and report transparently on AI impacts.
- European Commission (draft AI Act) – classifies AI systems in the employment sector as “high-risk,” imposing requirements on their transparency, reliability, and respect for workers’ fundamental rights.
- OECD (Review “Artificial Intelligence, Labour, and Governments,” 2023) – calls on governments to take measures for the trustworthy use of AI and worker training, ensuring no one is “left behind” by digitalization; emphasizes the importance of collective bargaining to support workers and companies during AI-driven transformation.
All the sources cited converge on one point: urgent action is needed to embed emerging artificial intelligence within ethical and socially responsible frameworks in the labor market. This standard is part of this global proactive agenda, adapted for the Central Asian region, taking into account its specific features and challenges.