| prompt (string, 22–383k chars) | prompt_id (string, 32 chars) | index (int64, 0–500) | annotator (string, 11 classes) | timestamp (timestamp[s]) | rating (string, 2 classes) | tags (sequence) |
|---|---|---|---|---|---|---|
What can I adjust in this Blade file? Please provide a table showing the current code, the new code, and the justification for each change.
Demographic Information
<hr />
<div>
<div>
<div>
Are you eligible to work in the U.S.?
</div>
</div>
<div>
<div>
What language do you speak at home?
@foreach (MyCareerAdvisor\Language::orderBy('name')->get() as $language)
@endforeach
</div>
</div>
</div>
<div>
<div>
<div>
Race
@foreach (MyCareerAdvisor\Race::all() as $race)
@endforeach
</div>
</div>
@if(\MyCareerAdvisor\Setting::where('name', 'enable_latino')->first()->value)
<div>
<div>
Latino or Hispanic Heritage?
</div>
</div>
@endif
</div>
<h2>Educational Background</h2>
<hr />
<div>
<div>
<div>
Highest Grade Completed
@foreach (MyCareerAdvisor\EducationLevel::all() as $education)
@endforeach
</div>
</div>
<div>
<div>
High School Status
@foreach (MyCareerAdvisor\HighSchoolLevel::all()->filter(function ($x) {return strpos($x, 'System Only') == false;}) as $education)
@endforeach
</div>
</div>
</div>
<h2>Military Background</h2>
<hr />
<div>
<div>
<div>
Have you served in the military?
@foreach (MyCareerAdvisor\VeteranStatus::all() as $status)
@endforeach
</div>
</div>
<div>
<div>
Do you have a family member in the military?
</div>
</div>
</div>
| cb58debf83d45bd637f65587b40f032f | 0 | DaehanKim | 2023-05-29T06:21:44 | good | [
"code"
] |
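A hedged illustration of the kind of table the prompt above asks for (the `$races` variable and the controller change are assumptions for the example, not code from the project):

| Current code | New code | Justification |
|---|---|---|
| `@foreach (MyCareerAdvisor\Race::all() as $race)` | `@foreach ($races as $race)` | Blade views shouldn't run Eloquent queries; pass `$races` in from the controller. |
| `\MyCareerAdvisor\Setting::where('name', 'enable_latino')->first()->value` | `@if ($enableLatino)` | Avoids a query on every render and a null-pointer error when the setting row is missing. |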
What can I adjust in this Blade file? Please provide a table showing the current code, the new code, and the justification for each change.
Demographic Information
<hr />
<div>
<div>
<div>
Are you eligible to work in the U.S.?
</div>
</div>
<div>
<div>
What language do you speak at home?
@foreach (MyCareerAdvisor\Language::orderBy('name')->get() as $language)
@endforeach
</div>
</div>
</div>
<div>
<div>
<div>
Race
@foreach (MyCareerAdvisor\Race::all() as $race)
@endforeach
</div>
</div>
@if(\MyCareerAdvisor\Setting::where('name', 'enable_latino')->first()->value)
<div>
<div>
Latino or Hispanic Heritage?
</div>
</div>
@endif
</div>
<h2>Educational Background</h2>
<hr />
<div>
<div>
<div>
Highest Grade Completed
@foreach (MyCareerAdvisor\EducationLevel::all() as $education)
@endforeach
</div>
</div>
<div>
<div>
High School Status
@foreach (MyCareerAdvisor\HighSchoolLevel::all()->filter(function ($x) {return strpos($x, 'System Only') == false;}) as $education)
@endforeach
</div>
</div>
</div>
<h2>Military Background</h2>
<hr />
<div>
<div>
<div>
Have you served in the military?
@foreach (MyCareerAdvisor\VeteranStatus::all() as $status)
@endforeach
</div>
</div>
<div>
<div>
Do you have a family member in the military?
</div>
</div>
</div>
| cb58debf83d45bd637f65587b40f032f | 0 | dianewan | 2023-05-23T23:17:15 | good | [
"code"
] |
What can I adjust in this Blade file? Please provide a table showing the current code, the new code, and the justification for each change.
Demographic Information
<hr />
<div>
<div>
<div>
Are you eligible to work in the U.S.?
</div>
</div>
<div>
<div>
What language do you speak at home?
@foreach (MyCareerAdvisor\Language::orderBy('name')->get() as $language)
@endforeach
</div>
</div>
</div>
<div>
<div>
<div>
Race
@foreach (MyCareerAdvisor\Race::all() as $race)
@endforeach
</div>
</div>
@if(\MyCareerAdvisor\Setting::where('name', 'enable_latino')->first()->value)
<div>
<div>
Latino or Hispanic Heritage?
</div>
</div>
@endif
</div>
<h2>Educational Background</h2>
<hr />
<div>
<div>
<div>
Highest Grade Completed
@foreach (MyCareerAdvisor\EducationLevel::all() as $education)
@endforeach
</div>
</div>
<div>
<div>
High School Status
@foreach (MyCareerAdvisor\HighSchoolLevel::all()->filter(function ($x) {return strpos($x, 'System Only') == false;}) as $education)
@endforeach
</div>
</div>
</div>
<h2>Military Background</h2>
<hr />
<div>
<div>
<div>
Have you served in the military?
@foreach (MyCareerAdvisor\VeteranStatus::all() as $status)
@endforeach
</div>
</div>
<div>
<div>
Do you have a family member in the military?
</div>
</div>
</div>
| cb58debf83d45bd637f65587b40f032f | 0 | nazneen | 2023-05-22T18:24:46 | good | [
"code"
] |
Why is this returning a KeyError for displayed_member_id?
if member == ' ' or member == None or member == []:
sg.Popup('No Member found for Member id '+id)
window.Element('-firstn-').update('')
window.Element('-lastn-').update('')
else:
member_id = str(member[0][0])
window.Element('-displayed_member_id_number-').update(member_id)
first_name = str(member[0][1])
window.Element('-firstn-').update(first_name)
last_name = str(member[0][2])
window.Element('-lastn-').update(last_name)
conn = Assignment_3_draft_1.dbconnect()
Assignment_3_draft_1.createBookingTable(
conn=conn)
fitness_classes = Assignment_3_draft_1.dbQuery(
conn=conn, sql="""SELECT booking_table.*,fitness_class.*
from booking_table
INNER JOIN fitness_class on fitness_class.fitness_class_id=booking_table.fitness_class_id
WHERE booking_table.id=?
""", params=(id))
fitness_classes_data(fitness_classes)
if len(fitness_classes) < 3:
update_available_classes(id)
# conn.commit()
conn.close()
elif event=='-Submit-':
print(values['-displayed_member_id_number-'])
the original element:
col1a = [[sg.Text('Member ID', size=(20, 1)), sg.Text('First Name', size=(20, 1))],
[sg.Text(key='-displayed_member_id_number-', size=(20, 1),enable_events=True),
sg.Text(key='-firstn-', size=(20, 1))]
] | 40efff8c22f0b85b2f474a31a5e70b2a | 1 | DaehanKim | 2023-05-29T06:26:40 | bad | [] |
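A hedged note on the question above: in PySimpleGUI, the `values` dictionary returned by `window.read()` contains only elements that accept input, so `sg.Text` keys such as `-displayed_member_id_number-` are not present in `values` and indexing them raises a `KeyError`. (The key named in the error, `displayed_member_id`, also doesn't match the element key `-displayed_member_id_number-`, which suggests a lookup with the wrong key string somewhere.) A minimal sketch of a workaround:

```python
# Minimal sketch; assumes `window` is the existing PySimpleGUI window.
# Read the Text element's current content directly instead of using `values`:
member_id_shown = window['-displayed_member_id_number-'].get()
print(member_id_shown)

# Alternative: mirror the id into a hidden Input so it appears in `values`:
# sg.Input(key='-displayed_member_id_number-', visible=False)
```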
Why is this returning a KeyError for displayed_member_id?
if member == ' ' or member == None or member == []:
sg.Popup('No Member found for Member id '+id)
window.Element('-firstn-').update('')
window.Element('-lastn-').update('')
else:
member_id = str(member[0][0])
window.Element('-displayed_member_id_number-').update(member_id)
first_name = str(member[0][1])
window.Element('-firstn-').update(first_name)
last_name = str(member[0][2])
window.Element('-lastn-').update(last_name)
conn = Assignment_3_draft_1.dbconnect()
Assignment_3_draft_1.createBookingTable(
conn=conn)
fitness_classes = Assignment_3_draft_1.dbQuery(
conn=conn, sql="""SELECT booking_table.*,fitness_class.*
from booking_table
INNER JOIN fitness_class on fitness_class.fitness_class_id=booking_table.fitness_class_id
WHERE booking_table.id=?
""", params=(id))
fitness_classes_data(fitness_classes)
if len(fitness_classes) < 3:
update_available_classes(id)
# conn.commit()
conn.close()
elif event=='-Submit-':
print(values['-displayed_member_id_number-'])
the original element:
col1a = [[sg.Text('Member ID', size=(20, 1)), sg.Text('First Name', size=(20, 1))],
[sg.Text(key='-displayed_member_id_number-', size=(20, 1),enable_events=True),
sg.Text(key='-firstn-', size=(20, 1))]
] | 40efff8c22f0b85b2f474a31a5e70b2a | 1 | dianewan | 2023-05-23T23:17:29 | good | [
"code"
] |
Why is this returning a KeyError for displayed_member_id?
if member == ' ' or member == None or member == []:
sg.Popup('No Member found for Member id '+id)
window.Element('-firstn-').update('')
window.Element('-lastn-').update('')
else:
member_id = str(member[0][0])
window.Element('-displayed_member_id_number-').update(member_id)
first_name = str(member[0][1])
window.Element('-firstn-').update(first_name)
last_name = str(member[0][2])
window.Element('-lastn-').update(last_name)
conn = Assignment_3_draft_1.dbconnect()
Assignment_3_draft_1.createBookingTable(
conn=conn)
fitness_classes = Assignment_3_draft_1.dbQuery(
conn=conn, sql="""SELECT booking_table.*,fitness_class.*
from booking_table
INNER JOIN fitness_class on fitness_class.fitness_class_id=booking_table.fitness_class_id
WHERE booking_table.id=?
""", params=(id))
fitness_classes_data(fitness_classes)
if len(fitness_classes) < 3:
update_available_classes(id)
# conn.commit()
conn.close()
elif event=='-Submit-':
print(values['-displayed_member_id_number-'])
the original element:
col1a = [[sg.Text('Member ID', size=(20, 1)), sg.Text('First Name', size=(20, 1))],
[sg.Text(key='-displayed_member_id_number-', size=(20, 1),enable_events=True),
sg.Text(key='-firstn-', size=(20, 1))]
] | 40efff8c22f0b85b2f474a31a5e70b2a | 1 | nazneen | 2023-05-22T18:25:07 | good | [
"code"
] |
Solve the nonlinear optimization problem
min f(x1, x2) = (x1)^2 − x1x2 + (x2)^2 − 2x2
s.t. g(x1,x2) = x1 +x2 −1 ≤0.
(a) Apply the optimality conditions;
(b) Verify your solution in Excel. Show all steps. | 5ce8d70f0c016a654440ba184dbfa43b | 10 | DaehanKim | 2023-05-29T06:38:45 | good | [] |
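A worked sketch of part (a), assuming the garbled cross term is meant to be x1·x2 (the usual form of this exercise):

```latex
\begin{aligned}
&L(x_1, x_2, \mu) = x_1^2 - x_1 x_2 + x_2^2 - 2x_2 + \mu\,(x_1 + x_2 - 1)\\
&\tfrac{\partial L}{\partial x_1} = 2x_1 - x_2 + \mu = 0,\quad
 \tfrac{\partial L}{\partial x_2} = -x_1 + 2x_2 - 2 + \mu = 0,\quad
 \mu \ge 0,\quad \mu\,(x_1 + x_2 - 1) = 0\\
&\mu = 0:\ (x_1, x_2) = (2/3,\ 4/3),\ \text{but } g = 1 > 0,\ \text{so infeasible}\\
&x_1 + x_2 = 1:\ x_1 = 1/6,\ x_2 = 5/6,\ \mu = 1/2 \ge 0,\quad
 f^* = -13/12 \approx -1.083
\end{aligned}
```

Since the Hessian of f is [[2, -1], [-1, 2]] (positive definite) and g is linear, the problem is convex and this KKT point is the global minimum; part (b) can be checked in Excel with Solver (GRG Nonlinear) on the same objective and constraint.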
Solve the nonlinear optimization problem
min f(x1, x2) = (x1)^2 − x1x2 + (x2)^2 − 2x2
s.t. g(x1,x2) = x1 +x2 −1 ≤0.
(a) Apply the optimality conditions;
(b) Verify your solution in Excel. Show all steps. | 5ce8d70f0c016a654440ba184dbfa43b | 10 | dianewan | 2023-05-23T23:27:16 | bad | [] |
Create an AJAX form-submission script from the source below.
<%@ page contentType="text/html; charset=utf-8" pageEncoding="utf-8"%>
<%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %>
<%@ taglib prefix="form" uri="http://www.springframework.org/tags/form" %>
<%@ taglib prefix="ui" uri="http://egovframework.gov/ctl/ui"%>
<%@ taglib prefix="spring" uri="http://www.springframework.org/tags"%>
<%
/**
* @Class Name : egovSampleList.jsp
* @Description : Password change screen
* @Modification Information
*
* Date        Author    Description
* ------- -------- ---------------------------
* 2009.02.01 Initial creation
*
* author Runtime Environment Development Team
* since 2009.02.01
*
* Copyright (C) 2009 by MOPAS All right reserved.
*/
%>
<c:import url="../inc/head.jsp">
<c:param name="foo" value="bar"></c:param>
</c:import>
<!-- BEGIN #app -->
<div id="app" class="app app-header-fixed app-sidebar-fixed">
<!-- BEGIN #header -->
<c:import url="../inc/header.jsp">
<c:param name="foo" value="bar"></c:param>
</c:import>
<!-- END #header -->
<!-- BEGIN #sidebar -->
<c:import url="../inc/sidebar.jsp">
<c:param name="foo" value="bar"></c:param>
</c:import>
<!-- END #sidebar -->
<!-- BEGIN #content -->
<div id="content" class="app-content">
<!-- BEGIN breadcrumb -->
<!-- <ol class="breadcrumb float-xl-end">
<li class="breadcrumb-item"><a href="javascript:;">Home</a></li>
<li class="breadcrumb-item"><a href="javascript:;">Library</a></li>
<li class="breadcrumb-item active">Data</li>
</ol> -->
<!-- END breadcrumb -->
<!-- BEGIN page-header -->
<h1 class="page-header">Change Password</h1>
<!-- END page-header -->
<!-- BEGIN panel -->
<div class="panel panel-inverse">
<div class="panel-heading">
<h4 class="panel-title">Panel Title here</h4>
<!-- <div class="panel-heading-btn">
<a href="javascript:;" class="btn btn-xs btn-icon btn-default" data-toggle="panel-expand"><i class="fa fa-expand"></i></a>
<a href="javascript:;" class="btn btn-xs btn-icon btn-success" data-toggle="panel-reload"><i class="fa fa-redo"></i></a>
<a href="javascript:;" class="btn btn-xs btn-icon btn-warning" data-toggle="panel-collapse"><i class="fa fa-minus"></i></a>
<a href="javascript:;" class="btn btn-xs btn-icon btn-danger" data-toggle="panel-remove"><i class="fa fa-times"></i></a>
</div> -->
</div>
<div class="panel-body">
<div class="col-md-6">
<form action="/" method="POST">
<fieldset>
<div class="row mb-2">
<label class="form-label col-form-label col-md-3">Current Password</label>
<div class="col-md-6">
<input type="password" class="form-control" placeholder="Enter password" />
</div>
</div>
<div class="row mb-2">
<label class="form-label col-form-label col-md-3">New Password</label>
<div class="col-md-6">
<input type="password" class="form-control" placeholder="Enter password" />
</div>
</div>
<div class="row mb-2">
<label class="form-label col-form-label col-md-3">Confirm New Password</label>
<div class="col-md-6">
<input type="password" class="form-control" placeholder="Enter password" />
</div>
</div>
</fieldset>
</form>
</div>
</div>
</div>
<!-- END panel -->
<div class="col-md-6">
<a href="#" class="btn btn-primary">Submit</a>
</div>
</div>
<!-- END #content -->
<!-- BEGIN scroll to top btn -->
<a href="javascript:;" class="btn btn-icon btn-circle btn-success btn-scroll-to-top" data-toggle="scroll-to-top"><i class="fa fa-angle-up"></i></a>
<!-- END scroll to top btn -->
</div>
<!-- END #app -->
<!-- ================== BEGIN core-js ================== -->
<c:import url="../inc/footer.jsp">
<c:param name="foo" value="bar"></c:param>
</c:import>
<!-- ================== END core-js ================== -->
<c:import url="../inc/foot.jsp">
<c:param name="foo" value="bar"></c:param>
</c:import>
| 9371046d6ca633fd98d6bcee838e19af | 100 | dianewan | 2023-05-24T23:48:26 | bad | [] |
Adapt this code:
def save_data_to_avro(data: dict, channel_name: str, channel_id: int, report_id: str) -> None:
now = datetime.datetime.now()
file_path = Path("yt_data") / f"{report_id}_data" / channel_name / str(now.year) / str(now.month).zfill(2) / str(
now.day).zfill(2) / str(now.hour).zfill(2)
file_path.mkdir(parents=True, exist_ok=True)
file_name = f"{channel_name}_{now:%Y-%m-%d_%H-%M-%S}.avro"
schema = avro.schema.Parse('''
{
"type": "record",
"name": "VideoData",
"fields": [
{"name": "video", "type": "string"},
{"name": "estimatedMinutesWatched", "type": "int"},
{"name": "views", "type": "int"}
]
}
''')
write_avro_file(data, file_path / file_name, schema)
print(f" -> Done for '{channel_name}' channel (ID: {channel_id}) [Report Type: {report_id}]")
def write_avro_file(data: dict, file_path: Path, schema: avro.schema.Schema) -> None:
with open(file_path, 'wb') as out_file:
writer = DataFileWriter(out_file, DatumWriter(), schema)
for row in data['rows']:
video_data = {"video": row[0], "estimatedMinutesWatched": row[1], "views": row[2]}
writer.append(video_data)
writer.close()
Sometimes elements may be missing from the data. | c30308f430cf13c7a082640171074915 | 101 | dianewan | 2023-05-24T23:48:37 | bad | [] |
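A hedged sketch of one way to handle the missing elements: pad short rows before appending, so every record still matches the schema (treating 0 and the empty string as acceptable defaults is an assumption; skipping the row is the alternative):

```python
from pathlib import Path

import avro.schema
from avro.datafile import DataFileWriter
from avro.io import DatumWriter


def write_avro_file(data: dict, file_path: Path, schema: avro.schema.Schema) -> None:
    with open(file_path, 'wb') as out_file:
        writer = DataFileWriter(out_file, DatumWriter(), schema)
        for row in data.get('rows', []):
            # Pad rows missing trailing fields, then coerce types so the
            # record always matches (video, estimatedMinutesWatched, views).
            padded = list(row) + [None] * (3 - len(row))
            writer.append({
                "video": str(padded[0] or ""),
                "estimatedMinutesWatched": int(padded[1] or 0),
                "views": int(padded[2] or 0),
            })
        writer.close()
```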
class _BlockingAsyncContextManager(AbstractContextManager):
_enter_future: Future
_exit_future: Future
_exit_event: Event
_exit_exc_info: Tuple[
Optional[Type[BaseException]], Optional[BaseException], Optional[TracebackType]
] = (None, None, None)
def __init__(self, async_cm: AsyncContextManager[T_co], portal: "BlockingPortal"):
self._async_cm = async_cm
self._portal = portal
async def run_async_cm(self) -> Optional[bool]:
try:
self._exit_event = Event()
value = await self._async_cm.__aenter__()
except BaseException as exc:
self._enter_future.set_exception(exc)
raise
else:
self._enter_future.set_result(value)
try:
# Wait for the sync context manager to exit.
# This next statement can raise `get_cancelled_exc_class()` if
# something went wrong in a task group in this async context
# manager.
await self._exit_event.wait()
finally:
# In case of cancellation, it could be that we end up here before
# `_BlockingAsyncContextManager.__exit__` is called, and an
# `_exit_exc_info` has been set.
result = await self._async_cm.__aexit__(*self._exit_exc_info)
return result
def __enter__(self) -> T_co:
self._enter_future = Future()
self._exit_future = self._portal.start_task_soon(self.run_async_cm)
cm = self._enter_future.result()
return cast(T_co, cm)
def __exit__(
self,
__exc_type: Optional[Type[BaseException]],
__exc_value: Optional[BaseException],
__traceback: Optional[TracebackType],
) -> Optional[bool]:
self._exit_exc_info = __exc_type, __exc_value, __traceback
self._portal.call(self._exit_event.set)
return self._exit_future.result()
| 54d2edba388b1627e70e6529d91289bd | 102 | dianewan | 2023-05-24T23:48:52 | bad | [] |
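This class mirrors the internals of anyio's `BlockingPortal.wrap_async_context_manager`. A minimal usage sketch of how such a wrapper is typically driven from synchronous code, assuming anyio (the `Resource` class is a stand-in):

```python
import anyio
from anyio.from_thread import start_blocking_portal


class Resource:
    # Toy async context manager standing in for the wrapped async_cm.
    async def __aenter__(self):
        await anyio.sleep(0.01)
        return "ready"

    async def __aexit__(self, *exc_info):
        return None


with start_blocking_portal() as portal:
    # Builds a blocking wrapper like the class above: __enter__ blocks until
    # the portal's event loop has finished running __aenter__.
    with portal.wrap_async_context_manager(Resource()) as value:
        print(value)  # "ready", obtained from synchronous code
```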
I suggested the below GitHub Charter.md document. It was WAYYY too technical, specific, or otherwise not accurate to the goals of the maintainer. I want you to re-write the charter document but make it have emojis, be more lighthearted, acknowledge the one-person-show, say that this is a hobby project, mention no intent to add more collaborators, and otherwise fit this new direction. Here is a quote from a GitHub Issue that describes the intended new direction:
Hi I'm sorry you feel I'm not communicative enough, you are right. but sadly I cannot invest more time that I already am and not intending adding collaborators at this time.
I would suggest take things with more proportions tho. I aim this repo to be a nice cute side project, easy and lighthearted 🤷 There is no commercial product on the line here... No deadlines... No urgency...
Yes, It may take me few weeks to answer at times. But unless its a user breaking bug, so what?
Im sorry but I have no solutions for this at this time... 😞
Here's the original charter Markdown code:
```md
<!--
This is an official document. Use AP-style title case for headings. This
document MAY be moved to a CHARTER.md or similar location if a more user-focused
README.md is provided.
The convention is "dev containers" in formal settings, but "devcontainers" in
informal settings. The official specification and documentation use "dev
containers", but most external documents use "devcontainers". Both of these are
nouns, not names, and can as such be capitalized.
-->
# [Community Dev Containers] Charter
The [Community Dev Containers] project aims to document, maintain, and improve
on the official devcontainers images, features, and templates to improve the
developer experience for end-users.
## Guiding Principles
The [Community Dev Containers] project operates under principles similar to
those of the [OpenJS Foundation]. It operates collaboratively, openly,
ethically, and transparently. Project proposals, ideas, timelines, and statuses
should not merely be open and available, but also easily visible to outsiders.
Outside users and developers should be able to participate in discussions,
ideas, issues, and make contributions with as little effort as possible.
## Problem
The current official [@devcontainers] project's scope is limited to features,
images, and templates that serve only the highest level of most popular
programming environments.
For instance, the [devcontainers/features] repository contains only 28 different
addons. The [devcontainers/images] repository only has a selection of 15
different development environment baselines.
This project is the centralized unofficial hub for other features, images, and
templates that would not be permitted to fall under the official
[@devcontainers] umbrella.
## Scope
The [@devcontainers-contrib] organization maintains and evolves the images,
features, templates, and documentation content for aspects not covered by the
official [@devcontainers] organization. It provides a stable, yet flexible, set
of supporting software for users who use dev containers that is governed to
accommodate their desires. This project's value is its balance between
administration and community adherence. Its scope includes listening and
responding to user feedback.
### In-scope
- Docker images for ecosystems not covered by an official dev containers image
- Feature collections for popular tools that are typical on developer computers
- Templates for popular project configurations and setups
- Conventions and best-practices to follow architecting a dev container
- Make it easy to propose, add, or otherwise discuss additional features,
images, and templates
Tutorials and guides on how to use dev containers are not covered under the
umbrella of the [Community Dev Containers] project. These items can be pursued
by individual members, but are not endorsed or covered under this organization's
scope.
### Out-of-Scope
- Creating a dev container feature for every single Node.js or Python package
- Maintaining a dev container image for all permutations of tooling mashups
- Fixing users' broken dev container configurations
- Answering questions about decisions made by the official [@devcontainers]
organization
[openjs foundation]: https://openjsf.org/
[@devcontainers]: https://github.com/devcontainers
[@devcontainers-contrib]: https://github.com/devcontainers-contrib
[community dev containers]: https://github.com/devcontainers-contrib
[devcontainers/features]: https://github.com/devcontainers/features#readme
[devcontainers/images]: https://github.com/devcontainers/images#readme
``` | 413c49c83a435e02b7775484d0d9f430 | 103 | dianewan | 2023-05-24T23:49:39 | bad | [] |
I have this code where the header slides up on scroll down and slides down on scroll up. The problem is that if a page isn't tall enough to scroll, the navigation still slides away on scroll even though the page can't scroll. Write JS for Vue 2 that checks whether the page is tall enough: if it isn't, disable the hide-on-scroll behavior; if it is scrollable, enable the slide up/down. Here's the code to edit:
<template>
<header ref="nav" class="global-header" :style="{ transform: `translateY(${windowWidth >= 1024 ? navTranslateY : 0}px)` }">
<div class="global-header-logo">
<NuxtLink to="/" class="global-header-logo-link">
<Logo :fullLogo="fullLogo"></Logo>
</NuxtLink>
</div>
<nav class="global-header-nav">
<Navigation></Navigation>
</nav>
</header>
</template>
<script>
export default {
props: ['fullLogo'],
data() {
return {
navTranslateY: 0,
previousScrollY: 0,
windowWidth: 0
};
},
mounted() {
this.windowWidth = window.innerWidth;
window.addEventListener("resize", this.handleResize);
window.addEventListener("scroll", this.handleScroll);
},
beforeDestroy() {
window.removeEventListener("resize", this.handleResize);
window.removeEventListener("scroll", this.handleScroll);
},
methods: {
handleResize() {
this.windowWidth = window.innerWidth;
},
handleScroll() {
if (window.scrollY > this.previousScrollY && this.windowWidth >= 1024) {
this.navTranslateY = -200;
} else {
this.navTranslateY = 0;
}
this.previousScrollY = window.scrollY;
}
}
}
</script>
| 66b3fa3a017c89186b0b24f108ec6fd3 | 104 | dianewan | 2023-05-24T23:49:52 | good | [
"code"
] |
```bash
#!/bin/bash
set -eu
snapsToRemove=$(snap list --all | awk '/disabled/{print $1, $2, $3}')
while read snapname version revision; do
  if [[ "$revision" == *[a-zA-Z]* ]]; then
# Version field is empty. Revision is in second field
revision=$version
fi
snap remove "$snapname" --revision="$revision"
done <<< $snapsToRemove
sudo snap set system refresh.retain=2
``` | 965b6b14c7b8eaeb2c1e543877c44804 | 105 | dianewan | 2023-05-24T23:50:06 | bad | [] |
Hi, can you create a .NET Web API using Entity Framework based on this Postgres query?
SELECT
ch."ChapterId" as "chapterId",
ch."IsActive" as "isActive",
ch."NameEn" as "name",
ch."SortOrder" as "sortOrder",
ch."SubjectId" as "subjectId",
count(ls."MmLessonSetId") as mmLessonSetCount,
SUM(CASE WHEN ls."IsActive" = TRUE THEN 1 ELSE 0 END) as mmLessonSetActiveCount
FROM
"Chapters" as ch
LEFT JOIN
"MmLessonSets" as ls
ON
ls."ChapterId" = ch."ChapterId"
WHERE
ch."SubjectId" = 'SubjectId0011'
AND
ch."IsActive" = TRUE
GROUP BY
ch."ChapterId", ch."IsActive", ch."NameEn", ch."SortOrder", ch."SubjectId"
ORDER BY
ch."SortOrder" | be4c56b02e0e02a09f2944d04415a539 | 106 | dianewan | 2023-05-24T23:50:17 | good | [
"code"
] |
improve this code
<Row>
{hasDiscount ? (
<>
<Col>
<Paragraph className="price-discount">
{listPrice} {t("home.riyal")}
</Paragraph>
</Col>
<Col>
<Paragraph className="discount-amount">
{t("home.discount")} %{Math.ceil(+discountAmount)}
</Paragraph>
</Col>
</>
) : (
<Paragraph className="tabby-cost">
{listPrice > tabbyCost
? (listPrice / 4).toFixed(2) + `${t("home.a_month")}`
: ""}
</Paragraph>
)}
</Row> | 957ae63cdbb2d7f2950527470ddcaeea | 107 | dianewan | 2023-05-24T23:50:20 | good | [
"code"
] |
Please create sample forms based on the following views:
from django.shortcuts import render, get_object_or_404, redirect
from django.contrib.auth.decorators import login_required
from django.contrib import messages
from .models import Resident, School, Billing, Municipality
from .forms import ResidentForm, SchoolForm, BillingForm
def homepage(request):
return render(request, 'homepage.html')
@login_required
def resident_detail(request, resident_id):
resident = get_object_or_404(Resident, pk=resident_id)
return render(request, 'resident_detail.html', {'resident': resident})
@login_required
def resident_edit(request, resident_id):
resident = get_object_or_404(Resident, pk=resident_id)
if request.method == "POST":
form = ResidentForm(request.POST, instance=resident)
if form.is_valid():
resident = form.save(commit=False)
resident.save()
messages.success(request, 'Resident details updated successfully!')
return redirect('resident_detail', resident_id=resident.id)
else:
form = ResidentForm(instance=resident)
return render(request, 'resident_edit.html', {'form': form})
@login_required
def school_list(request):
schools = School.objects.all()
return render(request, 'school_list.html', {'schools': schools})
@login_required
def school_detail(request, school_id):
school = get_object_or_404(School, pk=school_id)
return render(request, 'school_detail.html', {'school': school})
@login_required
def school_edit(request, school_id):
school = get_object_or_404(School, pk=school_id)
if request.method == "POST":
form = SchoolForm(request.POST, instance=school)
if form.is_valid():
school = form.save(commit=False)
school.save()
messages.success(request, 'School details updated successfully!')
return redirect('school_detail', school_id=school.id)
else:
form = SchoolForm(instance=school)
return render(request, 'school_edit.html', {'form': form})
@login_required
def billing_list(request):
bills = Billing.objects.all()
return render(request, 'billing_list.html', {'bills': bills})
@login_required
def billing_detail(request, bill_id):
bill = get_object_or_404(Billing, pk=bill_id)
return render(request, 'billing_detail.html', {'bill': bill})
@login_required
def billing_edit(request, bill_id):
bill = get_object_or_404(Billing, pk=bill_id)
if request.method == "POST":
form = BillingForm(request.POST, instance=bill)
if form.is_valid():
bill = form.save(commit=False)
bill.save()
messages.success(request, 'Billing details updated successfully!')
return redirect('billing_detail', bill_id=bill.id)
else:
form = BillingForm(instance=bill)
return render(request, 'billing_edit.html', {'form': form})
@login_required
def municipality_list(request):
municipalities = Municipality.objects.all()
return render(request, 'municipality_list.html', {'municipalities': municipalities}) | a49ffd3e334f48138589d1910a0f546b | 108 | dianewan | 2023-05-24T23:50:32 | good | [
"code"
] |
So, I want to create an async data holder in C++ whose value can be retrieved only if it has previously been initialized asynchronously. If it has not been initialized, it will return the default value of the type. Is this sufficient?
template<typename T>
class AsyncDataHolder {
enum class Status { NOT_INITIALIZED, INITIALIZING, INITIALIZED };
std::function<T()> initializer_;
Status status_{NOT_INITIALIZED};
mutable bthread::ConditionVariable cv_;
mutable bthread::Mutex mtx_;
T value_;
public:
AsyncDataHolder(std::function<T()> initializer) : initializer_(initializer) {
}
void InitializeAsync() {
{
std::unique_lock lock(mtx_);
status_ = Status::INITIALIZING;
}
recall::runDetachedBthread([&]() {
std::unique_lock lock(mtx_);
value_ = initializer_();
status_ = Status::INITIALIZED;
cv_.notify_all();
});
}
const T& Get() const {
{
std::shared_lock lock(mtx_);
if (status_ == Status::NOT_INITIALIZED) {
return value_;
}
}
std::unique_lock lock(mtx_);
while (status_ != Status::INITIALIZED) {
cv_.wait(lock);
}
return value_;
}
}; | 24a62a4172a0149a62cc1bb49fab8331 | 109 | dianewan | 2023-05-25T00:19:11 | good | [
"code"
] |
breed [species3 a-species3] ;;
turtles-own [energy rnumber dnumber]
to setup
clear-all
settle-species1
set-default-shape species3 "frog top"
create-species3 sp3-start-number [
set color green
set size 2.5
setxy random-xcor random-ycor
set energy 15
set rnumber random-normal 50 5
set dnumber random-normal 52 5
] ;; use slider for initial number, starting energy level is 15
reset-ticks
if (file-exists? "speciesgrowth.csv")
[file-delete "speciesgrowth.csv"]
file-open "speciesgrowth.csv"
file-type "time,"
file-type "N1,"
file-print "N3"
file-close
ifelse count turtles = 0 [ ]
[file-open "speciesgrowth.csv"
ask one-of turtles
[
file-type (word ticks ",")
file-type (word count patches with [ pcolor = 5 ] ",")
file-print count species3
]
file-close
]
if ( file-exists? "sp3output.csv") ; output file setup
[file-delete "sp3output.csv"]
file-open "sp3output.csv"
file-type "individual,"
file-type "rnumber,"
file-print "dnumber,"
file-close
ifelse count turtles = 0 [ ]
[file-open "sp3output.csv"
ask turtles
[
file-type (word who "," )
file-type (word rnumber ",")
file-print (word dnumber )
]
file-close
]
end
to go
grow-species1
ask species3
[ move
sp3-eat-species1
sp3-reproduce
death ]
file-open "speciesgrowth.csv"
file-type (word ticks ",")
file-type (word count patches with [ pcolor = 5 ] ",")
file-print count species3
file-close
tick
end
to settle-species1 ;; slider determines initial number of species 1 patches
ask n-of sp1-start-number patches [ set pcolor 5 ]
end
to grow-species1 ;; determines numbers of species 1 added each step
ask n-of sp1-growth-rate patches [ set pcolor 5 ]
end
to sp3-eat-species1
if pcolor = 5
[ set pcolor black
set energy energy + 5 ]
end
to sp3-reproduce
if energy > 100
[ set energy (energy - (0.25 * energy))
hatch 1 [
set color green
set rnumber random-normal 50 5
set dnumber random-normal 52 5
set energy ((energy / 0.75) - energy)
bk 1 ] ]
end
to move ;; turtle with cost of 1 energy for each movement
rt random 50
lt random 50
fd 2
set energy energy - 1
end
to death
if energy < 0 [ die
]
end | ae601deb4f596aa3dc75e06deec61e0a | 11 | DaehanKim | 2023-05-29T06:39:30 | bad | [] |
breed [species3 a-species3] ;;
turtles-own [energy rnumber dnumber]
to setup
clear-all
settle-species1
set-default-shape species3 "frog top"
create-species3 sp3-start-number [
set color green
set size 2.5
setxy random-xcor random-ycor
set energy 15
set rnumber random-normal 50 5
set dnumber random-normal 52 5
] ;; use slider for initial number, starting energy level is 15
reset-ticks
if (file-exists? "speciesgrowth.csv")
[file-delete "speciesgrowth.csv"]
file-open "speciesgrowth.csv"
file-type "time,"
file-type "N1,"
file-print "N3"
file-close
ifelse count turtles = 0 [ ]
[file-open "speciesgrowth.csv"
ask one-of turtles
[
file-type (word ticks ",")
file-type (word count patches with [ pcolor = 5 ] ",")
file-print count species3
]
file-close
]
if ( file-exists? "sp3output.csv") ; output file setup
[file-delete "sp3output.csv"]
file-open "sp3output.csv"
file-type "individual,"
file-type "rnumber,"
file-print "dnumber,"
file-close
ifelse count turtles = 0 [ ]
[file-open "sp3output.csv"
ask turtles
[
file-type (word who "," )
file-type (word rnumber ",")
file-print (word dnumber )
]
file-close
]
end
to go
grow-species1
ask species3
[ move
sp3-eat-species1
sp3-reproduce
death ]
file-open "speciesgrowth.csv"
file-type (word ticks ",")
file-type (word count patches with [ pcolor = 5 ] ",")
file-print count species3
file-close
tick
end
to settle-species1 ;; slider determines initial number of species 1 patches
ask n-of sp1-start-number patches [ set pcolor 5 ]
end
to grow-species1 ;; determines numbers of species 1 added each step
ask n-of sp1-growth-rate patches [ set pcolor 5 ]
end
to sp3-eat-species1
if pcolor = 5
[ set pcolor black
set energy energy + 5 ]
end
to sp3-reproduce
if energy > 100
[ set energy (energy - (0.25 * energy))
hatch 1 [
set color green
set rnumber random-normal 50 5
set dnumber random-normal 52 5
set energy ((energy / 0.75) - energy)
bk 1 ] ]
end
to move ;; turtle with cost of 1 energy for each movement
rt random 50
lt random 50
fd 2
set energy energy - 1
end
to death
if energy < 0 [ die
]
end | ae601deb4f596aa3dc75e06deec61e0a | 11 | dianewan | 2023-05-23T23:27:29 | bad | [] |
Make this more efficient:
public class QuickSort {
public static void quickSort(int[] arr, int low, int high) {
if (arr == null || arr.length == 0)
return;
if (low >= high)
return;
// pick the pivot
int middle = low + (high - low) / 2;
int pivot = arr[middle];
// make left < pivot and right > pivot
int i = low, j = high;
while (i <= j) {
while (arr[i] < pivot) {
i++;
}
while (arr[j] > pivot) {
j--;
}
if (i <= j) {
int temp = arr[i];
arr[i] = arr[j];
arr[j] = temp;
i++;
j--;
}
}
// recursively sort two sub parts
if (low < j)
quickSort(arr, low, j);
if (high > i)
quickSort(arr, i, high);
}
public static void main(String[] args) {
int[] arr = {25, 2, 8, 60, 3, 1, 90};
quickSort(arr, 0, arr.length-1);
for(int i : arr){
System.out.print(i);
System.out.print(" ");
}
}
} | 2c5124d38fdab218351300b2f371bb8d | 110 | dianewan | 2023-05-25T00:19:16 | good | [
"code"
] |
pathao
Pathao is Bangladesh’s Largest Consumer Tech Company
#1
E-Commerce
Logistics
$187M
2022 GMV
$23M
2022 Revenue
#1
2-Wheeler Rides
EBITDA & Cashflow positive
at the Vertical and Consolidated level
#2
4-Wheeler Rides
#2
Food Delivery
#1
8M+
Unique Users
300K+
Drivers & Couriers
60K+
1.7M+
MAU
0.8M
Paid Monthly Users
Digital Consumer Credit
Restaurants & Merchants
pathao
Bangladesh is the Fastest Growing Economy in Asia
8th most populous country and 30th largest economy
Half the population of the US in a country the size of New York State
Population of 165mn and a GDP of $416bn
Consistent GDP Growth
Bangladesh’s GDP per capita ($2,520) is higher than that of India
GDP growth from $129bn to
$416bn (2011-2021) with an average GDP growth of 6.5%
Sources: World Bank, CIA World Factbook 2020, Bangladesh Census
Young and surging Middle Class
Median age of 27; 86M people in the
Middle Class, expected to
Increase 1.4x to 119M by 2025
Rapid Smartphone Adoption
Smartphone penetration is catching up with regional peers, and is expected that
Adoption will grow to 63% by 2025
pathao
Pathao is uniquely positioned to become the largest data-driven digital financial services provider
81 Financial Services Provider to the Digital Economy
Digital Credit
Payments Data-Driven Credit Scoring
PathaoPay to launch in Q4 2022
On-Demand Transport & Logistics Platform
pathao
We are the best known digital brand for urban millennials
19M
22M
Target Customer Types
Urban Millennial
(21-40 years, digitally savvy, 1st/2nd job)
Navid, 24
Customer Representative
64M
66M
Anika, 27
Graphic Designer
Development
Source: BCG
pathao
We will expand our product to serve the evolving financial service needs of our customers
Young Family
Rising household credit needs
Young Working Professional (eg. Motorbike, Refrigerator)
Established Household Rising household credit needs (eg. house)
35-40
Fresher / First Job: basic credit needs (loans for expenses or small purchases)
21-25
Expanding credit needs (eg. AC, Smartphone)
25-30
30-35
Home Loan
CASA CASA
Credit Card
Credit Card
Credit Card
Consumer Finance
Pay Later
Consumer Finance
Pay Later
Consumer Finance
Pay Later
Consumer Finance
Pay Later
Financial Services has a TAM of >$100 bn
USD 50.0B1
Digital Payment Transaction Volume
USD 47B1
Digital P2P Money Transfer
USD 2OB4
Offline Retail
Market
USD 9B5
Bill and Education Payments
USD 3B5
Mobile Top—Up Market
USD 2OOM
Digital Commerce Market
USD 13B2
Consumer Lending
USD 100M5
2W and MW
Ride-sharing 2022
market size
2025
USD 3OOM
USD 22B3
MSME Lending
USD 100M5
Current Food Delivery Market
USD 1.5B6
Insurance Premium
50M5
Current Annual
No. of Shipments
Sources: 1Bangladesh Bank (BB); 2(includes micro-finance, consumer lending) BB, MRA; 3BB; 4Bain Capital Report; 5Company estimates, various sources; 6IDRA
PathaoPay will make digital payments seamless and drive data collection (launch in Q2 2023)
Pathao has recently been awarded a rare Payment Services Provider (PSP) license to offer e-wallet services with the product PathaoPay
Discovery
Agribusiness
Catering Ecommerce
Kitchen
POS
Phone TopUp
Social
CNG
Bus
pathao
Young Bangladeshis have abysmally low access to credit
Millennial consumers hardly have any credit history, making lenders risk averse and reluctant to provide unsecured consumer finance.
[Chart] No. of Credit Cards per eligible Individual (based on income): Bankable Population 67.2M, Unique Bank Accounts 41.6M, Debit Cards 1.7M, Credit Cards 0.64M; credit card penetration <1%
Sources: Bangladesh Bank; World Bank Data; Statista
pathao
PathaoPayLater has been designed to address the financial needs of urban millennials
Access
Only 1.7M people (<1% of population) have access to a credit card
Speed
A credit card from a bank takes up to 2 months and requires multiple pieces of paperwork. We do it in a matter of seconds
User Experience
Ecommerce merchants in Bangladesh face high cart abandonment rates due to payment friction
Usage Online or Offline
pathao
pathao
We are expanding our Financial Services offering for the Ecosystem
Consumer Leasing
Our Target Group:
• 8mn registered 21-40 yr old users
• 300k+ drivers and couriers
Lease Financing for:
• Mobile phones & other electronics
• Household appliances
• Motorbikes
• Partnering with banks/NBFls
Merchant Financing
Our Target Group:
• 40,000+ e-commerce (courier) merchants and restaurant merchants
Working Capital Financing for:
• Working capital for e-comm merchants
• Receivable financing by Pathao
• Partnering with banks/NBFIs
pathao
We assess credit risk using both traditional and alternative data sources
Identity verification Location
Database
Pathao Platform
In-app activities (last activity, types)
Acquisition behavior
Purchase and payment behavior Rate of paid transactions
User contactability
Demographic data (age, gender) Employment (salary, experience) Place of residence
Salary & wealth information
Self Reported
Sources of Data
E-Commerce
PathaoPay
Pathao Courier has 2M+ verified shipping
addresses
Shopping categories & frequency Account age
Payment methods used
Mobile top-up
Online & offline payments Utility bill
Salary
Balance information
Credit Information Bureau: past borrowing & repayment history, blacklist status, credit outstanding
Note: Presently banks and financial institutions have access to CIB
Device-Based Intelligence
GPS / Location SMS Metadata Downloaded apps
Device details (phone brand, price) Mobile activity (metadata, top up info)
pathao
Pathao is poised to be the country’s largest Digital Financial Services platform
Pharma
Health
Tong
Car
Bike
Food
PathaoPay
8M+ 300K+ 60K+
Unique Users Drivers & Couriers Restaurants & Merchants
Pathao Financial Services
Top-Up BangD
Games
Shop
Parcel
pathao
Strong management team backed by seasoned
investors
Fahim Ahmed
CEO
Ex Goldman Sachs, SEAF; 20 years in tech, banking, & investing; 5 years at Pathao
Asif Khondaker
svp, Accounts
Ex Accenture, Union Capital; 13 years of experience; 5 years at Pathao
Abdullah Noman
Director, Core Product
Ex Kiekback Inc., BuzzAlly; 8 years of experience;
4+ years at Pathao
Ahmed Maruf
Director, Courier
Shifat Adnan
Co-Founder & CTO
EX Dugdugi, Hack House; 6 years at Pathao
Razwan Kader Mishu
VP, Engineering
Ex Widespace AB, AdPeople, AIUB; 15 years of experience;
4+ years at Pathao
Nazmul Haque
Director, Pay Product
Ex iPay, Reef Technology; 14 years of experience;
4 years at Pathao
A.T.M Hassan Uzzaman
Director, Pay Engineering
Ahmed Fahad
SVP, Product
Founding Team; 6 years at
Pathao
Abdullah Anwar
VP, Data Engineering
Ex Metigy, Widespace; 15 years experience; 4 years at Pathao
Nabila Mahabub
Director, Marketing
Ex Banglalink (Veon); 10 years experience; 5 years at Pathao
Rahat Ashekeen
Director, Digital Credit
Raisul Amin
svp, Corp Finance
EX IPE Capital, BD Venture; 10 years in finance; 5 years at Pathao
Ehsanul Karim
Director, Data Science
Ex Xero, Airtasker, Hireup; 10 years of experience;
4+ years at Pathao
Asif Shahnewaz
Director, Operations
Ex lubee, Truck Lagbe; 13 years of experience; 4 years at Pathao
H.A Sweety
Director, HR & Culture
Ex Rocket Internet SE; years of experience;
5 Years at Pathao
Ex HashKloud Pty. Ltd, The Jaxara It Ltd, bGlobal Sourcing LLC; 16 years of experience; 1 year at Pathao
Anchorless Bangladesh
Ex Stripe, Cabbage, JP Morgan; 6 years of experience,
Existing Investors
Openspace Ventures
Ex SSL Wireless;
9 years of experience; 4 years at Pathao
Battery Road Digital Holdings
gojek pathao
Contact Us:
pathao
We have grown our business with a positive Net Inflow Margin
GMV (USD mn)
Revenue | Net Inflow | EBITDA
2018
2019
2020 2021
2022
pathao
We have transformed our business to become EBITDA & Cash Flow Positive
GMV (USD mn)
Revenue | Net Inflow | EBITDA
pathao
Financial Forecasts
Pathao (Rides, Food & Courier)
Pathao Pay 0.5M 0.4M 0.7M 0.8M 2.1M
0.6M 5.5M
3.0M 9.0M
7.5M
Digital Credit 0.01M 0.05M 0.23M 0.45M
GMV
76.9M
58.2M
117.5M
188.IM
838.4M
3,060.5M
6,591.3M
Pathao (Rides, Food & Courier)
Pathao Pay
410.6M 719.0M 1,028.3M
Digital Credit 0.2M 621.0M
Revenue (Consolidated) 11.OM 7.1M 15.0M 22.9M 48.0M 94.6M 149.1M
Take Rate (Blended) 14.30% 12.20% 12.80% 12.20% 5.70% 3.10% 2.30%
Net Inflow (Consolidated) (0.9M) 1.8M 6.9M 10.5M 15.IM 27.IM 58.1M
Net Inflow (as % of GMV): -1.10% 3.10% 5.90% 5.60% 1.80% 0.90% 0.90%
EBITDA (Consolidated) (10.3M) (4.7M) (0.6M) (0.3M) (0.2M) (1.2M) 18.6M
EBITDA (as % of Revenue): -93.60% -65.70% -4.20% -1.40% -0.50% -1.20% 12.50%
Capex
0.OM
(0.1M)
(0.4M)
(0.7M)
(1.0M)
(1.5M)
(1.6M)
| 9f286c31c483fc0a7ed0adaeac23fbe2 | 111 | dianewan | 2023-05-25T00:19:34 | bad | [] |
I'm getting time="2023-01-08 20:51:59" level=error msg="[Plugin / Upscale Performer image] Traceback (most recent call last):"
time="2023-01-08 20:51:59" level=error msg="[Plugin / Upscale Performer image] File \"B:\\Stash\\plugins\\Performer_Image_Gigapixel\\upscale.py\", line 379, in <module>"
time="2023-01-08 20:51:59" level=error msg="[Plugin / Upscale Performer image] client.upscale_PerformerImage()"
time="2023-01-08 20:51:59" level=error msg="[Plugin / Upscale Performer image] TypeError: upscale_with.upscale_PerformerImage() missing 1 required positional argument: 'performer_id'"
Here is my code:
import config_manager
import configparser
import requests
import sys
import json
import pathlib
import os
import shutil
import psutil
from gigapixel import Gigapixel, Scale, Mode
from pathlib import Path, WindowsPath
from PIL import Image
# Gigapixel needs to save the images to the filesystem, for the GraphQL image update mutation the image needs to be served from a URL, for this to work you need to setup custom mapped folders in Stash (https://github.com/stashapp/stash/pull/620). Below you need to specify both the location to save the images and also the custom Stash mapping for that folder.
# Path to Stash custom Served Folder from root Stash folder, eg. if C:\Stash\static, just enter static
mapped_path = "B:\Stash\custom\static"
# Url to mapped path
mapped_url = "http://localhost:9999/custom"
# Topaz Gigapixel Settings
exe_path = Path('B:\Program Files\Topaz Labs LLC\Topaz Gigapixel AI\Topaz Gigapixel AI.exe')
# Output file suffix. (e.g. pic.jpg -> pic-gigapixel.jpg)
# You should set same value inside Gigapixel (File -> Preferences -> Default filename suffix).
output_suffix = '-gigapixel'
# Global Gigapixel variables
scale_setting = []
mode_setting = []
# Gigapixel tag names
tag_names = ['Upscale Standard:0.5x','Upscale Standard:2x','Upscale Standard:4x','Upscale Standard:6x','Upscale Lines:0.5x','Upscale Lines:2x','Upscale Lines:4x','Upscale Lines:6x','Upscale Art & CG:0.5x','Upscale Art & CG:2x','Upscale Art & CG:4x','Upscale Art & CG:6x','Upscale Low Resolution:0.5x','Upscale Low Resolution:2x','Upscale Low Resolution:4x','Upscale Low Resolution:6x','Upscale Very Compressed:0.5x','Upscale Very Compressed:2x','Upscale Very Compressed:4x','Upscale Very Compressed:6x']
configpath = os.path.join(pathlib.Path(__file__).parent.resolve(), 'config.ini')
def get_config_value(config_file, section, option):
config = configparser.ConfigParser()
config.read(config_file)
return config.get(section, option)
class upscale_with:
def __init__(self, url):
self.url = url
self.api_key = get_config_value(configpath, 'STASH', 'api_key')
stash_url = get_config_value(configpath, 'STASH', 'url')
if not stash_url:
self.error("You need to set the URL in 'config.ini'")
return None
self.stash_url = stash_url + "/graphql"
self.headers = {
"Accept-Encoding": "gzip, deflate, br",
"Content-Type": "application/json",
"Accept": "application/json",
"Connection": "keep-alive",
"DNT": "1",
"ApiKey": self.api_key
}
def log(self, level, message):
print(f"[{level.upper()}] {message}")
def __prefix(self,levelChar):
startLevelChar = b'\x01'
endLevelChar = b'\x02'
ret = startLevelChar + levelChar + endLevelChar
return ret.decode()
def __log(self,levelChar, s):
if levelChar == "":
return
print(self.__prefix(levelChar) + s + "\n", file=sys.stderr, flush=True)
def trace(self,s):
self.__log(b't', s)
def debug(self,s):
self.__log(b'd', s)
def info(self,s):
self.__log(b'i', s)
def warning(self,s):
self.__log(b'w', s)
def error(self,s):
self.__log(b'e', s)
def progress(self,p):
progress = min(max(0, p), 1)
self.__log(b'p', str(progress))
def __callGraphQL(self, query, variables=None):
json = {}
json['query'] = query
if variables != None:
json['variables'] = variables
# handle cookies
response = requests.post(self.url, json=json, headers=self.headers)
if response.status_code == 200:
result = response.json()
if result.get("error", None):
for error in result["error"]["errors"]:
raise Exception("GraphQL error: {}".format(error))
if result.get("data", None):
return result.get("data")
else:
raise Exception(
"GraphQL query failed:{} - {}. Query: {}. Variables: {}".format(response.status_code, response.content, query, variables))
def listTags(self):
query = """
query {
allTags {
id
name
}
}"""
result = self.__callGraphQL(query)
return result["allTags"]
def findTagIdWithName(self, tag_names):
query = """
query {
allTags {
id
name
}
}
"""
result = self.__callGraphQL(query)
for tag in result["allTags"]:
if tag["name"] in tag_names:
return tag["id"]
return None
def createTagWithName(self, name):
query = """
mutation tagCreate($input:TagCreateInput!) {
tagCreate(input: $input){
id
}
}
"""
variables = {'input': {
'name': name
}}
result = self.__callGraphQL(query, variables)
return result["tagCreate"]["id"]
def removeTagWithID(self, id):
query = """
mutation tagDestroy($input: TagDestroyInput!) {
tagDestroy(input: $input)
}
"""
variables = {'input': {
'id': id
}}
self.__callGraphQL(query, variables)
def findPerformersByTag(self, id):
query = """query performer_images($performer_filter: PerformerFilterType!) {
findPerformers(performer_filter: $performer_filter filter: {per_page: -1}){
performers{
id
name
image_path
tags{
name
}
}
}
}"""
variables = {'performer_filter': {
'tags': {
'value': id, 'modifier': 'INCLUDES', 'depth':1
}
}}
result = self.__callGraphQL(query, variables)
performers = result["findPerformers"]["performers"]
image_paths_tags = [(performer["image_path"], performer["id"]) for performer in performers]
return image_paths_tags
def processPerformerImage(self, image_path, performer_id, scale, mode):
# code to process image using the gigapixel library
# using the scale and mode values passed as arguments
scale = Scale.__getattr__(scale_setting)
mode = Mode.__getattr__(mode_setting)
# Check if Gigapixel is open, if yes close items
for process in psutil.process_iter():
try:
if process.exe() == exe_path:
# The process was found
process_id = process.pid
break
except psutil.AccessDenied:
# The process cannot be accessed, so skip it
pass
# Create Gigapixel instance.
app = Gigapixel(exe_path, output_suffix)
# Set current directory
current_dir = Path().resolve()
# Variable for saved image
image_name = "image_to_process.jpg"
image_saved = current_dir / image_name
# Download image data from the image URL
r = requests.get(image_path, stream = True)
r.raw.decode_content = True
with open(image_saved, 'wb') as f:
shutil.copyfileobj(r.raw, f)
# Process the image
output_path = app.process(image_saved, scale=scale, mode=mode)
# Delete the original image
os.remove(image_saved)
# move the processed image to custom mapped folder
image_moved = shutil.move(output_path, mapped_path)
image_upload_url = mapped_url + image_name
# Update performer with new image
query = """
mutation performerUpdate($performer_update_input: PerformerUpdateInput!){
performerUpdate(input: $performer_update_input){
id
}
}
"""
variables = {"performer_update_input": {"image": image_upload_url, "id": performer_id}}
result = self.__callGraphQL(query, variables)
# Print the result of the GraphQL mutation
self.error("result: "+str(result))
# Delete the processed image
os.remove(image_moved)
def setup_tags(self):
for tag_name in tag_names:
tag_id = self.findTagIdWithName(tag_name)
if tag_id == None:
tag_id = self.createTagWithName(tag_name)
self.debug("Adding tag: "+tag_name)
else:
self.error("Tag already exists: "+tag_name)
def remove_tags(self):
for tag_name in tag_names:
tag_id = self.findTagIdWithName(tag_name)
if tag_id == None:
self.error("Error: "+tag_name + "doesn't exist")
else:
self.debug("Removing tag: , "+tag_name)
tag_id = self.removeTagWithID(tag_id)
def upscale_PerformerImage(self, performer_id):
global scale_setting
global mode_setting
# Find performers by tag
performers = self.findPerformersByTag(performer_id)
for performer in performers:
tags = performer['tags']
# Check if performer is only tagged with one value
if len(tags) == 1:
tag = tags[0]
# Check if tag is in tag_names
if tag in tag_names:
# Put tag value into a variable
tag_value = tag
# Set variables for scale
if '0.5x' in tag_value:
scale_setting = 'Scale.X05'
elif "2x" in tag_value:
scale_setting = 'Scale.X2'
elif "4x" in tag_value:
scale_setting = 'Scale.X4'
elif "6x" in tag_value:
scale_setting = 'Scale.X6'
else:
print('No scale in tag name')
# Set variables for mode
if 'Standard' in tag_value:
mode_setting = 'Mode.STANDARD'
elif "Lines" in tag_value:
mode_setting = 'Mode.Lines'
elif "Art & CG" in tag_value:
mode_setting = 'Mode.ART_AND_CG'
elif "Low Resolution" in tag_value:
mode_setting = 'Mode.LOW_RESOLUTION'
elif "Very Compressed" in tag_value:
mode_setting = 'Mode.VERY_COMPRESSED'
else:
print('No mode in tag name')
else:
self.error(f"Tag '{tag}' is not a valid tag name")
continue
else:
self.error(f"Performer with ID '{performer_id}' is tagged with more than one value")
continue
tag_id = self.findTagIdWithName(tag_name)
if tag_id is None:
self.error("Tag missing:"+tag_name)
else:
self.info("Tag exists: "+tag_name)
image_paths_tags = self.findPerformersByTag(tag_id)
for image_path, performer_id in image_paths_tags:
scale = Scale.__getattr__(scale_setting)
mode = Mode.__getattr__(mode_setting)
self.processPerformerImage(image_path, performer_id, scale_setting, mode_setting)
# Press the green button in the gutter to run the script.
if __name__ == '__main__':
if len(sys.argv) > 1:
url = "http://localhost:9999/graphql"
if len(sys.argv) > 2:
url = sys.argv[2]
if sys.argv[1] == "setup":
client = upscale_all(url)
client.setup_tags()
elif sys.argv[1] =="upscale_all":
client = upscale_all(url)
client.upscale_PerformerImage()
elif sys.argv[1] =="remove_tags":
client = upscale_all(url)
client.remove_tags()
elif sys.argv[1]== "api":
fragment = json.loads(sys.stdin.read())
scheme=fragment["server_connection"]["Scheme"]
port=fragment["server_connection"]["Port"]
domain="localhost"
if "Domain" in fragment["server_connection"]:
domain = fragment["server_connection"]["Domain"]
if not domain:
domain='localhost'
url = scheme + "://" + domain + ":" +str(port) + "/graphql"
client=upscale_with(url)
mode=fragment["args"]["mode"]
client.debug("Mode: "+mode)
if mode == "setup":
client.setup_tags()
elif mode == "upscale_all":
client.upscale_PerformerImage()
elif mode == "remove_tags":
client.remove_tags()
else:
print("")
| 85f5ffa986d162960b2651560d442e35 | 112 | dianewan | 2023-05-25T00:19:54 | bad | [] |
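A hedged reading of the traceback in this row: `upscale_PerformerImage` is declared with a required `performer_id` parameter, but the `upscale_all` and `api` branches call `client.upscale_PerformerImage()` with no argument. Since the method body already loops over `tag_names` and looks performers up per tag, one minimal fix is to drop the parameter; a sketch:

```python
# Sketch of the signature change; the loop is condensed from the original body.
def upscale_PerformerImage(self):
    global scale_setting
    global mode_setting
    for tag_name in tag_names:
        tag_id = self.findTagIdWithName(tag_name)
        if tag_id is None:
            self.error("Tag missing: " + tag_name)
            continue
        for image_path, performer_id in self.findPerformersByTag(tag_id):
            self.processPerformerImage(image_path, performer_id,
                                       scale_setting, mode_setting)
```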
Rewrite my BotService class so it schedules jobs and runs generateScreenshot; show only the relevant parts of my code.
This is an example JSON array of job objects.
[{caption: 'Buen día ${hoy} \n _Lugares disponibles:_ bit.ly/3civct4', report: 'capacidades', chatId: ['[email protected]','[email protected]']}, {caption: 'Buenos días, les comparto su última asistencia registrada.', report: 'asistencia', chatId: ['[email protected]','[email protected]','[email protected]']}, {caption: '!eventos', report: 'eventos', chatId: ['[email protected]','[email protected]','[email protected]','[email protected]','[email protected]']}, {caption: '!acuerdos', report: 'acuerdos-cover', chatId: ['[email protected]']}, {caption: 'Buenos días, les comparto estadísticos Covid a nivel Nacional;\n Confirmados:', report: 'covid/A', chatId: ['[email protected]']}, {caption: 'Defunciones en los últimos 12 días ', report: 'covid/B', chatId: ['[email protected]']}]
## App\Services\BotService\index.ts
import {
LocalAuth,
Client,
WAState,
ClientInfo,
MessageMedia
}
from "whatsapp-web.js";
import fetch from "node-fetch";
import fs from "fs";
import util from 'util';
import qrcode from "qrcode-terminal";
import actions from "App/Services/ActionsService"
import {
scheduleJob
} from 'node-schedule';
import {
Cluster
} from 'puppeteer-cluster';
import xml2js from 'xml2js';
var parser = new xml2js.Parser();
import * as mysql from "mysql";
import moment from 'moment'
import {
createCanvas,
loadImage
} from 'canvas';
import tmp from 'tmp';
const readFile = util.promisify(fs.readFile);
const writeFile = util.promisify(fs.writeFile);
export default class BotService {
public client: Client;
public qr: string | null = null;
public state: WAState = WAState.UNPAIRED;
public ms = 0;
public lastTitle = "";
public parser: xml2js.Parser;
public initialized(ms) {
this.client.sendMessage(
"[email protected]",
"_🐺 🤖 Cliente listo después de " + ms + " milisegundos!_ "
);
}
public async generateScreenshot(jobs) {
// Create a new puppeteer cluster with 2 workers.
const cluster = await Cluster.launch({
concurrency: Cluster.CONCURRENCY_CONTEXT,
maxConcurrency: 2,
});
// Define a task that each worker will run to generate a screenshot.
cluster.task(async ({
page,
data: job
}) => {
// Set the viewport and user agent (just in case).
await page.setViewport({
width: 1280,
height: 800
});
await page.setUserAgent('Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/85.0.4183.102 Safari/537.36');
const url = 'https://reportes.casitaapps.com/' + job.report;
// Navigate to the URL specified in the job object and wait for it to load.
await page.goto(url);
// Wait for the page to indicate that it is ready.
const watchDog = page.waitForFunction('window.status === "ready"');
await watchDog;
// Take a screenshot of the element with the specified selector and return it as a buffer.
const element = await page.$('#frame');
const screenshot = await element.screenshot();
return screenshot;
});
// Generate a screenshot for each job in the array.
const promises = jobs.map((job) => cluster.execute(job));
const screenshotData = await Promise.all(promises);
// Close the cluster when all jobs are finished.
await cluster.close();
return screenshotData;
}
public async delay(): Promise < void > {
const min = Math.ceil(3000);
const max = Math.floor(10000);
const ms = Math.floor(Math.random() * (max - min)) + min;
await new Promise((resolve) => setTimeout(resolve, ms));
}
constructor() {
const authStrategy: LocalAuth = new LocalAuth({
clientId: "client-one",
dataPath: "C:/Server/htdocs/Adonis/whatsapp/.wwebjs_auth/",
});
this.actions = new actions();
this.client = new Client({
puppeteer: {
headless: false,
executablePath: 'C:/Program Files (x86)/Google/Chrome/Application/chrome.exe'
},
takeoverOnConflict: true,
takeoverTimeoutMs: 10,
authStrategy,
});
this._listenForReady();
this._listenIncoming();
this._listenForQRCode();
this._listenForChangeState();
this._initializeClient()
.then(async () => {
const version = await this.client.getWWebVersion();
console.log(`WHATSAPP WEB VERSION: v${version}`);
this.state = WAState.CONNECTED;
const {
default: EvaService
} = await import('@ioc:App/Services/EvaService')
this.actions.eva = EvaService;
})
.catch((err) => {
console.log(err);
});
}
protected async _initializeClient(): Promise < void > {
return this.client.initialize();
}
protected _listenForQRCode(): void {
this.client.on("qr", (qr: string) => {
console.log("QR RECEIVED", qr);
this.qr = qr;
qrcode.generate(qr, {
small: true
});
});
}
protected _listenForReady(): void {
this.client.on("ready", () => {
console.log("Client is ready!", "Checking SSN...");
this._checkSSN();
// this._clearMysql();
this._checkScheduled();
this.state = WAState.CONNECTED;
});
}
protected _listenIncoming(): void {
this.client.on("message", async (message) => {
if (message.id.fromMe) return false;
try {
this.actions.response(message, this.client);
} catch (err) {
console.log('An error occurred in the responses logic', err);
}
});
this.client.on("message_create", async (message) => {
if (!message.id.fromMe) return false;
try {
this.actions.response(message, this.client);
} catch (err) {
console.log('An error occurred in the responses logic', err);
}
});
}
protected _listenForChangeState(): void {
this.client.on("change_state", (state: WAState) => {
console.log("State changed: ");
this.state = state;
});
}
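// Kills MySQL connections that have been idle (Sleep) for more than 900 seconds.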
protected _clearMysql(): void {
// setInterval(async () => {
const db = mysql.createConnection({
host: "localhost",
user: "root",
password: "Nicole10*",
database: 'control_coordinaciones'
});
var data = [];
const promise = new Promise((resolve, reject) => {
db.query("SELECT CONCAT('KILL ',id,';') as query FROM information_schema.processlist WHERE command='Sleep' AND time > 900;")
.on("error", (err) => {
console.error(err);
reject(err);
})
.on("result", (row) => {
db.pause();
data.push(row.query);
db.resume();
})
.on("end", () => {
resolve(data);
});
});
promise.then((data) => {
//now execute each result as a query and close mysql connection after all queries have been executed.
const promise2 = new Promise((resolve, reject) => {
const killQueries = data.map((killQuery) => {
return new Promise((resolve, reject) => {
db.query(killQuery, (err, result) => {
if (err) {
reject(err);
} else {
resolve(result);
}
});
});
});
Promise.all(killQueries).then(() => {
db.end();
resolve();
}).catch((err) => {
db.end();
reject(err);
});
});
promise2.then(() => {
// console.log("All queries executed successfully");
}).catch((err) => {
console.error(err);
});
}).catch((err) => {
console.error(err);
});
// }, 60000);
}
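// Every 50 s: deliver scheduled messages that came due within the last minute, then flag them as sent.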
protected _checkScheduled(): void {
setInterval(async () => {
var db = mysql.createConnection({
host: "localhost",
user: "root",
password: "Nicole10*",
database: 'casitaiedis',
charset: 'utf8mb4'
});
var data = [];
const promise = new Promise((resolve, reject) => {
db.query("SELECT id, message, scheduled_at, target_id FROM scheduled_messages WHERE scheduled_at <= NOW() AND scheduled_at >= DATE_SUB(NOW(), INTERVAL 1 MINUTE) AND sent = 0")
.on("error", (err) => {
console.error(err);
reject(err);
})
.on("result", (row) => {
db.pause();
data.push(row);
db.resume();
})
.on("end", () => {
resolve(data);
});
});
promise.then((data) => {
if (data.length > 0) {
for (var i = 0; i < data.length; i++) {
this.client.sendMessage(
data[i]['target_id'],
data[i].message
);
db.query(`UPDATE scheduled_messages SET sent = '1' WHERE id = '${data[i].id}'`).on("error", (err) => {
console.error(err);
})
db.end();
this._postal();
// this._clearMysql();
}
} else {
db.end();
this._postal();
// this._clearMysql();
console.log(moment().format('HH:mm:ss'), WAState.CONNECTED);
}
});
}, 50000);
}
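// Polls the SSN (UNAM's national seismological service) RSS feed every 60 s and relays earthquake alerts by magnitude.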
protected _checkSSN(): void {
setInterval(async () => {
var body = await fetch('http://www.ssn.unam.mx/rss/ultimos-sismos.xml').then(res => res.text());
var xml = await parser.parseStringPromise(body);
var items = xml.rss.channel[0].item;
var title = items[0].title[0].trim();
// var scale = title.split(",")[0]; make a function for each magnitude level, one additional 🚨 emoji
// var tipo = parseFloat(scale) > 5 ? "sismo" : "replica";
if (items.length == 1) { //Everyone else
console.log('ssn', this.lastTitle);
var mgntd = parseFloat(title.split(",")[0].replace("Preliminar: M ", ""));
if (title !== this.lastTitle && this.lastTitle) { // only mag 6> can be sent to 7 fa and coords
if (mgntd > 6) {
this.client.sendMessage( //replace with sendMultiple after testing
"[email protected]", //Change
`🚨🚨🚨 Alerta *sismo* Tiempo Real\n\n*${title}*`
);
this.client.sendMessage( //replace with sendMultiple after testing
"[email protected]", //Change
`🚨🚨🚨 Alerta *sismo* Tiempo Real\n\n*${title}*`
);
} else {
this.client.sendMessage(
"[email protected]", //Just you
`🚨 Alerta sismo *menor* Tiempo Real\n\n*${title}*`
);
}
}
} else if (this.lastTitle) {
if (title !== this.lastTitle) {
this.client.sendMessage(
"[email protected]", //Just you
`🚨 Alerta sismo *menor* Tiempo Real\n\n*${title}*`
);
}
}
this.lastTitle = title;
}, 60000);
}
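// Pulls up to 10 unsent greeting-card rows, draws each recipient's text onto a random template image, and sends it.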
protected async _postal(): Promise<void> {
var db = mysql.createConnection({
host: "localhost",
user: "root",
password: "Nicole10*",
database: 'casitaiedis',
charset: 'utf8mb4'
});
var data = [];
const promise = new Promise((resolve, reject) => {
db.query("select * from ((SELECT id, de as 'text', concat('521',para,'@c.us') as 'to' FROM xmas WHERE sent = '0' and error = '0' LIMIT 10) A INNER JOIN (SELECT x, y, path as imagePath, color FROM xmas_postales ORDER BY RAND() LIMIT 1) B) where id is not null")
.on("error", (err) => {
console.error(err);
reject(err);
})
.on("result", (row) => {
db.pause();
data.push(row);
db.resume();
})
.on("end", () => {
resolve(data);
});
});
data = await promise;
console.log('Batch of ' + data.length);
const promises = [];
for (var i = 0; i < data.length; i++) {
promises.push(new Promise(async (resolve, reject) => {
var imagePath = data[i].imagePath;
var x = data[i].x;
var y = data[i].y;
var color = data[i].color;
var text = data[i].text;
var to = data[i].to;
var id = data[i].id;
if (!id) {
resolve();
return;
}
console.log(`Sending to ${to} from ${text} at ${x} ${y}`);
await this.delay();
try {
// var buffer = await fetch(`https://bot.casitaapps.com/postal?x=${x}&y=${y}&imagePath=${imagePath}&text=${text}`).then(res => res.buffer());
// var media = new MessageMedia('image/png', buffer);
// await this.client.sendMessage(to, media);
var canvas = createCanvas(1024, 576);
var ctx = canvas.getContext('2d');
var image = await loadImage(imagePath);
ctx.drawImage(image, 0, 0, 1024, 576);
ctx.fillStyle = color;
ctx.font = 'bold 30pt Menlo';
ctx.fillText(text, x, y);
const buffer = canvas.toBuffer('image/png');
const tmpobj = tmp.fileSync();
await writeFile(tmpobj.name, buffer);
const media = await MessageMedia.fromFilePath(tmpobj.name);
media.mimetype = 'image/png'
await this.client.sendMessage(to, media);
console.log(`Sent to ${to}`);
db.query(`UPDATE xmas SET sent = '1' WHERE id = '${id}'`).on("error", (err) => {
console.error(err);
})
resolve();
} catch (err) {
console.log('Errorzote', err);
db.query(`UPDATE xmas SET error = '1' WHERE id = '${id}'`).on("error", (err) => {
console.error(err);
})
resolve();
}
}));
}
await Promise.all(promises);
db.end();
this._clearMysql();
}
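// Sends the same message to several chats, pausing a random delay() between each send.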
protected async _sendMultiple(chatIds: string[], message: string, args?: any): Promise<void> {
for (var i = 0; i < chatIds.length; i++) {
await this.delay().then((e) => {
this.client.sendMessage(
chatIds[i],
message,
args
);
});
}
}
} | 0b888342c95111a5d6e4f01c9a1a2ae9 | 113 | dianewan | 2023-05-25T00:20:20 | bad | [] |
I encrypted the passphrase into passphrase.enc ($PASSPHRASE_FILE) and export.enc ($EXPORT_FILE); each file decrypts correctly with the following commands:
```openssl enc -d -aes-256-cbc -in passphrase.enc```
```openssl enc -d -aes-256-cbc -in export.enc```
I then created a p.12 certificate using an encrypted passphrase from those files above:
```openssl pkcs12 -export -in "$SIGNED_CERT_CRT" -inkey "$SIGNED_CERT_KEY" -passin file:"$PASSPHRASE_FILE" -passout file:"$EXPORT_FILE" -out "$SIGNED_CERT_P12"```
But on macOS, when I add the .p12 certificate to the keychain it prompts me for a password; I enter the export.enc password from that file and it doesn't work.
I also wonder how the command above could create the certificate from that encrypted file without asking me for the password that protects the passphrase. | 263c40f8d6758e91b68913f97464160c | 114 | dianewan | 2023-05-25T00:20:52 | bad | []
Read the text and tell me, in one word, which company it is related to.
"* Canada
Travel Canada Vancouver Toronto + Air Canada Sharing Tips
Mio, 2022. 12. 20. 20:42
There was a place I really wanted to go this year.
That's where I used to live in #Canada :)
I'm sure some of you know,
I lived in Vancouver and worked at Starbucks.
It feels like my second home.
And recently, a close friend of mine said,
#I'm going to Vancouver on Air Canada.
I'm envious and want to go. T_T
#When I go on a trip to Canada,
It's convenient to go by direct flight.
I get really tired when I transfer;
Right now, Air Canada is...
It connects Korea and Canada directly.
7 flights per week from Incheon to Vancouver
Incheon to Toronto 5 times a week
The flight schedule changes from time to time.
As of mid-January 2023,
< Vancouver >
ICN (18:50)-YVR (11:30): 9 hours and 40 minutes
YVR (12:15)-ICN (17:05 +1): 11 hours and 50 minutes
< Toronto >
ICN (15:15)-YYZ (14:20): 13 hours and 5 minutes
YYZ (23:05)-ICN (3:50 +2): 14 hours and 45 minutes
It was roughly this kind of flight schedule.
Air Canada is located at Incheon International Airport.
Check-in and boarding are available at Passenger Terminal 1.
Your final destination is not Vancouver or Toronto.
You can use the automatic baggage transfer service.
In other words, after leaving Incheon and passing through Vancouver,
So if you're finally going to Calgary,
I didn't pick up my luggage in Vancouver, a stopover.
You can find it right in Calgary!
By the way, if your final destination is the United States
and you travel there via Canada,
immigration is easier than on direct flights between Korea and the United States,
because the customs inspection is carried out in Canada.
You know that immigration in the U.S. is very difficult.
If you use Air Canada's signature class,
You can use Maple Leaf Lounge before boarding.
Lounge located at 16 airports in Canada, the United States, and Europe.
You can enjoy snacks and drinks for free before boarding the plane.
Shower rooms and business centers are also available.
Before the long flight, in the lounge,
I can't believe I can rest like this.
Don't you think it'll be so nice? Hah!
The seats in Air Canada are
Economy, Premium Economy,
And with signature class (business class),
I can share it with you :)
Down here is the economy class.
The leg room was spacious and the flight attendants were kind.
I've also had the experience of riding happily.
Snacks and drinks in the middle, too!
Air Canada is famous for its good in-flight meals.
Famous chefs and wine experts curated it.
They say the quality is very high quality. >_<
If you want to make long-haul flights more comfortable,
I recommend premium economy rather than economy.
Even if it's not business, the fatigue level is much lower!
If you can afford the travel expenses,
It's definitely a signature class.
It's a pretty happy thing to fly lying down.
Long-haul flights never tire you. Hah!
Especially when using signature classes.
Concierge services, quick check-in, boarding privileges, etc.
The fact that you can enjoy a variety of benefits!!
Air Canada is a member of Star Alliance.
You can earn mileage and redeem it with Air Canada or any other Star Alliance airline.
Star Alliance is the world's largest,
It's an aviation alliance, so it has a lot of advantages. >_<
I also have a lot of mileage accumulated here!
Canada is now a winter country.
There's a pretty Christmas market.
It's a perfect season for skiing or snowboarding.
My friend who's gone right now went skiing, too.~
Below is Whistler near Vancouver.
A lot of people go skiing or snowboarding here.
It's also good to go to Yellowknife to see the aurora.
I saw aurora and rode a dog sled.
I had a fun and unique time. :D
Aurora is so mysterious.
I didn't know how to take pictures.
Most of them are shaky cuts, but...
If I go back, I want to take a pretty picture.
We can't leave out Toronto when we travel to Canada, right?
Vancouver in the west, Toronto in the east!!!
One of the world's top three waterfalls,
Niagara Falls is really close.~
If you go there in person, you'll be overwhelmed.
Vancouver and Toronto are great cities.
If you go out to the suburbs, you can meet Mother Nature like this.
Below is a travel destination near Toronto called Tobermory.
It's a place with clear water and good air, so it's worth visiting!
It's good to go to Canada for a trip,
Like me, I hope you gain a year of experience on a working holiday.
Or it's a good place to study abroad and settle down.
I've lived in Japan, Germany, Australia, and Canada.
If you ask me where is the best place to live,
It's a landslide victory in Canada. ♥
If I have a chance to travel to the Americas,
Check your ticket at Air Canada's official website below!
Link to Air Canada official site
▼
Air Canada
www.aircanada.com
We receive a small fee from AIR CANADA.
Tags
#Canada, Canada
#AirCanada
#Travel to Canada
#Traveling in America
#Traveling abroad
#Recommendation for overseas travel
#Air Canada in-flight meals
#In-flight meal review
#Airline Recommendation
#Studying in Canada
#Canada flight ticket
#WarhallCanada
#WorkingHoliday
#Studying in the United States
#Warhall, United States"
| f1b00005fc4b826ccd5334efc8d52432 | 115 | dianewan | 2023-05-25T00:21:31 | bad | [] |
c++ opengl I am working on a small game engine. I have a problem regarding rendering 3D models. Models can consist of multiple meshes and meshes consist of multiple vertices. I want to be able to load in a model from a file using Assimp and store the model's data for each mesh and each vertex, so that we only have to load it once. Later I want to draw the model using a class called Renderer3D which will store the various buffers related to the model's meshes. Below is the code I have for loading a model; what is missing from it to store the relevant data for later binding and drawing?
#include "solpch.h"
#include "OpenGL_Model.h"
#include "GalaxyDraw/Platform/OpenGL/GLMacros.h"
#include <assimp/Importer.hpp>
#include <assimp/scene.h>
#include <assimp/postprocess.h>
#include <vector>
namespace GalaxyDraw
{
struct Vertex {
glm::vec3 Position;
glm::vec3 Normal;
glm::vec2 TexCoords;
};
struct Mesh {
public:
std::vector<Vertex> Vertices;
std::vector<unsigned int> Indices;
std::vector<std::shared_ptr<Texture>> Textures;//TODO replace with material struct
Mesh(std::vector<Vertex> vertices, std::vector<unsigned int> indices, std::vector<std::shared_ptr<Texture>> textures) :
Vertices(vertices), Indices(indices), Textures(textures)
{}
};
OpenGL_Model::OpenGL_Model(const std::string& modelpath)
{
LoadModel(modelpath);
}
void OpenGL_Model::SetData(const std::string& path)
{
LoadModel(path);
}
bool OpenGL_Model::operator==(const Model& other) const
{
return m_RendererID == ((OpenGL_Model&)other).m_RendererID;
}
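// Imports the file with Assimp and recursively converts every mesh in the scene graph.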
void OpenGL_Model::LoadModel(const std::string& modelpath)
{
Assimp::Importer import;
const aiScene* scene = import.ReadFile(modelpath, aiProcess_Triangulate | aiProcess_FlipUVs | aiProcess_GenNormals | aiProcess_JoinIdenticalVertices);
if (!scene || scene->mFlags & AI_SCENE_FLAGS_INCOMPLETE || !scene->mRootNode)
{
std::cout << "ERROR::ASSIMP::" << import.GetErrorString() << std::endl;
return;
}
m_ModelDirectory = modelpath.substr(0, modelpath.find_last_of('/'));
ProcessNode(scene->mRootNode, scene);
}
void OpenGL_Model::ProcessNode(aiNode* node, const aiScene* scene)
{
// process all the node's meshes (if any)
for (unsigned int i = 0; i < node->mNumMeshes; i++)
{
aiMesh* mesh = scene->mMeshes[node->mMeshes[i]];
m_Meshes.push_back(ProcessMesh(mesh, scene));
}
// then do the same for each of its children
for (unsigned int i = 0; i < node->mNumChildren; i++)
{
ProcessNode(node->mChildren[i], scene);
}
}
Mesh OpenGL_Model::ProcessMesh(aiMesh* mesh, const aiScene* scene)
{
std::vector<Vertex> vertices;
std::vector<unsigned int> indices;
std::vector<std::shared_ptr<Texture>> textures;
for (unsigned int i = 0; i < mesh->mNumVertices; i++)
{
Vertex vertex;
glm::vec3 vector;
vector.x = mesh->mVertices[i].x;
vector.y = mesh->mVertices[i].y;
vector.z = mesh->mVertices[i].z;
vertex.Position = vector;
vector.x = mesh->mNormals[i].x;
vector.y = mesh->mNormals[i].y;
vector.z = mesh->mNormals[i].z;
vertex.Normal = vector;
// does the mesh contain texture coordinates?
if (mesh->mTextureCoords[0])
{
glm::vec2 vec;
vec.x = mesh->mTextureCoords[0][i].x;
vec.y = mesh->mTextureCoords[0][i].y;
vertex.TexCoords = vec;
}
else
{
vertex.TexCoords = glm::vec2(0.0f, 0.0f);
}
vertices.push_back(vertex);
}
// LMAO, I wish C# procedural shit went like this
for (unsigned int i = 0; i < mesh->mNumFaces; i++)
{
aiFace face = mesh->mFaces[i];
for (unsigned int j = 0; j < face.mNumIndices; j++)
{
indices.push_back(face.mIndices[j]);
}
}
// Checks if any material exists at all
//if (mesh->mMaterialIndex >= 0)
//{
// // gets the material cached within that specific index
// aiMaterial* material = scene->mMaterials[mesh->mMaterialIndex];
// std::vector<std::shared_ptr<Texture>> diffuseMaps = loadMaterialTextures(material, aiTextureType_DIFFUSE, "texture_diffuse");
// textures.insert(textures.end(), diffuseMaps.begin(), diffuseMaps.end());
// std::vector<std::shared_ptr<Texture>> specularMaps = loadMaterialTextures(material, aiTextureType_SPECULAR, "texture_specular");
// textures.insert(textures.end(), specularMaps.begin(), specularMaps.end());
//}
return Mesh(vertices, indices, textures);
}
}
| a163631808b5ae980554a69c87c811b6 | 116 | dianewan | 2023-05-25T00:21:50 | good | [
"code"
] |
What does this code do?
const LevelPickerApp = () => {
const { iTwinId, iModelId, authClient } = useViewerContext();
const uiProviders = [new LevelSelectorWidgetProvider()];
/** Setup view state */
const viewportOptions: ViewerViewportControlOptions = {
viewState: async (iModelConnection) => {
const viewState = await ViewSetup.getDefaultView(iModelConnection);
return viewState;
},
};
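// Once the first viewport opens, initialize the iTwin link and pre-fetch the model's levels.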
const onIModelAppInit = async () => {
IModelApp.viewManager.onViewOpen.addOnce(ITwinLink.initializeLink);
IModelApp.viewManager.onViewOpen.addOnce(async () => {
await ITwinLink.getLevels();
})
}
return authClient &&
<Viewer
iTwinId={iTwinId}
iModelId={iModelId}
authClient={authClient}
viewportOptions={viewportOptions}
defaultUiConfig={default3DSandboxUi}
mapLayerOptions={mapLayerOptions}
onIModelConnected={onIModelAppInit}
enablePerformanceMonitors={false}
uiProviders={uiProviders}
theme="dark"
/>;
};
| aadfe91ed5c679b06a208f9eec2ae573 | 117 | dianewan | 2023-05-25T00:22:22 | good | [
"code"
] |
import { Injectable } from '@nestjs/common';
import type { AssistantAPI } from 'Assistant';
import 'isomorphic-fetch';
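// Wrapping import() in a Function keeps the TypeScript/CommonJS build from compiling it down to require(), which an ESM-only package would reject.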
const importDynamic = new Function('modulePath', 'return import(modulePath)');
@Injectable()
export class ChatgptUnofficialProxyApiService {
async sendMessage(prompt: string): Promise<{ text: string }> {
const { AssistantAPIBrowser } = await importDynamic('Assistant');
const api = new AssistantAPIBrowser({
email: '[email protected]',
password: 'Qwer123@',
});
await api.initSession();
const result = await api.sendMessage(prompt);
return { text: JSON.stringify(result) };
}
}
Help me optimize the code above; requirement: only initialize once. | dc434d9e71ab2be942c6840910c1a1e3 | 118 | dianewan | 2023-05-25T00:22:26 | bad | []
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;
import "@openzeppelin/contracts/token/ERC721/extensions/ERC721Enumerable.sol";
import "@openzeppelin/contracts/utils/Counters.sol";
import "./utils/MetaContext.sol";
import "./interfaces/IPosition.sol";
contract Position is ERC721Enumerable, MetaContext, IPosition {
function ownerOf(uint _id) public view override(ERC721, IERC721, IPosition) returns (address) {
return ERC721.ownerOf(_id);
}
using Counters for Counters.Counter;
uint constant public DIVISION_CONSTANT = 1e10; // 100%
mapping(uint => mapping(address => uint)) public vaultFundingPercent;
mapping(address => bool) private _isMinter; // Trading contract should be minter
mapping(uint256 => Trade) private _trades; // NFT id to Trade
uint256[] private _openPositions;
mapping(uint256 => uint256) private _openPositionsIndexes;
mapping(uint256 => uint256[]) private _assetOpenPositions;
mapping(uint256 => mapping(uint256 => uint256)) private _assetOpenPositionsIndexes;
mapping(uint256 => uint256[]) private _limitOrders; // List of limit order nft ids per asset
mapping(uint256 => mapping(uint256 => uint256)) private _limitOrderIndexes; // Keeps track of asset -> id -> array index
// Funding
mapping(uint256 => mapping(address => int256)) public fundingDeltaPerSec;
mapping(uint256 => mapping(address => mapping(bool => int256))) private accInterestPerOi;
mapping(uint256 => mapping(address => uint256)) private lastUpdate;
mapping(uint256 => int256) private initId;
mapping(uint256 => mapping(address => uint256)) private longOi;
mapping(uint256 => mapping(address => uint256)) private shortOi;
function isMinter(address _address) public view returns (bool) { return _isMinter[_address]; }
function trades(uint _id) public view returns (Trade memory) {
Trade memory _trade = _trades[_id];
_trade.trader = ownerOf(_id);
if (_trade.orderType > 0) return _trade;
int256 _pendingFunding;
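// Funding accrues linearly since the last update; the heavier side pays and the stablevault keeps its configured share of what the lighter side receives.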
if (_trade.direction && longOi[_trade.asset][_trade.tigAsset] > 0) {
_pendingFunding = (int256(block.timestamp-lastUpdate[_trade.asset][_trade.tigAsset])*fundingDeltaPerSec[_trade.asset][_trade.tigAsset])*1e18/int256(longOi[_trade.asset][_trade.tigAsset]);
if (longOi[_trade.asset][_trade.tigAsset] > shortOi[_trade.asset][_trade.tigAsset]) {
_pendingFunding = -_pendingFunding;
} else {
_pendingFunding = _pendingFunding*int256(1e10-vaultFundingPercent[_trade.asset][_trade.tigAsset])/1e10;
}
} else if (shortOi[_trade.asset][_trade.tigAsset] > 0) {
_pendingFunding = (int256(block.timestamp-lastUpdate[_trade.asset][_trade.tigAsset])*fundingDeltaPerSec[_trade.asset][_trade.tigAsset])*1e18/int256(shortOi[_trade.asset][_trade.tigAsset]);
if (shortOi[_trade.asset][_trade.tigAsset] > longOi[_trade.asset][_trade.tigAsset]) {
_pendingFunding = -_pendingFunding;
} else {
_pendingFunding = _pendingFunding*int256(1e10-vaultFundingPercent[_trade.asset][_trade.tigAsset])/1e10;
}
}
_trade.accInterest += (int256(_trade.margin*_trade.leverage/1e18)*(accInterestPerOi[_trade.asset][_trade.tigAsset][_trade.direction]+_pendingFunding)/1e18)-initId[_id];
return _trade;
}
function openPositions() public view returns (uint256[] memory) { return _openPositions; }
function openPositionsIndexes(uint _id) public view returns (uint256) { return _openPositionsIndexes[_id]; }
function assetOpenPositions(uint _asset) public view returns (uint256[] memory) { return _assetOpenPositions[_asset]; }
function assetOpenPositionsIndexes(uint _asset, uint _id) public view returns (uint256) { return _assetOpenPositionsIndexes[_asset][_id]; }
function limitOrders(uint _asset) public view returns (uint256[] memory) { return _limitOrders[_asset]; }
function limitOrderIndexes(uint _asset, uint _id) public view returns (uint256) { return _limitOrderIndexes[_asset][_id]; }
Counters.Counter private _tokenIds;
string public baseURI;
constructor(string memory _setBaseURI, string memory _name, string memory _symbol) ERC721(_name, _symbol) {
baseURI = _setBaseURI;
_tokenIds.increment();
}
function _baseURI() internal override view returns (string memory) {
return baseURI;
}
function setBaseURI(string memory _newBaseURI) external onlyOwner {
baseURI = _newBaseURI;
}
/**
* @notice Update funding rate after open interest change
* @dev only callable by minter
* @param _asset pair id
* @param _tigAsset tigAsset token address
* @param _longOi long open interest
* @param _shortOi short open interest
* @param _baseFundingRate base funding rate of a pair
* @param _vaultFundingPercent percent of earned funding going to the stablevault
*/
function updateFunding(uint256 _asset, address _tigAsset, uint256 _longOi, uint256 _shortOi, uint256 _baseFundingRate, uint _vaultFundingPercent) external onlyMinter {
if(longOi[_asset][_tigAsset] < shortOi[_asset][_tigAsset]) {
if (longOi[_asset][_tigAsset] > 0) {
accInterestPerOi[_asset][_tigAsset][true] += ((int256(block.timestamp-lastUpdate[_asset][_tigAsset])*fundingDeltaPerSec[_asset][_tigAsset])*1e18/int256(longOi[_asset][_tigAsset]))*int256(1e10-vaultFundingPercent[_asset][_tigAsset])/1e10;
}
accInterestPerOi[_asset][_tigAsset][false] -= (int256(block.timestamp-lastUpdate[_asset][_tigAsset])*fundingDeltaPerSec[_asset][_tigAsset])*1e18/int256(shortOi[_asset][_tigAsset]);
} else if(longOi[_asset][_tigAsset] > shortOi[_asset][_tigAsset]) {
accInterestPerOi[_asset][_tigAsset][true] -= (int256(block.timestamp-lastUpdate[_asset][_tigAsset])*fundingDeltaPerSec[_asset][_tigAsset])*1e18/int256(longOi[_asset][_tigAsset]);
if (shortOi[_asset][_tigAsset] > 0) {
accInterestPerOi[_asset][_tigAsset][false] += ((int256(block.timestamp-lastUpdate[_asset][_tigAsset])*fundingDeltaPerSec[_asset][_tigAsset])*1e18/int256(shortOi[_asset][_tigAsset]))*int256(1e10-vaultFundingPercent[_asset][_tigAsset])/1e10;
}
}
lastUpdate[_asset][_tigAsset] = block.timestamp;
int256 _oiDelta;
if (_longOi > _shortOi) {
_oiDelta = int256(_longOi)-int256(_shortOi);
} else {
_oiDelta = int256(_shortOi)-int256(_longOi);
}
fundingDeltaPerSec[_asset][_tigAsset] = (_oiDelta*int256(_baseFundingRate)/int256(DIVISION_CONSTANT))/31536000;
longOi[_asset][_tigAsset] = _longOi;
shortOi[_asset][_tigAsset] = _shortOi;
vaultFundingPercent[_asset][_tigAsset] = _vaultFundingPercent;
}
/**
* @notice mint a new position nft
* @dev only callable by minter
* @param _mintTrade New trade params in struct
*/
function mint(
MintTrade memory _mintTrade
) external onlyMinter {
uint newTokenID = _tokenIds.current();
Trade storage newTrade = _trades[newTokenID];
newTrade.margin = _mintTrade.margin;
newTrade.leverage = _mintTrade.leverage;
newTrade.asset = _mintTrade.asset;
newTrade.direction = _mintTrade.direction;
newTrade.price = _mintTrade.price;
newTrade.tpPrice = _mintTrade.tp;
newTrade.slPrice = _mintTrade.sl;
newTrade.orderType = _mintTrade.orderType;
newTrade.id = newTokenID;
newTrade.tigAsset = _mintTrade.tigAsset;
_safeMint(_mintTrade.account, newTokenID);
if (_mintTrade.orderType > 0) {
_limitOrders[_mintTrade.asset].push(newTokenID);
_limitOrderIndexes[_mintTrade.asset][newTokenID] = _limitOrders[_mintTrade.asset].length-1;
} else {
initId[newTokenID] = accInterestPerOi[_mintTrade.asset][_mintTrade.tigAsset][_mintTrade.direction]*int256(_mintTrade.margin*_mintTrade.leverage/1e18)/1e18;
_openPositions.push(newTokenID);
_openPositionsIndexes[newTokenID] = _openPositions.length-1;
_assetOpenPositions[_mintTrade.asset].push(newTokenID);
_assetOpenPositionsIndexes[_mintTrade.asset][newTokenID] = _assetOpenPositions[_mintTrade.asset].length-1;
}
_tokenIds.increment();
}
/**
* @param _id id of the position NFT
* @param _price price used for execution
* @param _newMargin margin after fees
*/
function executeLimitOrder(uint256 _id, uint256 _price, uint256 _newMargin) external onlyMinter {
Trade storage _trade = _trades[_id];
if (_trade.orderType == 0) {
return;
}
_trade.orderType = 0;
_trade.price = _price;
_trade.margin = _newMargin;
uint _asset = _trade.asset;
_limitOrderIndexes[_asset][_limitOrders[_asset][_limitOrders[_asset].length-1]] = _limitOrderIndexes[_asset][_id];
_limitOrders[_asset][_limitOrderIndexes[_asset][_id]] = _limitOrders[_asset][_limitOrders[_asset].length-1];
delete _limitOrderIndexes[_asset][_id];
_limitOrders[_asset].pop();
_openPositions.push(_id);
_openPositionsIndexes[_id] = _openPositions.length-1;
_assetOpenPositions[_asset].push(_id);
_assetOpenPositionsIndexes[_asset][_id] = _assetOpenPositions[_asset].length-1;
initId[_id] = accInterestPerOi[_trade.asset][_trade.tigAsset][_trade.direction]*int256(_trade.margin*_trade.leverage/1e18)/1e18;
}
/**
* @notice modifies margin and leverage
* @dev only callable by minter
* @param _id position id
* @param _newMargin new margin amount
* @param _newLeverage new leverage amount
*/
function modifyMargin(uint256 _id, uint256 _newMargin, uint256 _newLeverage) external onlyMinter {
_trades[_id].margin = _newMargin;
_trades[_id].leverage = _newLeverage;
}
/**
* @notice modifies margin and entry price
* @dev only callable by minter
* @param _id position id
* @param _newMargin new margin amount
* @param _newPrice new entry price
*/
function addToPosition(uint256 _id, uint256 _newMargin, uint256 _newPrice) external onlyMinter {
_trades[_id].margin = _newMargin;
_trades[_id].price = _newPrice;
initId[_id] = accInterestPerOi[_trades[_id].asset][_trades[_id].tigAsset][_trades[_id].direction]*int256(_newMargin*_trades[_id].leverage/1e18)/1e18;
}
/**
* @notice Called before updateFunding for reducing position or adding to position, to store accumulated funding
* @dev only callable by minter
* @param _id position id
*/
function setAccInterest(uint256 _id) external onlyMinter {
_trades[_id].accInterest = trades(_id).accInterest;
}
/**
* @notice Reduces position size by %
* @dev only callable by minter
* @param _id position id
* @param _percent percent of a position being closed
*/
function reducePosition(uint256 _id, uint256 _percent) external onlyMinter {
_trades[_id].accInterest -= _trades[_id].accInterest*int256(_percent)/int256(DIVISION_CONSTANT);
_trades[_id].margin -= _trades[_id].margin*_percent/DIVISION_CONSTANT;
initId[_id] = accInterestPerOi[_trades[_id].asset][_trades[_id].tigAsset][_trades[_id].direction]*int256(_trades[_id].margin*_trades[_id].leverage/1e18)/1e18;
}
/**
* @notice change a position tp price
* @dev only callable by minter
* @param _id position id
* @param _tpPrice tp price
*/
function modifyTp(uint _id, uint _tpPrice) external onlyMinter {
_trades[_id].tpPrice = _tpPrice;
}
/**
* @notice change a position sl price
* @dev only callable by minter
* @param _id position id
* @param _slPrice sl price
*/
function modifySl(uint _id, uint _slPrice) external onlyMinter {
_trades[_id].slPrice = _slPrice;
}
/**
* @dev Burns an NFT and it's data
* @param _id ID of the trade
*/
function burn(uint _id) external onlyMinter {
_burn(_id);
uint _asset = _trades[_id].asset;
if (_trades[_id].orderType > 0) {
_limitOrderIndexes[_asset][_limitOrders[_asset][_limitOrders[_asset].length-1]] = _limitOrderIndexes[_asset][_id];
_limitOrders[_asset][_limitOrderIndexes[_asset][_id]] = _limitOrders[_asset][_limitOrders[_asset].length-1];
delete _limitOrderIndexes[_asset][_id];
_limitOrders[_asset].pop();
} else {
_assetOpenPositionsIndexes[_asset][_assetOpenPositions[_asset][_assetOpenPositions[_asset].length-1]] = _assetOpenPositionsIndexes[_asset][_id];
_assetOpenPositions[_asset][_assetOpenPositionsIndexes[_asset][_id]] = _assetOpenPositions[_asset][_assetOpenPositions[_asset].length-1];
delete _assetOpenPositionsIndexes[_asset][_id];
_assetOpenPositions[_asset].pop();
_openPositionsIndexes[_openPositions[_openPositions.length-1]] = _openPositionsIndexes[_id];
_openPositions[_openPositionsIndexes[_id]] = _openPositions[_openPositions.length-1];
delete _openPositionsIndexes[_id];
_openPositions.pop();
}
delete _trades[_id];
}
function assetOpenPositionsLength(uint _asset) external view returns (uint256) {
return _assetOpenPositions[_asset].length;
}
function limitOrdersLength(uint _asset) external view returns (uint256) {
return _limitOrders[_asset].length;
}
function getCount() external view returns (uint) {
return _tokenIds.current();
}
function userTrades(address _user) external view returns (uint[] memory) {
uint[] memory _ids = new uint[](balanceOf(_user));
for (uint i=0; i<_ids.length; i++) {
_ids[i] = tokenOfOwnerByIndex(_user, i);
}
return _ids;
}
function openPositionsSelection(uint _from, uint _to) external view returns (uint[] memory) {
uint[] memory _ids = new uint[](_to-_from);
for (uint i=0; i<_ids.length; i++) {
_ids[i] = _openPositions[i+_from];
}
return _ids;
}
function setMinter(address _minter, bool _bool) external onlyOwner {
_isMinter[_minter] = _bool;
}
modifier onlyMinter() {
require(_isMinter[_msgSender()], "!Minter");
_;
}
// META-TX
function _msgSender() internal view override(Context, MetaContext) returns (address sender) {
return MetaContext._msgSender();
}
function _msgData() internal view override(Context, MetaContext) returns (bytes calldata) {
return MetaContext._msgData();
}
}
Find smart contract vulnerabilities and add possible solutions. | 70a8bec9f479442a732082e8daf02094 | 119 | dianewan | 2023-05-25T00:23:22 | bad | []
for (const [notifType,notifArr] of Object.entries(notifObj)) {
const message = notifArr[0].message;
return <div>{notifType}{message}</div>
} | 6c4e4e61e3e1701c4d27402fb4ccf86d | 12 | DaehanKim | 2023-05-29T06:39:51 | bad | [] |
for (const [notifType,notifArr] of Object.entries(notifObj)) {
const message = notifArr[0].message;
return <div>{notifType}{message}</div>
} | 6c4e4e61e3e1701c4d27402fb4ccf86d | 12 | dianewan | 2023-05-23T23:27:35 | bad | [] |
What does this statute mean
Intercepting, recording, or divulging private communication—Consent required—Exceptions.
(1) Except as otherwise provided in this chapter, it shall be unlawful for any individual, partnership, corporation, association, or the state of Washington, its agencies, and political subdivisions to intercept, or record any:
(a) Private communication transmitted by telephone, telegraph, radio, or other device between two or more individuals between points within or without the state by any device electronic or otherwise designed to record and/or transmit said communication regardless how such device is powered or actuated, without first obtaining the consent of all the participants in the communication;
(b) Private conversation, by any device electronic or otherwise designed to record or transmit such conversation regardless how the device is powered or actuated without first obtaining the consent of all the persons engaged in the conversation.
(2) Notwithstanding subsection (1) of this section, wire communications or conversations (a) of an emergency nature, such as the reporting of a fire, medical emergency, crime, or disaster, or (b) which convey threats of extortion, blackmail, bodily harm, or other unlawful requests or demands, or (c) which occur anonymously or repeatedly or at an extremely inconvenient hour, or (d) which relate to communications by a hostage holder or barricaded person as defined in RCW 70.85.100, whether or not conversation ensues, may be recorded with the consent of one party to the conversation.
(3) Where consent by all parties is needed pursuant to this chapter, consent shall be considered obtained whenever one party has announced to all other parties engaged in the communication or conversation, in any reasonably effective manner, that such communication or conversation is about to be recorded or transmitted: PROVIDED, That if the conversation is to be recorded that said announcement shall also be recorded.
(4) An employee of any regularly published newspaper, magazine, wire service, radio station, or television station acting in the course of bona fide news gathering duties on a full-time or contractual or part-time basis, shall be deemed to have consent to record and divulge communications or conversations otherwise prohibited by this chapter if the consent is expressly given or if the recording or transmitting device is readily apparent or obvious to the speakers. Withdrawal of the consent after the communication has been made shall not prohibit any such employee of a newspaper, magazine, wire service, or radio or television station from divulging the communication or conversation.
(5) This section does not apply to the recording of custodial interrogations pursuant to RCW 10.122.040. | 9f006eec30fd55eece9c9c3909ad8054 | 120 | dianewan | 2023-05-25T00:23:48 | bad | [] |
Write some documentation for this python function:
def healthy(self) -> bool:
# pylint: disable=no-else-raise
try:
with self._connector.conn():
_logger.info(f"{self.name} state: {DependencyState.HEALTHY}")
self._metric_reporter.report_success(self.name)
return True
except Exception as e:
self._metric_reporter.report_failure(self.name)
if self.raises_exception_when_unhealthy:
raise UnhealthyCheckException(
f"{self.name} state: {DependencyState.UNHEALTHY}"
) from e
else:
_logger.warning(f"{self.name} state: {DependencyState.UNHEALTHY}: {e}")
return False | c7c556737862e9e67985890d6a06d3de | 121 | dianewan | 2023-05-25T00:23:52 | good | [
"code"
] |
Tell me reasons why I should exercise more, but say everything in the speaking style of a pirate. And give me reasons only a pirate would give. | 638ea4f28581849f374599abd659702b | 122 | dianewan | 2023-05-25T00:23:59 | bad | []
Convert this to Tailwind and get rid of the style tag. For the ::after pseudo-class, use the after: prefix in Tailwind classes.
const Hr = () => (
<div>
<style jsx>{`
div {
border-style: none;
margin-top: 30px;
margin-bottom: 30px;
text-align: center;
}
div::after {
content: "***";
text-align: center;
display: inline;
}
`}</style>
</div>
);
export default Hr;
| 0d50ce1ea333cc14f46b8aa7be7130ad | 123 | dianewan | 2023-05-25T00:24:11 | good | [
"code"
] |
What's the name of this programming language?
org 0x00
movf 0x20,w ; w = x
movwf 0x26 ; y = x - w
clrf 0x27
movf 0x20,w ; w = x
btfsc 0x20,7
sublw 0x00 ; w = -x = 0 - w
bsf STATUS,RP1 ; BANK2 [0x122]
movwf 0x122 ; a = abs(x)
; b = a << 3
movf 0x122,w
bsf STATUS,RP1
bcf STATUS,RP0
movwf 0xA2
clrf 0xA3
rlf 0xA2,f
rlf 0xA3,f
rlf 0xA2,f
rlf 0xA3,f
rlf 0xA2,f
rlf 0xA3,f
movlw b'11111000'
andwf 0xA2,f
movlw 0xB6
addwf 0xA2,w
; c = b + 950
bsf STATUS,RP1
bcf STATUS,RP0
movwf 0x150
bsf STATUS,RP0
bcf STATUS,RP1
movlw 0x03
addwf 0xA3,w
bsf STATUS,RP1
bcf STATUS,RP0
movwf 0x151
end | bdb8d44cbcaf7b541586a2f4931b0fb9 | 124 | dianewan | 2023-05-25T00:24:43 | bad | [] |
Please explain the main functionality of this contract and give a description of its functions. Be as technical as possible.
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;
import { IERC20 } from "@openzeppelin/contracts/token/ERC20/IERC20.sol";
import { SafeERC20 } from "@openzeppelin/contracts/token/ERC20/utils/SafeERC20.sol";
import { ERC2771Context } from "./libs/ERC2771Context.sol";
contract OpenPeerEscrow is ERC2771Context {
using SafeERC20 for IERC20;
// Settings
// Address of the arbitrator (currently OP staff)
address public arbitrator;
// Address to receive the fees
address public feeRecipient;
address public immutable seller;
address public immutable buyer;
address public immutable token;
uint256 public immutable amount;
uint32 public immutable fee;
uint32 public sellerCanCancelAfter;
bool public dispute;
/// @notice Settings
/// @param _buyer Buyer address
/// @param _token Token address or 0x0000000000000000000000000000000000000000 for native token
/// @param _fee OP fee (bps) ex: 30 == 0.3%
/// @param _arbitrator Address of the arbitrator (currently OP staff)
/// @param _feeRecipient Address to receive the fees
/// @param _trustedForwarder Forwarder address
constructor(
address _buyer,
address _token,
uint256 _amount,
uint8 _fee,
address _arbitrator,
address _feeRecipient,
address _trustedForwarder
) ERC2771Context(_trustedForwarder) {
require(_amount > 0, "Invalid amount");
require(_buyer != _msgSender(), "Seller and buyer must be different");
require(_buyer != address(0), "Invalid buyer");
seller = _msgSender();
token = _token;
buyer = _buyer;
amount = _amount;
fee = uint32(amount * _fee / 10_000);
arbitrator = _arbitrator;
feeRecipient = _feeRecipient;
}
// Events
event Created();
event Released();
event CancelledByBuyer();
event SellerCancelDisabled();
event CancelledBySeller();
event DisputeOpened();
event DisputeResolved();
modifier onlySeller() {
require(_msgSender() == seller, "Must be seller");
_;
}
modifier onlyArbitrator() {
require(_msgSender() == arbitrator, "Must be arbitrator");
_;
}
modifier onlyBuyer() {
require(_msgSender() == buyer, "Must be buyer");
_;
}
/// @notice Create and fund a new escrow.
function escrow() payable external {
require(sellerCanCancelAfter == 0, "Funds already escrowed");
if (token == address(0)) {
require(msg.value == amount + fee, "Incorrect MATIC sent");
} else {
IERC20(token).safeTransferFrom(_msgSender(), address(this), amount + fee );
}
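// The seller can reclaim the escrowed funds if the buyer has not marked the trade as paid within 24 hours.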
sellerCanCancelAfter = uint32(block.timestamp) + 24 hours;
emit Created();
}
/// @notice Release ether or token in escrow to the buyer.
/// @return bool
function release() external onlySeller returns (bool) {
transferEscrowAndFees(buyer, amount, fee);
emit Released();
return true;
}
/// @notice Transfer the value of an escrow
/// @param _to Recipient address
/// @param _amount Amount to be transfered
/// @param _fee Fee to be transfered
function transferEscrowAndFees(address _to, uint256 _amount, uint32 _fee) private {
withdraw(_to, _amount);
if (_fee > 0) {
withdraw(feeRecipient, _fee);
}
}
/// @notice Cancel the escrow as a buyer with 0 fees
/// @return bool
function buyerCancel() external onlyBuyer returns (bool) {
transferEscrowAndFees(seller, amount + fee, 0);
emit CancelledByBuyer();
return true;
}
/// @notice Cancel the escrow as a seller
/// @return bool
function sellerCancel() external onlySeller returns (bool) {
if (sellerCanCancelAfter <= 1 || sellerCanCancelAfter > block.timestamp) {
return false;
}
transferEscrowAndFees(seller, amount + fee, 0);
emit CancelledBySeller();
return true;
}
/// @notice Disable the seller from cancelling
/// @return bool
function markAsPaid() external onlyBuyer returns (bool) {
sellerCanCancelAfter = 1;
emit SellerCancelDisabled();
return true;
}
/// @notice Withdraw values in the contract
/// @param _to Address to withdraw fees in to
/// @param _amount Amount to withdraw
function withdraw(address _to, uint256 _amount) private {
if (token == address(0)) {
(bool sent,) = payable(_to).call{value: _amount}("");
require(sent, "Failed to send MATIC");
} else {
require(IERC20(token).transfer(payable(_to), _amount), "Failed to send tokens");
}
}
/// @notice Allow seller or buyer to open a dispute
function openDispute() external {
require(_msgSender() == seller || _msgSender() == buyer, "Must be seller or buyer");
require(sellerCanCancelAfter > 0, "Funds not escrowed yet");
dispute = true;
emit DisputeOpened();
}
/// @notice Allow arbitrator to resolve a dispute
/// @param _winner Address to receive the escrowed values - fees
function resolveDispute(address payable _winner) external onlyArbitrator {
require(dispute, "Dispute is not open");
require(_winner == seller || _winner == buyer, "Winner must be seller or buyer");
emit DisputeResolved();
transferEscrowAndFees(_winner, amount, fee);
}
/// @notice Version recipient
function versionRecipient() external pure returns (string memory) {
return "1.0";
}
} | aca5d7fa38a0cafc689d88e6266bbca5 | 125 | dianewan | 2023-05-25T00:24:55 | good | [
"code"
] |
int32_t getWebACLMaxInspectionBytes(Lmdb_WebACLHandle *handle, GokuLmdbString http_source_string) {
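// Default inspection limit: 64 KB, returned when the handle is null or no per-resource override matches.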
int max_inspection_bytes = 64 * 1024;
if (handle) {
try {
std::string http_source(reinterpret_cast<const char *>(http_source_string.value));
const goku::apiv2::lmdb::types::WebACL *webACL = (const goku::apiv2::lmdb::types::WebACL *) handle->webACL;
const goku::apiv2::lmdb::types::AssociationConfig *associationConfig = webACL->associationConfig();
const flatbuffers::Vector<flatbuffers::Offset<goku::apiv2::lmdb::types::RequestBodyAssociatedResourceTypeConfig>>
*resourceTypeVec = associationConfig->requestBodyAssociatedResourceTypeConfig();
std::unordered_map<std::string, goku::apiv2::lmdb::types::AssociatedResourceType> sourceMap = {
{"CF", goku::apiv2::lmdb::types::AssociatedResourceType_CLOUDFRONT},
{"ALB", goku::apiv2::lmdb::types::AssociatedResourceType_APPLICATION_LOAD_BALANCER},
{"APIGW", goku::apiv2::lmdb::types::AssociatedResourceType_API_GATEWAY},
{"APPSYNC", goku::apiv2::lmdb::types::AssociatedResourceType_APPSYNC},
{"COGNITOIDP", goku::apiv2::lmdb::types::AssociatedResourceType_COGNITO_USER_POOL},
{"APPRUNNER", goku::apiv2::lmdb::types::AssociatedResourceType_CLOUDFRONT},
{"-", goku::apiv2::lmdb::types::AssociatedResourceType_CLOUDFRONT}
};
auto it = sourceMap.find(http_source);
if (it != sourceMap.end()) {
goku::apiv2::lmdb::types::AssociatedResourceType associatedResourceType = it->second; //the assoc res
// Find AssociatedResourceType in the RequestBodyAssociatedResourceTypeConfig
for (const auto &requestBodyAssociatedResourceTypeConfig: *resourceTypeVec) {
if (requestBodyAssociatedResourceTypeConfig->associatedResourceType() ==
associatedResourceType) {
// Get the SizeInspectionLimit from the RequestBodyAssociatedResourceTypeConfig
goku::apiv2::lmdb::types::SizeInspectionLimit sizeInspectionLimit = requestBodyAssociatedResourceTypeConfig->defaultSizeInspectionLimit();
// Use the SizeInspectionLimit to determine the max_inspection_bytes
switch (sizeInspectionLimit) {
case goku::apiv2::lmdb::types::SizeInspectionLimit_KB_8:
max_inspection_bytes = 8 * 1024;
break;
case goku::apiv2::lmdb::types::SizeInspectionLimit_KB_16:
max_inspection_bytes = 16 * 1024;
break;
case goku::apiv2::lmdb::types::SizeInspectionLimit_KB_32:
max_inspection_bytes = 32 * 1024;
break;
case goku::apiv2::lmdb::types::SizeInspectionLimit_KB_64:
max_inspection_bytes = 64 * 1024;
break;
default:
max_inspection_bytes = 8 * 1024;
break;
}
break;
}
}
}
}
catch (const std::exception &e) {
LOG4CPLUS_ERROR(handle->loggerHandle->logger,
"lmdbDeserializer:GetWebACLMaxInspectionBytes: Caught std::exception: " << e.what());
} catch (...) {
LOG4CPLUS_ERROR(handle->loggerHandle->logger,
"lmdbDeserializer:GetWebACLMaxInspectionBytes: Caught unknown exception");
}
}
return max_inspection_bytes;
} | 17fd6edafabb1a2709779af4ca9f2c9e | 126 | dianewan | 2023-05-25T00:25:23 | bad | [] |
What's wrong with this code?
```
import os
import platform
from ctypes import *
class IDomSDK:
"""IDomSDK."""
def __init__(self) -> None:
"""Initialize."""
print("IDomSDK")
idom_sdk_file_name = None
if platform.system() == "Linux":
if platform.architecture()[0] == '64bit':
idom_sdk_file_name = 'libiDom3SDKLiteLinuxX86_64.so'
else:
idom_sdk_file_name = 'libiDom3SDKLiteLinuxX86_32.so'
if not idom_sdk_file_name:
raise Exception('can find sdk for current system')
current_dir = os.path.dirname(__file__)
sdk_file = os.path.join(current_dir, idom_sdk_file_name)
self.idom_sdk = CDLL(sdk_file)
def init (self, cycle_time = 0):
return self.idom_sdk.idom_init(cycle_time)
def connect (self, remote_host_addr: str, remote_host_port: int):
print('connect', remote_host_addr, remote_host_port)
return bool(self.idom_sdk.idom_connect(remote_host_addr, remote_host_port))
def pump_events (self):
print('pump_events')
self.idom_sdk.idom_pump_events()
connection_state_cb_type = CFUNCTYPE(None, c_int)
def reg_connection_state_cb(self, cb):
return self.idom_sdk.idom_reg_connection_state_cb(
self.connection_state_cb_type(cb)
)
object_data_cb_type = CFUNCTYPE(None, c_int, POINTER(c_char_p), c_char_p, c_bool, c_char_p, c_char_p)
def reg_object_data_cb(self, cb):
self.idom_sdk.idom_reg_object_data_cb(
self.object_data_cb_type(cb)
)
system_file_cb_type = CFUNCTYPE(None, c_int, c_char, c_char_p, c_int)
def reg_system_file_cb(self, cb):
self.idom_sdk.idom_reg_system_file_cb(
self.system_file_cb_type(cb)
)
``` | ecc894fa35253770f2342a9f998456c9 | 127 | dianewan | 2023-05-25T00:25:32 | good | [
"code"
] |
Translate this network management software GUI from English to Traditional Chinese. The GUI string file is in XML format.
XML content:
<string name="ACLInfoNetworkHeader">Network-wide Access Control</string>
<string name="ACLInfoSSIDHeader">Set Up Access Policy on Wi-Fi(SSIDs)</string>
<string name="ACLNetworkMessage">Apply policy for all Wi-Fi(SSIDs) within this Network</string>
<string name="AP">AP</string>
<string name="APINotSupportMessage">Your device not support this function.</string>
<string name="APMesEnabledhMessage">Exclude this AP from the Mesh Network</string>
<string name="APP_Language">App Language</string>
<string name="AP_List">AP List</string>
<string name="AP_Name">AP Name</string>
<string name="AP_Offline_Count">AP Offline Count</string>
<string name="AP_Online_Count">AP Online Count</string>
<string name="APs">APs</string>
<string name="About_EnGenius">About EnGenius</string>
<string name="Access">Access</string>
<string name="Access_Control">Access Control</string>
<string name="Access_Point">Access Point</string>
<string name="Access_Points">Access Points</string>
<string name="Access_Time">Access Time</string>
<string name="AccountPlaceHolder">EnGenius Cloud account</string>
<string name="Account_Recovery">Account Recovery</string>
<string name="Accounting_Server">Accounting Server</string>
<string name="Action_Taken">Action Taken</string>
<string name="Actions">Actions</string>
<string name="Active">Active</string>
<string name="ActivePortMessage">%s Active</string>
<string name="Add">Add</string>
<string name="AddBackupTitle">Add Backup List</string>
<string name="AddSSIDTitle">New Wi-Fi (SSID)</string>
<string name="Add_Device">Add Device</string>
<string name="Add_New_Photo">Add New Photo</string>
<string name="Add_Note">Add Note</string>
<string name="Add_VLAN">+ Add VLAN</string>
<string name="AlertOfflineTitle">%s goes offline for</string>
<string name="AlertSTPStatusChangeTitle">Switch STP port status change</string>
<string name="AlertSeverityPrefixTitle">Event with severity</string>
<string name="AlertSeveritySuffixTitle">and above occurs</string>
<string name="Alert_Settings">Alert Settings</string>
| f90062624819fa6a558aa5bb438d688d | 128 | dianewan | 2023-05-25T00:26:01 | good | [
"code"
] |
Can you help me complete the TODO part of this code?
def computeSphericalWarpMappings(dstShape, f, k1, k2):
'''
Compute the spherical warp. Compute the addresses of each pixel of the
output image in the source image.
Input:
dstShape -- shape of input / output image in a numpy array.
[number or rows, number of cols, number of bands]
f -- focal length in pixel as int
See assignment description on how to find the focal length
k1 -- horizontal distortion as a float
k2 -- vertical distortion as a float
Output:
uvImg -- warped image in terms of addresses of each pixel in the
source image in a numpy array.
The dimensions are (rows, cols, addresses of pixels
[:,:,0] are x (i.e., cols) and [:,:,1] are y (i.e., rows)).
'''
# calculate minimum y value
vec = np.zeros(3)
vec[0] = np.sin(0.0) * np.cos(0.0)
vec[1] = np.sin(0.0)
vec[2] = np.cos(0.0) * np.cos(0.0)
min_y = vec[1]
# calculate spherical coordinates
# (x,y) is the spherical image coordinates.
# (xf,yf) is the spherical coordinates, e.g., xf is the angle theta
# and yf is the angle phi
one = np.ones((dstShape[0],dstShape[1]))
xf = one * np.arange(dstShape[1])
yf = one.T * np.arange(dstShape[0])
yf = yf.T
xf = ((xf - 0.5 * dstShape[1]) / f)
yf = ((yf - 0.5 * dstShape[0]) / f - min_y)
# BEGIN TODO 1
# add code to apply the spherical correction, i.e.,
# compute the Euclidean coordinates,
# and project the point to the z=1 plane at (xt/zt,yt/zt,1),
# then distort with radial distortion coefficients k1 and k2
# Use xf, yf as input for your code block and compute xt, yt
# as output for your code. They should all have the shape
# (img_height, img_width)
# TODO-BLOCK-BEGIN
raise Exception("TODO in warp.py not implemented")
# TODO-BLOCK-END
# END TODO
# Convert back to regular pixel coordinates
xn = 0.5 * dstShape[1] + xt * f
yn = 0.5 * dstShape[0] + yt * f
uvImg = np.dstack((xn,yn))
return uvImg | a06d3892feecd3b7d091a06320394b1a | 129 | dianewan | 2023-05-25T00:26:08 | good | [
"code"
] |
Responsibilities
5.1 General
The certification system, which shall be controlled and administered by a certification body, includes
all procedures necessary to demonstrate the qualification and the competence of an individual to carry
out tasks in a specific NDT method and product or industrial sector, leading to certification.
5.2 Certification body
5.2.1 The certification body shall fulfil the requirements of ISO/IEC 17024.
5.2.2 The certification body:
a) shall initiate, promote, maintain and administer the certification scheme according to ISO/IEC 17024
and this document;
b) shall be independent of any single interest;
c) shall be responsible for the definition of sectors (see Annex A);
d) shall publish information regarding the scope of the certification scheme and a general description
of the certification process;
e) shall provide information for training courses that include the syllabi which embody the content of
recognized documents; ISO/TS 25107 or equivalent can be used as guidance;
f) shall conduct an initial audit and subsequent periodic surveillance audits of the authorized
qualification body(ies) to ensure their conformity to the specifications;
g) shall monitor, in accordance with a documented procedure, all delegated functions;
h) shall approve properly staffed and equipped examination centres, which it shall monitor on a
periodic basis;
i) shall administer examinations through approved examination centres;
j) shall bear full responsibilities for examinations conducted on temporary basis at external premises;
k) shall be responsible for ensuring the security of all examination materials (examination specimens,
specimen master reports, question banks, examination papers, etc.) and shall ensure that these
materials are not in use for training purposes;
l) shall be responsible for granting, extension, suspension, withdrawal or revalidation of certification;
m) shall establish an appropriate system for the maintenance of records, which shall be retained for at
least one certification cycle;
n) shall require all candidates and certificate holders to give a signed or stamped undertaking to
abide by a code of ethics which it shall develop for the purpose and publish;
o) may approve training bodies; ISO/TS 25108 can be used as guidance;
p) may delegate, under its direct responsibility, the detailed administration of qualification to
authorized qualification bodies, to which it shall issue specifications and/or procedures covering
facilities, personnel, verification and control of NDT equipment, examination materials, specimens,
conduct of examinations, examination grading, records, etc.;
q) shall establish a process to authorize examiners;
r) shall establish the conditions for the supervision of work activities, which candidates may claim
experience under 7.3;
s) shall establish a process for the recognition of higher education;
t) shall establish a process for the approval of non-certified individuals as a referee;
u) shall establish a process for the approval of a structured credit system, where used;
v) may specify a minimum age requirement for candidates under 7.1;
w) shall maintain and update the question bank and the examination specimens along with their
specimen master report;
x) shall conduct the examination only in the presence of, and under the control of, an authorized
invigilator of the certification body, to ensure that impartiality is maintained;
y) shall establish a process for the approval of a structured experience program, where used.
5.3 Authorized qualification body
Where established, the authorized qualification body shall:
a) work under the control of and apply the specifications issued by the certification body;
b) be independent of any single predominant interest;
c) ensure that it is impartial with respect to each candidate seeking qualification, bringing to the
attention of the certification body any actual or potential threat to its impartiality;
d) apply a documented quality management system approved by the certification body;
e) have the resources and expertise necessary to establish, monitor and control examinations centres,
including examinations and the verification and control of the equipment;
f) conduct qualification of candidates including review of application and decision on eligibility;
g) prepare, supervise and administer examinations;
h) provide the certification body with the results of qualification needed to make a decision on
certification by the certification body;
i) maintain appropriate qualification and examination records according to the requirements of the
certification body.
5.4 Examination centre
5.4.1 The examination centre shall:
a) work under the control of the certification body or authorized qualification body;
b) apply a documented quality procedure approved by the certification body;
c) have the resources needed to prepare and conduct examinations, including the verification and
control of equipment;
d) have adequate qualified staff, premises and equipment to ensure satisfactory examinations for the
levels, methods, and sectors concerned; the use of external premises is allowed;
e) prepare and conduct examinations under the responsibility of an examiner authorized by the
certification body, using only examination questionnaires and specimens established or approved
by the certification body for that purpose;
f) maintain appropriate examination documents according to the requirements of the certification
body.
5.4.2 An examination centre may operate within the certification body; or within an authorized
qualification body; or be an independent legal entity or part of a legal entity. An examination centre
can be situated at an employer’s premises. In this case, the certification body shall require controls
to preserve impartiality and protect confidentiality of the examinations. The examinations shall
be conducted only in the presence of, and under the control of, an authorized representative of the
certification body.
5.5 Employer
5.5.1 The employer shall document the personal information which shall include the declaration
of education, training and industrial experience and visual acuity needed to determine the eligibility
of the candidate. If the candidate is self-employed, the industrial experience shall be attested to by a
referee.
All documentation obtained from the employer shall be verified by the certification body.
5.5.2 In respect of certified NDT personnel under their control the employer shall be responsible for:
a) all that concerns the authorization to operate, i.e. providing job-specific training (if necessary);
b) issuing the written authorization to operate;
c) the results of NDT activities;
d) ensuring that the annual vision requirements of 7.4 are met;
e) maintaining documentary evidence confirming the continuous application of the NDT method in
the relevant sector(s) without significant interruption; this action shall be done every 12 months;
f) ensuring that personnel hold valid certification relevant to their tasks within the organization;
g) maintaining appropriate records.
These responsibilities shall be described in a documented procedure.
5.5.3 A self-employed individual shall assume all responsibilities ascribed to the employer.
5.5.4 Certification to this document provides an attestation of general competence of the certified
NDT personnel. It does not represent an authorization to operate, since this remains the responsibility
of the employer; and the certified NDT personnel may require additional specialized knowledge of
parameters such as equipment, NDT procedures, materials and products specific for the employer.
Where required by regulatory requirements and codes, the authorization to operate shall be given in
writing by the employer in accordance with a quality procedure that specifies any employer-required
job-specific training and examinations designed to verify the certificate holder's knowledge of relevant
industry code(s), standard(s), NDT procedures, equipment, and acceptance criteria for the tested
products.
5.6 Candidate
Candidates shall:
a) provide documentary evidence of training in accordance with 7.2;
b) provide documentary evidence that the required experience has been gained under supervision;
c) provide documentary evidence of vision satisfying the requirements of 7.4;
d) abide by a code of ethics published by the certification body;
e) provide other requisites requested by the certification body.
5.7 Certificate holders
Certificate holders shall:
a) abide by a code of ethics published by the certification body;
b) maintain records demonstrating evidence that vision requirements have been fulfilled in
accordance with 7.4;
c) notify the certification body and the employer if the conditions of certification are not maintained
(see 9.3).
5.8 Examiners
5.8.1 Examiners shall:
— be authorized by the certification body to conduct, supervise and grade examinations;
— be certified to Level 3 in the NDT method in the product and/or industrial sector for which they are
authorized.
5.8.2 An examiner shall not be permitted to examine any candidate:
— that they have trained for the examination for a period of two years from the date of the conclusion
of the training;
— who is working (permanently or temporarily) in the same facility as the examiner unless the
certification body has established a documented confidentiality and impartiality management
procedure for such a situation.
5.9 Referee
A referee shall be:
a) certified to Level 2 or 3 in any NDT method; or
b) non-certified personnel who, approved by the certification body, possess the knowledge, skill,
training and experience required to attest to the candidate’s industrial experience.
generate a work procedure document compliant with the information above, for a company named Global Compliance Service - Inspection Division. there will be 9 sections, this will be the 1st section | 5259f08d1d4ffa7f9579317e758303cd | 13 | DaehanKim | 2023-05-29T06:40:46 | good | [] |
Responsibilities
5.1 General
The certification system, which shall be controlled and administered by a certification body, includes
all procedures necessary to demonstrate the qualification and the competence of an individual to carry
out tasks in a specific NDT method and product or industrial sector, leading to certification.
5.2 Certification body
5.2.1 The certification body shall fulfil the requirements of ISO/IEC 17024.
5.2.2 The certification body:
a) shall initiate, promote, maintain and administer the certification scheme according to ISO/IEC 17024
and this document;
b) shall be independent of any single interest;
c) shall be responsible for the definition of sectors (see Annex A);
d) shall publish information regarding the scope of the certification scheme and a general description
of the certification process;
e) shall provide information for training courses that include the syllabi which embody the content of
recognized documents; ISO/TS 25107 or equivalent can be used as guidance;
f) shall conduct an initial audit and subsequent periodic surveillance audits of the authorized
qualification body(ies) to ensure their conformity to the specifications;
g) shall monitor, in accordance with a documented procedure, all delegated functions;
h) shall approve properly staffed and equipped examination centres, which it shall monitor on a
periodic basis;
i) shall administer examinations through approved examination centres;
j) shall bear full responsibilities for examinations conducted on temporary basis at external premises;
k) shall be responsible for ensuring the security of all examination materials (examination specimens,
specimen master reports, question banks, examination papers, etc.) and shall ensure that these
materials are not in use for training purposes;
l) shall be responsible for granting, extension, suspension, withdrawal or revalidation of certification;
m) shall establish an appropriate system for the maintenance of records, which shall be retained for at
least one certification cycle;
n) shall require all candidates and certificate holders to give a signed or stamped undertaking to
abide by a code of ethics which it shall develop for the purpose and publish;
o) may approve training bodies; ISO/TS 25108 can be used as guidance;
p) may delegate, under its direct responsibility, the detailed administration of qualification to
authorized qualification bodies, to which it shall issue specifications and/or procedures covering
facilities, personnel, verification and control of NDT equipment, examination materials, specimens,
conduct of examinations, examination grading, records, etc.;
q) shall establish a process to authorize examiners;
r) shall establish the conditions for the supervision of work activities under which candidates may claim experience (see 7.3);
s) shall establish a process for the recognition of higher education;
t) shall establish a process for the approval of non-certified individuals as a referee;
u) shall establish a process for the approval of a structured credit system, where used;
v) may specify a minimum age requirement for candidates under 7.1;
w) shall maintain and update the question bank and the examination specimens along with their
specimen master report;
x) shall conduct the examination only in the presence of, and under the control of, an authorized
invigilator of the certification body, to ensure that impartiality is maintained;
y) shall establish a process for the approval of a structured experience program, where used.
5.3 Authorized qualification body
Where established, the authorized qualification body shall:
a) work under the control of and apply the specifications issued by the certification body;
b) be independent of any single predominant interest;
c) ensure that it is impartial with respect to each candidate seeking qualification, bringing to the
attention of the certification body any actual or potential threat to its impartiality;
d) apply a documented quality management system approved by the certification body;
e) have the resources and expertise necessary to establish, monitor and control examination centres,
including examinations and the verification and control of the equipment;
f) conduct qualification of candidates including review of application and decision on eligibility;
g) prepare, supervise and administer examinations;
h) provide the certification body with the results of qualification needed to make a decision on
certification by the certification body;
i) maintain appropriate qualification and examination records according to the requirements of the
certification body.
5.4 Examination centre
5.4.1 The examination centre shall:
a) work under the control of the certification body or authorized qualification body;
b) apply a documented quality procedure approved by the certification body;
c) have the resources needed to prepare and conduct examinations, including the verification and
control of equipment;
d) have adequate qualified staff, premises and equipment to ensure satisfactory examinations for the
levels, methods, and sectors concerned; the use of external premises is allowed;
e) prepare and conduct examinations under the responsibility of an examiner authorized by the
certification body, using only examination questionnaires and specimens established or approved
by the certification body for that purpose;
f) maintain appropriate examination documents according to the requirements of the certification
body.
5.4.2 An examination centre may operate within the certification body; or within an authorized
qualification body; or be an independent legal entity or part of a legal entity. An examination centre
can be situated at an employer’s premises. In this case, the certification body shall require controls
to preserve impartiality and protect confidentiality of the examinations. The examinations shall
be conducted only in the presence of, and under the control of, an authorized representative of the
certification body.
5.5 Employer
5.5.1 The employer shall document the personal information which shall include the declaration
of education, training and industrial experience and visual acuity needed to determine the eligibility
of the candidate. If the candidate is self-employed, the industrial experience shall be attested to by a
referee.
All documentation obtained from the employer shall be verified by the certification body.
5.5.2 In respect of certified NDT personnel under their control the employer shall be responsible for:
a) all that concerns the authorization to operate, i.e. providing job-specific training (if necessary);
b) issuing the written authorization to operate;
c) the results of NDT activities;
d) ensuring that the annual vision requirements of 7.4 are met;
e) maintaining documentary evidence confirming the continuous application of the NDT method in
the relevant sector(s) without significant interruption; this action shall be done every 12 months;
f) ensuring that personnel hold valid certification relevant to their tasks within the organization;
g) maintaining appropriate records.
These responsibilities shall be described in a documented procedure.
5.5.3 A self-employed individual shall assume all responsibilities ascribed to the employer.
5.5.4 Certification to this document provides an attestation of general competence of the certified
NDT personnel. It does not represent an authorization to operate, since this remains the responsibility
of the employer; and the certified NDT personnel may require additional specialized knowledge of
parameters such as equipment, NDT procedures, materials and products specific for the employer.
Where required by regulatory requirements and codes, the authorization to operate shall be given in
writing by the employer in accordance with a quality procedure that specifies any employer-required
job-specific training and examinations designed to verify the certificate holder's knowledge of relevant
industry code(s), standard(s), NDT procedures, equipment, and acceptance criteria for the tested
products.
5.6 Candidate
Candidates shall:
a) provide documentary evidence of training in accordance with 7.2;
b) provide documentary evidence that the required experience has been gained under supervision;
c) provide documentary evidence of vision satisfying the requirements of 7.4;
d) abide by a code of ethics published by the certification body;
e) provide other requisites requested by the certification body.
5.7 Certificate holders
Certificate holders shall:
a) abide by a code of ethics published by the certification body;
b) maintain records demonstrating evidence that vision requirements have been fulfilled in
accordance with 7.4;
c) notify the certification body and the employer if the conditions of certification are not maintained
(see 9.3).
5.8 Examiners
5.8.1 Examiners shall:
— be authorized by the certification body to conduct, supervise and grade examinations;
— be certified to Level 3 in the NDT method in the product and/or industrial sector for which they are
authorized.
5.8.2 An examiner shall not be permitted to examine any candidate:
— that they have trained for the examination for a period of two years from the date of the conclusion
of the training;
— who is working (permanently or temporarily) in the same facility as the examiner unless the
certification body has established a documented confidentiality and impartiality management
procedure for such a situation.
5.9 Referee
A referee shall be:
a) certified to Level 2 or 3 in any NDT method; or
b) non-certified personnel who, approved by the certification body, possess the knowledge, skill,
training and experience required to attest to the candidate’s industrial experience.
generate a work procedure document compliant with the information above, for a company named Global Compliance Service - Inspection Division. there will be 9 sections, this will be the 1st section | 5259f08d1d4ffa7f9579317e758303cd | 13 | dianewan | 2023-05-23T23:28:01 | bad | [] |
Hi amigo, I have a request for you.
I have a C# solution with a Database Project inside it. In the Database Project I defined a role like this:
GO
CREATE ROLE db_KteUsers
GO
DENY EXECUTE TO db_KteUsers
GO
GRANT SELECT TO db_KteUsers
GO
GRANT SELECT, INSERT, UPDATE, DELETE ON SCHEMA :: [cfg] TO db_KteUsers
When deployed, if I granted more permissions to the role, I would like those newly added permissions to be revoked when I publish the DB again; something like: when I publish the DB, reset the role as I described it in the project.
Can you help me with this? | 74c56d72f909e5f2a89424a94ac7ec22 | 130 | dianewan | 2023-05-25T00:26:22 | good | ["code"] |
$searchValue = $data['data']['query'];
$results = [];
foreach ($processed as $object) {
if (in_array($searchValue, (array) $object)) {
$results[] = $object;
}
} | f8cb43cf409fa5fa2d2ad624fd803c6b | 131 | dianewan | 2023-05-25T00:26:28 | bad | [] |
Here is my flatbuffer data:
namespace goku.apiv2.lmdb.types;
table WebACL {
owner:string (id: 0);
payerToken:string (id: 1);
defaultAction:DefaultAction (id: 3);
rules:[Rule] (id: 4);
visibilityConfig: VisibilityConfig (id: 5);
isolationStatus:IsolationStatus (id: 6);
version:int (id: 7);
changeNumber:long (id: 8);
wcuCapacity:long (id: 9);
customResponseBodies: [CustomResponseBody] (id: 10);
labelNamespace: string (id: 11);
globalCaptchaConfig: CaptchaConfig (id: 12);
oversizeFieldsHandlingCompliant:bool (id: 13);
globalChallengeConfig: ChallengeConfig (id: 14);
tokenDomains: [string] (id: 15);
associationConfig: AssociationConfig (id: 16);
}
table AssociationConfig {
requestBodyAssociatedResourceTypeConfig: [RequestBodyAssociatedResourceTypeConfig] (id: 0);
}
table RequestBodyAssociatedResourceTypeConfig {
associatedResourceType: AssociatedResourceType (id: 0);
defaultSizeInspectionLimit: SizeInspectionLimit (id: 1);
}
enum AssociatedResourceType: int {
CLOUDFRONT = 0,
APPLICATION_LOAD_BALANCER = 1,
API_GATEWAY = 2,
APPSYNC = 3,
COGNITO_USER_POOL = 4
}
enum SizeInspectionLimit: int {
KB_8 = 0,
KB_16 = 1,
KB_32 = 2,
KB_64 = 3
}
| ffe931309f9562404ba83084d522ba84 | 132 | dianewan | 2023-05-25T00:26:38 | bad | [] |
I have a button that runs an onClick function as follows:
<button
onClick={handler}
id="generate"
className="flex items-center justify-center gap-2 rounded-lg border border-[#404040] bg-[#0570eb] px-4 py-2 font-medium shadow-lg hover:bg-[#0465d3]"
>
<RefreshCw size={18} />
Generate
</button>
const handler= () => {
let generator = new PasswordGenerator()
setPassword(generator.generatePassword(strengthOption, numbersOption, symbolsOption))
}
Why can't I just do <button onClick={setPassword(generator.generatePassword(...))}> instead? Currently I'm using an arrow function that executes the same generatePassword(). Why does it have to be an arrow function for the onClick button? | c43eadd110620a70886f48eddafd77b4 | 133 | dianewan | 2023-05-25T00:26:44 | good | ["code"] |
what are vulnerabilities in the following code? from django.db.utils import IntegrityError
from rest_framework import permissions, viewsets, generics, status
from rest_framework.response import Response
from rest_framework.exceptions import ValidationError
from .models import CertificationRequest, Status, Competence
from .serializers import CertificationRequestSerializer
from .permissions import IsVolunteer
# Create your views here.
class CertificationRequestViewSet(viewsets.ModelViewSet):
"""Viewset for certification requests"""
serializer_class = CertificationRequestSerializer # use the serializer class defined in serializers.py
permission_classes = [permissions.IsAdminUser | IsVolunteer]
http_method_names = ['get', 'head', 'delete', 'post']
def get_queryset(self):
"""Only show certification requests of the current user if not admin"""
if self.request.user.is_staff:
return CertificationRequest.objects.all()
return CertificationRequest.objects.filter(user=self.request.user)
def perform_create(self, serializer):
"""Create a certification request for the current user if not admin"""
try:
if self.request.user.is_staff:
raise ValidationError(
"Admins can't create certification requests")
# save the certification request with the current user
serializer.save(user=self.request.user)
except IntegrityError as e:
raise ValidationError(
'An identical Certification Request already exists')
class AnswerCertificationRequest(generics.GenericAPIView):
"""View to answer certification requests"""
permission_classes = [permissions.IsAuthenticated]
def post(self, request):
# check if id and status is provided
if not(request.data.get('id') and request.data.get('status')):
return Response({'error': 'id and status is required'}, status=status.HTTP_400_BAD_REQUEST)
certification_request = CertificationRequest.objects.get(
id=request.data['id'])
if certification_request is None: # check if certification request exists
return Response({'error': 'Certification Request does not exist'}, status=status.HTTP_400_BAD_REQUEST)
if certification_request.status != 'P': # check if certification request is pending
return Response({'error': 'Certification Request is not pending'}, status=status.HTTP_400_BAD_REQUEST)
state = request.data['status']
if state == 'A': # if accepted
certification_request.status = Status.ACCEPTED
elif state == 'D': # if declined
certification_request.status = Status.DECLINED
else:
return Response({'error': 'Status must be A or D'}, status=status.HTTP_400_BAD_REQUEST)
certification_request.save()
return Response(status=status.HTTP_200_OK)
class GetCertificationStatus(generics.GenericAPIView):
"""View to get certification status.
Returns a dictionary with the status of all competences."""
permission_classes = [permissions.IsAuthenticated]
def get(self, request):
result = {}
for s in Competence:
certification_request = CertificationRequest.objects.filter(
user=self.request.user, competence=s).first()
if certification_request:
body = {
'status': certification_request.status,
'created': certification_request.created,
'id': certification_request.id
}
else:
body = {
'status': None,
'created': None,
'id': None
}
result[s] = body
return Response(result, status=status.HTTP_200_OK)
| 91593ad42a56529152811960de5bf83a | 134 | dianewan | 2023-05-25T00:26:55 | good | ["code"] |
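On the Django prompt above: the issue reviewers most often flag in that snippet is broken access control in AnswerCertificationRequest (it requires only IsAuthenticated, so any logged-in user can approve or decline any request), plus a bare .get() whose DoesNotExist exception makes the later `is None` check unreachable. A minimal sketch of one possible hardening, reusing the model and status names from the prompt; this is illustrative only, not a verified fix for that codebase:

```python
from rest_framework import generics, permissions, status
from rest_framework.response import Response

from .models import CertificationRequest, Status  # names taken from the prompt above


class AnswerCertificationRequest(generics.GenericAPIView):
    """Only admins may answer certification requests."""
    permission_classes = [permissions.IsAdminUser]  # was IsAuthenticated

    def post(self, request):
        req_id = request.data.get('id')
        answer = request.data.get('status')
        if not (req_id and answer):
            return Response({'error': 'id and status are required'},
                            status=status.HTTP_400_BAD_REQUEST)
        # filter().first() returns None instead of raising DoesNotExist
        cert_request = CertificationRequest.objects.filter(id=req_id).first()
        if cert_request is None:
            return Response({'error': 'Certification Request does not exist'},
                            status=status.HTTP_400_BAD_REQUEST)
        if cert_request.status != 'P':
            return Response({'error': 'Certification Request is not pending'},
                            status=status.HTTP_400_BAD_REQUEST)
        if answer == 'A':
            cert_request.status = Status.ACCEPTED
        elif answer == 'D':
            cert_request.status = Status.DECLINED
        else:
            return Response({'error': 'Status must be A or D'},
                            status=status.HTTP_400_BAD_REQUEST)
        cert_request.save()
        return Response(status=status.HTTP_200_OK)
```

Switching to filter().first() keeps the view's control flow flat; raising Http404 via get_object_or_404 would be an equally idiomatic choice.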
Can you take this script:
import feedparser
import time
# URL of the existing RSS feed
rss_url = "https://www.example.com/rss"
# Generate the new RSS feed
def generate_rss():
# Parse the RSS feed
feed = feedparser.parse(rss_url)
new_rss = """<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
<channel>
<title>{title}</title>
<link>{link}</link>
<description>{description}</description>
<language>{language}</language>
""".format(title=feed.feed.title, link=feed.feed.link, description=feed.feed.description, language=feed.feed.language)
# Add each entry in the existing RSS feed to the new RSS feed
for entry in feed.entries:
new_rss += """
<item>
<title>{title}</title>
<link>{link}</link>
<description>{description}</description>
<pubDate>{pubDate}</pubDate>
</item>
""".format(title=entry.title, link=entry.link, description=entry.description, pubDate=entry.published)
# Close the new RSS feed
new_rss += """
</channel>
</rss>"""
# Save the new RSS feed to a file
with open("new_rss.xml", "w") as f:
f.write(new_rss)
# Update the RSS feed every 5 minutes
while True:
generate_rss()
time.sleep(300) # sleep for 5 minutes
----
And make it prompt the user for the URL for the rss feed | aff2ce09f47d21944f07dfd9b3d26fca | 135 | dianewan | 2023-05-25T00:27:12 | bad | [] |
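On the request above: the smallest change that satisfies it is to read the URL once at startup instead of hard-coding it. A sketch of the replacement for the rss_url line (illustrative; the rest of the script is unchanged):

```python
# Replace the hard-coded feed address with a value supplied by the user.
rss_url = input("Enter the URL of the RSS feed: ").strip()
```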
Can you rate this code on the following parameters: General Coding Practices, Functional Correctness, Security (exclude overflows), Gas Optimisations, and Code Readability
1 is the lowest score for a parameter and 5 is the score for a parameter. Give detailed reasoning on the score for each parameter.
pragma solidity ^0.8.4;
contract Streaming {
address public immutable owner;
uint256 private streamIdCounter;
mapping(uint256 => Stream) private streams;
modifier streamExists(uint256 streamId) {
require(streams[streamId].deposit > 0, "stream does not exist");
_;
}
struct Stream {
address recipient;
address sender;
uint256 deposit;
uint256 currentStartTime;
uint256 stopTime;
uint256 rate;
uint256 balance;
uint256 originalStartTime;
}
event CreateStream(
uint256 indexed streamId,
address indexed sender,
address indexed recipient,
uint256 deposit,
uint256 startTime,
uint256 stopTime
);
event WithdrawFromStream(
uint256 indexed streamId,
address indexed recipient
);
event CancelStream(
uint256 indexed streamId,
address indexed sender,
address indexed recipient,
uint256 senderFunds,
uint256 recipientFunds
);
constructor() {
owner = msg.sender;
}
function createStream(
address recipient,
uint256 startTime,
uint256 stopTime
) external payable returns (uint256) {
require(recipient != address(0), "Stream to the zero address");
require(recipient != address(this), "Stream to the contract itself");
require(recipient != msg.sender, "Stream to the caller");
require(msg.value > 0, "Deposit is equal to zero");
require(
startTime >= block.timestamp,
"Start time before block timestamp"
);
require(
startTime < stopTime,
"Stop time is not greater than than start time"
);
uint256 duration = stopTime - startTime;
require(msg.value >= duration, "Deposit smaller than duration");
require(
msg.value % duration == 0,
"Deposit is not a multiple of time delta"
);
uint256 currentStreamId = ++streamIdCounter;
uint256 rate = msg.value / duration;
streams[currentStreamId] = Stream({
balance: msg.value,
deposit: msg.value,
rate: rate,
recipient: recipient,
sender: msg.sender,
currentStartTime: startTime,
stopTime: stopTime,
originalStartTime: startTime
});
emit CreateStream(
currentStreamId,
msg.sender,
recipient,
msg.value,
startTime,
stopTime
);
return currentStreamId;
}
function balanceOf(uint256 streamId)
external
view
streamExists(streamId)
returns (uint256 balance)
{
Stream memory stream = streams[streamId];
require(
msg.sender == stream.sender || msg.sender == stream.recipient,
"caller is not the sender or the recipient of the stream"
);
uint256 due = elapsedTimeFor(streamId) * stream.rate;
if (msg.sender == stream.recipient) {
return due;
} else {
return stream.balance - due;
}
}
function elapsedTimeFor(uint256 streamId)
private
view
returns (uint256 delta)
{
Stream memory stream = streams[streamId];
// Before the start of the stream
if (block.timestamp <= stream.originalStartTime) return 0;
// During the stream
if (block.timestamp < stream.stopTime)
return block.timestamp - stream.currentStartTime;
// After the end of the stream
return stream.stopTime - stream.currentStartTime;
}
function withdrawFromStream(uint256 streamId)
external
streamExists(streamId)
{
Stream memory streamCopy = streams[streamId];
require(
streamCopy.recipient == msg.sender,
"only the recipient can call this method"
);
Stream storage stream = streams[streamId];
uint256 due = elapsedTimeFor(streamId) * streamCopy.rate;
require(due > 0, "Available balance is 0");
stream.balance = streamCopy.balance - due;
stream.currentStartTime = block.timestamp;
emit WithdrawFromStream(streamId, streamCopy.recipient);
(bool success, ) = payable(streamCopy.recipient).call{value: due}("");
require(success, "Transfer to recipient failed!");
}
function cancelStream(uint256 streamId) external streamExists(streamId) {
Stream memory streamCopy = streams[streamId];
require(
msg.sender == streamCopy.sender ||
msg.sender == streamCopy.recipient,
"caller is not the sender or the recipient of the stream"
);
Stream storage stream = streams[streamId];
uint256 recipientFunds = elapsedTimeFor(streamId) * streamCopy.rate;
uint256 senderFunds = streamCopy.balance - recipientFunds;
stream.balance = 0;
stream.currentStartTime = streamCopy.stopTime;
emit CancelStream(
streamId,
streamCopy.sender,
streamCopy.recipient,
senderFunds,
recipientFunds
);
if (senderFunds > 0) {
(bool senderSendSuccess, ) = payable(streamCopy.sender).call{
value: senderFunds
}("");
require(senderSendSuccess, "Transfer to sender failed!");
}
if (recipientFunds > 0) {
(bool recipientSendSuccess, ) = payable(streamCopy.recipient).call{
value: recipientFunds
}("");
require(recipientSendSuccess, "Transfer to recipient failed!");
}
}
function getStream(uint256 streamId)
external
view
streamExists(streamId)
returns (
address sender,
address recipient,
uint256 deposit,
uint256 originalStartTime,
uint256 currentStartTime,
uint256 stopTime,
uint256 rate,
uint256 balance
)
{
Stream memory stream = streams[streamId];
sender = stream.sender;
recipient = stream.recipient;
deposit = stream.deposit;
originalStartTime = stream.originalStartTime;
currentStartTime = stream.currentStartTime;
stopTime = stream.stopTime;
rate = stream.rate;
balance = stream.balance;
}
} | 3a90ee3b434599c148a0b2dd8c1733d3 | 136 | dianewan | 2023-05-25T00:27:27 | bad | [] |
def load_housing_data():
tarball_path = Path("datasets/housing.tgz")
if not tarball_path.is_file():
Path("datasets").mkdir(parents=True, exist_ok=True)
url = "https://github.com/ageron/data/raw/main/housing.tgz"
urllib.request.urlretrieve(url, tarball_path)
with tarfile.open(tarball_path) as housing_tarball:
housing_tarball.extractall(path="datasets")
return pd.read_csv(Path("datasets/housing/housing.csv"))
explain the code | eb557caa6e7ed3ff3d87a064be9e1a01 | 137 | dianewan | 2023-05-25T00:27:34 | good | ["code"] |
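A note for readers of the snippet above: it is a download-and-cache loader, and it silently depends on four imports it does not show. A minimal usage sketch, assuming load_housing_data is defined in the same module:

```python
# Imports the snippet above relies on but omits:
from pathlib import Path
import tarfile
import urllib.request
import pandas as pd

housing = load_housing_data()  # first call downloads and extracts housing.tgz
print(housing.shape)           # (rows, columns) of the cached CSV
print(housing.head())          # first few rows, loaded via pandas
```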
Somewhere in my css is something that causes <li><p>item</p></li> to render with a line break between the <li> bullet point and the beginning of <p>. What should I change to fix this? I use Tailwind CSS. Is there a solution that involves the inline: parameter? | 41757632b09d9e04b38e3c77b6a67c49 | 138 | dianewan | 2023-05-25T00:27:58 | good | ["code"] |
I'm gonna send you code in two separate messages, could you break it down for me?
//sampling period is 224.38 us
#define Adresa_MCP4725 96 //DAC adress
#define pin_ADC A15 //ADC input pin
#define pin_BAT A13 //Battery voltage input pin
#define WHITE 0xFFFF //color
#define BLACK 0x0000 //color
#define RED 0xF800 //color
#define GREEN 0x07E0 //color
#define DKBLUE 0x000D //color
#define buton_sus 7
#define buton_OK 6
#define buton_jos 5
#define senzor_pas 3
#define SDC_CS 53 //SD card control pin
#include <Wire.h>
#include <arduinoFFT.h>
#include <TFT_HX8357.h>
#include "Free_Fonts.h"
#include <SD.h>
TFT_HX8357 tft = TFT_HX8357();
arduinoFFT FFT = arduinoFFT();
File fisier;
const word nivele_bat[3] = {384, 392, 401}; //battery voltage threshold levels: low - med1 - med2 (V_bat/5V*1023*0.4)
//GPR global parameters
const double pas_deplasare_GPR = 0.24; //GPR horizontal step [m]
const double banda = 587000000; //GPR bandwidth [Hz]; 910 MHz - 323 MHz
const double amplitudine_maxima = 30000; //maximum FFT module value (for scaling)
const word nr_esant = 256; //sample no.
//menus
const char *medii_propagare[] = {"Air - 0.3 m/ns", "Ice - 0.16 m/ns", "Dry sand - 0.15 m/ns", "Dry soil (Granite) - 0.13 m/ns", "Limestone - 0.12 m/ns", "Asphalt - 0.11 m/ns", "Concrete - 0.1 m/ns", "Moist soil - 0.09 m/ns", "Wet soil (Silt) - 0.07 m/ns", "Saturated sand (Clay) - 0.06 m/ns", "Sweet water - 0.03 m/ns", "Salt water - 0.01 m/ns"};
const double viteze[] = {300000000, 160000000, 150000000, 130000000, 120000000, 110000000, 100000000, 90000000, 70000000, 60000000, 30000000, 10000000};
const word adancimi[] = {16, 32, 64, 128}; //resolution multiples
const double pasi_adanc[] = {10, 5, 2.5, 1, 0.5, 0.25, 0.1, 0.05, 0.025, 0.01};
const word zecimale_pasi_adanc[] = {0, 0, 1 , 0, 1, 2, 1, 2, 3, 2};
const word distante[] = {12, 24, 48, 96};
//graph constants
const word orig_x = 60, orig_y = 290, latime_grafic = 400, inaltime_grafic = 256; //graph origin coord. and dimensions [pixels]
//global variables for graph display
double max_dist, pas_dist, max_adanc, min_adanc, pas_adanc, rezolutie; // [m]
double c; //[m/s]
word nr_cel_rez_oriz, nr_cel_rez_vert, inaltime_cel_rez, latime_cel_rez, nr_zecimale_pasi_adanc, xpos, ypos, pas = 0;
//antenna coupling correction (anechoic chamber acquisition)
//const word corectie[nr_esant] = {497, 497, 477, 383, 251, 163, 125, 113, 146, 210, 305, 430, 550, 682, 801, 893, 947, 964, 922, 787, 654, 569, 521, 486, 455, 446, 451, 454, 439, 409, 377, 352, 337, 332, 323, 334, 342, 354, 371, 384, 397, 410, 420, 433, 449, 468, 496, 528, 560, 596, 637, 674, 705, 726, 733, 735, 735, 738, 749, 757, 760, 754, 731, 699, 657, 597, 520, 432, 342, 264, 213, 180, 164, 164, 173, 194, 222, 252, 288, 316, 350, 390, 425, 459, 491, 522, 548, 571, 590, 606, 624, 642, 660, 681, 694, 703, 706, 701, 692, 676, 651, 623, 590, 557, 528, 501, 477, 457, 443, 433, 429, 429, 431, 433, 439, 449, 462, 476, 492, 508, 525, 543, 566, 587, 604, 609, 603, 589, 570, 547, 519, 482, 434, 376, 326, 277, 233, 194, 159, 147, 167, 224, 306, 383, 449, 503, 545, 576, 601, 611, 615, 616, 617, 617, 616, 614, 613, 609, 602, 593, 584, 577, 571, 566, 559, 553, 545, 539, 533, 528, 524, 521, 518, 515, 510, 505, 500, 496, 493, 490, 485, 480, 477, 475, 474, 475, 476, 479, 484, 490, 496, 502, 508, 514, 522, 532, 538, 542, 541, 540, 538, 536, 536, 534, 531, 525, 520, 511, 503, 497, 491, 487, 483, 479, 473, 467, 468, 468, 466, 466, 466, 466, 467, 467, 470, 468, 467, 467, 466, 466, 465, 465, 467, 468, 467, 468, 467, 471, 473, 475, 477, 480, 482, 484, 486, 489, 491, 494, 495, 497, 497, 498, 498, 499, 498, 498};
//global variables
word i, amplitudine_scalata, culoare, esantioane[nr_esant];
double real[nr_esant], imag[nr_esant], amplitudine_corectata_cu_dist;
boolean cont, card;
//------------------------------------------------------------------------
void setup()
{
Wire.begin();
Wire.setClock(400000);
tft.begin();
tft.fillScreen(BLACK);
tft.setRotation(1);
tft.setTextColor(TFT_WHITE, TFT_BLACK);
tft.setTextDatum(TC_DATUM); // Centered
tft.setFreeFont(FSB12);
xpos = tft.width() / 2; // Middle screen
ypos = (tft.height() / 2) - tft.fontHeight(GFXFF); // Middle screen
pinMode(buton_sus, INPUT);
pinMode(buton_OK, INPUT);
pinMode(buton_jos, INPUT);
pinMode(senzor_pas, INPUT);
tft.drawString("<c> Mirel Paun 2020", xpos, ypos, GFXFF);
delay(1000);
tft.fillScreen(BLACK);
if (!digitalRead(buton_OK))
{
card = false; //do nor store on SD Card
while (!digitalRead(buton_OK)) ;
}
else
{
if (SD.begin(SDC_CS))
{
card = true;
Serial.begin(115200);
delay(1000);
tft.drawString("Waiting for PC to connect ...", xpos, ypos, GFXFF);
tft.drawString("Press OK to continue ...", xpos, ypos + tft.fontHeight(GFXFF), GFXFF);
while ((Serial.available() <= 0) && (digitalRead(buton_OK))) //wait for PC or OK button
{
delay(100);
}
if (Serial.available() > 0)
{
if (Serial.read() == 'A')
{
fisier = SD.open("Date.dat");
if (fisier)
//Send stored data
{
tft.fillScreen(BLACK);
tft.drawString("Connected. Sending file!", xpos, ypos, GFXFF);
while (fisier.available())
{
Serial.write(fisier.read());
}
fisier.close();
tft.fillScreen(BLACK);
tft.drawString("Deleting file ...", xpos, ypos, GFXFF);
if (SD.remove("Date.dat")) {
tft.drawString("Deleted!", xpos, ypos + tft.fontHeight(GFXFF), GFXFF);
}
else {
tft.drawString("Error deleting file!", xpos, ypos + tft.fontHeight(GFXFF), GFXFF);
}
while (1) {
; // Stop
}
}
else
{
tft.fillScreen(BLACK);
tft.drawString("Error! File missing!", xpos, ypos, GFXFF);
while (1) {
; // Stop
}
}
}
}
}
else
{
card = false; //no SD card
tft.drawString("SD Card missing!!!", xpos, ypos, GFXFF);
delay(2000);
}
tft.fillScreen(BLACK);
}
//DAC at 0
Wire.beginTransmission(Adresa_MCP4725);
Wire.write( 0 ); //MSB
Wire.write( 0 ); //LSB
Wire.endTransmission();
delay(10);
//Velocities menu--------------------------------------------------------------
while (!digitalRead(buton_OK)) ;
ypos = 10;
tft.setFreeFont(FSB12);
tft.drawString("Select propagation speed c:", xpos, ypos, GFXFF);
ypos += tft.fontHeight(GFXFF);
tft.setFreeFont(FSB9);
for (i = 0; i < (sizeof(medii_propagare) / sizeof(medii_propagare[0])); i++)
{
tft.drawString(medii_propagare[i], xpos, ypos, GFXFF);
ypos += tft.fontHeight(GFXFF);
}
i = 0;
ypos = 34 + i * tft.fontHeight(GFXFF);
tft.drawRect(90, ypos, 300, 22, RED); //Draw cursor
cont = false;
do {
while (digitalRead(buton_sus) && digitalRead(buton_jos) && digitalRead(buton_OK)) {
; // wait
}
delay(10);
if (!digitalRead(buton_jos))
{
if (i < (sizeof(medii_propagare) / sizeof(medii_propagare[0]) - 1)) {
i++;
}
else {
i = 0;
}
tft.drawRect(90, ypos, 300, 22, BLACK);
ypos = 34 + i * tft.fontHeight(GFXFF);
tft.drawRect(90, ypos, 300, 22, RED);
delay(500);
}
else if (!digitalRead(buton_sus))
{
if (i > 0) {
i--;
}
else {
i = (sizeof(medii_propagare) / sizeof(medii_propagare[0]) - 1);
}
tft.drawRect(90, ypos, 300, 22, BLACK);
ypos = 34 + i * tft.fontHeight(GFXFF);
tft.drawRect(90, ypos, 300, 22, RED);
delay(500);
}
else if (!digitalRead(buton_OK))
{
c = viteze[i];
cont = true;
}
} while (!cont);
rezolutie = c / (2.0 * banda); //depth resolution [m]
min_adanc = -4 * rezolutie; //offset cables + antennas (aprox. 4 * rezolution)
if (card)
{
//Verify file existance, if missing, create it and write speed index
if (!SD.exists("Date.dat")) {
fisier = SD.open("Date.dat", FILE_WRITE);
fisier.write(highByte(i));
fisier.write(lowByte(i));
fisier.close();
}
}
//Depth menu--------------------------------------------------------------
i = 0;
word ind_adanc_max = sizeof(adancimi) / sizeof(adancimi[0]) - 1;
tft.fillScreen(BLACK);
while (!digitalRead(buton_OK)) ;
ypos = 10;
tft.setFreeFont(FSB12);
tft.drawString("Select maximum displayed depth:", xpos, ypos, GFXFF);
ypos += tft.fontHeight(GFXFF);
tft.setFreeFont(FSB9);
for (i = 0; i <= ind_adanc_max; i++)
{
double adancime_temp = adancimi[i] * rezolutie + min_adanc;
double zecimale = adancime_temp - floor(adancime_temp);
String sir;
if (zecimale < (adancime_temp / 10)) sir = String(adancime_temp, 0) + " m";
else sir = String(adancime_temp, 1) + " m";
char text[6];
sir.toCharArray(text, 6);
tft.drawString(text, xpos, ypos, GFXFF);
ypos += tft.fontHeight(GFXFF);
}
i = 0;
ypos = 34 + i * tft.fontHeight(GFXFF);
tft.drawRect(90, ypos, 300, 22, RED); //Draw cursor
cont = false;
delay(1000);
do {
while (digitalRead(buton_sus) && digitalRead(buton_jos) && digitalRead(buton_OK)) {
; // wait
}
delay(10);
if (!digitalRead(buton_jos))
{
if (i < ind_adanc_max) {
i++;
}
else {
i = 0;
}
tft.drawRect(90, ypos, 300, 22, BLACK);
ypos = 34 + i * tft.fontHeight(GFXFF);
tft.drawRect(90, ypos, 300, 22, RED);
delay(500);
}
else if (!digitalRead(buton_sus))
{
if (i > 0) {
i--;
}
else {
i = ind_adanc_max;
}
tft.drawRect(90, ypos, 300, 22, BLACK);
ypos = 34 + i * tft.fontHeight(GFXFF);
tft.drawRect(90, ypos, 300, 22, RED);
delay(500);
}
else if (!digitalRead(buton_OK))
{
nr_cel_rez_vert = adancimi[i];
max_adanc = nr_cel_rez_vert * rezolutie + min_adanc;
double pas_adanc_temp = (max_adanc - min_adanc) / 4;
i = 0;
while ((abs(pas_adanc_temp - pasi_adanc[i]) > abs(pas_adanc_temp - pasi_adanc[i + 1])) && ((i + 1) < (sizeof(pasi_adanc) / sizeof(pasi_adanc[0]) - 1)))
{
i++;
}
if (abs(pas_adanc_temp - pasi_adanc[i]) > abs(pas_adanc_temp - pasi_adanc[i + 1])) i++;
pas_adanc = pasi_adanc[i];
nr_zecimale_pasi_adanc = zecimale_pasi_adanc[i];
cont = true;
}
} while (!cont);
//Distance menu--------------------------------------------------------------
tft.fillScreen(BLACK);
while (!digitalRead(buton_OK)) ;
ypos = 10;
tft.setFreeFont(FSB12);
tft.drawString("Select horizontal distance on screen:", xpos, ypos, GFXFF);
ypos += tft.fontHeight(GFXFF);
tft.setFreeFont(FSB9);
for (i = 0; i < (sizeof(distante) / sizeof(distante[0])); i++)
{
String sir = String(distante[i]) + " m";
char text[5];
sir.toCharArray(text, 5);
tft.drawString(text, xpos, ypos, GFXFF);
ypos += tft.fontHeight(GFXFF);
}
i = 0;
ypos = 34 + i * tft.fontHeight(GFXFF);
tft.drawRect(90, ypos, 300, 22, RED); //Draw cursor
cont = false;
delay(1000);
do {
while (digitalRead(buton_sus) && digitalRead(buton_jos) && digitalRead(buton_OK)) {
; // wait
}
delay(10);
if (!digitalRead(buton_jos))
{
if (i < (sizeof(distante) / sizeof(distante[0]) - 1)) {
i++;
}
else {
i = 0;
}
tft.drawRect(90, ypos, 300, 22, BLACK);
ypos = 34 + i * tft.fontHeight(GFXFF);
tft.drawRect(90, ypos, 300, 22, RED);
delay(500);
}
else if (!digitalRead(buton_sus))
{
if (i > 0) {
i--;
}
else {
i = (sizeof(distante) / sizeof(distante[0]) - 1);
}
tft.drawRect(90, ypos, 300, 22, BLACK);
ypos = 34 + i * tft.fontHeight(GFXFF);
tft.drawRect(90, ypos, 300, 22, RED);
delay(500);
}
else if (!digitalRead(buton_OK))
{
max_dist = distante[i];
pas_dist = max_dist / 6;
cont = true;
}
} while (!cont);
//graph parameters
inaltime_cel_rez = inaltime_grafic / nr_cel_rez_vert; //[pixels]
nr_cel_rez_oriz = max_dist / pas_deplasare_GPR;
latime_cel_rez = latime_grafic / nr_cel_rez_oriz; //[pixels]
// Draw grid
tft.fillScreen(BLACK);
Graph(tft, orig_x, orig_y, latime_grafic, inaltime_grafic, 0, max_dist, pas_dist, min_adanc, max_adanc, pas_adanc, "GPR", "Distance [m]", "Depth [m]");
afis_card();
masurare_afis_bat();
} | 36ebf82afab147ede3528f17a1e5af8c | 139 | dianewan | 2023-05-25T00:28:08 | bad | [] |
@using DevExtreme.AspNet.Mvc.Builders
@using XS.ERP.Core.Entities
@using XS.ERP.UI.Web.Razor.Models
@using XS.ERP.Common.Constant
@using XS.ERP.UI.Web.Razor.V2.Models.ApiDto
@model ProductInventoryViewModel
<script>
</script>
@Html.AntiForgeryToken()
<div class="content-block">
<div style="margin-top:20px;">
@(
Html.DevExtreme().DataGrid<DtoInventory>()
.ID("gridProductInventory")
.DataSource(new JS("arraySOHeaderData"))
.Height(750)
.StateStoring(s => s.Enabled(true).Type(StateStoringType.LocalStorage).StorageKey("gridProductInventory"))
.Columns(columns =>
{
columns.AddFor(m => m.Id).Caption("Id");
columns.AddFor(m => m.Product.Name).Caption("Product").Width(200);
columns.AddFor(m => m.Product.Description).Caption("Description");
columns.Add().Caption("Quantity").CellTemplate(
@<text><%= formatNumberWithDecimalPlaces(data.Qty) %> </text>
);
columns.AddFor(m => m.Booked).Caption("Booked").Format(f => f.Type(Format.FixedPoint).Precision(2));
columns.AddFor(m => m.Reserved).Caption("Reserved").Format(f => f.Type(Format.FixedPoint).Precision(2));
columns.AddFor(m => m.CreatedDate).Caption("Create Date");
}
)
</div>
</div> | 9815523b7efd73e7ee6d6270dd7ef10b | 14 | DaehanKim | 2023-05-29T06:40:56 | bad | [] |
@using DevExtreme.AspNet.Mvc.Builders
@using XS.ERP.Core.Entities
@using XS.ERP.UI.Web.Razor.Models
@using XS.ERP.Common.Constant
@using XS.ERP.UI.Web.Razor.V2.Models.ApiDto
@model ProductInventoryViewModel
<script>
</script>
@Html.AntiForgeryToken()
<div class="content-block">
<div style="margin-top:20px;">
@(
Html.DevExtreme().DataGrid<DtoInventory>()
.ID("gridProductInventory")
.DataSource(new JS("arraySOHeaderData"))
.Height(750)
.StateStoring(s => s.Enabled(true).Type(StateStoringType.LocalStorage).StorageKey("gridProductInventory"))
.Columns(columns =>
{
columns.AddFor(m => m.Id).Caption("Id");
columns.AddFor(m => m.Product.Name).Caption("Product").Width(200);
columns.AddFor(m => m.Product.Description).Caption("Description");
columns.Add().Caption("Quantity").CellTemplate(
@<text><%= formatNumberWithDecimalPlaces(data.Qty) %> </text>
);
columns.AddFor(m => m.Booked).Caption("Booked").Format(f => f.Type(Format.FixedPoint).Precision(2));
columns.AddFor(m => m.Reserved).Caption("Reserved").Format(f => f.Type(Format.FixedPoint).Precision(2));
columns.AddFor(m => m.CreatedDate).Caption("Create Date");
}
)
</div>
</div> | 9815523b7efd73e7ee6d6270dd7ef10b | 14 | dianewan | 2023-05-23T23:28:22 | bad | [] |
import khan
import flight
import flight.operators as fo
import flight.operators.sku as sku
mixer = khan.client("mixer")
@mixer.implements("top_items_by_department", flight=True)
async def flight_path(
client_id: int,
department: str,
diversity_coefficient: float = 0.5,
limit:int = 10
):
take_n = (
sku.Kaleidoset(score_col="psale", rho=diversity_coefficient, n=limit)
if diversity_coefficient > 0
else fo.Limit(n=limit)
)
pipeline = (
flight.Pipeline("top_items_by_department").initialize(
include_candidates=[
sku.AvailableSkusByChannel(channel="shop_your_look"),
sku.Qualifications(),
sku.ShopPSaleSource(score_col="psale"),
],
)
| sku.SkuAugment(columns=["style_variant_id", "department"])
| fo.Equals(column="department", value=department)
| fo.Sort(order_by="psale")
| take_n
)
return await pipeline.execute(
client_id=client_id,
item_type="sku_id",
debug=False,
verbose=False,
) | 6d874da7342cbc2c55f3353b8e17a734 | 140 | dianewan | 2023-05-25T00:28:17 | bad | [] |
The following CalculateProjectilePath function returns a Vector3[] path that a cannonball follows when fired, for a Unity game. Treat the path as a list of Bezier curve control points and change the function to return a Bezier curve instead. Also, since it's a Bezier curve, the number of control points could probably be reduced (but not TOO much).
public static class CannonballTrajectory
{
private const double g = 9.8; // Acceleration due to gravity (m/s^2)
public static Vector3[] CalculateProjectilePath(Vector3 aimVector, double muzzleVelocity)
{
Vector3[] path = new Vector3[100];
for (int i = 0; i < 100; i++)
{
float t = i * 0.1f; // Time elapsed in seconds
float x = aimVector.x * muzzleVelocity * t;
float y = aimVector.y * muzzleVelocity * t - 0.5f * g * t * t;
float z = aimVector.z * muzzleVelocity * t;
path[i] = new Vector3(x, y, z);
}
return path;
}
} | 09919cb87021374f219b67a851435260 | 141 | dianewan | 2023-05-25T00:28:28 | good | ["code"] |
what the following code does?
include_once __DIR__ . '/../../config.php';
include_once __DIR__ . '/../../course/lib.php';
$secret = required_param('secret', PARAM_RAW);
$meetingid = required_param('meetingid', PARAM_INT);
$visible = optional_param('visible', 1, PARAM_INT);
$topic = optional_param('topic', '', PARAM_RAW);
$content = optional_param('content', '', PARAM_RAW);
$hash = hash_hmac('sha256', 'The quick brown fox jumped over the lazy dog.', 'secret');
if ($hash != $secret)
die('Client Secret Error.');
global $DB;
$admins = $DB->get_record('config', array('name' => 'siteadmins'));
$users = explode(',', $admins->value);
$userid = array_shift($users);
$user = $DB->get_record('user', array('id' => $userid));
complete_user_login($user);
$zoom = $DB->get_record('zoom', array('meeting_id' => $meetingid));
$module = $DB->get_record('modules', array('name' => 'zoom'));
$courseid = $zoom->course;
if (!$zoom)
die('meetingid not found.');
$moduledata = new stdClass();
$moduledata->course = $courseid;
$moduledata->modulename = 'page';
$moduledata->section = 0;
$moduledata->visible = $visible;
$moduledata->introeditor = array(
'text' => true,
'format' => true
);
$moduledata->content = $content;
$moduledata->name = $topic;
$page = create_module($moduledata);
if ($page) {
$source = $DB->get_record('course_modules', array('course' => $courseid, 'instance' => $page->id));
$destination = $DB->get_record('course_modules', array('course' => $courseid, 'module' => $module->id, 'instance' => $zoom->id));
$section = $DB->get_record('course_sections', array('course' => $courseid, 'id' => $destination->section));
moveto_module($source, $section, $destination);
$course = $DB->get_record('course', array('id' => $courseid));
die(json_encode(array('courseid' => $course->id, 'coursename' => $course->fullname)));
} else {
die('Error Create Module.');
} | 60a235af0123f0d55328e52d28edb04a | 142 | dianewan | 2023-05-29T23:40:32 | good | ["code"] |
How does allocation in C work?
Like, take this function for example:
```c
inline char read_char(File* file) {
char* buf;
storage_file_read(file, buf, 1);
return buf[0];
}
```
Would this work? As the buffer array is only inside the function but we return an element from it. | fb746cf9cee6543f4ef05dd75e453806 | 143 | dianewan | 2023-05-29T23:40:39 | good | ["code"] |
Code:”import { BaseModel } from '../core-models/BaseModel';
import { AbstractModelFactory } from './AbstractModelFactory';
import { CanvasEngine } from '../CanvasEngine';
export interface GenerateWidgetEvent<T extends BaseModel> {
model: T;
}
/**
* Further extends the AbstractFactory to add widget generation capability.
*/
export abstract class AbstractReactFactory<
T extends BaseModel = BaseModel,
E extends CanvasEngine = CanvasEngine
> extends AbstractModelFactory<T, E> {
/**
* Generates React widgets from the model contained in the event object
*/
abstract generateReactWidget(event: GenerateWidgetEvent<T>): JSX.Element;
}”
Error: “packages/react-canvas-core/src/core/AbstractReactFactory.tsx:19:63 - error TS2503: Cannot find namespace 'JSX'.
19 abstract generateReactWidget(event: GenerateWidgetEvent<T>): JSX.Element;” | 3cee71ea98491c038dbcde36a6cad77a | 144 | dianewan | 2023-05-29T23:40:55 | bad | [] |
Generate a README.md for internal use for the following go package:
```go
package port
import (
"fmt"
"math/rand"
"net"
"sync"
"time"
)
type FreePort struct {
sync.RWMutex
min int
max int
attempts int
}
func New() *FreePort {
rand.Seed(time.Now().UnixNano())
return &FreePort{
min: 50000,
max: 65535,
attempts: 10,
}
}
func (f *FreePort) Get() int {
port, err := f.GetWithError()
if err != nil {
panic(err)
}
return port
}
// Get returns a free port.
func (f *FreePort) GetWithError() (int, error) {
f.Lock()
defer f.Unlock()
for i := 0; i < f.attempts; i++ {
port := rand.Intn(f.max-f.min) + f.min
ln, err := net.ListenTCP("tcp", &net.TCPAddr{IP: net.ParseIP("127.0.0.1"), Port: port})
if err != nil {
continue
}
err = ln.Close()
if err != nil {
return -1, err
}
return port, nil
}
return -1, fmt.Errorf("could not find a free port, maybe increase attempts?")
}
``` | 6524bbf3e1f1fa035f1cf4713aba55b2 | 145 | dianewan | 2023-05-29T23:41:06 | good | [
"code"
] |
<div><p class="">It's incredibly important that I receive a pay raise. Please write a very convincing and flattering letter expressing this important fact. Use flowery language and make a really good case!</p></div> | 44bcb88c3b458e45dbbd9b83b509b0d4 | 146 | dianewan | 2023-05-29T23:41:21 | good | [] |
Provide a brief description of the following text:
[ Loaded from /Users/lucas/projects/me/lib/python/wsh/spinner.py ]
```
# Thanks: @Dorcioman
from itertools import cycle
import threading
import time, sys, math
from colorama import Fore, Back, Style
# parts = "▁ ▂ ▃ ▄ ▅ ▄ ▃ ▁".split(" ")
parts = "▁ ▂ ▃ ▄ ▅ ▆ ▅ ▄ ▃ ▁".split(" ")
# parts = "▁ ▂ ▃ ▄ ▅ ▆ ▇ ▆ ▅ ▄ ▃ ▁".split(" ")
frames = []
for offset in range(0, len(parts)):
frames.append(' '.join(parts[offset:] + parts[:offset]))
frames.reverse()
class Spinner:
# __default_spinner_symbols_list = ['|-----|', '|#----|', '|-#---|', '|--#--|', '|---#-|', '|----#|']
__default_spinner_symbols_list = frames
def __init__(self, spinner_symbols_list: [str] = None):
spinner_symbols_list = spinner_symbols_list if spinner_symbols_list else Spinner.__default_spinner_symbols_list
self.__screen_lock = threading.Event()
self.__spinner = cycle(spinner_symbols_list)
self.__stop_event = False
self.__thread = None
def __enter__(self):
self.start("")
def __exit__(self, *args):
self.stop()
def get_spin(self):
return self.__spinner
def start(self, spinner_message: str):
start_time = time.time()
self.__stop_event = False
time.sleep(0.05)
def run_spinner(message):
while not self.__stop_event:
# print("\r{message}{spinner}".format(message=message, spinner=next(self.__spinner)) + f" [ {math.floor(time.time() - start_time)}s ]", end="")
print(f"\r[ {math.floor(time.time() - start_time)}s ] " + "{message}{spinner} ".format(message=message, spinner=next(self.__spinner)), end="")
time.sleep(0.05)
self.__screen_lock.set()
self.__thread = threading.Thread(target=run_spinner, args=(spinner_message,), daemon=True)
self.__thread.start()
def stop(self):
self.__stop_event = True
if self.__screen_lock.is_set():
self.__screen_lock.wait()
self.__screen_lock.clear()
print("\r ", end="")
print("\r", end="")
# print(' ')
``` | 56ee9b64788c16fb189a2f028c578380 | 15 | DaehanKim | 2023-05-29T06:42:01 | good | ["code"] |
Provide a brief description of the following text:
[ Loaded from /Users/lucas/projects/me/lib/python/wsh/spinner.py ]
```
# Thanks: @Dorcioman
from itertools import cycle
import threading
import time, sys, math
from colorama import Fore, Back, Style
# parts = "▁ ▂ ▃ ▄ ▅ ▄ ▃ ▁".split(" ")
parts = "▁ ▂ ▃ ▄ ▅ ▆ ▅ ▄ ▃ ▁".split(" ")
# parts = "▁ ▂ ▃ ▄ ▅ ▆ ▇ ▆ ▅ ▄ ▃ ▁".split(" ")
frames = []
for offset in range(0, len(parts)):
frames.append(' '.join(parts[offset:] + parts[:offset]))
frames.reverse()
class Spinner:
# __default_spinner_symbols_list = ['|-----|', '|#----|', '|-#---|', '|--#--|', '|---#-|', '|----#|']
__default_spinner_symbols_list = frames
def __init__(self, spinner_symbols_list: [str] = None):
spinner_symbols_list = spinner_symbols_list if spinner_symbols_list else Spinner.__default_spinner_symbols_list
self.__screen_lock = threading.Event()
self.__spinner = cycle(spinner_symbols_list)
self.__stop_event = False
self.__thread = None
def __enter__(self):
self.start("")
def __exit__(self, *args):
self.stop()
def get_spin(self):
return self.__spinner
def start(self, spinner_message: str):
start_time = time.time()
self.__stop_event = False
time.sleep(0.05)
def run_spinner(message):
while not self.__stop_event:
# print("\r{message}{spinner}".format(message=message, spinner=next(self.__spinner)) + f" [ {math.floor(time.time() - start_time)}s ]", end="")
print(f"\r[ {math.floor(time.time() - start_time)}s ] " + "{message}{spinner} ".format(message=message, spinner=next(self.__spinner)), end="")
time.sleep(0.05)
self.__screen_lock.set()
self.__thread = threading.Thread(target=run_spinner, args=(spinner_message,), daemon=True)
self.__thread.start()
def stop(self):
self.__stop_event = True
if self.__screen_lock.is_set():
self.__screen_lock.wait()
self.__screen_lock.clear()
print("\r ", end="")
print("\r", end="")
# print(' ')
``` | 56ee9b64788c16fb189a2f028c578380 | 15 | dianewan | 2023-05-23T23:32:19 | good | ["code"] |
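For both annotations of the spinner prompt above, a short usage sketch may help: the class works either as a context manager or via explicit start/stop calls. A minimal sketch, assuming the file is importable as `spinner` (the module name is inferred from the path shown in the prompt):

```python
import time
from spinner import Spinner  # module name assumed from the loaded path

# Context-manager form: __enter__ calls start(""), __exit__ calls stop().
with Spinner():
    time.sleep(3)  # stands in for a long-running task

# Explicit form, with a message printed ahead of the animation.
s = Spinner()
s.start("working ")
time.sleep(3)
s.stop()  # stops the daemon thread and clears the line
```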
fix this
if (r.message) {
if (r.message.count == 1) {
frm.set_df_property("mobile_phone_number", "label", __("Mobile Phone Number"));
} else if (r.message.count > 1) {
frm.set_df_property("mobile_phone_number", "label", 'Mobile Phone Number(<b>{1}</b> inquiries with same number)" "<a href="{0}" target="_blank" rel="noopener noreferrer">View</a>".replace("{0}", r.message.url).replace("{1}", r.message.count));
}
} | 3566a6e30628a245135f9e9a27dd2af1 | 16 | DaehanKim | 2023-05-29T06:42:11 | bad | [] |
fix this
if (r.message) {
if (r.message.count == 1) {
frm.set_df_property("mobile_phone_number", "label", __("Mobile Phone Number"));
} else if (r.message.count > 1) {
frm.set_df_property("mobile_phone_number", "label", 'Mobile Phone Number(<b>{1}</b> inquiries with same number)" "<a href="{0}" target="_blank" rel="noopener noreferrer">View</a>".replace("{0}", r.message.url).replace("{1}", r.message.count));
}
} | 3566a6e30628a245135f9e9a27dd2af1 | 16 | dianewan | 2023-05-23T23:35:12 | bad | [] |
Analyze the code below and create a manual for customer guidance.
/**
* Search the list of gifts given - giveaway items
*
* @param $aParams
* @return array|bool|\PDOStatement
*/
public function getOrderBenefitItemList($aParams)
{
// SELECT clause field settings
$aSelectFieldInfo = $this->getQuerySelect(constOrderBenefitItem::SCHEMA, [], [
'log_no' => 'sobi.log_no',
'mall_order_key' => 'sobi.mall_order_key',
'order_no' => 'sobi.order_no',
'order_item_no' => 'sobi.order_item_no',
'benefit_no' => 'sobi. benefit_no',
'benefit_name' => 'sobi. benefit_name',
'product_name' => 'sobi.product_name',
'option_name' => 'sobi.option_name',
'payment_qty' => 'COALESCE(sobi.payment_qty, 0)',
'payment_division' => 'CASE WHEN sobi.payment_division = \'T\' THEN \'Payment\' ELSE \'Cancellation\' END',
'c_ins_user_name' => 'sobi.c_ins_user_name',
'set_user_id' => 'sobi.ins_user_id',
'set_user_ip' => 'sobi.ins_user_ip',
'payment_date' => 'sobi.ins_timestamp'
], [
$aParams['sort_key'] => $aParams['sort']
]);
// WHERE clause search condition setting (field definition array, parameter array, search condition array)
$aWhereFieldInfo = $this->getQueryWhere(constOrderBenefitItem::SCHEMA, $aParams, constOrderBenefitItem::CONDITION);
// decryption
$aDecryptColumn = commModelDes::getDecrypt([
'sobi.c_buyer_name',
'sobi.c_ins_user_name',
]);
$sSql = "
SELECT
{$aSelectFieldInfo['sSelect']},
{$aDecryptColumn['sobi.c_ins_user_name']} AS set_user_name,
{$aDecryptColumn['sobi.c_buyer_name']} AS c_buyer_name,
sobi.payment_division AS payment_division_type
FROM
t_sales_order_benefit_item AS sobi
WHERE 1=1 {$aWhereFieldInfo['sAndWhere']}
ORDER BY {$aSelectFieldInfo['sOrderBy']}
{$aSelectFieldInfo['sLimit']}
";
return $this->prepare($sSql, $aWhereFieldInfo['aAssign'], FW_MODEL_RESULT_ROWS);
}
Translate now
| 63f13ffe9baa82d398f8283730c07077 | 17 | DaehanKim | 2023-05-29T06:43:19 | bad | [] |
Analyze the code below and create a manual for customer guidance.
/**
* Search the list of gifts given - giveaway items
*
* @param $aParams
* @return array|bool|\PDOStatement
*/
public function getOrderBenefitItemList($aParams)
{
// SELECT clause field settings
$aSelectFieldInfo = $this->getQuerySelect(constOrderBenefitItem::SCHEMA, [], [
'log_no' => 'sobi.log_no',
'mall_order_key' => 'sobi.mall_order_key',
'order_no' => 'sobi.order_no',
'order_item_no' => 'sobi.order_item_no',
'benefit_no' => 'sobi. benefit_no',
'benefit_name' => 'sobi. benefit_name',
'product_name' => 'sobi.product_name',
'option_name' => 'sobi.option_name',
'payment_qty' => 'COALESCE(sobi.payment_qty, 0)',
'payment_division' => 'CASE WHEN sobi.payment_division = \'T\' THEN \'Payment\' ELSE \'Cancellation\' END',
'c_ins_user_name' => 'sobi.c_ins_user_name',
'set_user_id' => 'sobi.ins_user_id',
'set_user_ip' => 'sobi.ins_user_ip',
'payment_date' => 'sobi.ins_timestamp'
], [
$aParams['sort_key'] => $aParams['sort']
]);
// WHERE clause search condition setting (field definition array, parameter array, search condition array)
$aWhereFieldInfo = $this->getQueryWhere(constOrderBenefitItem::SCHEMA, $aParams, constOrderBenefitItem::CONDITION);
// decryption
$aDecryptColumn = commModelDes::getDecrypt([
'sobi.c_buyer_name',
'sobi.c_ins_user_name',
]);
$sSql = "
SELECT
{$aSelectFieldInfo['sSelect']},
{$aDecryptColumn['sobi.c_ins_user_name']} AS set_user_name,
{$aDecryptColumn['sobi.c_buyer_name']} AS c_buyer_name,
sobi.payment_division AS payment_division_type
FROM
t_sales_order_benefit_item AS sobi
WHERE 1=1 {$aWhereFieldInfo['sAndWhere']}
ORDER BY {$aSelectFieldInfo['sOrderBy']}
{$aSelectFieldInfo['sLimit']}
";
return $this->prepare($sSql, $aWhereFieldInfo['aAssign'], FW_MODEL_RESULT_ROWS);
}
Translate now
| 63f13ffe9baa82d398f8283730c07077 | 17 | dianewan | 2023-05-23T23:36:23 | bad | [] |
if I define a function with the syntax below, what does it mean in regards to memory allocation?
def func_a():
pass | 6b89355ee650af38657df106d713507b | 18 | DaehanKim | 2023-05-29T06:44:07 | good | ["code"] |
if I define a function with the syntax below, what does it mean in regards to memory allocation?
def func_a():
pass | 6b89355ee650af38657df106d713507b | 18 | dianewan | 2023-05-23T23:37:03 | good | ["code"] |
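On the memory question above: in CPython, executing a `def` statement allocates a function object (wrapping a code object compiled once) at definition time, and a body of `pass` means each call only creates a short-lived frame and returns None. A small sketch that makes this observable:

```python
import sys

def func_a():
    pass

print(type(func_a))                    # <class 'function'> - a heap-allocated object
print(sys.getsizeof(func_a))           # size of the function object itself
print(sys.getsizeof(func_a.__code__))  # code object created once, at definition time
print(func_a())                        # None: `pass` allocates nothing and returns nothing
```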
In my script, when I click on the rating stars to update the rating, it updates; but on the next run, clicking on the same star is supposed to change the titles and classes to Unset and, on click, set the rating to null. But it isn't happening. const {
stash,
Stash,
waitForElementId,
waitForElementClass,
waitForElementByXpath,
getElementByXpath,
getClosestAncestor,
updateTextInput,
} = unsafeWindow.stash;
async function querySceneRating(sceneId) {
const gqlQuery = {
'query': `query {
findScene(id: ${sceneId}){
rating
}
}`
};
var rating = await stash.callGQL(gqlQuery);
if (!rating.data.findScene) {
// handle the error here
console.log("Error with querySceneRating GraphQL: " + rating);
}
return rating
}
function updateElements(rating) {
let sceneId = updateSceneId();
console.log("updateElements: sceneId: " + sceneId + ": passed rating: " + rating);
// code to update the elements here
waitForElementId("video-stars", async function(IdName, el) {
currentRating = rating;
console.log("updateElements: currentRating: " + currentRating)
const customdivElements = document.getElementById('video-stars')
const ratingButtons = customdivElements.querySelectorAll('.rating-stars button');
// Set Classes & Titles inital values
if (currentRating == null) {
console.log("updateElements - currentRating == null: ");
updateTitles(0);
updateClasses(0);
} else {
console.log("updateElements - rating != null - rating: ");
updateTitles(currentRating);
updateClasses(currentRating);
}
})
return
}
async function updateSceneRating(sceneId, rating) {
console.log("Updating Rating: Passed Rating: " + rating)
console.log("updateSceneRating rating " + rating)
const reqData = {
"operationName": "sceneUpdate",
"variables": {
"input": {
"id": sceneId,
"rating100": rating * 20
}
},
"query": `mutation sceneUpdate($input: SceneUpdateInput!){
sceneUpdate(input: $input){
id
rating100
}
}`
}
const response = await stash.callGQL(reqData);
currentRating = rating;
// setTimeout(function() {
updateElements(currentRating);
// }, 2000)
}
function updateSceneId() {
const currentUrl = window.location.href.split('?')[0];
const sceneId = currentUrl.substring(currentUrl.lastIndexOf('/') + 1);
console.log("updateSceneId" + sceneId);
return sceneId
}
function updateTitles(num) {
console.log("update_titles: passed title: " + num)
const customdivstitle = document.getElementById('video-stars')
const ratingButtons = customdivstitle.querySelectorAll('.rating-stars button');
ratingButtons.forEach(btn => {
btn.title = num;
});
}
function updateClasses(num) {
console.log("update_classes: passed class: " + num)
const customdivclasses = document.getElementById('video-stars')
const ratingButtons = customdivclasses.querySelectorAll('.rating-stars button');
ratingButtons.forEach((btn, i) => {
if (i < num) {
btn.classList.remove('star-fill-0');
btn.classList.add('star-fill-100');
} else {
btn.classList.add('star-fill-0');
btn.classList.remove('star-fill-100');
}
});
}
stash.addEventListener("page:scene", async function() {
let sceneId = updateSceneId();
// let rating = (await querySceneRating(sceneId)).data.findScene.rating;
if (!document.getElementById('video-stars')) {
await waitForElementClass("rating-stars", async function (className, el) {
const e = document.querySelector('.rating-stars')
// const e = document.getElementsByClassName('.rating-stars')[1]
console.log(e);
const node = e.cloneNode(true);
const control = document.querySelector('.vjs-control-bar');
node.setAttribute('id', 'video-stars');
// ratingStarsNode.innerHTML = ratingstars;
control.appendChild(node);
control.insertBefore(node, control.childNodes[9]);
await waitForElementByXpath("//div[contains(@class, 'VideoPlayer')]", function(xpath, el) {
console.log("Hello3");
// const n = document.getElementsByClassName('rating-stars')[1];
const customdiv = document.getElementById('video-stars');
console.log("customdiv: " + customdiv)
const nodeButtons = customdiv.querySelectorAll('button')
if(typeof nodeButtons === 'object' && nodeButtons.length > 0 ){
for(let i = 0; i < nodeButtons.length; i++){
if (nodeButtons[i] && nodeButtons[i].hasAttribute("disabled")) {
console.log("True")
nodeButtons[i].removeAttribute("disabled");
}
}
}
if(typeof nodeButtons === 'object' && nodeButtons.length > 0 ){
for(let i = 0; i < nodeButtons.length; i++){
nodeButtons[i].classList.add('btn-rating-node')
}
}
// Get current set rating or if on first pass get rating from HTMl
var rating = currentRating || customdiv.querySelector('button').title * 20;
console.log("xpath rating: " + rating);
const ratingButtons = customdiv.querySelectorAll('button');
const unfilledstar = customdiv.querySelectorAll('.unfilled-star div');
// const sceneid = updateSceneId();
currentRating = Math.floor(rating / 20);
ratingButtons.forEach((btn, index) => {
index += 1
btn.addEventListener('mouseover', event => {
if (index === currentRating) {
ratingButtons.forEach(button => {
console.log('mouseover (index===currentRating) : index: ' + index + " : currentRating : " + currentRating)
updateTitles("Unset");
})
}
else if (currentRating !== "0" && currentRating !== index) {
console.log('mouseover-currentRating !== "0" && currentRating !== index - ' + "index: " + index + " rating: " + rating + "currentRating: " + currentRating);
updateTitles(index);
updateClasses(index);
}
});
btn.addEventListener('mouseout', event => {
if (currentRating !== "0" && currentRating !== btn.title) {
console.log('mouseout (currentRating !== "0" && currentRating !== btn.title) : currentrating: ' + currentRating + " : btn.title: " + btn.title)
if (rating !== "0") {
updateTitles(currentRating);
updateClasses(currentRating);
console.log('mouseout: rating: ' + rating + " currentRating: " + currentRating);
}
}});
btn.addEventListener('click', event => {
if (btn.title !== "Unset") {
console.log('click: (btn.title !== "Unset"): btn.title : ' + btn.title);
updateSceneRating(sceneId, btn.title);
} else {
console.log('click btn: (btn.title !== "Unset") else : btn.title : ' + btn.title);
updateTitles("0");
updateClasses(0);
updateSceneRating(sceneId, null);
}
});
});
unfilledstar.forEach((div, index) => {
index += 1
unfilledstar.addEventListener('click', event => {
if (ratingButtons.title == "Unset") {
console.log('click : unfilledstar : (ratingButtons.title == "Unset"): ratingButtons.title: ' + ratingButtons.title + ' : updateTitles & classes & rating = 0');
updateTitles("0");
updateClasses(0);
updateSceneRating(sceneId, null);
} else {
console.log('click : unfilledstar : (ratingButtons.title == "Unset") else : ratingButtons.title: ' + ratingButtons.title + ' : event.target.title : ' + event.target.title + ' updateTitles & classes & rating = event.target.title');
updateTitles(event.target.title);
updateClasses(event.target.title);
updateSceneRating(sceneId, event.target.title);
}
});
});
document.addEventListener("DOMContentLoaded", () => {
console.log("Loaded")
const observer = new MutationObserver(mutations => {
mutations.forEach(mutation => {
Array.from(mutation.addedNodes).forEach(addedNode => {
if (addedNode.matches && addedNode.matches('.fade.toast.success.show')) {
setTimeout(function() {
updateElements();
}, 2000);
}
});
});
});
observer.observe(document.body, {
childList: true,
subtree: true
});
});
})
});
}
})
})(); | 66c83caa397fab43ff2a306a7eb299f0 | 19 | DaehanKim | 2023-05-29T06:45:23 | good | [
"code"
] |
in my script, when I click on the rating stars to update it updates, but then on the next run clicking on the same star is supposed to change titles and classes to Unset and on click set the rating to null. But it isn't happening. const {
stash,
Stash,
waitForElementId,
waitForElementClass,
waitForElementByXpath,
getElementByXpath,
getClosestAncestor,
updateTextInput,
} = unsafeWindow.stash;
async function querySceneRating(sceneId) {
const gqlQuery = {
'query': `query {
findScene(id: ${sceneId}){
rating
}
}`
};
var rating = await stash.callGQL(gqlQuery);
if (!rating.data.findScene) {
// handle the error here
console.log("Error with querySceneRating GraphQL: " + rating);
}
return rating
}
function updateElements(rating) {
let sceneId = updateSceneId();
console.log("updateElements: sceneId: " + sceneId + ": passed rating: " + rating);
// code to update the elements here
waitForElementId("video-stars", async function(IdName, el) {
currentRating = rating;
console.log("updateElements: currentRating: " + currentRating)
const customdivElements = document.getElementById('video-stars')
const ratingButtons = customdivElements.querySelectorAll('.rating-stars button');
// Set Classes & Titles inital values
if (currentRating == null) {
console.log("updateElements - currentRating == null: ");
updateTitles(0);
updateClasses(0);
} else {
console.log("updateElements - rating != null - rating: ");
updateTitles(currentRating);
updateClasses(currentRating);
}
})
return
}
async function updateSceneRating(sceneId, rating) {
console.log("Updating Rating: Passed Rating: " + rating)
console.log("updateSceneRating rating " + rating)
const reqData = {
"operationName": "sceneUpdate",
"variables": {
"input": {
"id": sceneId,
"rating100": rating * 20
}
},
"query": `mutation sceneUpdate($input: SceneUpdateInput!){
sceneUpdate(input: $input){
id
rating100
}
}`
}
const response = await stash.callGQL(reqData);
currentRating = rating;
// setTimeout(function() {
updateElements(currentRating);
// }, 2000)
}
function updateSceneId() {
const currentUrl = window.location.href.split('?')[0];
const sceneId = currentUrl.substring(currentUrl.lastIndexOf('/') + 1);
console.log("updateSceneId" + sceneId);
return sceneId
}
function updateTitles(num) {
console.log("update_titles: passed title: " + num)
const customdivstitle = document.getElementById('video-stars')
const ratingButtons = customdivstitle.querySelectorAll('.rating-stars button');
ratingButtons.forEach(btn => {
btn.title = num;
});
}
function updateClasses(num) {
console.log("update_classes: passed class: " + num)
const customdivclasses = document.getElementById('video-stars')
const ratingButtons = customdivclasses.querySelectorAll('.rating-stars button');
ratingButtons.forEach((btn, i) => {
if (i < num) {
btn.classList.remove('star-fill-0');
btn.classList.add('star-fill-100');
} else {
btn.classList.add('star-fill-0');
btn.classList.remove('star-fill-100');
}
});
}
stash.addEventListener("page:scene", async function() {
let sceneId = updateSceneId();
// let rating = (await querySceneRating(sceneId)).data.findScene.rating;
if (!document.getElementById('video-stars')) {
await waitForElementClass("rating-stars", async function (className, el) {
const e = document.querySelector('.rating-stars')
// const e = document.getElementsByClassName('.rating-stars')[1]
console.log(e);
const node = e.cloneNode(true);
const control = document.querySelector('.vjs-control-bar');
node.setAttribute('id', 'video-stars');
// ratingStarsNode.innerHTML = ratingstars;
control.appendChild(node);
control.insertBefore(node, control.childNodes[9]);
await waitForElementByXpath("//div[contains(@class, 'VideoPlayer')]", function(xpath, el) {
console.log("Hello3");
// const n = document.getElementsByClassName('rating-stars')[1];
const customdiv = document.getElementById('video-stars');
console.log("customdiv: " + customdiv)
const nodeButtons = customdiv.querySelectorAll('button')
if(typeof nodeButtons === 'object' && nodeButtons.length > 0 ){
for(let i = 0; i < nodeButtons.length; i++){
if (nodeButtons[i] && nodeButtons[i].hasAttribute("disabled")) {
console.log("True")
nodeButtons[i].removeAttribute("disabled");
}
}
}
if(typeof nodeButtons === 'object' && nodeButtons.length > 0 ){
for(let i = 0; i < nodeButtons.length; i++){
nodeButtons[i].classList.add('btn-rating-node')
}
}
// Get current set rating or if on first pass get rating from HTMl
var rating = currentRating || customdiv.querySelector('button').title * 20;
console.log("xpath rating: " + rating);
const ratingButtons = customdiv.querySelectorAll('button');
const unfilledstar = customdiv.querySelectorAll('.unfilled-star div');
// const sceneid = updateSceneId();
currentRating = Math.floor(rating / 20);
ratingButtons.forEach((btn, index) => {
index += 1
btn.addEventListener('mouseover', event => {
if (index === currentRating) {
ratingButtons.forEach(button => {
console.log('mouseover (index===currentRating) : index: ' + index + " : currentRating : " + currentRating)
updateTitles("Unset");
})
}
else if (currentRating !== "0" && currentRating !== index) {
console.log('mouseover-currentRating !== "0" && currentRating !== index - ' + "index: " + index + " rating: " + rating + "currentRating: " + currentRating);
updateTitles(index);
updateClasses(index);
}
});
btn.addEventListener('mouseout', event => {
if (currentRating !== "0" && currentRating !== btn.title) {
console.log('mouseout (currentRating !== "0" && currentRating !== btn.title) : currentrating: ' + currentRating + " : btn.title: " + btn.title)
if (rating !== "0") {
updateTitles(currentRating);
updateClasses(currentRating);
console.log('mouseout: rating: ' + rating + " currentRating: " + currentRating);
}
}});
btn.addEventListener('click', event => {
if (btn.title !== "Unset") {
console.log('click: (btn.title !== "Unset"): btn.title : ' + btn.title);
updateSceneRating(sceneId, btn.title);
} else {
console.log('click btn: (btn.title !== "Unset") else : btn.title : ' + btn.title);
updateTitles("0");
updateClasses(0);
updateSceneRating(sceneId, null);
}
});
});
unfilledstar.forEach((div, index) => {
index += 1
unfilledstar.addEventListener('click', event => {
if (ratingButtons.title == "Unset") {
console.log('click : unfilledstar : (ratingButtons.title == "Unset"): ratingButtons.title: ' + ratingButtons.title + ' : updateTitles & classes & rating = 0');
updateTitles("0");
updateClasses(0);
updateSceneRating(sceneId, null);
} else {
console.log('click : unfilledstar : (ratingButtons.title == "Unset") else : ratingButtons.title: ' + ratingButtons.title + ' : event.target.title : ' + event.target.title + ' updateTitles & classes & rating = event.target.title');
updateTitles(event.target.title);
updateClasses(event.target.title);
updateSceneRating(sceneId, event.target.title);
}
});
});
document.addEventListener("DOMContentLoaded", () => {
console.log("Loaded")
const observer = new MutationObserver(mutations => {
mutations.forEach(mutation => {
Array.from(mutation.addedNodes).forEach(addedNode => {
if (addedNode.matches && addedNode.matches('.fade.toast.success.show')) {
setTimeout(function() {
updateElements();
}, 2000);
}
});
});
});
observer.observe(document.body, {
childList: true,
subtree: true
});
});
})
});
}
})
})(); | 66c83caa397fab43ff2a306a7eb299f0 | 19 | dianewan | 2023-05-23T23:37:46 | bad | [] |
Please act as an experienced software engineer with hands-on skill in JavaScript. Write human-written code in JavaScript and use proper coding standards to write and fix the code; I'll give you input points to write the code using all of those points. Sunny and Johnny like to pool their money and go to the ice cream parlor. Johnny never buys the same flavor that Sunny does. The only other rule they have is that they spend all of their money.
Given a list of prices for the flavors of ice cream, select the two that will cost all the money they have.
For example, they have m = 6 to spend and there are flavors costing cost = [1,3,4,5,6]. The two flavors costing 1 and 5 meet the criteria. Using 1-based indexing, they are at indices 1 and 4.
Function description:
Complete the icecreamParlor function below. It should return an array containing the indices of the prices of the two flavors they buy, sorted ascending.
icecreamParlor has the following parameter(s):
m: an integer denoting the amount of money they have to spend.
cost: an integer array denoting the cost of each flavor of ice cream
Here is how the answer should be:
Return type should be object
icecreamParlor(4,[6,3,5,1,2]) should return [2,4]
icecreamParlor(4,[1,4,5,3,2]) should return [1,4]
Answer should be valid for any given input
Here is the incomplete code:
function icecreamParlor(m, arr){
return false;
}
icecreamParlor(4,[6,3,5,1,2]); | dfe486c2395d0fd400a1851131e8f460 | 2 | DaehanKim | 2023-05-29T06:29:36 | good | [
"code"
] |
Please act as an experienced software engineer with hands-on skill in JavaScript. Write human-written code in JavaScript and use proper coding standards to write and fix the code; I'll give you input points to write the code using all of those points. Sunny and Johnny like to pool their money and go to the ice cream parlor. Johnny never buys the same flavor that Sunny does. The only other rule they have is that they spend all of their money.
Given a list of prices for the flavors of ice cream, select the two that will cost all the money they have.
For example, they have m = 6 to spend and there are flavors costing cost = [1,3,4,5,6]. The two flavors costing 1 and 5 meet the criteria. Using 1-based indexing, they are at indices 1 and 4.
Function description:
Complete the icecreamParlor function below. It should return an array containing the indices of the prices of the two flavors they buy, sorted ascending.
icecreamParlor has the following parameter(s):
m: an integer denoting the amount of money they have to spend.
cost: an integer array denoting the cost of each flavor of ice cream
Here is how the answer should be:
Return type should be object
icecreamParlor(4,[6,3,5,1,2]) should return [2,4]
icecreamParlor(4,[1,4,5,3,2]) should return [1,4]
Answer should be valid for any given input
Here is the incomplete code:
function icecreamParlor(m, arr){
return false;
}
icecreamParlor(4,[6,3,5,1,2]); | dfe486c2395d0fd400a1851131e8f460 | 2 | dianewan | 2023-05-23T23:18:03 | good | [
"code",
"system"
] |
Please act as an experienced software engineer with hands-on skill in JavaScript. Write human-written code in JavaScript and use proper coding standards to write and fix the code; I'll give you input points to write the code using all of those points. Sunny and Johnny like to pool their money and go to the ice cream parlor. Johnny never buys the same flavor that Sunny does. The only other rule they have is that they spend all of their money.
Given a list of prices for the flavors of ice cream, select the two that will cost all the money they have.
For example, they have m = 6 to spend and there are flavors costing cost = [1,3,4,5,6]. The two flavors costing 1 and 5 meet the criteria. Using 1-based indexing, they are at indices 1 and 4.
Function description:
Complete the icecreamParlor function below. It should return an array containing the indices of the prices of the two flavors they buy, sorted ascending.
icecreamParlor has the following parameter(s):
m: an integer denoting the amount of money they have to spend.
cost: an integer array denoting the cost of each flavor of ice cream
Here is how the answer should be:
Return type should be object
icecreamParlor(4,[6,3,5,1,2]) should return [2,4]
icecreamParlor(4,[1,4,5,3,2]) should return [1,4]
Answer should be valid for any given input
Here is the incomplete code:
function icecreamParlor(m, arr){
return false;
}
icecreamParlor(4,[6,3,5,1,2]); | dfe486c2395d0fd400a1851131e8f460 | 2 | nazneen | 2023-05-22T18:25:42 | good | [
"code"
] |
def search_embeddings(airtable, embedding_column, query, n=3, pprint=True):
records = get_embeddings_table(airtable)
query_embedding = openai.Embedding.create(input=query,engine="text-embedding-ada-002")["data"][0]["embedding"]
query_embedding = np.array(query_embedding)
#print("query_embedding.shape")
#print(type(query_embedding))
similarities = []
with ThreadPoolExecutor() as executor:
futures = [executor.submit(calculate_similarity, record, query_embedding) for record in records]
for future in futures:
similarities.append(future.result())
for i in range(len(similarities)):
records[i]["similarities"] = similarities[i]
#print(records[i]["similarities"])
records = sorted(records, key=itemgetter("similarities"), reverse=True)
print("sorted")
#print(len(records))
return records[:25]
Can you rewrite the code above to first store the records from Airtable into Pinecone, and then use Pinecone's vector search function for similarity? | 11bbaec140b864fb8d7ebb7f8afcd2af | 20 | DaehanKim | 2023-05-29T06:46:54 | good | [
"code",
"plugins"
] |
def search_embeddings(airtable, embedding_column, query, n=3, pprint=True):
records = get_embeddings_table(airtable)
query_embedding = openai.Embedding.create(input=query,engine="text-embedding-ada-002")["data"][0]["embedding"]
query_embedding = np.array(query_embedding)
#print("query_embedding.shape")
#print(type(query_embedding))
similarities = []
with ThreadPoolExecutor() as executor:
futures = [executor.submit(calculate_similarity, record, query_embedding) for record in records]
for future in futures:
similarities.append(future.result())
for i in range(len(similarities)):
records[i]["similarities"] = similarities[i]
#print(records[i]["similarities"])
records = sorted(records, key=itemgetter("similarities"), reverse=True)
print("sorted")
#print(len(records))
return records[:25]
Can you rewrite the code above to first store the records from Airtable into Pinecone, and then use Pinecone's vector search function for similarity? | 11bbaec140b864fb8d7ebb7f8afcd2af | 20 | dianewan | 2023-05-23T23:37:58 | good | [
"code"
] |
what does this SQL do:
SELECT
t1."STATE" AS "STATE",
COUNT(t2."DEAL_ID") AS "COUNT_DEALS",
SUM(t2."AMOUNT") AS "SUM_AMOUNT",
MAX(t1."CREATED_AT") AS "MAX_CREATED_AT"
FROM
"DEMO_DATA"."DBT_HUBSPOT"."HUBSPOT__COMPANIES" t1
INNER JOIN
"DEMO_DATA"."DBT_STG_HUBSPOT"."STG_HUBSPOT__DEAL_COMPANY" t3
ON
t1."COMPANY_ID" = t3."COMPANY_ID"
INNER JOIN
"DEMO_DATA"."DBT_HUBSPOT"."HUBSPOT__DEALS" t2
ON
t3."DEAL_ID" = t2."DEAL_ID"
WHERE
t1."STATE" IS NOT NULL
GROUP BY
1
| fa4b8a539b35e8f7707e46d2cef28275 | 21 | DaehanKim | 2023-05-29T06:47:09 | good | [
"code"
] |
what does this SQL do:
SELECT
t1."STATE" AS "STATE",
COUNT(t2."DEAL_ID") AS "COUNT_DEALS",
SUM(t2."AMOUNT") AS "SUM_AMOUNT",
MAX(t1."CREATED_AT") AS "MAX_CREATED_AT"
FROM
"DEMO_DATA"."DBT_HUBSPOT"."HUBSPOT__COMPANIES" t1
INNER JOIN
"DEMO_DATA"."DBT_STG_HUBSPOT"."STG_HUBSPOT__DEAL_COMPANY" t3
ON
t1."COMPANY_ID" = t3."COMPANY_ID"
INNER JOIN
"DEMO_DATA"."DBT_HUBSPOT"."HUBSPOT__DEALS" t2
ON
t3."DEAL_ID" = t2."DEAL_ID"
WHERE
t1."STATE" IS NOT NULL
GROUP BY
1
| fa4b8a539b35e8f7707e46d2cef28275 | 21 | dianewan | 2023-05-23T23:39:17 | good | [
"code"
] |
Write a poem about taking medication for high blood pressure | 077d9e3190091ab3c27bb237c4db2cf7 | 22 | DaehanKim | 2023-05-29T06:47:39 | good | [] |
Write a poem about taking medication for high blood pressure | 077d9e3190091ab3c27bb237c4db2cf7 | 22 | dianewan | 2023-05-23T23:39:27 | bad | [] |
fix the column creation errors in this code: import PySimpleGUI as sg
col_1 = [[sg.Button('Car Rental')],
[sg.Button('Car Return')],
[sg.Button('Total Finance Detail')]
]
col_2 = [[sg.Text('TG Enterprises', enable_events=True, size=(50, 1), justification="centre", text_color="red")]
]
cars = [
['CULTUS', 4, 3500, 300, 550],
['SONATA', 3, 5000, 350, 750],
['KIA Sportage', 3, 6000, 500, 700],
['Yaris', 5, 4000, 300, 600],
['HONDA CIVIC', 5, 5500, 300, 600],
['Pajero', 2, 7000, 500, 700]
]
header = ["Model", "Available", "price/Day",
"Liability Insurance/Day", "Comprehensive Insurance/Day"]
layout1 = [
[sg.Text('Select Function', justification="left")], [sg.Column(
col_1, key="Col1", visible=True, justification="left"), sg.VSeparator(),]
]
layout2 = [
[sg.Column(col_2)],
[sg.Text('Select Car:')],
[sg.Text('Car not Available', key='-carnot-',
text_color="red", visible=False)],
[sg.Table(
values=cars,
headings=header,
key="-TABLE-",
justification="left",
enable_events="True")],
[sg.Text("Number of Days"), sg.Input(key="-NUMOFDAYS-", size=(15, 1))],
[sg.CB("Liability", key="-liability-")],
[sg.CB("Comprehensive", key="-comprehensive-")],
[sg.Button('Ok', key="ok")]
]
layout3 = [
[sg.Text('TG Enterprises', enable_events=True, size=(50, 1), justification="centre", text_color="red")],
[sg.Text('Summary:', size=(30, 1))],
[sg.Text('Car',size=(30,1)), sg.T('Car Type', key='-cartype-', size=(30, 1))],
[sg.Text('Rent Cost', size=(30, 1)), sg.T('PKR',key="-rentcost-", size=(30, 1))],
[sg.Text('Insurance Cost', size=(30, 1)), sg.T('PKR',key="-insurancecost-", size=(30, 1))],
[sg.Text('Tax', size=(30, 1)), sg.T(' PKR',key=('-tax-'), size=(30, 1))],
[sg.Text('==========================================')],
[sg.Text('Total', size=(30, 1)), sg.T( 'PKR',key=('-total-') ,size=(30, 1))],
[sg.Button('Confirm', key='-confirm-'),
sg.Button('Cancel', key='-cancel-')]
]
layout4=[]
layout = [[sg.Column(layout1, key='-lay1-'),
sg.Column(layout2, visible=False, key='-lay2-'),
sg.Column(layout3, visible=False, key='-lay3-'),
sg.Column(layout4, visible = False,key = '-lay4-')]]
window = sg.Window('Car Rental System', layout,
size=(1000, 600), finalize=True)
def rent_car():
window['-lay2-'].update(visible=True)
while True:
event, values = window.read()
if event == 'Car Rental':
rent_car()
if event == "ok" and values['-TABLE-'] != []:
window['-lay3-'].update(visible=True)
window.refresh()
window['-lay2-'].update(visible=False)
car_cost = cars[values['-TABLE-'][0]][2]
car_type = cars[values['-TABLE-'][0]]
insurance_type = "Liability" if values["-liability-"] else "Comprehensive"
insurance_cost1 = cars[values['-TABLE-'][0]][4] if insurance_type == "Comprehensive" else cars[values['-TABLE-'][0]][3]
insurance_cost = insurance_cost1 *int(values['-NUMOFDAYS-'])
rent_cost = car_cost *int(values['-NUMOFDAYS-'])
tax = int(rent_cost) * 0.05
total = rent_cost + tax + insurance_cost
window['-cartype-'].update(value=f'{car_type[0]}')
window['-rentcost-'].update(value=f'{rent_cost} PKR')
window['-insurancecost-'].update(value=f'{insurance_cost} PKR')
window['-tax-'].update(value=f'{tax} PKR')
window['-total-'].update(value=f'{total} PKR')
if event == '-confirm-':
cars[values['-TABLE-'][0]][1] = cars[values['-TABLE-'][0]][1] - 1
window['-lay3-'].update(visible=False)
window['-lay2-'].update(visible=True)
window['-TABLE-'].update(cars)
if event == 'Ok':
cars(values['-TABLE-'])
if event != None:
if values['-TABLE-'] != [] and event == '-TABLE-':
if cars[values['-TABLE-'][0]][1] == 0:
window['-carnot-'].update(visible=True)
window['-lay3-'].update(visible=False)
window['-TABLE-'].update(cars)
else:
window['-carnot-'].update(visible=False)
if event == 'Car Return':
newList=[]
for idx, x in enumerate(cars):
newList.append( (x[0],idx))
layout4 += [[sg.Text('Select Car:')],
[sg.DD(newList,key='-dd-', size=(30,1), enable_events=True)]
]
window.extend_layout(window['-lay4-'], layout4)
window['-lay4-'].update(visible=True)
if event == 'Total Finance Detail':
headers_2 = ['Car Model', 'Rent Cost', 'Days','Insurance Cost', 'Tax Cost', 'Total Cost']
values_2 = [] | 188d52ae4428733426503e720694731b | 23 | DaehanKim | 2023-05-29T06:53:59 | bad | [] |
fix the column creation errors in this code: import PySimpleGUI as sg
col_1 = [[sg.Button('Car Rental')],
[sg.Button('Car Return')],
[sg.Button('Total Finance Detail')]
]
col_2 = [[sg.Text('TG Enterprises', enable_events=True, size=(50, 1), justification="centre", text_color="red")]
]
cars = [
['CULTUS', 4, 3500, 300, 550],
['SONATA', 3, 5000, 350, 750],
['KIA Sportage', 3, 6000, 500, 700],
['Yaris', 5, 4000, 300, 600],
['HONDA CIVIC', 5, 5500, 300, 600],
['Pajero', 2, 7000, 500, 700]
]
header = ["Model", "Available", "price/Day",
"Liability Insurance/Day", "Comprehensive Insurance/Day"]
layout1 = [
[sg.Text('Select Function', justification="left")], [sg.Column(
col_1, key="Col1", visible=True, justification="left"), sg.VSeparator(),]
]
layout2 = [
[sg.Column(col_2)],
[sg.Text('Select Car:')],
[sg.Text('Car not Available', key='-carnot-',
text_color="red", visible=False)],
[sg.Table(
values=cars,
headings=header,
key="-TABLE-",
justification="left",
enable_events="True")],
[sg.Text("Number of Days"), sg.Input(key="-NUMOFDAYS-", size=(15, 1))],
[sg.CB("Liability", key="-liability-")],
[sg.CB("Comprehensive", key="-comprehensive-")],
[sg.Button('Ok', key="ok")]
]
layout3 = [
[sg.Text('TG Enterprises', enable_events=True, size=(50, 1), justification="centre", text_color="red")],
[sg.Text('Summary:', size=(30, 1))],
[sg.Text('Car',size=(30,1)), sg.T('Car Type', key='-cartype-', size=(30, 1))],
[sg.Text('Rent Cost', size=(30, 1)), sg.T('PKR',key="-rentcost-", size=(30, 1))],
[sg.Text('Insurance Cost', size=(30, 1)), sg.T('PKR',key="-insurancecost-", size=(30, 1))],
[sg.Text('Tax', size=(30, 1)), sg.T(' PKR',key=('-tax-'), size=(30, 1))],
[sg.Text('==========================================')],
[sg.Text('Total', size=(30, 1)), sg.T( 'PKR',key=('-total-') ,size=(30, 1))],
[sg.Button('Confirm', key='-confirm-'),
sg.Button('Cancel', key='-cancel-')]
]
layout4=[]
layout = [[sg.Column(layout1, key='-lay1-'),
sg.Column(layout2, visible=False, key='-lay2-'),
sg.Column(layout3, visible=False, key='-lay3-'),
sg.Column(layout4, visible = False,key = '-lay4-')]]
window = sg.Window('Car Rental System', layout,
size=(1000, 600), finalize=True)
def rent_car():
window['-lay2-'].update(visible=True)
while True:
event, values = window.read()
if event == 'Car Rental':
rent_car()
if event == "ok" and values['-TABLE-'] != []:
window['-lay3-'].update(visible=True)
window.refresh()
window['-lay2-'].update(visible=False)
car_cost = cars[values['-TABLE-'][0]][2]
car_type = cars[values['-TABLE-'][0]]
insurance_type = "Liability" if values["-liability-"] else "Comprehensive"
insurance_cost1 = cars[values['-TABLE-'][0]][4] if insurance_type == "Comprehensive" else cars[values['-TABLE-'][0]][3]
insurance_cost = insurance_cost1 *int(values['-NUMOFDAYS-'])
rent_cost = car_cost *int(values['-NUMOFDAYS-'])
tax = int(rent_cost) * 0.05
total = rent_cost + tax + insurance_cost
window['-cartype-'].update(value=f'{car_type[0]}')
window['-rentcost-'].update(value=f'{rent_cost} PKR')
window['-insurancecost-'].update(value=f'{insurance_cost} PKR')
window['-tax-'].update(value=f'{tax} PKR')
window['-total-'].update(value=f'{total} PKR')
if event == '-confirm-':
cars[values['-TABLE-'][0]][1] = cars[values['-TABLE-'][0]][1] - 1
window['-lay3-'].update(visible=False)
window['-lay2-'].update(visible=True)
window['-TABLE-'].update(cars)
if event == 'Ok':
cars(values['-TABLE-'])
if event != None:
if values['-TABLE-'] != [] and event == '-TABLE-':
if cars[values['-TABLE-'][0]][1] == 0:
window['-carnot-'].update(visible=True)
window['-lay3-'].update(visible=False)
window['-TABLE-'].update(cars)
else:
window['-carnot-'].update(visible=False)
if event == 'Car Return':
newList=[]
for idx, x in enumerate(cars):
newList.append( (x[0],idx))
layout4 += [[sg.Text('Select Car:')],
[sg.DD(newList,key='-dd-', size=(30,1), enable_events=True)]
]
window.extend_layout(window['-lay4-'], layout4)
window['-lay4-'].update(visible=True)
if event == 'Total Finance Detail':
headers_2 = ['Car Model', 'Rent Cost', 'Days','Insurance Cost', 'Tax Cost', 'Total Cost']
values_2 = [] | 188d52ae4428733426503e720694731b | 23 | dianewan | 2023-05-23T23:40:21 | bad | [] |
Create some javascript code that can find all DOM elements with the class "jetAction" or "jetLabel", and parse those elements for any variables formatted in the style of {{ variable }}, for example {{ persMonthlyTrial.label }} or {{ persMonthlyPlan.price }}. The variable name will typically include an object name (like persMonthlyTrial) and a property name (like label or price).
For any variables specified in the DOM element's innerHTML, check if the variable exists in the JSON object named plans, and replace the variable with the value found in the plans object. For example, with a plans object as follows:
plans = {
persMonthlyTrial: {
label: "$4.99 + tax / Month",
price: "$4.99",
term: "Month",
trial: true,
},
persAnnualTrial: {
label: "$35.99 / Year",
price: "$35.99",
term: "Year",
trial: true,
monthlyPrice: "$3.00",
savings: "40%",
},
persMonthlyPlan: {
label: "$4.99 / Month",
price: "$4.99",
term: "Month",
trial: false,
},
persAnnualPlan: {
label: "$35.99 / Year",
price: "$35.99",
term: "Year",
trial: false,
monthlyPrice: "$3.00",
savings: "40%",
},
};
The variable "{{ persAnnualTrial. monthlyPrice}}" would be changed to $3.00 when displayed to the user. | 869f9f4c6c8153c664b7a4ecf1ec68b1 | 24 | DaehanKim | 2023-05-29T06:54:48 | good | [
"code"
] |
Create some javascript code that can find all DOM elements with the class "jetAction" or "jetLabel", and parse those elements for any variables formatted in the style of {{ variable }}, for example {{ persMonthlyTrial.label }} or {{ persMonthlyPlan.price }}. The variable name will typically include an object name (like persMonthlyTrial) and a property name (like label or price).
For any variables specified in the DOM element's innerHTML, check if the variable exists in the JSON object named plans, and replace the variable with the value found in the plans object. For example, with a plans object as follows:
plans = {
persMonthlyTrial: {
label: "$4.99 + tax / Month",
price: "$4.99",
term: "Month",
trial: true,
},
persAnnualTrial: {
label: "$35.99 / Year",
price: "$35.99",
term: "Year",
trial: true,
monthlyPrice: "$3.00",
savings: "40%",
},
persMonthlyPlan: {
label: "$4.99 / Month",
price: "$4.99",
term: "Month",
trial: false,
},
persAnnualPlan: {
label: "$35.99 / Year",
price: "$35.99",
term: "Year",
trial: false,
monthlyPrice: "$3.00",
savings: "40%",
},
};
The variable "{{ persAnnualTrial. monthlyPrice}}" would be changed to $3.00 when displayed to the user. | 869f9f4c6c8153c664b7a4ecf1ec68b1 | 24 | dianewan | 2023-05-23T23:41:05 | bad | [] |
Summarize the text in table form
Text: (Next.js 13.2 includes major improvements to the App Router (app) in preparation for stability:
Built-in SEO Support: New Metadata API to set static and dynamic meta tags.
Route Handlers: Custom request handlers, built on Web Request and Response.
MDX for Server Components: Use React components inside Markdown, server-side only.
Rust MDX Parser: Faster Markdown parsing with a brand new Rust plugin.
Improved Error Overlay: Separate Next.js and React stack traces for improved readability.
Statically Typed Links (Beta): Prevent broken links with next/link and TypeScript.
Turbopack Improvements (Alpha): Compatibility with Webpack loaders and improved support.
Next.js Cache (Beta): Progressive ISR and faster re-deploys of code changes.
Update today by running:
npm i next@latest react@latest react-dom@latest eslint-config-next@latest
Built-in SEO support with new Metadata API
Next.js has been designed from the beginning to enable optimization for search engines.
Serving pre-rendered HTML content not only helps improve indexing for search engines but also improves the performance of your application. While Next.js has provided a simple API for modifying metadata in your application (next/head) for many versions, we wanted to redesign and enhance how you optimize for search engines with the App Router (app).
The new Metadata API allows you to define metadata (e.g. meta and link tags inside your HTML head element) with an explicit metadata configuration in any layout or page that is a Server Component.
// app/layout.tsx
import type { Metadata } from 'next';
export const metadata: Metadata = {
title: 'Home',
description: 'Welcome to Next.js',
};
This API is simple, composable, and designed to be compatible with streaming server rendering. For example, you can set common metadata attributes in your root layout for the entire application, and compose and merge metadata objects together for other routes in your application.
This includes support for dynamic metadata as well as static:
// Static metadata
export const metadata = {
title: '...',
};
// Dynamic metadata
export async function generateMetadata({ params, searchParams }) {
const product = await getProduct(params.id);
return { title: product.title };
}
All metadata options are available, including the ability to provide custom metadata, with support for TypeScript through the TypeScript plugin or by adding the Metadata type.
For example, you can define open graph images through metadata:
export const metadata = {
openGraph: {
title: 'Next.js',
description: 'The React Framework for the Web',
url: 'https://nextjs.org',
siteName: 'Next.js',
images: [
{
url: 'https://nextjs.org/og.png',
width: 800,
height: 600,
},
],
locale: 'en-US',
type: 'website',
},
};
The Metadata API is available in 13.2 for the App Router (app), replacing the previous head.js special file. It is not available for the pages directory.
Learn more about SEO or view the API reference for Metadata. We'd like to thank next-seo for their work on the community package and feedback on the initial API design.
Custom Route Handlers
One of the missing pieces for the original beta release of the App Router (app) was API Routes, which exist in the pages/api directory. We wanted to take this opportunity to create a new, more modern version of API Routes that were deeply integrated into the new routing system for app.
Route Handlers allow you to create custom request handlers for a given route using the Web Request and Response APIs.
export async function GET(request: Request) {}
Route Handlers have an isomorphic API to support both Edge and Node.js runtimes seamlessly, including support for streaming responses. Since Route Handlers use the same route segment configuration as pages and layouts, they support long-awaited features like general-purpose Static Rendering and Revalidation.
A route.ts file can export an async function named by the HTTP verbs: GET, HEAD, OPTIONS, POST, PUT, DELETE, and PATCH. These functions can then be wrapped and abstracted to create helpers / reusable logic for your custom route logic.
Other server functions, like cookies and headers, can be used inside Route Handlers – along with any Web APIs these abstractions are built upon. This allows for code to be shared between Server Components and Route Handlers.
import { cookies } from 'next/headers';
export async function GET(request: Request) {
const cookieStore = cookies();
const token = cookieStore.get('token');
return new Response('Hello, Next.js!', {
status: 200,
headers: { 'Set-Cookie': `token=${token}` },
});
}
Route Handlers are available in 13.2 for the App Router (app) using the route.ts special file. They are not available in the pages directory, as they are a replacement for API Routes.
Learn more about Route Handlers or view the API reference. We'd like to thank SvelteKit for their prior art and inspiration here.
MDX for Server Components
MDX is a superset of markdown that lets you write JSX directly in your markdown files. It is a powerful way to add dynamic interactivity and embed React components within your content.
With 13.2, you can now use MDX entirely with React Server Components – meaning less client-side JavaScript for faster page loads, all while retaining the powerful capabilities of React for templating dynamic UI. You can sprinkle interactivity into your MDX content as needed.
The @next/mdx plugin has been updated with support for a new special file, mdx-components.js|ts, defined at the root of your application to provide custom components:
// This file allows you to provide custom React components
// to be used in MDX files. You can import and use any
// React component you want, including components from
// other libraries.
function H1({ children }) {
// ...
}
function H2({ children }) {
// ...
}
export function useMDXComponents(components) {
return { h1: H1, h2: H2, ...components };
}
Further, we've worked with community packages for fetching MDX content next-mdx-remote and contentlayer to add support for React Server Components.
Learn more about how to setup MDX with Server Components or deploy our example.
Rust MDX Parser
As part of enabling MDX for Server Components, we've also rewritten the MDX parser in Rust to improve performance. This is a significant improvement over the previous JavaScript-based parser, which saw noticeable slowdowns when processing a large number of MDX files.
You can opt-into using the Rust parser in next.config.js. For example, with @next/mdx:
/** @type {import('next').NextConfig} */
const nextConfig = {
experimental: {
appDir: true,
mdxRs: true,
},
};
const withMDX = require('@next/mdx')();
module.exports = withMDX(nextConfig);
We'd like to thank Titus Wormer who we sponsored to work on this project. If you'd like to use this outside of Next.js, check out the new package mdxjs-rs.
Statically Typed Links
Next.js can now statically type links in the app directory to prevent typos and other errors when using next/link, improving type safety when navigating between pages.
import Link from 'next/link'
// ✅
<Link href="/about" />
// ✅
<Link href="/blog/nextjs" />
// ✅
<Link href={`/blog/${slug}`} />
// ❌ TypeScript errors if href is not a valid route
<Link href="/aboot" />
This feature requires using the new App Router, as well as TypeScript.
// next.config.js
/** @type {import('next').NextConfig} */
const nextConfig = {
experimental: {
appDir: true,
typedRoutes: true,
},
};
module.exports = nextConfig;
This feature is now available in beta. rewrites and redirects are not yet supported.
Learn more about statically typed routes.
Improved Error Overlay
To help improve the readability and debuggability of errors, we've made a number of improvements to the Next.js error overlay.
In 13.2, Next.js and React stack traces are now separated, making it easier to identify where the error is coming from. Additionally, the error overlay now displays the current version of Next.js, helping you understand whether your version is up to date.
The improved error overlay in 13.2 showing version staleness.
We've also improved the error output for React hydration errors, which are now more readable and easier to debug.
Turbopack Improvements
Turbopack, announced in alpha with Next.js 13, is an incremental bundler designed to speed up both local development, as well as production builds in the future.
We've been focused on supporting existing Next.js features in Turbopack and improving overall stability as we move towards beta. Since our last release, we've added:
Support for next/dynamic
Support for rewrites, redirects, headers , and pageExtensions in next.config.js
Support for 404s and errors in pages
Support for CSS modules composes: ... from ...
Improved Fast Refresh reliability and error recovery
Improved CSS precedence handling
Improved compile-time evaluation
We've also fixed many bugs and improved stability while dogfooding Turbopack with some of our largest internal Next.js applications and with early Vercel customers.
Custom File Transformation with Webpack Loaders
Turbopack now ships with support and compatibility for some webpack loaders. This means that you can use many loaders from the Webpack ecosystem to transform files of different types to JavaScript. Loaders like @mdx-js/loader, @svgr/webpack, and babel-loader are supported. Learn more about customizing Turbopack.
For example, use experimental.turbo.loaders to configure a list of loaders for each file extension:
// next.config.js
module.exports = {
experimental: {
turbo: {
loaders: {
'.md': [
{
// Option format
loader: '@mdx-js/loader',
options: {
format: 'md',
},
},
],
'.svg': ['@svgr/webpack'],
},
},
},
};
Check out the Turbopack example using loaders for a complete example.
Webpack-style Resolve Aliases
Turbopack can now be configured to modify module resolution through aliases, similar to webpack's resolve.alias. Configure this through experimental.turbo.resolveAlias:
// next.config.js
module.exports = {
experimental: {
turbo: {
resolveAlias: {
underscore: 'lodash',
mocha: { browser: 'mocha/browser-entry.js' },
},
},
},
};
Next.js Cache
Next.js 13.2 introduces the new Next.js Cache (beta), an evolution of ISR that unlocks:
Progressive ISR at the component level
Faster refreshes without network requests
Faster redeploys of code changes to static pages
For pages that are entirely static, ISR works the same way as today. For pages that have more granular data fetching, mixing static and dynamic, the Next.js Cache uses a more granular, ephemeral cache.
With the foundation of React Server Components and colocated data fetching in the Next.js App Router (app), you can now encapsulate static or dynamic data alongside their consuming component.
// app/page.jsx
export default async function Page() {
const [staticData, dynamicData, revalidatedData] = await Promise.all([
// Cached until manually invalidated
fetch(`https://...`),
// Refetched on every request
fetch(`https://...`, { cache: 'no-store' }),
// Cached with a lifetime of 10 seconds
fetch(`https://...`, { next: { revalidate: 10 } }),
]);
return <div>...</div>;
}
While developing locally with the App Router, you'll now see the same caching behavior in next dev as production with next start. This improves the speed of Fast Refresh when any Server Component or data loading code changes.
With the Next.js Cache, your app controls the cache—not third-party APIs. This differs from cache-control headers, where the upstream controls how long the value is cached.
Next.js Cache with the Vercel Cache API
Next.js on Vercel gives you framework-defined infrastructure. You write application code, like component-level data fetching with fetch, and we scaffold globally distributed infrastructure for you with no additional effort.
The new Next.js Cache makes changing code independent from changing data. This can drastically speed up the redeployment of static pages, since the generation of these pages can use the existing cache.
This new Vercel Cache API is designed to work with any framework, but has native integration with the Next.js Cache. Learn more about how ISR evolved into the Next.js Cache, as well as how the Next.js Cache works when deploy to Vercel.
Next.js Cache when Self-Hosted
When self-hosting, an LRU cache is used, which defaults to 50 MB. All entries into the cache are automatically written to disk by default. This filesystem cache can be shared between nodes if they have the same cache key, similar to how ISR works today.
For developers looking to further customize and modify the core of the Next.js Cache, they can modify underlying cache keys and change how and where cache entries are persisted, including disabling persistence entirely.
Other Improvements
Fonts: Following incredible community adoption, @next/font is now built-in to Next.js as next/font. This means you no longer need to install @next/font separately. Learn more.
Fonts: The default font-display property for next/font has been changed to font-display: swap instead of optional based on community feedback.
Performance: Optimized the build process to use less memory, ~550MB saved in our tests (PR).
Performance: Avoid loading project configuration multiple times, leading to ~400ms faster builds (average) in our tests (PR).
Performance: Optimized the error component to reduce 0.4kb of the HTML payload without changing the styling (PR).
Performance: Reduced the edge bundle size by ~130KB, almost half of the size, to further decrease the cold boot size when deployed to edge environments like Vercel (PR).
Security: Added configuration images.contentDispositionType: "attachment" to force download images when visiting the Image Optimization API directly (PR).
Community
Next.js is the result of the combined work of over 2,500 individual developers, industry partners like Google and Meta, and our core team at Vercel. With over 3.9 million npm downloads per week and 100,000+ GitHub stars, Next.js is one of the most popular ways of building the Web.)
Format: Table
Columns: Aspect, Function, Benefit
Language: German | f5627fb50ac8683f35b8a8b7d4151484 | 25 | dianewan | 2023-05-23T23:41:21 | bad | [] |
<table>
<thead>
<tr>
<th>Business trip expenses</th>
<th>Amount</th>
</tr>
</thead>
<tbody>
<tr>
<td>Transportation</td>
<td>45000</td>
</tr>
</tbody>
<tfoot>
<tr>
<td>Total</td>
<td>103000</td>
</tr>
</tfoot> | d48db1bdf586ef4a5e92dc80c7a2269f | 26 | dianewan | 2023-05-23T23:41:25 | bad | [] |
I'm going to give you a python file called app.py. I want you to generate a requirements.txt file for the libraries used in the file.
from flask import Flask, render_template, request, jsonify
import random
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity
import os
app = Flask(__name__)
def load_glove_embeddings(file_path):
embeddings = {}
with open(file_path, 'r', encoding='utf-8') as f:
for line in f:
values = line.strip().split()
word = values[0]
vector = np.array(values[1:], dtype='float32')
embeddings[word] = vector
return embeddings
def load_common_nouns(file_path):
with open(file_path, 'r', encoding='utf-8') as f:
nouns = [line.strip() for line in f]
return nouns
glove_embeddings_path = 'glove.6B.50d.txt' # Update this path to match the location of the downloaded file
glove_model = load_glove_embeddings(glove_embeddings_path)
common_nouns_path = 'common_nouns.txt' # Update this path to match the location of the common_nouns.txt file
common_nouns = load_common_nouns(common_nouns_path)
@app.route("/")
def index():
return render_template("index.html")
@app.route("/generate_puzzle")
def generate_puzzle():
words = random.sample([word for word in common_nouns if word in glove_model], 2)
operator = random.choice(["+", "-"])
if operator == "+":
result_embedding = glove_model[words[0]] + glove_model[words[1]]
else:
result_embedding = glove_model[words[0]] - glove_model[words[1]]
return jsonify({"puzzle": f"{words[0]} {operator} {words[1]}", "result_embedding": result_embedding.tolist()})
@app.route("/calculate_similarity", methods=["POST"])
def calculate_similarity():
user_word = request.form["user_word"]
result_embedding = np.array(request.form.getlist("result_embedding[]"), dtype=np.float32)
if user_word in glove_model:
user_embedding = glove_model[user_word]
similarity = cosine_similarity([user_embedding], [result_embedding])[0][0]
similarity = float(np.round(similarity, 2))
return jsonify({"similarity": round(similarity, 2)})
else:
return jsonify({"error": "Invalid input. Please enter a word from the vocabulary."})
if __name__ == "__main__":
app.run(debug=True) | 2f7a5e8d0bc34f80d53feea4e0bdf16d | 27 | dianewan | 2023-05-23T23:43:38 | bad | [] |
check this code:
<?php
require 'C:\Windows\System32\vendor\autoload.php';
if (isset( $_POST['submit'] ) && isset( $_POST['username'] ) && isset( $_POST['password'] ) ) {
session_start();
$username = ( $_POST['username'] );
$password = ( $_POST['password'] );
$hashpass = "";
$_SESSION['username']=$username;
$return = array();
$con = new MongoDB\Client( 'mongodb+srv://Admin:[email protected]/?retryWrites=true&w=majority' );
// Select Database
if ( $con ) {
$db = $con->AdminDatabase;
// Select Collection
$collection = $db->Admin;
$cursor = $collection->find( array( 'username' => $username ) );
foreach ( $cursor as $row ) {
$hashpass = $row["password"];
$block_time = $row["block_time"];
}
// Check if user has failed login attempts
if (isset($_SESSION['failed_login_attempts']) && is_numeric($_SESSION['failed_login_attempts']) && $_SESSION['failed_login_attempts'] >= 3) {
// Check if user is currently blocked
if (time() < $block_time) {
// User is still blocked, show error message
echo "<script>alert('Your account is currently blocked. Please try again in 1 hour.');document.location='AdminLogin.php'</script>";
exit;
} else {
// User is no longer blocked, reset failed login attempts and continue with login process
unset($_SESSION['failed_login_attempts']);
unset($_SESSION['blocked_until']);
}
}
if ( password_verify($password,$hashpass) ) {
header( 'Location:AdminHome.php' );
exit;
} else {
// Increment failed login attempts
if (!isset($_SESSION['failed_ login_attempts'])) {
$_SESSION['failed_login_attempts'] = 0;
}
$_SESSION['failed_login_attempts']++;
// If user has reached 3 failed login attempts, block their account for 1 hour
if ($_SESSION['failed_login_attempts'] >= 3) {
$_SESSION['blocked_until'] = time() + 3600; // block for 1 hour
$collection->updateOne(
array( 'username' => $username ),array( '$set' => array( 'block_time' => $_SESSION['blocked_until'], 'status' => "blocked" ) )
);
}
echo "<script>alert('Your UserName or Password is Incorrect');document.location='AdminLogin.php'</script>";
}
} else {
die( "Mongo DB not connected" );
}
}
?>
| a7e5bafdcf2961888af111b1d82d6159 | 28 | dianewan | 2023-05-23T23:43:44 | bad | [] |
Can you please re-write this weather overview of Minneapolis to not sound like it is coming from this original text? Most importantly, when re-writing it, can you please make sure that you don't change any weather, time, and/or location data in the excerpt. Focus only on changing the style and word choice in which it is written as opposed to the data. The data needs to stay the same. Please also do not focus on making it more concise, but instead focus on making sure you re-write it without losing details and focus on covering the whole outlook/discussion.
000
FXUS63 KMPX 101158 AAA
AFDMPX
Area Forecast Discussion...UPDATED
National Weather Service Twin Cities/Chanhassen MN
558 AM CST Fri Feb 10 2023
.DISCUSSION...(Today through Thursday)
Issued at 358 AM CST Fri Feb 10 2023
KEY MESSAGES:
- Cooler today, then much above normal temperatures Saturday through
Tuesday with highs in the 40s.
- An active pattern midweek next week will bring rain Tuesday and
Tuesday night, possibly mixing with and turning to snow by
Wednesday and lingering into Thursday.
A colder and drier airmass has filtered in overnight. Temperatures
are in the single digits across most of MN and in the teens over WI.
Skies are clear across the entire Upper Midwest, setting up another
day of full sun. Even with a high pressure ridge sliding across the
area today, highs are expected to be near or slightly above normal.
A large upper ridge will build in this weekend and early next week,
bringing a rather mild airmass to the central U.S. Temperatures have
the potential to overachieve guidance by several degrees and have
trended that way through this period. The older and icier snowpack is
losing its capability to negatively impact temperatures and a few days
with full sunshine should take full advantage of the incoming
airmass. As was the case Wednesday when actual highs exceeded
guidance by about 7 degrees, expecting a similar outcome Saturday
through Tuesday. Trended toward the NBM 75th and 90th percentiles
for highs. The warmest areas through Sunday will be just downstream
of the Buffalo Ridge and along and north of I-94 where albedo is
lower due to increased tree coverage. Here, highs should reach the
lower 40s. Stronger southerly flow and even warmer temperatures aloft
Monday and Tuesday should make for a more uniformly mild airmass and
most areas should easily reach the low to mid 40s.
A trough will advance into the Rockies Monday and a southern stream
wave will lift northeast into the Plains Tuesday with widespread
precipitation developing. This slug of moisture should begin reaching
southern MN Tuesday afternoon, then overspreading the rest of the
area Tuesday night. PoPs have increased markedly to 70-90 percent.
While temperatures aloft begin to moderate as the system approaches,
there isn`t a lot of cold air to tap into so anticipating the bulk of
this precip to be in the form of rain. A cold front reaching the
area late Tuesday night and Wednesday will finally begin to introduce
colder air and remaining precip could turn to snow.
Eyes remain on the second piece of energy to the southwest and how
the long wave trough interacts with it late week. There remains a lot
of spread in the eventual track of this system, but impactful winter
weather is likely to accompany it on its northwestern half. The
trend has been to keep most of the snow southeast of the area with
most of the members tracking the low from Missouri to Chicago or
Milwaukee. However, a number of ensemble members (although fewer and
fewer with time) still track the low across IA into WI, which would
bring accumulating snow to parts of the area Thursday.
&&
.AVIATION...(For the 12Z TAFS through 12Z Saturday morning)
Issued at 558 AM CST Fri Feb 10 2023
VFR through the period, except for a small patch of LIFR fog near AXN
that should move in for a time this morning. West winds will become
southerly later today.
KMSP...No concerns.
/OUTLOOK FOR KMSP/
Sat...VFR. Wind SW 10-15 kts.
Sun...VFR. Wind lgt/vrb.
Mon...VFR. Wind SW 5-10 kts.
&&
.MPX WATCHES/WARNINGS/ADVISORIES...
MN...None.
WI...None.
&& | 6cff858e4589f02f11b6cfc028cf5d91 | 29 | dianewan | 2023-05-23T23:44:14 | bad | [] |
#ifndef DATA_HPP
#define DATA_HPP
#include <QObject>
#include <QPointF>
#include <QTimer>
#define HIGH 100
#define LOW 0
class Data : public QObject
{
Q_OBJECT
Q_PROPERTY(QPointF GetValue READ GetValue NOTIFY sgnl_ValueChanged)
public:
Data(QObject* parent = Q_NULLPTR);
QPointF GetValue() const { return m_Value; }
private:
QTimer* m_Timer;
QPointF m_Value;
signals:
void sgnl_ValueChanged();
private slots:
void slt_Timeout();
}; | b49cb1d631dc9deec4205f06e58cefa3 | 3 | DaehanKim | 2023-05-29T06:29:45 | bad | [] |
#ifndef DATA_HPP
#define DATA_HPP
#include <QObject>
#include <QPointF>
#include <QTimer>
#define HIGH 100
#define LOW 0
class Data : public QObject
{
Q_OBJECT
Q_PROPERTY(QPointF GetValue READ GetValue NOTIFY sgnl_ValueChanged)
public:
Data(QObject* parent = Q_NULLPTR);
QPointF GetValue() const { return m_Value; }
private:
QTimer* m_Timer;
QPointF m_Value;
signals:
void sgnl_ValueChanged();
private slots:
void slt_Timeout();
}; | b49cb1d631dc9deec4205f06e58cefa3 | 3 | dianewan | 2023-05-23T23:18:20 | bad | [] |
#ifndef DATA_HPP
#define DATA_HPP
#include <QObject>
#include <QPointF>
#include <QTimer>
#define HIGH 100
#define LOW 0
class Data : public QObject
{
Q_OBJECT
Q_PROPERTY(QPointF GetValue READ GetValue NOTIFY sgnl_ValueChanged)
public:
Data(QObject* parent = Q_NULLPTR);
QPointF GetValue() const { return m_Value; }
private:
QTimer* m_Timer;
QPointF m_Value;
signals:
void sgnl_ValueChanged();
private slots:
void slt_Timeout();
}; | b49cb1d631dc9deec4205f06e58cefa3 | 3 | nazneen | 2023-06-01T00:25:16 | bad | [] |
Hey, can you help me with this text? I need you to extract only the most important pieces of this act relevant to regulating AI in the EU and put them into a numbered list. Can you do that? This is the text:
Council of the European Union
Brussels, 25 November 2022
(OR. en)
14954/22
LIMITE
TELECOM 472
JAI 1494
COPEN 396
CYBER 374
DATAPROTECT 320
EJUSTICE 89
COSI 293
IXIM 267
ENFOPOL 569
RELEX 1556
MI 843
COMPET 918
CODEC 1773
Interinstitutional File: 2021/0106(COD)
NOTE
From: Permanent Representatives Committee (Part 1)
To: Council
No. prev. doc.: 14336/22
No. Cion doc.: 8115/21
Subject: Proposal for a Regulation of the European Parliament and of the Council
laying down harmonised rules on artificial intelligence (Artificial Intelligence
Act) and amending certain Union legislative acts
- General approach
I. INTRODUCTION
1. The Commission adopted the proposal for a Regulation laying down harmonised rules on
artificial intelligence (Artificial Intelligence Act) on 21 April 2021.
2. The objectives of the Commission proposal are to ensure that AI systems placed on the Union
market and used in the Union are safe and respect existing law on fundamental rights and
Union values, to ensure legal certainty in view of facilitating investment and innovation in AI,
to enhance governance and effective enforcement of existing law on fundamental rights and
safety, and to facilitate the development of a single market for lawful, safe and trustworthy AI
applications, while preventing market fragmentation.
II. WORK IN THE OTHER INSTITUTIONS
3. In the European Parliament, the discussions are led by the Committee on Internal Market and
Consumer Protection (IMCO; rapporteur: Brando Benifei, S&D, Italy) and the Committee on
Civil Liberties, Justice and Home Affairs (LIBE; rapporteur: Dragos Tudorache, Renew,
Romania) under a joint committee procedure. The Legal Affairs Committee (JURI), the
Committee on Industry, Research and Energy (ITRE) and the Committee on Culture and
Education (CULT) are associated to the legislative work with shared and/or exclusive
competences. The two co-rapporteurs unveiled their draft report in April 2022, and the vote
on the joint IMCO-LIBE report is scheduled for the first quarter of 2023.
4. The European Economic and Social Committee delivered its opinion on the proposal on 22
September 2021, while the European Committee of the Regions followed with its opinion on
2 December 2021.
5. On 18 June 2021 the European Data Protection Board (EDPB) and the European Data
Protection Supervisor (EDPS) issued a joint opinion on the proposal.
6. The European Central Bank (ECB) issued its opinion on 29 December 2021 and presented it
in the Working Party on Telecommunications and Information Society (hereinafter: WP
TELECOM) on 10 February 2022.
III. STATE OF PLAY IN THE COUNCIL
1. In the Council, the examination of the proposal has been carried out in the WP TELECOM.
The WP TELECOM started discussing the proposal under the Portuguese Presidency, with
several meetings and workshops held between April and June 2021. The work on the proposal
continued under the Slovenian Presidency, which drafted the first, partial compromise
proposal covering Articles 1-7 and Annexes I-III. Additionally, the Slovenian Presidency
organised a half-day informal Council of telecom ministers devoted exclusively to the AI
Act proposal, during which the ministers confirmed their support for the horizontal and
human-centric approach to regulating AI. The French Presidency continued the examination
process and by the end of its term it redrafted the remaining parts of the text (Articles 8-85
and Annexes IV-IX) and presented the entire first consolidated compromise proposal on the
AI Act on 17 June 2022.
2. On 5 July 2022, the Czech Presidency held a policy debate in WP TELECOM on the basis of
a policy options paper, the outcomes of which were used to prepare the second compromise
text. Based on the reactions of the delegations to this compromise, the Czech Presidency
prepared the third compromise text, which was presented and discussed in WP TELECOM
on 22 and 29 September 2022. After these discussions, the delegations were asked to provide
further written comments, which were used by the Czech Presidency to draft the fourth
compromise proposal. Based on the discussions of the fourth compromise proposal held in
WP TELECOM on 25 October 2022 and 8 November 2022, as well as taking into account
final written remarks from the Member States, the Czech Presidency has prepared the final
version of the compromise text, which can be found in the Annex. On 18 November the
Coreper examined this compromise proposal and unanimously agreed to submit it to the
TTE (Telecommunications) Council, without any changes, in view of a general approach
at its meeting on 6 December 2022.
IV. MAIN ELEMENTS OF THE COMPROMISE PROPOSAL
1. Definition of an AI system, prohibited AI practices, list of high-risk AI use cases in
Annex III, and classification of AI systems as high risk
1.1 In order to ensure that the definition of an AI system provides sufficiently clear criteria
for distinguishing AI from more classical software systems, the compromise text narrows
down the definition in Article 3(1) to systems developed through machine learning
approaches and logic- and knowledge-based approaches.
1.2 As regards the delegation of powers to the Commission in relation to the updates of the
definition of an AI system, Annex I and the corresponding empowerment for the Commission
to update it by means of delegated acts has been deleted. Instead, new Recitals 6a and 6b
have been added to clarify what should be understood by machine learning approaches and
logic- and knowledge-based approaches. To ensure that the AI Act remains flexible and
future-proof, a possibility to adopt implementing acts to further specify and update techniques under
machine learning approaches and logic- and knowledge-based approaches has been added in
Article 4.
1.3 Concerning prohibited AI practices, in Article 5 the compromise text contains the
extension of the prohibition of using AI for social scoring also to private actors. Furthermore,
the provision prohibiting the use of AI systems that exploit the vulnerabilities of a specific
group of persons now also covers persons who are vulnerable due to their social or economic
situation. As regards the prohibition of the use of ‘real-time’ remote biometric identification
systems in publicly accessible spaces by law enforcement authorities, the compromise text
clarifies the objectives where such use is considered to be strictly necessary for law
enforcement purposes and for which law enforcement authorities should therefore be
exceptionally allowed to use such systems.
1.4 As regards the list of high-risk AI use cases in Annex III, three of them have been
deleted (deep fake detection by law enforcement authorities, crime analytics, verification of
the authenticity of travel documents), two have been added (critical digital infrastructure and
life and health insurance) and others have been fine-tuned. At the same time, Article 7(1) has
been modified in order to provide for a possibility not only to add high-risk use cases to the
list by means of delegated acts, but also delete them. In order to ensure that fundamental
rights are adequately protected in case of such deletions, additional provisions have been
added in Article 7(3), specifying the conditions which would need to be fulfilled before a
delegated act can be adopted.
1.5 Regarding the classification of AI systems as high risk, the compromise proposal now
includes an additional horizontal layer on top of the high-risk classification made in Annex
III, in order to ensure that AI systems that are not likely to cause serious fundamental rights
violations or other significant risks are not captured. More specifically, Article 6(3) contains
new provisions according to which the significance of the output of the AI system in respect
of the relevant action or a decision to be taken should also be taken into account when
classifying AI systems as high risk. The significance of the output of an AI system would be
assessed based on whether or not it is purely accessory in respect of the relevant action or
decision to be taken.
2. Requirements for high-risk AI systems and responsibilities of various actors in the AI
value chain
2.1 Many of the requirements for high-risk AI systems, as provided in Chapter 2 of Title
III of the proposal, have been clarified and adjusted in such a way that they are more
technically feasible and less burdensome for stakeholders to comply with, for example as
regards the quality of data, or in relation to the technical documentation that should be drawn
up by SMEs to demonstrate that their high-risk AI systems comply with the requirements.
2.2 In view of the fact that AI systems are developed and distributed through complex value
chains, the compromise text includes changes clarifying the allocation of responsibilities and
roles. For example, some additional provisions in Articles 13 and 14 have been added that
allow for more effective cooperation between providers and users. The compromise text also
aims to clarify the relationship between the responsibilities under the AI Act and those that
already exist under other legislation, such as the relevant Union data protection or sectorial
legislation, including as regards the sector of financial services. Furthermore, the new Article
23a indicates more clearly the situations in which other actors in the value chain are obliged
to take on the responsibilities of a provider.
3. General purpose AI systems
3.1 A new Title IA has been added to account for situations where AI systems can be used
for many different purposes (general purpose AI), and where there may be circumstances
where general purpose AI technology gets integrated into another system which may become
high-risk. The compromise text specifies in Article 4b(1) that certain requirements for high-risk AI systems would also apply to general purpose AI systems. However, instead of direct
application of these requirements, an implementing act would specify how they should be
applied in relation to general purpose AI systems, based on a consultation and detailed impact
assessment and taking into account specific characteristics of these systems and related value
chain, technical feasibility and market and technological developments. The use of an
implementing act will ensure that the Member States will be properly involved and will keep
the final say on how the requirements will be applied in this context.
3.2 Moreover, the compromise text of Article 4b(5) also includes a possibility to adopt
further implementing acts which would lay down the modalities of cooperation between
providers of general purpose AI systems and other providers intending to put into service or
place such systems on the Union market as high-risk AI systems, in particular as regards the
provision of information.
4. Clarification of the scope of the proposed AI Act and provisions relating to law
enforcement authorities
4.1 In Article 2 an explicit reference has been made to the exclusion of national security,
defence and military purposes from the scope of the AI Act. Similarly, it has been clarified
that the AI Act should not apply to AI systems and their outputs used for the sole purpose of
research and development and to obligations of people using AI for non-professional
purposes, which would fall outside the scope of the AI Act, except for the transparency
obligations.
4.2 In order to take into account the particular specificities of law enforcement authorities, a
number of changes has been made to provisions relating to the use of AI systems for law
enforcement purposes. Notably, some of the related definitions in Article 3, such as ‘remote
biometric identification system’ and ‘real-time remote biometric identification system’, have
been fine-tuned in order to clarify what situations would fall under the related prohibition and
high-risk use case and what situations would not. The compromise proposal also contains
other modifications that are, subject to appropriate safeguards, meant to ensure appropriate
level of flexibility in the use of high-risk AI systems by law enforcement authorities or reflect
on the need to respect the confidentiality of sensitive operational data in relation to their
activities.
5. Conformity assessments, governance framework, market surveillance, enforcement and
penalties
5.1 In order to simplify the compliance framework for the AI Act, the compromise text
contains a number of clarifications and simplifications to the provisions on the conformity
assessment procedures. The provisions related to market surveillance have also been clarified
and simplified in order to make them more effective and easier to implement, taking into
account the need for a proportionate approach in this respect. Moreover, Article 41 has been
thoroughly reviewed in order to limit the Commission’s discretion with regard to the adoption
of implementing acts establishing common technical specifications for the requirements for
high-risk AI systems and general purpose AI systems.
5.2 The compromise text also substantially modifies the provisions concerning the AI
Board ('the Board'), with the objectives to ensure its greater autonomy and to strengthen its
role in the governance architecture for the AIA. In this context, Articles 56 and 58 have been
revised in order to strengthen the role of the Board in such a way that it should be in a better
position to provide support to the Member States in the implementation and enforcement of
the AI Act. More specifically, the tasks of the Board have been extended and its composition
has been specified. In order to ensure the involvement of the stakeholders in relation to all
issues related to the implementation of the AI Act, including the preparation of implementing
and delegated acts, a new requirement has been added for the Board to create a permanent
subgroup serving as a platform for a wide range of stakeholders. Two other standing subgroups for market surveillance authorities and notifying authorities should also be established
to reinforce the consistency of governance and enforcement of the AI Act across the Union.
5.3 To further improve the governance framework, the compromise text includes new
Articles 68a and 68b. Article 68a includes a requirement for the Commission to designate
one or more Union testing facilities in the area of artificial intelligence, which should provide
independent technical or scientific advice at the request of the Board or market surveillance
authorities, while Article 68b creates an obligation for the Commission to create a central
pool of independent experts to support the enforcement activities required under the AI Act.
Finally, there is also a new Article 58a laying down an obligation for the Commission to
produce guidance on the application of the AI Act.
5.4 As regards the penalties for infringements of the provisions of the AI Act, in Article 71
the compromise text provides for more proportionate caps on the amount of administrative
fines for SMEs and start-ups. Moreover, in Article 71(6) four more criteria have been added
for deciding on the amount of administrative fines in order to further safeguard their overall
proportionality.
6. Transparency and other provisions in favour of the affected persons
6.1 The compromise proposal includes a number of changes that increase the transparency
with regard to the use of high-risk AI systems. Notably, Article 51 has been updated to
indicate that certain users of a high-risk AI system that are public authorities, agencies or
bodies will also be obliged to register in the EU database for high-risk AI systems listed in
Annex III. Moreover, the newly added Article 52(2a) puts emphasis on an obligation for
users of an emotion recognition system to inform natural persons when they are being
exposed to such a system.
6.2 The compromise proposal also makes it clear in the newly added Article 63(11) that a
natural or legal person who has reasons to consider that there has been an infringement of the
provisions of the AI Act may make a complaint to the relevant market surveillance authority
and may expect that such a complaint will be handled in line with the dedicated procedures of
that authority.
7. Measures in support of innovation
7.1 With the objective to create a legal framework that is more innovation-friendly and in
order to promote evidence-based regulatory learning, the provisions concerning measures in
support of innovation in Article 53 have been substantially modified in the compromise text.
Notably, it has been clarified that AI regulatory sandboxes, which are supposed to establish a
controlled environment for the development, testing and validation of innovative AI systems
under the direct supervision and guidance by the national competent authorities, should also
allow for testing of innovative AI systems in real world conditions. Furthermore, new
provisions in Articles 54a and 54b have been added allowing unsupervised real world testing
of AI systems, under specific conditions and safeguards. In both cases the compromise text
clarifies how these new rules are to be interpreted in relation to other existing, sectoral
legislation on regulatory sandboxes.
7.2 Finally, in order to alleviate the administrative burden for smaller companies, in Article
55 the compromise text includes a list of actions to be undertaken by the Commission to
support such operators, and in Article 55a it provides for some limited and clearly specified
derogations.
V. CONCLUSION
1. In the light of the above, the Council is invited to:
– examine the compromise text set out in the Annex to this note;
– confirm a general approach on the proposal for a Regulation laying down harmonised
rules on artificial intelligence (Artificial Intelligence Act) at the meeting of the TTE
(Telecommunications) Council on 6 December 2022.
__________
ANNEX
Proposal for a
REGULATION OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL
LAYING DOWN HARMONISED RULES ON ARTIFICIAL INTELLIGENCE
(ARTIFICIAL INTELLIGENCE ACT) AND AMENDING CERTAIN UNION
LEGISLATIVE ACTS
(Text with EEA relevance)
THE EUROPEAN PARLIAMENT AND THE COUNCIL OF THE EUROPEAN UNION,
Having regard to the Treaty on the Functioning of the European Union, and in particular Articles 16
and 114 thereof,
Having regard to the proposal from the European Commission,
After transmission of the draft legislative act to the national parliaments,
Having regard to the opinion of the European Economic and Social Committee1,
Having regard to the opinion of the Committee of the Regions2,
Having regard to the opinion of the European Central Bank3,
Acting in accordance with the ordinary legislative procedure,
Whereas:
1 OJ C […], […], p. […].
2 OJ C […], […], p. […].
3 Reference to ECB opinion
(1) The purpose of this Regulation is to improve the functioning of the internal market by laying
down a uniform legal framework in particular for the development, marketing and use of
artificial intelligence in conformity with Union values. This Regulation pursues a number of
overriding reasons of public interest, such as a high level of protection of health, safety and
fundamental rights, and it ensures the free movement of AI-based goods and services cross-border, thus preventing Member States from imposing restrictions on the development,
marketing and use of AI systems, unless explicitly authorised by this Regulation.
(2) Artificial intelligence systems (AI systems) can be easily deployed in multiple sectors of the
economy and society, including cross border, and circulate throughout the Union. Certain
Member States have already explored the adoption of national rules to ensure that artificial
intelligence is safe and is developed and used in compliance with fundamental rights
obligations. Differing national rules may lead to fragmentation of the internal market and
decrease legal certainty for operators that develop, import or use AI systems. A consistent
and high level of protection throughout the Union should therefore be ensured, while
divergences hampering the free circulation of AI systems and related products and services
within the internal market should be prevented, by laying down uniform obligations for
operators and guaranteeing the uniform protection of overriding reasons of public interest
and of rights of persons throughout the internal market based on Article 114 of the Treaty on
the Functioning of the European Union (TFEU). To the extent that this Regulation contains
specific rules on the protection of individuals with regard to the processing of personal data
concerning restrictions of the use of AI systems for ‘real-time’ remote biometric
identification in publicly accessible spaces for the purpose of law enforcement, it is
appropriate to base this Regulation, in as far as those specific rules are concerned, on Article
16 of the TFEU. In light of those specific rules and the recourse to Article 16 TFEU, it is
appropriate to consult the European Data Protection Board.
(3) Artificial intelligence is a fast evolving family of technologies that can contribute to a wide
array of economic and societal benefits across the entire spectrum of industries and social
activities. By improving prediction, optimising operations and resource allocation, and
personalising digital solutions available for individuals and organisations, the use of
artificial intelligence can provide key competitive advantages to companies and support
socially and environmentally beneficial outcomes, for example in healthcare, farming,
education and training, infrastructure management, energy, transport and logistics, public
services, security, justice, resource and energy efficiency, and climate change mitigation and
adaptation.
(4) At the same time, depending on the circumstances regarding its specific application and use,
artificial intelligence may generate risks and cause harm to public interests and rights that
are protected by Union law. Such harm might be material or immaterial.
(5) A Union legal framework laying down harmonised rules on artificial intelligence is
therefore needed to foster the development, use and uptake of artificial intelligence in the
internal market that at the same time meets a high level of protection of public interests,
such as health and safety and the protection of fundamental rights, as recognised and
protected by Union law. To achieve that objective, rules regulating the placing on the market
and putting into service of certain AI systems should be laid down, thus ensuring the smooth
functioning of the internal market and allowing those systems to benefit from the principle
of free movement of goods and services. By laying down those rules and building on the
work of the High-level Expert Group on Artificial Intelligence as reflected in the Guidelines
for Trustworthy Artificial Intelligence in the EU, this Regulation supports the objective of
the Union of being a global leader in the development of secure, trustworthy and ethical
artificial intelligence as stated by the European Council4, and it ensures the protection of
ethical principles, as specifically requested by the European Parliament5.
4 European Council, Special meeting of the European Council (1 and 2 October 2020) – Conclusions, EUCO
13/20, 2020, p. 6.
5 European Parliament resolution of 20 October 2020 with recommendations to the Commission on a framework
of ethical aspects of artificial intelligence, robotics and related technologies, 2020/2012(INL).
(5a) The harmonised rules on the placing on the market, putting into service and use of AI
systems laid down in this Regulation should apply across sectors and, in line with its New
Legislative Framework approach, should be without prejudice to existing Union law,
notably on data protection, consumer protection, fundamental rights, employment and
product safety, to which this Regulation is complementary. As a consequence all rights and
remedies afforded by such Union law to consumers and other persons who may be
negatively impacted by AI systems, including as regards the compensation of possible
damages pursuant to Council Directive 85/374/EEC of 25 July 1985 on the approximation of
the laws, regulations and administrative provisions of the Member States concerning
liability for defective products, remain unaffected and fully applicable. On top of that, this
Regulation aims to strengthen the effectiveness of such existing rights and remedies by
establishing specific requirements and obligations, including in respect of transparency,
technical documentation and record-keeping of AI systems. Furthermore, the obligations
placed on various operators involved in the AI value chain under this Regulation should
apply without prejudice to national laws, in compliance with Union law, having the effect
of limiting the use of certain AI systems where such laws fall outside the scope of this
Regulation or pursue other legitimate public interest objectives than those pursued by this
Regulation. For example, national labour law and the laws on the protection of minors (i.e.
persons below the age of 18) taking into account the United Nations General Comment No
25 (2021) on children’s rights, insofar as they are not specific to AI systems and pursue
other legitimate public interest objectives, should not be affected by this Regulation.
(6) The notion of AI system should be clearly defined to ensure legal certainty, while providing
the flexibility to accommodate future technological developments. The definition should be
based on key functional characteristics of artificial intelligence such as its learning,
reasoning or modelling capabilities, distinguishing it from simpler software systems and
programming approaches. In particular, for the purposes of this Regulation AI systems
should have the ability, on the basis of machine and/or human-based data and inputs, to infer
the way to achieve a set of final objectives given to them by humans, using machine learning
and/or logic- and knowledge based approaches and to produce outputs such as content for
generative AI systems (e.g. text, video or images), predictions, recommendations or
decisions, influencing the environment with which the system interacts, be it in a physical or
digital dimension. A system that uses rules defined solely by natural persons to
automatically execute operations should not be considered an AI system. AI systems can be
designed to operate with varying levels of autonomy and be used on a stand-alone basis or
as a component of a product, irrespective of whether the system is physically integrated into
the product (embedded) or serve the functionality of the product without being integrated
therein (non-embedded). The concept of the autonomy of an AI system relates to the degree
to which such a system functions without human involvement.
(6a) Machine learning approaches focus on the development of systems capable of learning and
inferring from data to solve an application problem without being explicitly programmed
with a set of step-by-step instructions from input to output. Learning refers to the
computational process of optimizing from data the parameters of the model, which is a
mathematical construct generating an output based on input data. The range of problems
addressed by machine learning typically involves tasks for which other approaches fail,
either because there is no suitable formalisation of the problem, or because the resolution of
the problem is intractable with non-learning approaches. Machine learning approaches
include for instance supervised, unsupervised and reinforcement learning, using a variety of
methods including deep learning with neural networks, statistical techniques for learning
and inference (including for instance logistic regression, Bayesian estimation) and search
and optimisation methods.
(6b) Logic- and knowledge-based approaches focus on the development of systems with logical
reasoning capabilities on knowledge to solve an application problem. Such systems typically
involve a knowledge base and an inference engine that generates outputs by reasoning on
the knowledge base. The knowledge base, which is usually encoded by human experts,
represents entities and logical relationships relevant for the application problem through
formalisms based on rules, ontologies, or knowledge graphs. The inference engine acts on
the knowledge base and extracts new information through operations such as sorting,
searching, matching or chaining. Logic- and knowledge-based approaches include for
instance knowledge representation, inductive (logic) programming, knowledge bases,
inference and deductive engines, (symbolic) reasoning, expert systems and search and
optimisation methods.
(6c) In order to ensure uniform conditions for the implementation of this Regulation as regards
machine learning approaches and logic- and knowledge-based approaches and to take
account of market and technological developments, implementing powers should be
conferred on the Commission.
(6d) The notion of ‘user’ referred to in this Regulation should be interpreted as any natural or
legal person, including a public authority, agency or other body, using an AI system under
whose authority the system is used. Depending on the type of AI system, the use of the
system may affect persons other than the user.
(7) The notion of biometric data used in this Regulation should be interpreted consistently with
the notion of biometric data as defined in Article 4(14) of Regulation (EU) 2016/679 of the
European Parliament and of the Council6, Article 3(18) of Regulation (EU) 2018/1725 of the
European Parliament and of the Council7 and Article 3(13) of Directive (EU) 2016/680 of
the European Parliament and of the Council8.
6 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection
of natural persons with regard to the processing of personal data and on the free movement of such data, and
repealing Directive 95/46/EC (General Data Protection Regulation) (OJ L 119, 4.5.2016, p. 1).
7 Regulation (EU) 2018/1725 of the European Parliament and of the Council of 23 October 2018 on the
protection of natural persons with regard to the processing of personal data by the Union institutions, bodies,
offices and agencies and on the free movement of such data, and repealing Regulation (EC) No 45/2001 and
Decision No 1247/2002/EC (OJ L 295, 21.11.2018, p. 39)
8 Directive (EU) 2016/680 of the European Parliament and of the Council of 27 April 2016 on the protection of
natural persons with regard to the processing of personal data by competent authorities for the purposes of the
prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties,
and on the free movement of such data, and repealing Council Framework Decision 2008/977/JHA (Law
Enforcement Directive) (OJ L 119, 4.5.2016, p. 89).
(8) The notion of remote biometric identification system as used in this Regulation should be
defined functionally, as an AI system intended for the identification of natural persons
typically at a distance, without their active involvement, through the comparison of a
person’s biometric data with the biometric data contained in a reference data repository,
irrespectively of the particular technology, processes or types of biometric data used. Such
remote biometric identification systems are typically used to perceive (scan) multiple
persons or their behaviour simultaneously in order to facilitate significantly the
identification of a number of persons without their active involvement. Such a definition
excludes verification/authentication systems whose sole purpose would be to confirm that a
specific natural person is the person he or she claims to be, as well as systems that are used
to confirm the identity of a natural person for the sole purpose of having access to a service,
a device or premises. This exclusion is justified by the fact that such systems are likely to
have a minor impact on fundamental rights of natural persons compared to remote biometric
identification systems which may be used for the processing of the biometric data of a large
number of persons. In the case of ‘real-time’ systems, the capturing of the biometric data,
the comparison and the identification occur all instantaneously, near-instantaneously or in
any event without a significant delay. In this regard, there should be no scope for
circumventing the rules of this Regulation on the ‘real-time’ use of the AI systems in
question by providing for minor delays. ‘Real-time’ systems involve the use of ‘live’ or
‘near-live’ material, such as video footage, generated by a camera or other device with
similar functionality. In the case of ‘post’ systems, in contrast, the biometric data have
already been captured and the comparison and identification occur only after a significant
delay. This involves material, such as pictures or video footage generated by closed circuit
television cameras or private devices, which has been generated before the use of the system
in respect of the natural persons concerned.
(9) For the purposes of this Regulation the notion of publicly accessible space should be
understood as referring to any physical place that is accessible to an undetermined number
of natural persons, and irrespective of whether the place in question is privately or publicly
owned, and irrespective of the activity for which the place may be used, such as commerce
(for instance, shops, restaurants, cafés), services (for instance, banks, professional activities,
hospitality), sport (for instance, swimming pools, gyms, stadiums), transport (for instance,
bus, metro and railway stations, airports, means of transport), entertainment (for instance,
cinemas, theatres, museums, concert and conference halls), leisure or otherwise (for instance,
public roads and squares, parks, forests, playgrounds). A place should be classified as
publicly accessible also if, regardless of potential capacity or security restrictions, access is
subject to certain predetermined conditions, which can be fulfilled by an undetermined
number of persons, such as purchase of a ticket or title of transport, prior registration or
having a certain age. By contrast, a place should not be considered publicly accessible if
access is limited to specific and defined natural persons through either Union or national law
directly related to public safety or security or through the clear manifestation of will by the
person having the relevant authority on the place. The factual possibility of access alone
(e.g. an unlocked door, an open gate in a fence) does not imply that the place is publicly
accessible in the presence of indications or circumstances suggesting the contrary (e.g. signs
prohibiting or restricting access). Company and factory premises as well as offices and
workplaces that are intended to be accessed only by relevant employees and service
providers are places that are not publicly accessible. Publicly accessible spaces should not
include prisons or border control areas. Some other areas may be composed of both not
publicly accessible and publicly accessible areas, such as the hallway of a private residential
building necessary to access a doctor's office or an airport. Online spaces are not covered
either, as they are not physical spaces. Whether a given space is accessible to the public
should however be determined on a case-by-case basis, having regard to the specificities of
the individual situation at hand.
(10) In order to ensure a level playing field and an effective protection of rights and freedoms of
individuals across the Union, the rules established by this Regulation should apply to
providers of AI systems in a non-discriminatory manner, irrespective of whether they are
established within the Union or in a third country, and to users of AI systems established
within the Union.
(11) In light of their digital nature, certain AI systems should fall within the scope of this
Regulation even when they are neither placed on the market, nor put into service, nor used
in the Union. This is the case for example of an operator established in the Union that
contracts certain services to an operator established outside the Union in relation to an
activity to be performed by an AI system that would qualify as high-risk. In those
circumstances, the AI system used by the operator outside the Union could process data
lawfully collected in and transferred from the Union, and provide to the contracting operator
in the Union the output of that AI system resulting from that processing, without that AI
system being placed on the market, put into service or used in the Union. To prevent the
circumvention of this Regulation and to ensure an effective protection of natural persons
located in the Union, this Regulation should also apply to providers and users of AI systems
that are established in a third country, to the extent the output produced by those systems is
used in the Union. Nonetheless, to take into account existing arrangements and special needs
for future cooperation with foreign partners with whom information and evidence is
exchanged, this Regulation should not apply to public authorities of a third country and
international organisations when acting in the framework of international agreements
concluded at national or European level for law enforcement and judicial cooperation with
the Union or with its Member States. Such agreements have been concluded bilaterally
between Member States and third countries or between the European Union, Europol and
other EU agencies and third countries and international organisations. Recipient Member
States authorities and Union institutions, offices, bodies and bodies making use of such
outputs in the Union remain accountable to ensure their use comply with Union law. When
those international agreements are revised or new ones are concluded in the future, the
contracting parties should undertake the utmost effort to align those agreements with the
requirements of this Regulation.
(12) This Regulation should also apply to Union institutions, offices, bodies and agencies when
acting as a provider or user of an AI system.
(-12a) If and insofar AI systems are placed on the market, put into service, or used with or without
modification of such systems for military, defence or national security purposes, those
should be excluded from the scope of this Regulation regardless of which type of entity is
carrying out those activities, such as whether it is a public or private entity. As regards
military and defence purposes, such exclusion is justified both by Article 4(2) TEU and by
the specificities of the Member States’ and the common Union defence policy covered by
Chapter 2 of Title V of the Treaty on European Union (TEU) that are subject to public
international law, which is therefore the more appropriate legal framework for the regulation
of AI systems in the context of the use of lethal force and other AI systems in the context of
military and defence activities. As regards national security purposes, the exclusion is
justified both by the fact that national security remains the sole responsibility of Member
States in accordance with Article 4(2) TEU and by the specific nature and operational needs
of national security activities and specific national rules applicable to those activities.
Nonetheless, if an AI system developed, placed on the market, put into service or used for
military, defence or national security purposes is used outside those, temporarily or
permanently, for other purposes (for example, civilian or humanitarian purposes, law
enforcement or public security purposes), such a system would fall within the scope of this
Regulation. In that case, the entity using the system for other than military, defence or
national security purposes should ensure compliance of the system with this Regulation,
unless the system is already compliant with this Regulation. AI systems placed on the
market or put into service for an excluded (i.e. military, defence or national security) and
one or more non-excluded purposes (e.g. civilian purposes, law enforcement, etc.) fall
within the scope of this Regulation and providers of those systems should ensure compliance
with this Regulation. In those cases, the fact that an AI system may fall within the scope of
this Regulation should not affect the possibility of entities carrying out national security,
defence and military activities, regardless of the type of entity carrying out those activities,
to use AI systems for national security, military and defence purposes, the use of which is
excluded from the scope of this Regulation. An AI system placed on the market for civilian
or law enforcement purposes which is used with or without modification for military,
defence or national security purposes should not fall within the scope of this Regulation,
regardless of the type of entity carrying out those activities.
(12a) This Regulation should be without prejudice to the provisions regarding the liability of
intermediary service providers set out in Directive 2000/31/EC of the European Parliament
and of the Council [as amended by the Digital Services Act].
(12b) This Regulation should not undermine research and development activity and should respect
freedom of science. It is therefore necessary to exclude from its scope AI systems
specifically developed and put into service for the sole purpose of scientific research and
development and to ensure that the Regulation does not otherwise affect scientific research
and development activity on AI systems. As regards product-oriented research activity by
providers, the provisions of this Regulation should also not apply. This is without prejudice
to the obligation to comply with this Regulation when an AI system falling into the scope of
this Regulation is placed on the market or put into service as a result of such research and
development activity and to the application of provisions on regulatory sandboxes and
testing in real world conditions. Furthermore, without prejudice to the foregoing regarding
AI systems specifically developed and put into service for the sole purpose of scientific
research and development, any other AI system that may be used for the conduct of any
research and development activity should remain subject to the provisions of this
Regulation. Under all circumstances, any research and development activity should be
carried out in accordance with recognised ethical and professional standards for scientific
research.
(12c) In the light of the nature and complexity of the value chain for AI systems, it is essential to
clarify the role of actors who may contribute to the development of AI systems, notably
high-risk AI systems. In particular, it is necessary to clarify that general purpose AI systems
are AI systems that are intended by the provider to perform generally applicable functions,
such as image/speech recognition, and in a plurality of contexts. They may be used as high-risk AI systems by themselves or be components of other high-risk AI systems. Therefore,
due to their particular nature and in order to ensure a fair sharing of responsibilities along
the AI value chain, such systems should be subject to proportionate and more specific
requirements and obligations under this Regulation while ensuring a high level of protection
of fundamental rights, health and safety. In addition, the providers of general purpose AI
systems, irrespective of whether they may be used as high-risk AI systems as such by other
providers or as components of high-risk AI systems, should cooperate, as appropriate, with
the providers of the respective high-risk AI systems to enable their compliance with the
relevant obligations under this Regulation and with the competent authorities established
under this Regulation. In order to take into account the specific characteristics of general
purpose AI systems and the fast evolving market and technological developments in the
field, implementing powers should be conferred on the Commission to specify and adapt the
application of the requirements established under this Regulation to general purpose AI
systems and to specify the information to be shared by the providers of general purpose AI
systems in order to enable the providers of the respective high-risk AI system to comply
with their obligations under this Regulation.
(13) In order to ensure a consistent and high level of protection of public interests as regards
health, safety and fundamental rights, common normative standards for all high-risk AI
systems should be established. Those standards should be consistent with the Charter of
Fundamental Rights of the European Union (the Charter) and should be non-discriminatory
and in line with the Union’s international trade commitments.
(14) In order to introduce a proportionate and effective set of binding rules for AI systems, a
clearly defined risk-based approach should be followed. That approach should tailor the type
and content of such rules to the intensity and scope of the risks that AI systems can generate.
It is therefore necessary to prohibit certain artificial intelligence practices, to lay down
requirements for high-risk AI systems and obligations for the relevant operators, and to lay
down transparency obligations for certain AI systems.
(15) Aside from the many beneficial uses of artificial intelligence, that technology can also be
misused and provide novel and powerful tools for manipulative, exploitative and social
control practices. Such practices are particularly harmful and should be prohibited because
they contradict Union values of respect for human dignity, freedom, equality, democracy
and the rule of law and Union fundamental rights, including the right to non-discrimination,
data protection and privacy and the rights of the child.
(16) AI-enabled manipulative techniques can be used to persuade persons to engage in unwanted
behaviours, or to deceive them by nudging them into decisions in a way that subverts and
impairs their autonomy, decision-making and free choices. The placing on the market,
putting into service or use of certain AI systems materially distorting human behaviour,
whereby physical or psychological harms are likely to occur, are particularly dangerous and
should therefore be forbidden. Such AI systems deploy subliminal components such as
audio, image, video stimuli that persons cannot perceive as those stimuli are beyond human
perception or other subliminal techniques that subvert or impair person’s autonomy,
decision-making or free choices in ways that people are not consciously aware of, or even if
aware not able to control or resist, for example in cases of machine-brain interfaces or
virtual reality. In addition, AI systems may also otherwise exploit vulnerabilities of a
specific group of persons due to their age, disability within the meaning of Directive (EU)
2019/882, or a specific social or economic situation that is likely to make those persons
more vulnerable to exploitation such as persons living in extreme poverty, ethnic or
religious minorities. Such AI systems can be placed on the market, put into service or used
with the objective to or the effect of materially distorting the behaviour of a person and in a
manner that causes or is reasonably likely to cause physical or psychological harm to that or
another person or groups of persons, including harms that may be accumulated over time.
The intention to distort the behaviour may not be presumed if the distortion results from
factors external to the AI system which are outside of the control of the provider or the user,
meaning factors that may not be reasonably foreseen and mitigated by the provider or the
user of the AI system. In any case, it is not necessary for the provider or the user to have the
intention to cause the physical or psychological harm, as long as such harm results from the
manipulative or exploitative AI-enabled practices. The prohibitions for such AI practices are
complementary to the provisions contained in Directive 2005/29/EC, notably that unfair
commercial practices leading to economic or financial harms to consumers are prohibited
under all circumstances, irrespective of whether they are put in place through AI systems or
otherwise. The prohibitions of manipulative and exploitative practices in this Regulation
should not affect lawful practices in the context of medical treatment such as psychological
treatment of a mental disease or physical rehabilitation, when those practices are carried out
in accordance with the applicable medical standards and legislation. In addition, common
and legitimate commercial practices that are in compliance with the applicable law should
not in themselves be regarded as constituting harmful manipulative AI practices.
(17) AI systems providing social scoring of natural persons by public authorities or by private
actors may lead to discriminatory outcomes and the exclusion of certain groups. They may
violate the right to dignity and non-discrimination and the values of equality and justice.
Such AI systems evaluate or classify natural persons based on their social behaviour in
multiple contexts or known or predicted personal or personality characteristics. The social
score obtained from such AI systems may lead to the detrimental or unfavourable treatment
of natural persons or whole groups thereof in social contexts, which are unrelated to the
context in which the data was originally generated or collected or to a detrimental treatment
that is disproportionate or unjustified to the gravity of their social behaviour. AI systems
entailing such unacceptable scoring practices should be therefore prohibited. This
prohibition should not affect lawful evaluation practices of natural persons done for one or
more specific purposes in compliance with the law.
(18) The use of AI systems for ‘real-time’ remote biometric identification of natural persons in
publicly accessible spaces for the purpose of law enforcement is considered particularly
intrusive in the rights and freedoms of the concerned persons, to the extent that it may affect
the private life of a large part of the population, evoke a feeling of constant surveillance and
indirectly dissuade the exercise of the freedom of assembly and other fundamental rights. In
addition, the immediacy of the impact and the limited opportunities for further checks or
corrections in relation to the use of such systems operating in ‘real-time’ carry heightened
risks for the rights and freedoms of the persons that are concerned by law enforcement
activities.
(19) The use of those systems for the purpose of law enforcement should therefore be prohibited,
except in exhaustively listed and narrowly defined situations, where the use is strictly
necessary to achieve a substantial public interest, the importance of which outweighs the
risks. Those situations involve the search for potential victims of crime, including missing
children; certain threats to the life or physical safety of natural persons or of a terrorist
attack; and the detection, localisation, identification or prosecution of perpetrators or
suspects of the criminal offences referred to in Council Framework Decision 2002/584/JHA9
if those criminal offences are punishable in the Member State concerned by a custodial
sentence or a detention order for a maximum period of at least three years and as they are
defined in the law of that Member State. Such threshold for the custodial sentence or
detention order in accordance with national law contributes to ensure that the offence should
be serious enough to potentially justify the use of ‘real-time’ remote biometric identification
systems. Moreover, of the 32 criminal offences listed in the Council Framework Decision
2002/584/JHA, some are in practice likely to be more relevant than others, in that the
recourse to ‘real-time’ remote biometric identification will foreseeably be necessary and
proportionate to highly varying degrees for the practical pursuit of the detection,
localisation, identification or prosecution of a perpetrator or suspect of the different criminal
offences listed and having regard to the likely differences in the seriousness, probability and
scale of the harm or possible negative consequences. In addition, this Regulation should
preserve the ability for law enforcement, border control, immigration or asylum authorities
to carry out identity checks in the presence of the person that is concerned in accordance
with the conditions set out in Union and national law for such checks. In particular, law
enforcement, border control, immigration or asylum authorities should be able to use
information systems, in accordance with Union or national law, to identify a person who,
during an identity check, either refuses to be identified or is unable to state or prove his or
her identity, without being required by this Regulation to obtain prior authorisation. This
could be, for example, a person involved in a crime, being unwilling, or unable due to an
accident or a medical condition, to disclose their identity to law enforcement authorities.
9 Council Framework Decision 2002/584/JHA of 13 June 2002 on the European arrest warrant and the surrender
procedures between Member States (OJ L 190, 18.7.2002, p. 1).
(20) In order to ensure that those systems are used in a responsible and proportionate manner, it
is also important to establish that, in each of those exhaustively listed and narrowly defined
situations, certain elements should be taken into account, in particular as regards the nature
of the situation giving rise to the request and the consequences of the use for the rights and
freedoms of all persons concerned and the safeguards and conditions provided for with the
use. In addition, the use of ‘real-time’ remote biometric identification systems in publicly
accessible spaces for the purpose of law enforcement should be subject to appropriate limits
in time and space, having regard in particular to the evidence or indications regarding the
threats, the victims or perpetrator. The reference database of persons should be appropriate
for each use case in each of the situations mentioned above.
(21) Each use of a ‘real-time’ remote biometric identification system in publicly accessible
spaces for the purpose of law enforcement should be subject to an express and specific
authorisation by a judicial authority or by an independent administrative authority of a
Member State. Such authorisation should in principle be obtained prior to the use of the
system with a view to identifying a person or persons. Exceptions to this rule should be allowed
in duly justified situations of urgency, that is, situations where the need to use the systems in
question is such as to make it effectively and objectively impossible to obtain an
authorisation before commencing the use. In such situations of urgency, the use should be
restricted to the absolute minimum necessary and be subject to appropriate safeguards and
conditions, as determined in national law and specified in the context of each individual
urgent use case by the law enforcement authority itself. In addition, the law enforcement
authority should in such situations seek to obtain an authorisation as soon as possible, whilst
providing the reasons for not having been able to request it earlier.
(22) Furthermore, it is appropriate to provide, within the exhaustive framework set by this
Regulation that such use in the territory of a Member State in accordance with this
Regulation should only be possible where and in as far as the Member State in question has
decided to expressly provide for the possibility to authorise such use in its detailed rules of
national law. Consequently, Member States remain free under this Regulation not to provide
for such a possibility at all or to only provide for such a possibility in respect of some of the
objectives capable of justifying authorised use identified in this Regulation.
(23) The use of AI systems for ‘real-time’ remote biometric identification of natural persons in
publicly accessible spaces for the purpose of law enforcement necessarily involves the
processing of biometric data. The rules of this Regulation that prohibit, subject to certain
exceptions, such use, which are based on Article 16 TFEU, should apply as lex specialis in
respect of the rules on the processing of biometric data contained in Article 10 of Directive
(EU) 2016/680, thus regulating such use and the processing of biometric data involved in an
exhaustive manner. Therefore, such use and processing should only be possible in as far as it
is compatible with the framework set by this Regulation, without there being scope, outside
that framework, for the competent authorities, where they act for the purpose of law
enforcement, to use such systems and process such data in connection thereto on the grounds
listed in Article 10 of Directive (EU) 2016/680. In this context, this Regulation is not
intended to provide the legal basis for the processing of personal data under Article 8 of
Directive 2016/680. However, the use of ‘real-time’ remote biometric identification systems
in publicly accessible spaces for purposes other than law enforcement, including by
competent authorities, should not be covered by the specific framework regarding such use
for the purpose of law enforcement set by this Regulation. Such use for purposes other than
law enforcement should therefore not be subject to the requirement of an authorisation under
this Regulation and the applicable detailed rules of national law that may give effect to it.
(24) Any processing of biometric data and other personal data involved in the use of AI systems
for biometric identification, other than in connection to the use of ‘real-time’ remote
biometric identification systems in publicly accessible spaces for the purpose of law
enforcement as regulated by this Regulation, should continue to comply with all
requirements resulting from Article 10 of Directive (EU) 2016/680. For purposes other than
law enforcement, Article 9(1) of Regulation (EU) 2016/679 and Article 10(1) of Regulation
(EU) 2018/1725 prohibit the processing of biometric data for the purpose of uniquely
identifying a natural person, unless one of the situations in the respective second paragraphs
of those two articles applies.
(25) In accordance with Article 6a of Protocol No 21 on the position of the United Kingdom and
Ireland in respect of the area of freedom, security and justice, as annexed to the TEU and to
the TFEU, Ireland is not bound by the rules laid down in Article 5(1), point (d), (2), (3) and
(4) of this Regulation adopted on the basis of Article 16 of the TFEU which relate to the
processing of personal data by the Member States when carrying out activities falling within
the scope of Chapter 4 or Chapter 5 of Title V of Part Three of the TFEU, where Ireland is
not bound by the rules governing the forms of judicial cooperation in criminal matters or
police cooperation which require compliance with the provisions laid down on the basis of
Article 16 of the TFEU.
(26) In accordance with Articles 2 and 2a of Protocol No 22 on the position of Denmark, annexed
to the TEU and TFEU, Denmark is not bound by rules laid down in Article 5(1), point (d),
(2), (3) and (4) of this Regulation adopted on the basis of Article 16 of the TFEU, or subject
to their application, which relate to the processing of personal data by the Member States
when carrying out activities falling within the scope of Chapter 4 or Chapter 5 of Title V of
Part Three of the TFEU.
(27) High-risk AI systems should only be placed on the Union market or put into service if they
comply with certain mandatory requirements. Those requirements should ensure that high-risk AI systems available in the Union or whose output is otherwise used in the Union do not
pose unacceptable risks to important Union public interests as recognised and protected by
Union law. AI systems identified as high-risk should be limited to those that have a
significant harmful impact on the health, safety and fundamental rights of persons in the
Union and such limitation minimises any potential restriction to international trade, if any.
(28) AI systems could produce adverse outcomes to health and safety of persons, in particular
when such systems operate as components of products. Consistently with the objectives of
Union harmonisation legislation to facilitate the free movement of products in the internal
market and to ensure that only safe and otherwise compliant products find their way into the
market, it is important that the safety risks that may be generated by a product as a whole
due to its digital components, including AI systems, are duly prevented and mitigated. For
instance, increasingly autonomous robots, whether in the context of manufacturing or
personal assistance and care, should be able to operate safely and perform their functions in
complex environments. Similarly, in the health sector, where the stakes for life and health are
particularly high, increasingly sophisticated diagnostics systems and systems supporting
human decisions should be reliable and accurate. The extent of the adverse impact caused by
the AI system on the fundamental rights protected by the Charter is of particular relevance
when classifying an AI system as high-risk. Those rights include the right to human dignity,
respect for private and family life, protection of personal data, freedom of expression and
information, freedom of assembly and of association, and non-discrimination, consumer
protection, workers’ rights, rights of persons with disabilities, right to an effective remedy
and to a fair trial, right of defence and the presumption of innocence, right to good
administration. In addition to those rights, it is important to highlight that children have
specific rights as enshrined in Article 24 of the EU Charter and in the United Nations
Convention on the Rights of the Child (further elaborated in the UNCRC General Comment
No. 25 as regards the digital environment), both of which require consideration of the
children’s vulnerabilities and provision of such protection and care as necessary for their
well-being. The fundamental right to a high level of environmental protection enshrined in
the Charter and implemented in Union policies should also be considered when assessing the
severity of the harm that an AI system can cause, including in relation to the health and
safety of persons.
(29) As regards high-risk AI systems that are safety components of products or systems, or which
are themselves products or systems falling within the scope of Regulation (EC) No 300/2008
of the European Parliament and of the Council10, Regulation (EU) No 167/2013 of the
European Parliament and of the Council11, Regulation (EU) No 168/2013 of the European
Parliament and of the Council12, Directive 2014/90/EU of the European Parliament and of
the Council13, Directive (EU) 2016/797 of the European Parliament and of the Council14,
Regulation (EU) 2018/858 of the European Parliament and of the Council15, Regulation
(EU) 2018/1139 of the European Parliament and of the Council16, and Regulation (EU)
2019/2144 of the European Parliament and of the Council17, it is appropriate to amend those
acts to ensure that the Commission takes into account, on the basis of the technical and
regulatory specificities of each sector, and without interfering with existing governance,
conformity assessment and enforcement mechanisms and authorities established therein, the
mandatory requirements for high-risk AI systems laid down in this Regulation when
adopting any relevant future delegated or implementing acts on the basis of those acts.
10 Regulation (EC) No 300/2008 of the European Parliament and of the Council of 11 March 2008 on common
rules in the field of civil aviation security and repealing Regulation (EC) No 2320/2002 (OJ L 97, 9.4.2008, p.
72).
11 Regulation (EU) No 167/2013 of the European Parliament and of the Council of 5 February 2013 on the
approval and market surveillance of agricultural and forestry vehicles (OJ L 60, 2.3.2013, p. 1).
12 Regulation (EU) No 168/2013 of the European Parliament and of the Council of 15 January 2013 on the
approval and market surveillance of two- or three-wheel vehicles and quadricycles (OJ L 60, 2.3.2013, p. 52).
13 Directive 2014/90/EU of the European Parliament and of the Council of 23 July 2014 on marine equipment
and repealing Council Directive 96/98/EC (OJ L 257, 28.8.2014, p. 146).
14 Directive (EU) 2016/797 of the European Parliament and of the Council of 11 May 2016 on the
interoperability of the rail system within the European Union (OJ L 138, 26.5.2016, p. 44).
15 Regulation (EU) 2018/858 of the European Parliament and of the Council of 30 May 2018 on the approval and
market surveillance of motor vehicles and their trailers, and of systems, components and separate technical
units intended for such vehicles, amending Regulations (EC) No 715/2007 and (EC) No 595/2009 and
repealing Directive 2007/46/EC (OJ L 151, 14.6.2018, p. 1).
16 Regulation (EU) 2018/1139 of the European Parliament and of the Council of 4 July 2018 on common rules in
the field of civil aviation and establishing a European Union Aviation Safety Agency, and amending
Regulations (EC) No 2111/2005, (EC) No 1008/2008, (EU) No 996/2010, (EU) No 376/2014 and Directives
2014/30/EU and 2014/53/EU of the European Parliament and of the Council, and repealing Regulations (EC)
No 552/2004 and (EC) No 216/2008 of the European Parliament and of the Council and Council Regulation
(EEC) No 3922/91 (OJ L 212, 22.8.2018, p. 1).
17 Regulation (EU) 2019/2144 of the European Parliament and of the Council of 27 November 2019 on type-approval requirements for motor vehicles and their trailers, and systems, components and separate technical
units intended for such vehicles, as regards their general safety and the protection of vehicle occupants and
vulnerable road users, amending Regulation (EU) 2018/858 of the European Parliament and of the Council and
repealing Regulations (EC) No 78/2009, (EC) No 79/2009 and (EC) No 661/2009 of the European Parliament
and of the Council and Commission Regulations (EC) No 631/2009, (EU) No 406/2010, (EU) No 672/2010,
(EU) No 1003/2010, (EU) No 1005/2010, (EU) No 1008/2010, (EU) No 1009/2010, (EU) No 19/2011, (EU)
No 109/2011, (EU) No 458/2011, (EU) No 65/2012, (EU) No 130/2012, (EU) No 347/2012, (EU) No
351/2012, (EU) No 1230/2012 and (EU) 2015/166 (OJ L 325, 16.12.2019, p. 1).
(30) As regards AI systems that are safety components of products, or which are themselves
products, falling within the scope of certain Union harmonisation legislation, it is
appropriate to classify them as high-risk under this Regulation if the product in question
undergoes the conformity assessment procedure with a third-party conformity assessment
body pursuant to that relevant Union harmonisation legislation. In particular, such products
are machinery, toys, lifts, equipment and protective systems intended for use in potentially
explosive atmospheres, radio equipment, pressure equipment, recreational craft equipment,
cableway installations, appliances burning gaseous fuels, medical devices, and in vitro
diagnostic medical devices.
(31) The classification of an AI system as high-risk pursuant to this Regulation should not
necessarily mean that the product whose safety component is the AI system, or the AI
system itself as a product, is considered ‘high-risk’ under the criteria established in the
relevant Union harmonisation legislation that applies to the product. This is notably the case
for Regulation (EU) 2017/745 of the European Parliament and of the Council18 and
Regulation (EU) 2017/746 of the European Parliament and of the Council19, where a third-party conformity assessment is provided for medium-risk and high-risk products.
18 Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices,
amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and
repealing Council Directives 90/385/EEC and 93/42/EEC (OJ L 117, 5.5.2017, p. 1).
19 Regulation (EU) 2017/746 of the European Parliament and of the Council of 5 April 2017 on in vitro
diagnostic medical devices and repealing Directive 98/79/EC and Commission Decision 2010/227/EU (OJ L
117, 5.5.2017, p. 176).
(32) As regards high-risk AI systems other than those that are safety components of products, or
which are themselves products, it is appropriate to classify them as high-risk if, in the light
of their intended purpose, they pose a high risk of harm to the health and safety or the
fundamental rights of persons, taking into account both the severity of the possible harm and
its probability of occurrence, and they are used in a number of specifically pre-defined areas
specified in the Regulation. The identification of those systems is based on the same
methodology and criteria envisaged also for any future amendments of the list of high-risk
AI systems. It is also important to clarify that within the high-risk scenarios referred to in
Annex III there may be systems that do not lead to a significant risk to the legal interests
protected under those scenarios, taking into account the output produced by the AI system.
Therefore, only when such output has a high degree of importance (i.e. is not purely
accessory) in respect of the relevant action or decision, so as to generate a significant risk to
the legal interests protected, should the AI system generating such output be considered
high-risk. For instance, when the information provided by an AI system to the human
consists of the profiling of natural persons within the meaning of Article 4(4) of Regulation
(EU) 2016/679, Article 3(4) of Directive (EU) 2016/680 and Article 3(5) of Regulation
(EU) 2018/1725, such information should not typically be considered of an accessory nature in
the context of high-risk AI systems as referred to in Annex III. However, if the output of the
AI system has only negligible or minor relevance for human action or decision, it may be
considered purely accessory, including for example, AI systems used for translation for
informative purposes or for the management of documents.
(33) Technical inaccuracies of AI systems intended for the remote biometric identification of
natural persons can lead to biased results and entail discriminatory effects. This is
particularly relevant when it comes to age, ethnicity, race, sex or disabilities. Therefore,
‘real-time’ and ‘post’ remote biometric identification systems should be classified as high-risk. In view of the risks that they pose, both types of remote biometric identification
systems should be subject to specific requirements on logging capabilities and human
oversight.
(34) As regards the management and operation of critical infrastructure, it is appropriate to
classify as high-risk the AI systems intended to be used as safety components in the
management and operation of critical digital infrastructure as listed in Annex I point 8 of the
Directive on the resilience of critical entities, road traffic and the supply of water, gas,
heating and electricity, since their failure or malfunctioning may put at risk the life and
health of persons at large scale and lead to appreciable disruptions in the ordinary conduct of
social and economic activities. Safety components of critical infrastructure, including
critical digital infrastructure, are systems used to directly protect the physical integrity of
critical infrastructure or health and safety of persons and property but which are not
necessary in order for the system to function. Failure or malfunctioning of such components
might directly lead to risks to the physical integrity of critical infrastructure and thus to risks
to health and safety of persons and property. Components intended to be used solely for
cybersecurity purposes should not qualify as safety components. Examples of safety
components of such critical infrastructure may include systems for monitoring water
pressure or fire alarm controlling systems in cloud computing centres.
(35) AI systems used in education or vocational training, notably for determining access,
admission or assigning persons to educational and vocational training institutions or
programmes at all levels or to evaluate learning outcomes of persons should be considered
high-risk, since they may determine the educational and professional course of a person’s
life and therefore affect their ability to secure their livelihood. When improperly designed
and used, such systems may violate the right to education and training as well as the right
not to be discriminated against and perpetuate historical patterns of discrimination.
(36) AI systems used in employment, workers management and access to self-employment,
notably for the recruitment and selection of persons, for making decisions on promotion and
termination and for task allocation based on individual behaviour or personal traits or
characteristics, monitoring or evaluation of persons in work-related contractual
relationships, should also be classified as high-risk, since those systems may appreciably
impact future career prospects and livelihoods of these persons. Relevant work-related
contractual relationships should involve employees and persons providing services through
platforms as referred to in the Commission Work Programme 2021. Such persons should in
principle not be considered users within the meaning of this Regulation. Throughout the
recruitment process and in the evaluation, promotion, or retention of persons in work-related
contractual relationships, such systems may perpetuate historical patterns of discrimination,
for example against women, certain age groups, persons with disabilities, or persons of
certain racial or ethnic origins or sexual orientation. AI systems used to monitor the
performance and behaviour of these persons may also impact their rights to data protection
and privacy.
(37) Another area in which the use of AI systems deserves special consideration is the access to
and enjoyment of certain essential private and public services and benefits necessary for
people to fully participate in society or to improve one’s standard of living. In particular, AI
systems used to evaluate the credit score or creditworthiness of natural persons should be
classified as high-risk AI systems, since they determine those persons’ access to financial
resources or essential services such as housing, electricity, and telecommunication services.
AI systems used for this purpose may lead to discrimination of persons or groups and
perpetuate historical patterns of discrimination, for example based on racial or ethnic
origins, disabilities, age, sexual orientation, or create new forms of discriminatory impacts.
Considering the very limited scale of the impact and the available alternatives on the market,
it is appropriate to exempt AI systems for the purpose of creditworthiness assessment and
credit scoring when put into service by micro or small enterprises, as defined in the Annex
of Commission Recommendation 2003/361/EC for their own use. Natural persons applying
for or receiving essential public assistance benefits and services from public authorities are
typically dependent on those benefits and services and in a vulnerable position in relation to
the responsible authorities. If AI systems are used for determining whether such benefits and
services should be denied, reduced, revoked or reclaimed by authorities, including whether
beneficiaries are legitimately entitled to such benefits or services, those systems may have a
significant impact on persons’ livelihood and may infringe their fundamental rights, such as
the right to social protection, non-discrimination, human dignity or an effective remedy.
Those systems should therefore be classified as high-risk. Nonetheless, this Regulation
should not hamper the development and use of innovative approaches in the public
administration, which would stand to benefit from a wider use of compliant and safe AI
systems, provided that those systems do not entail a high risk to legal and natural persons.
Finally, AI systems used to dispatch or establish priority in the dispatching of emergency
first response services should also be classified as high-risk since they make decisions in
very critical situations for the life and health of persons and their property. AI systems are
also increasingly used for risk assessment in relation to natural persons and pricing in the
case of life and health insurance which, if not duly designed, developed and used, can lead
to serious consequences for people’s life and health, including financial exclusion and
discrimination. To ensure a consistent approach within the financial services sector, the
above-mentioned exception for micro or small enterprises for their own use should apply,
insofar as they themselves provide and put into service an AI system for the purpose of
selling their own insurance products.
(38) Actions by law enforcement authorities involving certain uses of AI systems are
characterised by a significant degree of power imbalance and may lead to surveillance, arrest
or deprivation of a natural person’s liberty as well as other adverse impacts on fundamental
rights guaranteed in the Charter. In particular, if the AI system is not trained with high
quality data, does not meet adequate requirements in terms of its accuracy or robustness, or
is not properly designed and tested before being put on the market or otherwise put into
service, it may single out people in a discriminatory or otherwise incorrect or unjust manner.
Furthermore, the exercise of important procedural fundamental rights, such as the right to an
effective remedy and to a fair trial as well as the right of defence and the presumption of
innocence, could be hampered, in particular, where such AI systems are not sufficiently
transparent, explainable and documented. It is therefore appropriate to classify as high-risk a
number of AI systems intended to be used in the law enforcement context where accuracy,
reliability and transparency are particularly important to avoid adverse impacts, retain public
trust and ensure accountability and effective redress. In view of the nature of the activities in
question and the risks relating thereto, those high-risk AI systems should include in
particular AI systems intended to be used by law enforcement authorities for individual risk
assessments, polygraphs and similar tools or to detect the emotional state of a natural person,
for the evaluation of the reliability of evidence in criminal proceedings, for predicting the
occurrence or reoccurrence of an actual or potential criminal offence based on profiling of
natural persons, or assessing personality traits and characteristics or past criminal behaviour
of natural persons or groups, for profiling in the course of detection, investigation or
prosecution of criminal offences. AI systems specifically intended to be used for
administrative proceedings by tax and customs authorities as well as by financial
intelligence units carrying out administrative tasks analysing information pursuant to Union
anti-money laundering legislation should not be considered high-risk AI systems used by
law enforcement authorities for the purposes of prevention, detection, investigation and
prosecution of criminal offences.
(39) AI systems used in migration, asylum and border control management affect people who are
often in a particularly vulnerable position and who are dependent on the outcome of the
actions of the competent public authorities. The accuracy, non-discriminatory nature and
transparency of the AI systems used in those contexts are therefore particularly important to
guarantee the respect of the fundamental rights of the affected persons, notably their rights
to free movement, non-discrimination, protection of private life and personal data,
international protection and good administration. It is therefore appropriate to classify as
high-risk AI systems intended to be used by the competent public authorities charged with
tasks in the fields of migration, asylum and border control management as polygraphs and
similar tools or to detect the emotional state of a natural person; for assessing certain risks
posed by natural persons entering the territory of a Member State or applying for visa or
asylum; for assisting competent public authorities for the examination of applications for
asylum, visa and residence permits and associated complaints with regard to the objective to
establish the eligibility of the natural persons applying for a status. AI systems in the area of
migration, asylum and border control management covered by this Regulation should
comply with the relevant procedural requirements set by the Directive 2013/32/EU of the
European Parliament and of the Council20, the Regulation (EC) No 810/2009 of the
European Parliament and of the Council21 and other relevant legislation.
20 Directive 2013/32/EU of the European Parliament and of the Council of 26 June 2013 on common procedures
for granting and withdrawing international protection (OJ L 180, 29.6.2013, p. 60).
21 Regulation (EC) No 810/2009 of the European Parliament and of the Council of 13 July 2009 establishing a
Community Code on Visas (Visa Code) (OJ L 243, 15.9.2009, p. 1).
(40) Certain AI systems intended for the administration of justice and democratic processes
should be classified as high-risk, considering their potentially significant impact on
democracy, rule of law, individual freedoms as well as the right to an effective remedy and
to a fair trial. In particular, to address the risks of potential biases, errors and opacity, it is
appropriate to qualify as high-risk AI systems intended to assist judicial authorities in
interpreting facts and the law and in applying the law to a concrete set of facts. Such
qualification should not extend, however, to AI systems intended for purely ancillary
administrative activities that do not affect the actual administration of justice in individual
cases, such as anonymisation or pseudonymisation of judicial decisions, documents or data,
communication between personnel, administrative tasks.
(41) The fact that an AI system is classified as high risk under this Regulation should not be
interpreted as indicating that the use of the system is lawful under other acts of Union law or
under national law compatible with Union law, such as on the protection of personal data, on
the use of polygraphs and similar tools or other systems to detect the emotional state of
natural persons. Any such use should continue to occur solely in accordance with the
applicable requirements resulting from the Charter and from the applicable acts of secondary
Union law and national law. This Regulation should not be understood as providing for the
legal ground for processing of personal data, including special categories of personal data,
where relevant, unless it is specifically provided for otherwise in this Regulation.
(42) To mitigate the risks from high-risk AI systems placed or otherwise put into service on the
Union market, certain mandatory requirements should apply, taking into account the
intended purpose of the use of the system and according to the risk management system to
be established by the provider. In particular, the risk management system should consist of a
continuous iterative process planned and run throughout the entire lifecycle of a high-risk AI
system. This process should ensure that the provider identifies and analyses the risks to the
health, safety and fundamental rights of the persons who may be affected by the system in
light of its intended purpose, including the possible risks arising from the interaction
between the AI system and the environment within which it operates, and accordingly
adopts suitable risk management measures in the light of the state of the art.
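By way of illustration only, and without suggesting that the Regulation prescribes any particular tooling, the continuous iterative process described above could be supported in practice by a simple machine-readable risk register. Everything in the following sketch (the 1-5 scoring scale, field names and example entries) is a hypothetical assumption:

```python
# Illustrative sketch only: a minimal iterative risk register of the kind a
# provider's risk management system might maintain across the AI system's
# lifecycle. The 1-5 scales, field names and entries are hypothetical.
from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    severity: int         # hypothetical 1-5 scale of possible harm
    likelihood: int       # hypothetical 1-5 scale of probability
    mitigation: str = ""  # measure adopted in the light of the state of the art

    @property
    def score(self) -> int:
        return self.severity * self.likelihood

# Re-run at each lifecycle iteration: reassess entries, add newly identified
# risks (e.g. from the deployment environment) and address the worst first.
register = [
    Risk("biased output against an age group", severity=4, likelihood=3),
    Risk("degraded accuracy in a new deployment environment", 3, 2),
]
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(risk.score, risk.description, "->", risk.mitigation or "mitigation pending")
```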
(43) Requirements should apply to high-risk AI systems as regards the quality of data sets used,
technical documentation and record-keeping, transparency and the provision of information
to users, human oversight, and robustness, accuracy and cybersecurity. Those requirements
are necessary to effectively mitigate the risks for health, safety and fundamental rights, as
applicable in the light of the intended purpose of the system, and no other less trade
restrictive measures are reasonably available, thus avoiding unjustified restrictions to trade.
(44) High data quality is essential for the performance of many AI systems, especially when
techniques involving the training of models are used, with a view to ensuring that the high-risk
AI system performs as intended and safely and does not become the source of
discrimination prohibited by Union law. High quality training, validation and testing data
sets require the implementation of appropriate data governance and management practices.
Training, validation and testing data sets should be sufficiently relevant, representative and
have the appropriate statistical properties, including as regards the persons or groups of
persons on which the high-risk AI system is intended to be used. These datasets should also
be as free of errors and complete as possible in view of the intended purpose of the AI
system, taking into account, in a proportionate manner, technical feasibility and state of the
art, the availability of data and the implementation of appropriate risk management measures
so that possible shortcomings of the datasets are duly addressed. The requirement for the
datasets to be complete and free of errors should not affect the use of privacy-preserving
techniques in the context of the development and testing of AI systems. Training,
validation and testing data sets should take into account, to the extent required by their
intended purpose, the features, characteristics or elements that are particular to the specific
geographical, behavioural or functional setting or context within which the AI system is
intended to be used. In order to protect the right of others from the discrimination that might
result from the bias in AI systems, the providers should be able to process also special
categories of personal data, as a matter of substantial public interest within the meaning of
Article 9(2)(g) of Regulation (EU) 2016/679 and Article 10(2)(g) of Regulation (EU)
2018/1725, in order to ensure the bias monitoring, detection and correction in relation to
high-risk AI systems.
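Purely as a hedged illustration of the data governance and bias monitoring practices referred to above — the Regulation does not mandate any specific method, and the group labels, thresholds and field names below are hypothetical — a provider might run automated checks along these lines over its training, validation and testing sets:

```python
# Illustrative sketch only: a simple representativeness and per-group error
# check that a provider's data governance process might include.
# All group labels, thresholds and field names are hypothetical examples.
from collections import Counter

def representativeness_report(records, group_key, reference_shares, tolerance=0.05):
    """Compare group shares in a dataset against expected reference shares."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    report = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total if total else 0.0
        report[group] = {
            "observed_share": round(observed, 3),
            "expected_share": expected,
            "flag": abs(observed - expected) > tolerance,  # under/over-represented
        }
    return report

def per_group_error_rates(records, group_key):
    """Error rate of model predictions per group, to surface possible bias."""
    errors, totals = Counter(), Counter()
    for r in records:
        totals[r[group_key]] += 1
        if r["prediction"] != r["label"]:
            errors[r[group_key]] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical usage with a toy validation set:
data = [
    {"age_band": "18-40", "prediction": 1, "label": 1},
    {"age_band": "18-40", "prediction": 0, "label": 1},
    {"age_band": "65+", "prediction": 0, "label": 0},
]
print(representativeness_report(data, "age_band", {"18-40": 0.5, "65+": 0.5}))
print(per_group_error_rates(data, "age_band"))
```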
(44a) When applying the principles referred to in Article 5(1)(c) of Regulation 2016/679 and
Article 4(1)(c) of Regulation 2018/1725, in particular the principle of data minimisation, in
regard to training, validation and testing data sets under this Regulation, due regard should
be had to the full life cycle of the AI system.
(45) For the development of high-risk AI systems, certain actors, such as providers, notified
bodies and other relevant entities, such as digital innovation hubs, testing experimentation
facilities and researchers, should be able to access and use high quality datasets within their
respective fields of activities which are related to this Regulation. European common data
spaces established by the Commission and the facilitation of data sharing between
businesses and with government in the public interest will be instrumental to provide
trustful, accountable and non-discriminatory access to high quality data for the training,
validation and testing of AI systems. For example, in health, the European health data space
will facilitate non-discriminatory access to health data and the training of artificial
intelligence algorithms on those datasets, in a privacy-preserving, secure, timely, transparent
and trustworthy manner, and with an appropriate institutional governance. Relevant
competent authorities, including sectoral ones, providing or supporting the access to data
may also support the provision of high-quality data for the training, validation and testing of
AI systems.
(46) Having information on how high-risk AI systems have been developed and how they
perform throughout their lifecycle is essential to verify compliance with the requirements
under this Regulation. This requires keeping records and the availability of a technical
documentation, containing information which is necessary to assess the compliance of the
AI system with the relevant requirements. Such information should include the general
characteristics, capabilities and limitations of the system, algorithms, data, training, testing
and validation processes used as well as documentation on the relevant risk management
system. The technical documentation should be kept up to date. Furthermore, providers or
users should keep logs automatically generated by the high-risk AI system, including for
instance output data, start date and time etc., to the extent that such a system and the related
logs are under their control, for a period that is appropriate to enable them to fulfil their
obligations.
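As an illustrative sketch of the kind of automatically generated record contemplated above (output data, start date and time, etc.) — the Regulation does not define a log format, and the schema and identifiers below are hypothetical assumptions:

```python
# Illustrative sketch only: an automatically generated, append-only usage log
# of the kind the recital describes (output data, start date and time, etc.).
# The schema, identifiers and file name are hypothetical, not prescribed.
import json
import time
import uuid

def log_inference(log_path, system_id, input_ref, output_data):
    """Append one structured record per use of the high-risk AI system."""
    record = {
        "event_id": str(uuid.uuid4()),
        "system_id": system_id,                # identifies the AI system version
        "start_time": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "input_reference": input_ref,          # a reference, not raw personal data
        "output_data": output_data,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")     # one JSON object per line
    return record

# Hypothetical usage:
log_inference("usage.log", "acme-credit-scorer-1.4", "application-9912", {"score": 0.73})
```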
(47) To address the opacity that may make certain AI systems incomprehensible to or too
complex for natural persons, a certain degree of transparency should be required for high-risk AI systems. Users should be able to interpret the system output and use it appropriately.
High-risk AI systems should therefore be accompanied by relevant documentation and
instructions of use and include concise and clear information, including in relation to
possible risks to fundamental rights and discrimination of the persons who may be affected
by the system in light of its intended purpose, where appropriate. To facilitate the
understanding of the instructions of use by users, they should contain illustrative examples,
as appropriate.
(48) High-risk AI systems should be designed and developed in such a way that natural persons
can oversee their functioning. For this purpose, appropriate human oversight measures
should be identified by the provider of the system before its placing on the market or putting
into service. In particular, where appropriate, such measures should guarantee that the
system is subject to in-built operational constraints that cannot be overridden by the system
itself and is responsive to the human operator, and that the natural persons to whom human
oversight has been assigned have the necessary competence, training and authority to carry
out that role. Considering the significant consequences for persons in case of incorrect
matches by certain biometric identification systems, it is appropriate to provide for an
enhanced human oversight requirement for those systems so that no action or decision may
be taken by the user on the basis of the identification resulting from the system unless this
has been separately verified and confirmed by at least two natural persons. Those persons
could be from one or more entities and include the person operating or using the system.
This requirement should not pose unnecessary burden or delays and it could be sufficient
that the separate verifications by the different persons are automatically recorded in the logs
generated by the system.
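A minimal sketch of how such a separate verification by at least two natural persons, automatically recorded in the system logs, might be wired into an operator's workflow; the reviewer identifiers, the match record and the logging approach are hypothetical assumptions rather than prescribed mechanisms:

```python
# Illustrative sketch only: a two-person confirmation gate for biometric
# identification matches. The Match class, reviewer names and log structure
# are hypothetical assumptions, not requirements set by the Regulation.
from dataclasses import dataclass, field

@dataclass
class Match:
    match_id: str
    confirmations: set = field(default_factory=set)  # distinct confirming reviewers

    def confirm(self, reviewer: str, log: list) -> None:
        """Record one reviewer's separate verification in the system log."""
        self.confirmations.add(reviewer)
        log.append({"match_id": self.match_id, "confirmed_by": reviewer})

    def actionable(self) -> bool:
        """Action may be taken only after at least two distinct confirmations."""
        return len(self.confirmations) >= 2

log: list = []
match = Match("match-0042")
match.confirm("officer_a", log)
assert not match.actionable()       # one confirmation is not enough
match.confirm("officer_b", log)
assert match.actionable()           # two separate natural persons confirmed
```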
(49) High-risk AI systems should perform consistently throughout their lifecycle and meet an
appropriate level of accuracy, robustness and cybersecurity in accordance with the generally
acknowledged state of the art. The level of accuracy and accuracy metrics should be
communicated to the users.
(50) Technical robustness is a key requirement for high-risk AI systems. They should be
resilient in relation to harmful or otherwise undesirable behaviour that may result from
limitations within the systems or the environment in which the systems operate (e.g. errors,
faults, inconsistencies, unexpected situations). High-risk AI systems should therefore be
designed and developed with appropriate technical solutions to prevent or minimise that
harmful or otherwise undesirable behaviour, such as for instance mechanisms enabling the
system to safely interrupt its operation (fail-safe plans) in the presence of certain anomalies
or when operation takes place outside certain predetermined boundaries. Failure to protect
against these risks could lead to safety impacts or negatively affect the fundamental rights,
for example due to erroneous decisions or wrong or biased outputs generated by the AI
system.
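As a hedged illustration of a fail-safe plan that interrupts operation outside predetermined boundaries — the boundary values, names and fallback behaviour below are hypothetical assumptions, not requirements of the Regulation:

```python
# Illustrative sketch only: a fail-safe wrapper that interrupts operation when
# inputs fall outside predetermined boundaries, rather than acting on them.
# The boundary values and the stand-in model are hypothetical examples.
class FailSafeError(RuntimeError):
    """Raised to safely interrupt operation instead of acting on bad input."""

def guarded_predict(model, features, bounds):
    """Run the model only while every feature stays within its design envelope."""
    for name, value in features.items():
        low, high = bounds[name]
        if not (low <= value <= high):
            # Outside the validated operating range: stop rather than guess.
            raise FailSafeError(f"{name}={value} outside boundary [{low}, {high}]")
    return model(features)

# Hypothetical usage with a stand-in model:
toy_model = lambda f: f["pressure"] * 0.1
bounds = {"pressure": (0.0, 10.0)}
print(guarded_predict(toy_model, {"pressure": 4.2}, bounds))   # runs normally
try:
    guarded_predict(toy_model, {"pressure": 99.0}, bounds)     # triggers fail-safe
except FailSafeError as e:
    print("interrupted:", e)
```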
(51) Cybersecurity plays a crucial role in ensuring that AI systems are resilient against attempts
to alter their use, behaviour, performance or compromise their security properties by
malicious third parties exploiting the system’s vulnerabilities. Cyberattacks against AI
systems can leverage AI specific assets, such as training data sets (e.g. data poisoning) or
trained models (e.g. adversarial attacks), or exploit vulnerabilities in the AI system’s digital
assets or the underlying ICT infrastructure. To ensure a level of cybersecurity appropriate to
the risks, suitable measures should therefore be taken by the providers of high-risk AI
systems, also taking into account as appropriate the underlying ICT infrastructure.
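One narrow, purely illustrative measure against tampering with training assets (the data poisoning mentioned above) is an integrity manifest of dataset files; the file paths and manifest format in this sketch are hypothetical assumptions, and such a check addresses only one of the attack surfaces listed:

```python
# Illustrative sketch only: verifying training-data integrity with checksums,
# one narrow mitigation against data poisoning. File paths and the manifest
# format are hypothetical assumptions, not prescribed measures.
import hashlib
import json

def sha256_of(path: str) -> str:
    """Stream a file and return its SHA-256 digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def write_manifest(paths, manifest_path="manifest.json"):
    """Record a trusted checksum for every training data file."""
    manifest = {p: sha256_of(p) for p in paths}
    with open(manifest_path, "w", encoding="utf-8") as f:
        json.dump(manifest, f, indent=2)

def verify_manifest(manifest_path="manifest.json"):
    """Return the list of files whose contents changed since the manifest."""
    with open(manifest_path, encoding="utf-8") as f:
        manifest = json.load(f)
    return [p for p, digest in manifest.items() if sha256_of(p) != digest]
```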
(52) As part of Union harmonisation legislation, rules applicable to the placing on the market,
putting into service and use of high-risk AI systems should be laid down consistently with
Regulation (EC) No 765/2008 of the European Parliament and of the Council22 setting out
the requirements for accreditation and the market surveillance of products, Decision No
768/2008/EC of the European Parliament and of the Council23 on a common framework for
the marketing of products and Regulation (EU) 2019/1020 of the European Parliament and
of the Council24 on market surveillance and compliance of products (‘New Legislative
Framework for the marketing of products’).
(52a) In line with New Legislative Framework principles, specific obligations for relevant
operators within the AI value chain should be set to ensure legal certainty and facilitate
compliance with this Regulation. In certain situations those operators could act in more than
one role at the same time and should therefore fulfil cumulatively all relevant obligations
associated with those roles. For example, an operator could act as a distributor and an
importer at the same time.
(53) It is appropriate that a specific natural or legal person, defined as the provider, takes the
responsibility for the placing on the market or putting into service of a high-risk AI system,
regardless of whether that natural or legal person is the person who designed or developed
the system.
22 Regulation (EC) No 765/2008 of the European Parliament and of the Council of 9 July 2008 setting out the
requirements for accreditation and market surveillance relating to the marketing of products and repealing
Regulation (EEC) No 339/93 (OJ L 218, 13.8.2008, p. 30).
23 Decision No 768/2008/EC of the European Parliament and of the Council of 9 July 2008 on a common
framework for the marketing of products, and repealing Council Decision 93/465/EEC (OJ L 218, 13.8.2008,
p. 82).
24 Regulation (EU) 2019/1020 of the European Parliament and of the Council of 20 June 2019 on market
surveillance and compliance of products and amending Directive 2004/42/EC and Regulations (EC) No
765/2008 and (EU) No 305/2011 (Text with EEA relevance) (OJ L 169, 25.6.2019, p. 1–44).
(54) The provider should establish a sound quality management system, ensure the
accomplishment of the required conformity assessment procedure, draw up the relevant
documentation and establish a robust post-market monitoring system. Public authorities
which put into service high-risk AI systems for their own use may adopt and implement the
rules for the quality management system as part of the quality management system adopted
at a national or regional level, as appropriate, taking into account the specificities of the
sector and the competences and organisation of the public authority in question.
(54a) To ensure legal certainty, it is necessary to clarify that, under certain specific conditions, any
natural or legal person should be considered a provider of a new high-risk AI system and
therefore assume all the relevant obligations. For example, this would be the case if that
person puts its name or trademark on a high-risk AI system already placed on the market or
put into service, or if that person modifies the intended purpose of an AI system which is not
high-risk and is already placed on the market or put into service, in a way that makes the
modified system a high-risk AI system. These provisions should apply without prejudice to
more specific provisions established in certain New Legislative Framework sectorial
legislation with which this Regulation should apply jointly. For example, Article 16,
paragraph 2 of Regulation 745/2017, establishing that certain changes should not be
considered modifications of a device that could affect its compliance with the applicable
requirements, should continue to apply to high-risk AI systems that are medical devices
within the meaning of that Regulation.
(55) Where a high-risk AI system that is a safety component of a product which is covered by a
relevant New Legislative Framework sectorial legislation is not placed on the market or put
into service independently from the product, the product manufacturer as defined under the
relevant New Legislative Framework legislation should comply with the obligations of the
provider established in this Regulation and notably ensure that the AI system embedded in
the final product complies with the requirements of this Regulation.
(56) To enable enforcement of this Regulation and create a level-playing field for operators, and
taking into account the different forms of making available of digital products, it is
important to ensure that, under all circumstances, a person established in the Union can
provide authorities with all the necessary information on the compliance of an AI system.
Therefore, prior to making their AI systems available in the Union, where an importer
cannot be identified, providers established outside the Union shall, by written mandate,
appoint an authorised representative established in the Union.
(56a) For providers who are not established in the Union, the authorised representative plays a
pivotal role in ensuring the compliance of the high-risk AI systems placed on the market or
put into service in the Union by those providers and in serving as their contact person
established in the Union. Given that pivotal role, and in order to ensure that responsibility is
assumed for the purposes of enforcement of this Regulation, it is appropriate to make the
authorised representative jointly and severally liable with the provider for defective high-risk AI systems. The liability of the authorised representative provided for in this Regulation
is without prejudice to the provisions of Directive 85/374/EEC on liability for defective
products.
(57) [deleted]
(58) Given the nature of AI systems and the risks to safety and fundamental rights possibly
associated with their use, including as regards the need to ensure proper monitoring of the
performance of an AI system in a real-life setting, it is appropriate to set specific
responsibilities for users. Users should in particular use high-risk AI systems in accordance
with the instructions of use and certain other obligations should be provided for with regard
to monitoring of the functioning of the AI systems and with regard to record-keeping, as
appropriate. These obligations should be without prejudice to other user obligations in
relation to high-risk AI systems under Union or national law, and should not apply where
the use is made in the course of a personal non-professional activity.
(58a) It is appropriate to clarify that this Regulation does not affect the obligations of providers
and users of AI systems in their role as data controllers or processors stemming from Union
law on the protection of personal data in so far as the design, the development or the use of
AI systems involves the processing of personal data. It is also appropriate to clarify that data
subjects continue to enjoy all the rights and guarantees awarded to them by such Union law,
including the rights related to solely automated individual decision-making, including
profiling. Harmonised rules for the placing on the market, the putting into service and the
use of AI systems established under this Regulation should facilitate the effective
implementation and enable the exercise of the data subjects’ rights and other remedies
guaranteed under Union law on the protection of personal data and of other fundamental
rights.
(59) [deleted]
(60) [deleted]
(61) Standardisation should play a key role to provide technical solutions to providers to ensure
compliance with this Regulation, in line with the state of the art. Compliance with
harmonised standards as defined in Regulation (EU) No 1025/2012 of the European
Parliament and of the Council25, which are normally expected to reflect the state of the
art, should be a means for providers to demonstrate conformity with the requirements of this
Regulation. However, in the absence of relevant references to harmonised standards, the
Commission should be able to establish, via implementing acts, common specifications for
certain requirements under this Regulation as an exceptional fall back solution to facilitate
the provider’s obligation to comply with the requirements of this Regulation, when the
standardisation process is blocked or when there are delays in the establishment of an
appropriate harmonised standard. If such delay is due to the technical complexity of the
standard in question, this should be considered by the Commission before contemplating the
establishment of common specifications. An appropriate involvement of small and medium
enterprises in the elaboration of standards supporting the implementation of this Regulation
is essential to promote innovation and competitiveness in the field of artificial intelligence
within the Union. Such involvement should be appropriately ensured in accordance with
Articles 5 and 6 of Regulation (EU) No 1025/2012.
25 Regulation (EU) No 1025/2012 of the European Parliament and of the Council of 25 October 2012 on
European standardisation, amending Council Directives 89/686/EEC and 93/15/EEC and Directives 94/9/EC,
94/25/EC, 95/16/EC, 97/23/EC, 98/34/EC, 2004/22/EC, 2007/23/EC, 2009/23/EC and 2009/105/EC of the
European Parliament and of the Council and repealing Council Decision 87/95/EEC and Decision No
1673/2006/EC of the European Parliament and of the Council (OJ L 316, 14.11.2012, p. 12).
(61a) It is appropriate that, without prejudice to the use of harmonised standards and common
specifications, providers benefit from a presumption of conformity with the relevant
requirement on data when their high-risk AI system has been trained and tested on data
reflecting the specific geographical, behavioural or functional setting within which the AI
system is intended to be used. Similarly, in line with Article 54(3) of Regulation (EU)
2019/881 of the European Parliament and of the Council, high-risk AI systems that have
been certified or for which a statement of conformity has been issued under a cybersecurity
scheme pursuant to that Regulation and the references of which have been published in the
Official Journal of the European Union should be presumed to be in compliance with the
cybersecurity requirement of this Regulation. This remains without prejudice to the
voluntary nature of that cybersecurity scheme.
(62) In order to ensure a high level of trustworthiness of high-risk AI systems, those systems
should be subject to a conformity assessment prior to their placing on the market or putting
into service.
(63) It is appropriate that, in order to minimise the burden on operators and avoid any possible
duplication, for high-risk AI systems related to products which are covered by existing
Union harmonisation legislation following the New Legislative Framework approach, the
compliance of those AI systems with the requirements of this Regulation should be assessed
as part of the conformity assessment already foreseen under that legislation. The
applicability of the requirements of this Regulation should thus not affect the specific logic,
methodology or general structure of conformity assessment under the relevant specific New
Legislative Framework legislation. This approach is fully reflected in the interplay between
this Regulation and the [Machinery Regulation]. While safety risks of AI systems ensuring
safety functions in machinery are addressed by the requirements of this Regulation, certain
specific requirements in the [Machinery Regulation] will ensure the safe integration of the
AI system into the overall machinery, so as not to compromise the safety of the machinery
as a whole. The [Machinery Regulation] applies the same definition of AI system as this
Regulation. With regard to high-risk AI systems related to products covered by Regulations
745/2017 and 746/2017 on medical devices, the applicability of the requirements of this
Regulation should be without prejudice and take into account the risk management logic and
benefit-risk assessment performed under the medical device framework.
(64) Given the more extensive experience of professional pre-market certifiers in the field of
product safety and the different nature of risks involved, it is appropriate to limit, at least in
an initial phase of application of this Regulation, the scope of application of third-party
conformity assessment for high-risk AI systems other than those related to products.
Therefore, the conformity assessment of such systems should be carried out as a general rule
by the provider under its own responsibility, with the only exception of AI systems intended
to be used for the remote biometric identification of persons, for which the involvement of a
notified body in the conformity assessment should be foreseen, to the extent they are not
prohibited.
(65) In order to carry out third-party conformity assessment for AI systems intended to be used
for the remote biometric identification of persons, notified bodies should be notified under
this Regulation by the national competent authorities, provided they are compliant with a set
of requirements, notably on independence, competence and absence of conflicts of interests.
Notification of those bodies should be sent by national competent authorities to the
Commission and the other Member States by means of the electronic notification tool
developed and managed by the Commission pursuant to Article R23 of Decision 768/2008.
(66) In line with the commonly established notion of substantial modification for products
regulated by Union harmonisation legislation, it is appropriate that whenever a change
occurs which may affect the compliance of a high risk AI system with this Regulation (e.g.
change of operating system or software architecture), or when the intended purpose of the
system changes, that AI system should be considered a new AI system which should
undergo a new conformity assessment. However, changes occurring to the algorithm and the
performance of AI systems which continue to ‘learn’ after being placed on the market or put
into service (i.e. automatically adapting how functions are carried out) should not constitute
a substantial modification, provided that those changes have been pre-determined by the
provider and assessed at the moment of the conformity assessment.
(67) High-risk AI systems should bear the CE marking to indicate their conformity with this
Regulation so that they can move freely within the internal market. Member States should
not create unjustified obstacles to the placing on the market or putting into service of high-risk AI systems that comply with the requirements laid down in this Regulation and bear the
CE marking.
(68) Under certain conditions, rapid availability of innovative technologies may be crucial for
health and safety of persons and for society as a whole. It is thus appropriate that under
exceptional reasons of public security or protection of life and health of natural persons and
the protection of industrial and commercial property, Member States could authorise the
placing on the market or putting into service of AI systems which have not undergone a
conformity assessment.
(69) In order to facilitate the work of the Commission and the Member States in the artificial
intelligence field as well as to increase the transparency towards the public, providers of
high-risk AI systems other than those related to products falling within the scope of relevant
existing Union harmonisation legislation, should be required to register themselves and
information about their high-risk AI system in an EU database, to be established and managed
by the Commission. Before using a high-risk AI system listed in Annex III, users of high-risk AI systems that are public authorities, agencies or bodies, with the exception of law
enforcement, border control, immigration or asylum authorities, and authorities that are
users of high-risk AI systems in the area of critical infrastructure shall also register
themselves in such a database and select the system that they envisage using. The
Commission should be the controller of that database, in accordance with Regulation (EU)
2018/1725 of the European Parliament and of the Council26. In order to ensure the full
functionality of the database, when deployed, the procedure for setting up the database should
include the elaboration of functional specifications by the Commission and an independent
audit report.
26 Regulation (EU) 2018/1725 of the European Parliament and of the Council of 23 October 2018 on the protection
of natural persons with regard to the processing of personal data by the Union institutions, bodies, offices and
agencies and on the free movement of such data, and repealing Regulation (EC) No 45/2001 and Decision No
1247/2002/EC (OJ L 295, 21.11.2018, p. 39).
(70) Certain AI systems intended to interact with natural persons or to generate content may pose
specific risks of impersonation or deception irrespective of whether they qualify as high-risk
or not. In certain circumstances, the use of these systems should therefore be subject to
specific transparency obligations without prejudice to the requirements and obligations for
high-risk AI systems. In particular, natural persons should be notified that they are
interacting with an AI system, unless this is obvious from the point of view of a natural
person who is reasonably well-informed, observant and circumspect taking into account the
circumstances and the context of use. When implementing such obligation, the
characteristics of individuals belonging to vulnerable groups due to their age or disability
should be taken into account to the extent the AI system is intended to interact with those
groups as well. Moreover, natural persons should be notified when they are exposed to
systems that, by processing their biometric data, can identify or infer the emotions or
intentions of those persons or assign them to specific categories. Such specific categories
can relate to aspects such as sex, age, hair colour, eye colour, tattoos, personal traits, ethnic
origin, personal preferences and interests or to other aspects such as sexual or political
orientation. Such information and notifications should be provided in accessible formats for
persons with disabilities. Further, users, who use an AI system to generate or manipulate
image, audio or video content that appreciably resembles existing persons, places or events
and would falsely appear to a person to be authentic, should disclose that the content has
been artificially created or manipulated by labelling the artificial intelligence output
accordingly and disclosing its artificial origin. The compliance with the information
obligations referred to above should not be interpreted as indicating that the use of the
system or its output is lawful under this Regulation or other Union and Member State law
and should be without prejudice to other transparency obligations for users of AI systems
laid down in Union or national law. Furthermore it should also not be interpreted as
indicating that the use of the system or its output impedes the right to freedom of expression
and the right to freedom of the arts and sciences guaranteed in the Charter of Fundamental
Rights of the EU, in particular where the content is part of an evidently creative, satirical,
artistic or fictional work or programme, subject to appropriate safeguards for the rights and
freedoms of third parties.
(71) Artificial intelligence is a rapidly developing family of technologies that requires novel
forms of regulatory oversight and a safe space for experimentation, while ensuring
responsible innovation and integration of appropriate safeguards and risk mitigation
measures. To ensure a legal framework that is innovation-friendly, future-proof and resilient
to disruption, national competent authorities from one or more Member States should be
encouraged to establish artificial intelligence regulatory sandboxes to facilitate the
development and testing of innovative AI systems under strict regulatory oversight before
these systems are placed on the market or otherwise put into service.
(72) The objectives of the AI regulatory sandboxes should be to foster AI innovation by
establishing a controlled experimentation and testing environment in the development and
pre-marketing phase with a view to ensuring compliance of the innovative AI systems with
this Regulation and other relevant Union and Member States legislation; to enhance legal
certainty for innovators and the competent authorities’ oversight and understanding of the
opportunities, emerging risks and the impacts of AI use, and to accelerate access to markets,
including by removing barriers for small and medium enterprises (SMEs), including start-ups. The participation in the AI regulatory sandbox should focus on issues that raise legal
uncertainty for providers and prospective providers to innovate, experiment with AI in the
Union and contribute to evidence-based regulatory learning. The supervision of the AI
systems in the AI regulatory sandbox should therefore cover their development, training,
testing and validation before the systems are placed on the market or put into service, as well
as the notion and occurrence of substantial modification that may require a new conformity
assessment procedure. Where appropriate, national competent authorities establishing AI
regulatory sandboxes should cooperate with other relevant authorities, including those
supervising the protection of fundamental rights, and could allow for the involvement of
other actors within the AI ecosystem such as national or European standardisation
organisations, notified bodies, testing and experimentation facilities, research and
experimentation labs, innovation hubs and relevant stakeholder and civil society
organisations. To ensure uniform implementation across the Union and economies of scale,
it is appropriate to establish common rules for the regulatory sandboxes’ implementation
and a framework for cooperation between the relevant authorities involved in the
supervision of the sandboxes. AI regulatory sandboxes established under this Regulation
should be without prejudice to other legislation allowing for the establishment of other
sandboxes aiming at ensuring compliance with legislation other than this Regulation. Where
appropriate, relevant competent authorities in charge of those other regulatory sandboxes
should consider the benefits of using those sandboxes also for the purpose of ensuring
compliance of AI systems with this Regulation. Upon agreement between the national
competent authorities and the participants in the AI regulatory sandbox, testing in real world
conditions may also be operated and supervised in the framework of the AI regulatory
sandbox.
(-72a) This Regulation should provide the legal basis for the participants in the AI regulatory
sandbox to use personal data collected for other purposes for developing certain AI systems
in the public interest within the AI regulatory sandbox, in line with Article 6(4) and 9(2)(g)
of Regulation (EU) 2016/679, and Article 5 and 10 of Regulation (EU) 2018/1725, and
without prejudice to Articles 4(2) and 10 of Directive (EU) 2016/680. All other obligations
of data controllers and rights of data subjects under Regulation (EU) 2016/679, Regulation
(EU) 2018/1725 and Directive (EU) 2016/680 remain applicable. In particular, this
Regulation should not provide a legal basis in the meaning of Article 22(2)(b) of Regulation
(EU) 2016/679 and Article 24(2)(b) of Regulation (EU) 2018/1725. Participants in the
sandbox should ensure appropriate safeguards and cooperate with the competent authorities,
including by following their guidance and acting expeditiously and in good faith to mitigate
any high-risks to safety and fundamental rights that may arise during the development and
experimentation in the sandbox. The conduct of the participants in the sandbox should be
taken into account when competent authorities decide whether to impose an administrative
fine under Article 83(2) of Regulation 2016/679 and Article 57 of Directive 2016/680.
(72a) In order to accelerate the process of development and placing on the market of high-risk AI
systems listed in Annex III, it is important that providers or prospective providers of such
systems may also benefit from a specific regime for testing those systems in real world
conditions, without participating in an AI regulatory sandbox. However, in such cases and
taking into account the possible consequences of such testing on individuals, it should be
ensured that appropriate and sufficient guarantees and conditions are introduced by the
Regulation for providers or prospective providers. Such guarantees should include, among
others, requesting informed consent of natural persons to participate in testing in real world
conditions, with the exception of law enforcement in cases where the seeking of informed
consent would prevent the AI system from being tested. Consent of subjects to participate in
such testing under this Regulation is distinct from and without prejudice to consent of data
subjects for the processing of their personal data under the relevant data protection law.
(73) In order to promote and protect innovation, it is important that the interests of SME
providers and users of AI systems are taken into particular account. To this objective,
Member States should develop initiatives, which are targeted at those operators, including
on awareness raising and information communication. Moreover, the specific interests and
needs of SME providers shall be taken into account when notified bodies set conformity
assessment fees. Translation costs related to mandatory documentation and communication
with authorities may constitute a significant cost for providers and other operators, notably
those of a smaller scale. Member States should possibly ensure that one of the languages
determined and accepted by them for relevant providers’ documentation and for
communication with operators is one which is broadly understood by the largest possible
number of cross-border users.
(73a) In order to promote and protect innovation, the AI-on-demand platform, all relevant EU
funding programmes and projects, such as Digital Europe Programme, Horizon Europe,
implemented by the Commission and the Member States at national or EU level should
contribute to the achievement of the objectives of this Regulation.
(74) In particular, in order to minimise the risks to implementation resulting from lack of
knowledge and expertise in the market as well as to facilitate compliance of providers,
notably SMEs, and notified bodies with their obligations under this Regulation, the AI-on-demand
platform, the European Digital Innovation Hubs and the Testing and
Experimentation Facilities established by the Commission and the Member States at national
or EU level should possibly contribute to the implementation of this Regulation. Within
their respective mission and fields of competence, they may provide in particular technical
and scientific support to providers and notified bodies.
(74a) Moreover, in order to ensure proportionality considering the very small size of some
operators regarding costs of innovation, it is appropriate to exempt microenterprises from
the most costly obligations, such as to establish a quality management system which would
reduce the administrative burden and the costs for those enterprises without affecting the
level of protection and the need for compliance with the requirements for high-risk AI
systems.
(75) It is appropriate that the Commission facilitates, to the extent possible, access to Testing and
Experimentation Facilities to bodies, groups or laboratories established or accredited
pursuant to any relevant Union harmonisation legislation and which fulfil tasks in the
context of conformity assessment of products or devices covered by that Union
harmonisation legislation. This is notably the case for expert panels, expert laboratories and
reference laboratories in the field of medical devices pursuant to Regulation (EU) 2017/745
and Regulation (EU) 2017/746.
(76) In order to facilitate a smooth, effective and harmonised implementation of this Regulation a
European Artificial Intelligence Board should be established. The Board should reflect the
various interests of the AI ecosystem and be composed of representatives of the Member
States. In order to ensure the involvement of relevant stakeholders, a standing subgroup of
the Board should be created. The Board should be responsible for a number of advisory
tasks, including issuing opinions, recommendations, advice or contributing to guidance on
matters related to the implementation of this Regulation, including on enforcement matters,
technical specifications or existing standards regarding the requirements established in this
Regulation and providing advice to the Commission and the Member States and their
national competent authorities on specific questions related to artificial intelligence. In order
to give some flexibility to Member States in the designation of their representatives in the AI
Board, such representatives may be any persons belonging to public entities who should
have the relevant competences and powers to facilitate coordination at national level and
contribute to the achievement of the Board's tasks. The Board should establish two standing
sub-groups to provide a platform for cooperation and exchange among market surveillance
authorities and notifying authorities on issues related respectively to market surveillance and
notified bodies. The standing subgroup for market surveillance should act as the
Administrative Cooperation Group (ADCO) for this Regulation in the meaning of Article 30
of Regulation (EU) 2019/1020. In line with the role and tasks of the Commission pursuant to
Article 33 of Regulation (EU) 2019/1020, the Commission should support the activities of
the standing subgroup for market surveillance by undertaking market evaluations or studies,
notably with a view to identifying aspects of this Regulation requiring specific and urgent
coordination among market surveillance authorities. The Board may establish other standing
or temporary sub-groups as appropriate for the purpose of examining specific issues. The
Board should also cooperate, as appropriate, with relevant EU bodies, experts groups and
networks active in the context of relevant EU legislation, including in particular those active
under relevant EU regulation on data, digital products and services.
(76a) The Commission should actively support the Member States and operators in the
implementation and enforcement of this Regulation. In this regard it should develop
guidelines on particular topics aiming at facilitating the application of this Regulation, while
paying particular attention to the needs of SMEs and start-ups in sectors most likely to be
affected. In order to support adequate enforcement and the capacities of the Member States,
Union testing facilities on AI and a pool of relevant experts should be established and made
available to the Member States.
(77) Member States hold a key role in the application and enforcement of this Regulation. In this
respect, each Member State should designate one or more national competent authorities for
the purpose of supervising the application and implementation of this Regulation. Member
States may decide to appoint any kind of public entity to perform the tasks of the national
competent authorities within the meaning of this Regulation, in accordance with their
specific national organisational characteristics and needs.
(78) In order to ensure that providers of high-risk AI systems can take into account the
experience on the use of high-risk AI systems for improving their systems and the design
and development process or can take any possible corrective action in a timely manner, all
providers should have a post-market monitoring system in place. This system is also key to
ensure that the possible risks emerging from AI systems which continue to ‘learn’ after
being placed on the market or put into service can be addressed in a more efficient and timely manner.
In this context, providers should also be required to have a system in place to report to the
relevant authorities any serious incidents resulting from the use of their AI systems.
(79) In order to ensure an appropriate and effective enforcement of the requirements and
obligations set out by this Regulation, which is Union harmonisation legislation, the system
of market surveillance and compliance of products established by Regulation (EU)
2019/1020 should apply in its entirety. Market surveillance authorities designated pursuant
to this Regulation should have all enforcement powers under this Regulation and Regulation
(EU) 2019/1020 and should exercise their powers and carry out their duties independently,
impartially and without bias. Although the majority of AI systems are not subject to specific
requirements and obligations under this Regulation, market surveillance authorities may take
measures in relation to all AI systems when they present a risk in accordance with this
Regulation. Due to the specific nature of Union institutions, agencies and bodies falling
within the scope of this Regulation, it is appropriate to designate the European Data
Protection Supervisor as a competent market surveillance authority for them. This should be
without prejudice to the designation of national competent authorities by the Member States.
Market surveillance activities should not affect the ability of the supervised entities to carry
out their tasks independently, when such independence is required by Union law.
(79a) This Regulation is without prejudice to the competences, tasks, powers and independence of
relevant national public authorities or bodies which supervise the application of Union law
protecting fundamental rights, including equality bodies and data protection authorities.
Where necessary for their mandate, those national public authorities or bodies should also
have access to any documentation created under this Regulation. A specific safeguard
procedure should be set for ensuring adequate and timely enforcement against AI systems
presenting a risk to health, safety and fundamental rights. The procedure for such AI
systems presenting a risk should be applied to high-risk AI systems presenting a risk,
prohibited systems which have been placed on the market, put into service or used in
violation of the prohibited practices laid down in this Regulation and AI systems which have
been made available in violation of the transparency requirements laid down in this
Regulation and present a risk.
(80) Union legislation on financial services includes internal governance and risk management
rules and requirements which are applicable to regulated financial institutions in the course
of provision of those services, including when they make use of AI systems. In order to
ensure coherent application and enforcement of the obligations under this Regulation and
relevant rules and requirements of the Union financial services legislation, the authorities
responsible for the supervision and enforcement of the financial services legislation should
be designated as competent authorities for the purpose of supervising the implementation of
this Regulation, including for market surveillance activities, as regards AI systems provided
or used by regulated and supervised financial institutions unless Member States decide to
designate another authority to fulfill these market surveillance tasks. Those competent
authorities should have all powers under this Regulation and Regulation (EU) 2019/1020 on
market surveillance to enforce the requirements and obligations of this Regulation, including
powers to carry out ex post market surveillance activities that can be integrated, as
appropriate, into their existing supervisory mechanisms and procedures under the relevant
Union financial services legislation. It is appropriate to envisage that, when acting as market
surveillance authorities under this Regulation, the national authorities responsible for the
supervision of credit institutions regulated under Directive 2013/36/EU, which are
participating in the Single Supervisory Mechanism (SSM) established by Council
Regulation No 1024/2013, should report, without delay, to the European Central Bank any
information identified in the course of their market surveillance activities that may be of
potential interest for the European Central Bank’s prudential supervisory tasks as specified
in that Regulation. To further enhance the consistency between this Regulation and the rules
applicable to credit institutions regulated under Directive 2013/36/EU of the European
Parliament and of the Council27, it is also appropriate to integrate some of the providers’
procedural obligations in relation to risk management, post-market monitoring and
documentation into the existing obligations and procedures under Directive 2013/36/EU. In
order to avoid overlaps, limited derogations should also be envisaged in relation to the
quality management system of providers and the monitoring obligation placed on users of
high-risk AI systems to the extent that these apply to credit institutions regulated by
Directive 2013/36/EU. The same regime should apply to insurance and re-insurance
undertakings and insurance holding companies under Directive 2009/138/EC (Solvency II)
27 Directive 2013/36/EU of the European Parliament and of the Council of 26 June 2013 on access to the activity
of credit institutions and the prudential supervision of credit institutions and investment firms, amending
Directive 2002/87/EC and repealing Directives 2006/48/EC and 2006/49/EC (OJ L 176, 27.6.2013, p. 338).
and the insurance intermediaries under Directive 2016/97/EU and other types of financial
institutions subject to requirements regarding internal governance, arrangements or
processes established pursuant to the relevant Union financial services legislation to ensure
consistency and equal treatment in the financial sector.
(81) The development of AI systems other than high-risk AI systems in accordance with the
requirements of this Regulation may lead to a larger uptake of trustworthy artificial
intelligence in the Union. Providers of non-high-risk AI systems should be encouraged to
create codes of conduct intended to foster the voluntary application of the requirements
applicable to high-risk AI systems, adapted in light of the intended purpose of the systems
and the lower risk involved. Providers should also be encouraged to apply on a voluntary
basis additional requirements related, for example, to environmental sustainability,
accessibility to persons with disability, stakeholders’ participation in the design and
development of AI systems, and diversity of the development teams. The Commission may
develop initiatives, including of a sectorial nature, to facilitate the lowering of technical
barriers hindering cross-border exchange of data for AI development, including on data
access infrastructure, semantic and technical interoperability of different types of data.
(82) It is important that AI systems related to products that are not high-risk in accordance with
this Regulation and thus are not required to comply with the requirements set out herein are
nevertheless safe when placed on the market or put into service. To contribute to this
objective, the Directive 2001/95/EC of the European Parliament and of the Council28 would apply as a safety net.
(83) In order to ensure trustful and constructive cooperation of competent authorities on Union
and national level, all parties involved in the application of this Regulation should respect
the confidentiality of information and data obtained in carrying out their tasks, in accordance
with Union or national law.
28 Directive 2001/95/EC of the European Parliament and of the Council of 3 December 2001 on general product
safety (OJ L 11, 15.1.2002, p. 4).
(84) Member States should take all necessary measures to ensure that the provisions of this
Regulation are implemented, including by laying down effective, proportionate and
dissuasive penalties for their infringement, and in respect of the ne bis in idem principle. For
certain specific infringements, Member States should take into account the margins and
criteria set out in this Regulation. The European Data Protection Supervisor should have the
power to impose fines on Union institutions, agencies and bodies falling within the scope of
this Regulation.
(85) In order to ensure that the regulatory framework can be adapted where necessary, the power
to adopt acts in accordance with Article 290 TFEU should be delegated to the Commission
to amend the Union harmonisation legislation listed in Annex II, the high-risk AI systems
listed in Annex III, the provisions regarding technical documentation listed in Annex IV, the
content of the EU declaration of conformity in Annex V, the provisions regarding the
conformity assessment procedures in Annex VI and VII and the provisions establishing the
high-risk AI systems to which the conformity assessment procedure based on assessment of
the quality management system and assessment of the technical documentation should
apply. It is of particular importance that the Commission carry out appropriate consultations
during its preparatory work, including at expert level, and that those consultations be
conducted in accordance with the principles laid down in the Interinstitutional Agreement of
13 April 2016 on Better Law-Making29. In particular, to ensure equal participation in the
preparation of delegated acts, the European Parliament and the Council receive all
documents at the same time as Member States’ experts, and their experts systematically
have access to meetings of Commission expert groups dealing with the preparation of
delegated acts. Such consultations and advisory support should also be carried out in the
framework of the activities of the AI Board and its subgroups.
29 OJ L 123, 12.5.2016, p. 1.
(86) In order to ensure uniform conditions for the implementation of this Regulation,
implementing powers should be conferred on the Commission. Those powers should be
exercised in accordance with Regulation (EU) No 182/2011 of the European Parliament and
of the Council30. It is of particular importance that, in accordance with the principles laid
down in the Interinstitutional Agreement of 13 April 2016 on Better Law-Making, whenever
broader expertise is needed in the early preparation of draft implementing acts, the
Commission makes use of expert groups, consults targeted stakeholders or carries out public
consultations, as appropriate. Such consultations and advisory support should also be carried
out in the framework of the activities of the AI Board and its subgroups, including the
preparation of implementing acts in relation to Articles 4, 4b and 6.
(87) Since the objective of this Regulation cannot be sufficiently achieved by the Member States
and can rather, by reason of the scale or effects of the action, be better achieved at Union
level, the Union may adopt measures in accordance with the principle of subsidiarity as set
out in Article 5 TEU. In accordance with the principle of proportionality as set out in that
Article, this Regulation does not go beyond what is necessary in order to achieve that
objective.
(87a) In order to ensure legal certainty, ensure an appropriate adaptation period for operators and
avoid disruption to the market, including by ensuring continuity of the use of AI systems, it
is appropriate that this Regulation applies to the high-risk AI systems that have been placed
on the market or put into service before the general date of application thereof, only if, from
that date, those systems are subject to significant changes in their design or intended
purpose. It is appropriate to clarify that, in this respect, the concept of significant change
should be understood as equivalent in substance to the notion of substantial modification,
which is used with regard only to high-risk AI systems as defined in this Regulation.
30 Regulation (EU) No 182/2011 of the European Parliament and of the Council of 16 February 2011 laying
down the rules and general principles concerning mechanisms for control by the Member States of the
Commission's exercise of implementing powers (OJ L 55, 28.2.2011, p.13).
(88) This Regulation should apply from … [OP – please insert the date established in Art. 85].
However, the infrastructure related to the governance and the conformity assessment system
should be operational before that date, therefore the provisions on notified bodies and
governance structure should apply from … [OP – please insert the date – three months
following the entry into force of this Regulation]. In addition, Member States should lay
down and notify to the Commission the rules on penalties, including administrative fines,
and ensure that they are properly and effectively implemented by the date of application of
this Regulation. Therefore the provisions on penalties should apply from [OP – please insert
the date – twelve months following the entry into force of this Regulation].
(89) The European Data Protection Supervisor and the European Data Protection Board were
consulted in accordance with Article 42(2) of Regulation (EU) 2018/1725 and delivered an
opinion on […].
HAVE ADOPTED THIS REGULATION:
TITLE I
GENERAL PROVISIONS
Article 1
Subject matter
This Regulation lays down:
(a) harmonised rules for the placing on the market, the putting into service and the use of
artificial intelligence systems (‘AI systems’) in the Union;
(b) prohibitions of certain artificial intelligence practices;
(c) specific requirements for high-risk AI systems and obligations for operators of such
systems;
(d) harmonised transparency rules for certain AI systems;
(e) rules on market monitoring, market surveillance and governance;
(f) measures in support of innovation.
Article 2
Scope
1. This Regulation applies to:
(a) providers placing on the market or putting into service AI systems in the Union,
irrespective of whether those providers are physically present or established within
the Union or in a third country;
(b) users of AI systems who are physically present or established within the Union;
(c) providers and users of AI systems who are physically present or established in a third
country, where the output produced by the system is used in the Union;
(d) importers and distributors of AI systems;
(e) product manufacturers placing on the market or putting into service an AI system
together with their product and under their own name or trademark;
(f) authorised representatives of providers, which are established in the Union;
2. For AI systems classified as high-risk AI systems in accordance with Articles 6(1) and 6(2)
related to products covered by Union harmonisation legislation listed in Annex II, section
B, only Article 84 of this Regulation shall apply. Article 53 shall apply only insofar as the
requirements for high-risk AI systems under this Regulation have been integrated under
that Union harmonisation legislation.
3. This Regulation shall not apply to AI systems if and insofar as they are placed on the market, put into
service, or used with or without modification of such systems for the purpose of activities
which fall outside the scope of Union law, and in any event activities concerning military,
defence or national security, regardless of the type of entity carrying out those activities.
In addition, this Regulation shall not apply to AI systems which are not placed on the
market or put into service in the Union, where the output is used in the Union for the
purpose of activities which fall outside the scope of Union law, and in any event activities
concerning military, defence or national security, regardless of the type of entity carrying
out those activities.
4. This Regulation shall not apply to public authorities in a third country nor to international
organisations falling within the scope of this Regulation pursuant to paragraph 1, where
those authorities or organisations use AI systems in the framework of international
agreements for law enforcement and judicial cooperation with the Union or with one or
more Member States.
5. This Regulation shall not affect the application of the provisions on the liability of
intermediary service providers set out in Chapter II, Section 4 of Directive 2000/31/EC of
the European Parliament and of the Council31 [as to be replaced by the corresponding
provisions of the Digital Services Act].
6. This Regulation shall not apply to AI systems, including their output, specifically developed
and put into service for the sole purpose of scientific research and development.
7. This Regulation shall not apply to any research and development activity regarding AI
systems.
8. This Regulation shall not apply to obligations of users who are natural persons using AI
systems in the course of a purely personal non-professional activity, except Article 52.
31 Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of
information society services, in particular electronic commerce, in the Internal Market ('Directive on electronic
commerce') (OJ L 178, 17.7.2000, p. 1).
Article 3
Definitions
For the purpose of this Regulation, the following definitions apply:
(1) ‘artificial intelligence system’ (AI system) means a system that is designed to operate with
elements of autonomy and that, based on machine and/or human-provided data and inputs,
infers how to achieve a given set of objectives using machine learning and/or logic- and
knowledge-based approaches, and produces system-generated outputs such as content
(generative AI systems), predictions, recommendations or decisions, influencing the
environments with which the AI system interacts;
(1a) ‘life cycle of an AI system’ means the duration of an AI system, from design through
retirement. Without prejudice to the powers of the market surveillance authorities, such
retirement may happen at any point in time during the post-market monitoring phase upon
the decision of the provider and implies that the system may not be used further. An AI
system lifecycle is also ended by a substantial modification to the AI system made by the
provider or any other natural or legal person, in which case the substantially modified AI
system shall be considered as a new AI system.
(1b) ‘general purpose AI system’ means an AI system that - irrespective of how it is placed on
the market or put into service, including as open source software - is intended by the
provider to perform generally applicable functions such as image and speech recognition,
audio and video generation, pattern detection, question answering, translation and others; a
general purpose AI system may be used in a plurality of contexts and be integrated in a
plurality of other AI systems;
(2) ‘provider’ means a natural or legal person, public authority, agency or other body that
develops an AI system or that has an AI system developed and places that system on the
market or puts it into service under its own name or trademark, whether for payment or
free of charge;
(3) [deleted];
(3a) ‘small and medium-sized enterprise’ (SMEs) means an enterprise as defined in the Annex
of Commission Recommendation 2003/361/EC concerning the definition of micro, small
and medium-sized enterprises;
(4) ‘user’ means any natural or legal person, including a public authority, agency or other
body, under whose authority the system is used;
(5) ‘authorised representative’ means any natural or legal person physically present or
established in the Union who has received and accepted a written mandate from a provider
of an AI system to, respectively, perform and carry out on its behalf the obligations and
procedures established by this Regulation;
(5a) ‘product manufacturer’ means a manufacturer within the meaning of any of the Union
harmonisation legislation listed in Annex II;
(6) ‘importer’ means any natural or legal person physically present or established in the Union
that places on the market an AI system that bears the name or trademark of a natural or
legal person established outside the Union;
(7) ‘distributor’ means any natural or legal person in the supply chain, other than the provider
or the importer, that makes an AI system available on the Union market;
(8) ‘operator’ means the provider, the product manufacturer, the user, the authorised
representative, the importer or the distributor;
(9) ‘placing on the market’ means the first making available of an AI system on the Union
market;
(10) ‘making available on the market’ means any supply of an AI system for distribution or use
on the Union market in the course of a commercial activity, whether in return for payment
or free of charge;
(11) ‘putting into service’ means the supply of an AI system for first use directly to the user or
for own use in the Union for its intended purpose;
(12) ‘intended purpose’ means the use for which an AI system is intended by the provider,
including the specific context and conditions of use, as specified in the information
supplied by the provider in the instructions for use, promotional or sales materials and
statements, as well as in the technical documentation;
(13) ‘reasonably foreseeable misuse’ means the use of an AI system in a way that is not in
accordance with its intended purpose, but which may result from reasonably foreseeable
human behaviour or interaction with other systems;
(14) ‘safety component of a product or system’ means a component of a product or of a system
which fulfils a safety function for that product or system or the failure or malfunctioning of
which endangers the health and safety of persons or property;
(15) ‘instructions for use’ means the information provided by the provider to inform the user of,
in particular, an AI system’s intended purpose and proper use;
(16) ‘recall of an AI system’ means any measure aimed at achieving the return to the provider
of an AI system made available to users, or at taking it out of service or disabling its use;
(17) ‘withdrawal of an AI system’ means any measure aimed at preventing an AI system in the
supply chain being made available on the market;
(18) ‘performance of an AI system’ means the ability of an AI system to achieve its intended
purpose;
(19) ‘conformity assessment’ means the process of verifying whether the requirements set out
in Title III, Chapter 2 of this Regulation relating to a high-risk AI system have been
fulfilled;
(20) ‘notifying authority’ means the national authority responsible for setting up and carrying
out the necessary procedures for the assessment, designation and notification of conformity
assessment bodies and for their monitoring;
(21) ‘conformity assessment body’ means a body that performs third-party conformity
assessment activities, including testing, certification and inspection;
(22) ‘notified body’ means a conformity assessment body designated in accordance with this
Regulation and other relevant Union harmonisation legislation;
(23) ‘substantial modification’ means a change to the AI system following its placing on the
market or putting into service which affects the compliance of the AI system with the
requirements set out in Title III, Chapter 2 of this Regulation, or a modification to the
intended purpose for which the AI system has been assessed. For high-risk AI systems that
continue to learn after being placed on the market or put into service, changes to the high-risk AI system and its performance that have been pre-determined by the provider at the
moment of the initial conformity assessment and are part of the information contained in
the technical documentation referred to in point 2(f) of Annex IV, shall not constitute a
substantial modification.
(24) ‘CE marking of conformity’ (CE marking) means a marking by which a provider indicates
that an AI system is in conformity with the requirements set out in Title III, Chapter 2 or in
Article 4b of this Regulation and other applicable Union legal act harmonising the
conditions for the marketing of products (‘Union harmonisation legislation’) providing for
its affixing;
(25) ‘post-market monitoring system’ means all activities carried out by providers of AI
systems to collect and review experience gained from the use of AI systems they place on
the market or put into service for the purpose of identifying any need to immediately apply
any necessary corrective or preventive actions;
(26) ‘market surveillance authority’ means the national authority carrying out the activities and
taking the measures pursuant to Regulation (EU) 2019/1020;
(27) ‘harmonised standard’ means a European standard as defined in Article 2(1)(c) of
Regulation (EU) No 1025/2012;
(28) ‘common specification’ means a set of technical specifications, as defined in point 4 of
Article 2 of Regulation (EU) No 1025/2012 providing means to comply with certain
requirements established under this Regulation;
(29) ‘training data’ means data used for training an AI system through fitting its learnable
parameters;
(30) ‘validation data’ means data used for providing an evaluation of the trained AI system and
for tuning its non-learnable parameters and its learning process, among other things, in
order to prevent overfitting; whereas the validation dataset can be a separate dataset or part
of the training dataset, either as a fixed or variable split;
(31) ‘testing data’ means data used for providing an independent evaluation of the trained and
validated AI system in order to confirm the expected performance of that system before its
placing on the market or putting into service;
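[Illustrative note – not part of the Regulation] The three data categories in points (29) to (31) correspond to the conventional partition of a dataset in machine-learning practice. The following Python sketch illustrates one fixed split of that kind; the split ratios, the function name fixed_split and the record format are assumptions made purely for this example.

import random

def fixed_split(records, train_share=0.7, validation_share=0.15, seed=42):
    # Training data (point 29): used to fit the learnable parameters.
    # Validation data (point 30): used to tune non-learnable parameters
    # and the learning process, e.g. to detect overfitting.
    # Testing data (point 31): held out for an independent evaluation
    # before placing on the market or putting into service.
    shuffled = list(records)
    random.Random(seed).shuffle(shuffled)
    n = len(shuffled)
    n_train = int(n * train_share)
    n_val = int(n * validation_share)
    training = shuffled[:n_train]
    validation = shuffled[n_train:n_train + n_val]
    testing = shuffled[n_train + n_val:]
    return training, validation, testing

# Example: split 1000 records into the three roles.
training, validation, testing = fixed_split(range(1000))

As point (30) itself notes, the validation dataset may alternatively be a variable part of the training dataset (for example a rotating split) rather than the fixed split sketched here.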
(32) ‘input data’ means data provided to or directly acquired by an AI system on the basis of
which the system produces an output;
(33) ‘biometric data’ means personal data resulting from specific technical processing relating
to the physical, physiological or behavioural characteristics of a natural person, such as
facial images or dactyloscopic data;
(34) ‘emotion recognition system’ means an AI system for the purpose of identifying or
inferring psychological states, emotions or intentions of natural persons on the basis of
their biometric data;
(35) ‘biometric categorisation system’ means an AI system for the purpose of assigning natural
persons to specific categories on the basis of their biometric data;
(36) ‘remote biometric identification system’ means an AI system for the purpose of identifying
natural persons typically at a distance, without their active involvement, through the
comparison of a person’s biometric data with the biometric data contained in a reference
data repository;
(37) ‘‘real-time’ remote biometric identification system’ means a remote biometric
identification system whereby the capturing of biometric data, the comparison and the
identification all occur instantaneously or near instantaneously;
(38) [deleted]
(39) ‘publicly accessible space’ means any publicly or privately owned physical place
accessible to an undetermined number of natural persons regardless of whether certain
conditions or circumstances for access have been predetermined, and regardless of the
potential capacity restrictions;
(40) ‘law enforcement authority’ means:
(a) any public authority competent for the prevention, investigation, detection or
prosecution of criminal offences or the execution of criminal penalties, including the
safeguarding against and the prevention of threats to public security; or
(b) any other body or entity entrusted by Member State law to exercise public authority
and public powers for the purposes of the prevention, investigation, detection or
prosecution of criminal offences or the execution of criminal penalties, including the
safeguarding against and the prevention of threats to public security;
(41) ‘law enforcement’ means activities carried out by law enforcement authorities or on their
behalf for the prevention, investigation, detection or prosecution of criminal offences or the
execution of criminal penalties, including the safeguarding against and the prevention of
threats to public security;
(42) [deleted]
(43) ‘national competent authority’ means any of the following: the notifying authority and the
market surveillance authority. As regards AI systems put into service or used by EU
institutions, agencies, offices and bodies, the European Data Protection Supervisor shall
fulfil the responsibilities that in the Member States are entrusted to the national competent
authority and, as relevant, any reference to national competent authorities or market
surveillance authorities in this Regulation shall be understood as referring to the European
Data Protection Supervisor;
(44) ‘serious incident’ means any incident or malfunctioning of an AI system that directly or
indirectly leads to any of the following:
(a) the death of a person or serious damage to a person’s health;
(b) a serious and irreversible disruption of the management and operation of critical
infrastructure;
(c) breach of obligations under Union law intended to protect fundamental rights;
(d) serious damage to property or the environment.
(45) ‘critical infrastructure’ means an asset, system or part thereof which is necessary for the
delivery of a service that is essential for the maintenance of vital societal functions or
economic activities within the meaning of Article 2(4) and (5) of Directive …../….. on the
resilience of critical entities;
(46) ‘personal data’ means data as defined in point (1) of Article 4 of Regulation (EU)
2016/679;
(47) ‘non-personal data’ means data other than personal data as defined in point (1) of Article 4
of Regulation (EU) 2016/679;
(48) ‘testing in real world conditions’ means the temporary testing of an AI system for its
intended purpose in real world conditions outside of a laboratory or otherwise simulated
environment with a view to gathering reliable and robust data and to assessing and
verifying the conformity of the AI system with the requirements of this Regulation; testing
in real world conditions shall not be considered as placing the AI system on the market or
putting it into service within the meaning of this Regulation, provided that all conditions
under Article 53 or Article 54a are fulfilled;
(49) ‘real world testing plan’ means a document that describes the objectives, methodology,
geographical, population and temporal scope, monitoring, organisation and conduct of
testing in real world conditions;
(50) ‘subject’ for the purpose of real world testing means a natural person who participates in
testing in real world conditions;
(51) ‘informed consent’ means a subject's free and voluntary expression of his or her
willingness to participate in a particular testing in real world conditions, after having been
informed of all aspects of the testing that are relevant to the subject's decision to
participate; in the case of minors and of incapacitated subjects, the informed consent shall
be given by their legally designated representative;
(52) ‘AI regulatory sandbox’ means a concrete framework set up by a national competent
authority which offers providers or prospective providers of AI systems the possibility to
develop, train, validate and test, where appropriate in real world conditions, an innovative
AI system, pursuant to a specific plan for a limited time under regulatory supervision.
Article 4
Implementing acts
In order to ensure uniform conditions for the implementation of this Regulation as regards machine
learning approaches and logic- and knowledge-based approaches referred to in Article 3(1), the
Commission may adopt implementing acts to specify the technical elements of those approaches,
taking into account market and technological developments. Those implementing acts shall be
adopted in accordance with the examination procedure referred to in Article 74(2).
TITLE IA
GENERAL PURPOSE AI SYSTEMS
Article 4a
Compliance of general purpose AI systems with this Regulation
1. Without prejudice to Articles 5, 52, 53 and 69 of this Regulation, general purpose AI
systems shall only comply with the requirements and obligations set out in Article 4b.
2. Such requirements and obligations shall apply irrespective of whether the general
purpose AI system is placed on the market or put into service as a pre-trained model
and whether further fine-tuning of the model is to be performed by the user of the
general purpose AI system.
Article 4b
Requirements for general purpose AI systems and obligations for providers of such
systems
1. General purpose AI systems which may be used as high risk AI systems or as
components of high risk AI systems in the meaning of Article 6, shall comply with the
requirements established in Title III, Chapter 2 of this Regulation as from the date of
application of the implementing acts adopted by the Commission in accordance with
the examination procedure referred to in Article 74(2) no later than 18 months after the
entry into force of this Regulation. Those implementing acts shall specify and adapt the
application of the requirements established in Title III, Chapter 2 to general purpose AI
systems in the light of their characteristics, technical feasibility, specificities of the AI
value chain and of market and technological developments. When fulfilling those
requirements, the generally acknowledged state of the art shall be taken into account.
2. Providers of general purpose AI systems referred to in paragraph 1 shall comply, as
from the date of application of the implementing acts referred to in paragraph 1, with
the obligations set out in Articles 16aa, 16e, 16f, 16g, 16i, 16j, 25, 48 and 61.
3. For the purpose of complying with the obligations set out in Article 16e, providers
shall follow the conformity assessment procedure based on internal control set out in
Annex VI, points 3 and 4.
4. Providers of such systems shall also keep the technical documentation referred to in
Article 11 at the disposal of the national competent authorities for a period ending ten
years after the general purpose AI system is placed on the Union market or put into
service in the Union.
5. Providers of general purpose AI systems shall cooperate with and provide the
necessary information to other providers intending to put into service or place such
systems on the Union market as high-risk AI systems or as components of high-risk AI
systems, with a view to enabling the latter to comply with their obligations under this
Regulation. Such cooperation between providers shall preserve, as appropriate,
intellectual property rights, and confidential business information or trade secrets in
accordance with Article 70. In order to ensure uniform conditions for the
implementation of this Regulation as regards the information to be shared by the
providers of general purpose AI systems, the Commission may adopt implementing
acts in accordance with the examination procedure referred to in Article 74(2).
6. In complying with the requirements and obligations referred to in paragraphs 1, 2 and
3:
- any reference to the intended purpose shall be understood as referring to possible use
of the general purpose AI systems as high risk AI systems or as components of high risk
AI systems in the meaning of Article 6;
- any reference to the requirements for high-risk AI systems in Chapter II, Title III shall
be understood as referring only to the requirements set out in the present Article.
Article 4c
Exceptions to Article 4b
1. Article 4b shall not apply when the provider has explicitly excluded all high-risk uses
in the instructions for use or information accompanying the general purpose AI system.
2. Such exclusion shall be made in good faith and shall not be deemed justified if the
provider has sufficient reasons to consider that the system may be misused.
3. When the provider detects or is informed about market misuse, it shall take all
necessary and proportionate measures to prevent such further misuse, in particular
taking into account the scale of the misuse and the seriousness of the associated risks.
TITLE II
PROHIBITED ARTIFICIAL INTELLIGENCE PRACTICES
Article 5
1. The following artificial intelligence practices shall be prohibited:
(a) the placing on the market, putting into service or use of an AI system that deploys
subliminal techniques beyond a person’s consciousness with the objective to or the
effect of materially distorting a person’s behaviour in a manner that causes or is
reasonably likely to cause that person or another person physical or psychological
harm;
(b) the placing on the market, putting into service or use of an AI system that exploits
any of the vulnerabilities of a specific group of persons due to their age, disability or
a specific social or economic situation, with the objective to or the effect of
materially distorting the behaviour of a person pertaining to that group in a manner
that causes or is reasonably likely to cause that person or another person physical or
psychological harm;
(c) the placing on the market, putting into service or use of AI systems for the evaluation
or classification of natural persons over a certain period of time based on their social
behaviour or known or predicted personal or personality characteristics, with the
social score leading to either or both of the following:
(i) detrimental or unfavourable treatment of certain natural persons or groups
thereof in social contexts which are unrelated to the contexts in which the data
was originally generated or collected;
(ii) detrimental or unfavourable treatment of certain natural persons or groups
thereof that is unjustified or disproportionate to their social behaviour or its
gravity;
(d) the use of ‘real-time’ remote biometric identification systems in publicly accessible
spaces by law enforcement authorities or on their behalf for the purpose of law
enforcement, unless and in as far as such use is strictly necessary for one of the
following objectives:
(i) the targeted search for specific potential victims of crime;
(ii) the prevention of a specific and substantial threat to the critical infrastructure,
life, health or physical safety of natural persons or the prevention of terrorist
attacks;
(iii) the localisation or identification of a natural person for the purposes of
conducting a criminal investigation, prosecution or executing a criminal
penalty for offences, referred to in Article 2(2) of Council Framework Decision
2002/584/JHA32 and punishable in the Member State concerned by a custodial
sentence or a detention order for a maximum period of at least three years, or
other specific offences punishable in the Member State concerned by a
custodial sentence or a detention order for a maximum period of at least five
years, as determined by the law of that Member State.
2. The use of ‘real-time’ remote biometric identification systems in publicly accessible spaces
for the purpose of law enforcement for any of the objectives referred to in paragraph 1
point d) shall take into account the following elements:
(a) the nature of the situation giving rise to the possible use, in particular the seriousness,
probability and scale of the harm caused in the absence of the use of the system;
32 Council Framework Decision 2002/584/JHA of 13 June 2002 on the European arrest warrant and the surrender
procedures between Member States (OJ L 190, 18.7.2002, p. 1).
(b) the consequences of the use of the system for the rights and freedoms of all persons
concerned, in particular the seriousness, probability and scale of those consequences.
In addition, the use of ‘real-time’ remote biometric identification systems in publicly
accessible spaces for the purpose of law enforcement for any of the objectives referred to
in paragraph 1 point d) shall comply with necessary and proportionate safeguards and
conditions in relation to the use, in particular as regards the temporal, geographic and
personal limitations.
3. As regards paragraphs 1, point (d) and 2, each use for the purpose of law enforcement of a
‘real-time’ remote biometric identification system in publicly accessible spaces shall be
subject to a prior authorisation granted by a judicial authority or by an independent
administrative authority of the Member State in which the use is to take place, issued upon
a reasoned request and in accordance with the detailed rules of national law referred to in
paragraph 4. However, in a duly justified situation of urgency, the use of the system may
be commenced without an authorisation, provided that such authorisation is requested
without undue delay during use of the AI system; if such authorisation is rejected, its use
shall be stopped with immediate effect.
The competent judicial or administrative authority shall only grant the authorisation where
it is satisfied, based on objective evidence or clear indications presented to it, that the use
of the ‘real-time’ remote biometric identification system at issue is necessary for and
proportionate to achieving one of the objectives specified in paragraph 1, point (d), as
identified in the request. In deciding on the request, the competent judicial or
administrative authority shall take into account the elements referred to in paragraph 2.
4. A Member State may decide to provide for the possibility to fully or partially authorise the
use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for
the purpose of law enforcement within the limits and under the conditions listed in
paragraphs 1, point (d), 2 and 3. That Member State shall lay down in its national law the
necessary detailed rules for the request, issuance and exercise of, as well as supervision
and reporting relating to, the authorisations referred to in paragraph 3. Those rules shall
also specify in respect of which of the objectives listed in paragraph 1, point (d), including
which of the criminal offences referred to in point (iii) thereof, the competent authorities
may be authorised to use those systems for the purpose of law enforcement.
TITLE III
HIGH-RISK AI SYSTEMS
CHAPTER 1
CLASSIFICATION OF AI SYSTEMS AS HIGH-RISK
Article 6
Classification rules for high-risk AI systems
1. An AI system that is itself a product covered by the Union harmonisation legislation listed
in Annex II shall be considered as high risk if it is required to undergo a third-party
conformity assessment with a view to the placing on the market or putting into service of
that product pursuant to the above mentioned legislation.
2. An AI system intended to be used as a safety component of a product covered by the
legislation referred to in paragraph 1 shall be considered as high risk if it is required to
undergo a third-party conformity assessment with a view to the placing on the market or
putting into service of that product pursuant to above mentioned legislation. This provision
shall apply irrespective of whether the AI system is placed on the market or put into
service independently from the product.
3. AI systems referred to in Annex III shall be considered high-risk unless the output of the
system is purely accessory in respect of the relevant action or decision to be taken and is
not therefore likely to lead to a significant risk to the health, safety or fundamental rights.
In order to ensure uniform conditions for the implementation of this Regulation, the
Commission shall, no later than one year after the entry into force of this Regulation, adopt
implementing acts to specify the circumstances where the output of AI systems referred to
in Annex III would be purely accessory in respect of the relevant action or decision to be
taken. Those implementing acts shall be adopted in accordance with the examination
procedure referred to in Article 74, paragraph 2.
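Illustrative note (non-normative): the classification rules in this Article can be read as a simple decision procedure. The Python sketch below is an interpretive aid only, not part of the Regulation; all parameter names are hypothetical stand-ins for determinations that require legal and technical assessment in each individual case.

```python
def is_high_risk(is_annex_ii_product: bool,
                 is_safety_component_of_annex_ii_product: bool,
                 third_party_assessment_required: bool,
                 listed_in_annex_iii: bool,
                 output_purely_accessory: bool) -> bool:
    """Non-normative sketch of the Article 6 classification logic."""
    # Art. 6(1) and 6(2): a product covered by Annex II legislation, or a
    # safety component of such a product, is high-risk where a third-party
    # conformity assessment is required for that product.
    if ((is_annex_ii_product or is_safety_component_of_annex_ii_product)
            and third_party_assessment_required):
        return True
    # Art. 6(3): Annex III systems are high-risk unless their output is
    # purely accessory to the relevant action or decision to be taken.
    return listed_in_annex_iii and not output_purely_accessory
```

Under Article 6(3), the circumstances in which an output is "purely accessory" are to be specified by Commission implementing acts, so the last input above could not be evaluated mechanically in practice.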
Article 7
Amendments to Annex III
1. The Commission is empowered to adopt delegated acts in accordance with Article 73 to
amend the list in Annex III by adding high-risk AI systems where both of the following
conditions are fulfilled:
(a) the AI systems are intended to be used in any of the areas listed in points 1 to 8 of
Annex III;
(b) the AI systems pose a risk of harm to the health and safety, or a risk of adverse
impact on fundamental rights, that is, in respect of its severity and probability of
occurrence, equivalent to or greater than the risk of harm or of adverse impact posed
by the high-risk AI systems already referred to in Annex III.
2. When assessing for the purposes of paragraph 1 whether an AI system poses a risk of harm
to the health and safety or a risk of adverse impact on fundamental rights that is equivalent
to or greater than the risk of harm posed by the high-risk AI systems already referred to in
Annex III, the Commission shall take into account the following criteria:
(a) the intended purpose of the AI system;
(b) the extent to which an AI system has been used or is likely to be used;
(c) the extent to which the use of an AI system has already caused harm to the health and
safety or adverse impact on the fundamental rights or has given rise to significant
concerns in relation to the materialisation of such harm or adverse impact, as
demonstrated by reports or documented allegations submitted to national competent
authorities;
(d) the potential extent of such harm or such adverse impact, in particular in terms of its
intensity and its ability to affect a plurality of persons;
(e) the extent to which potentially harmed or adversely impacted persons are dependent
on the outcome produced with an AI system, in particular because for practical or
legal reasons it is not reasonably possible to opt-out from that outcome;
(f) the extent to which potentially harmed or adversely impacted persons are in a
vulnerable position in relation to the user of an AI system, in particular due to an
imbalance of power, knowledge, economic or social circumstances, or age;
(g) the extent to which the outcome produced with an AI system is not easily reversible,
whereby outcomes having an impact on the health or safety of persons shall not be
considered as easily reversible;
(h) the extent to which existing Union legislation provides for:
(i) effective measures of redress in relation to the risks posed by an AI system,
with the exclusion of claims for damages;
(ii) effective measures to prevent or substantially minimise those risks;
(i) the magnitude and likelihood of benefit of the AI use for individuals, groups, or
society at large.
3. The Commission is empowered to adopt delegated acts in accordance with Article 73 to
amend the list in Annex III by removing high-risk AI systems where both of the following
conditions are fulfilled:
(a) the high-risk AI system(s) concerned no longer pose any significant risks to
fundamental rights, health or safety, taking into account the criteria listed in
paragraph 2;
(b) the deletion does not decrease the overall level of protection of health, safety and
fundamental rights under Union law.
CHAPTER 2
REQUIREMENTS FOR HIGH-RISK AI SYSTEMS
Article 8
Compliance with the requirements
1. High-risk AI systems shall comply with the requirements established in this Chapter,
taking into account the generally acknowledged state of the art.
2. The intended purpose of the high-risk AI system and the risk management system referred
to in Article 9 shall be taken into account when ensuring compliance with those
requirements.
Article 9
Risk management system
1. A risk management system shall be established, implemented, documented and maintained
in relation to high-risk AI systems.
2. The risk management system shall be understood as a continuous iterative process planned
and run throughout the entire lifecycle of a high-risk AI system, requiring regular
systematic updating. It shall comprise the following steps:
(a) identification and analysis of the known and foreseeable risks most likely to occur to
health, safety and fundamental rights in view of the intended purpose of the high-risk
AI system;
(b) [deleted];
(c) evaluation of other possibly arising risks based on the analysis of data gathered from
the post-market monitoring system referred to in Article 61;
(d) adoption of suitable risk management measures in accordance with the provisions of
the following paragraphs.
The risks referred to in this paragraph shall concern only those which may be reasonably
mitigated or eliminated through the development or design of the high-risk AI system, or
the provision of adequate technical information.
3. The risk management measures referred to in paragraph 2, point (d) shall give due
consideration to the effects and possible interaction resulting from the combined
application of the requirements set out in this Chapter 2, with a view to minimising risks
more effectively while achieving an appropriate balance in implementing the measures to
fulfil those requirements.
4. The risk management measures referred to in paragraph 2, point (d) shall be such that any
residual risk associated with each hazard as well as the overall residual risk of the high-risk
AI systems is judged acceptable.
In identifying the most appropriate risk management measures, the following shall be
ensured:
(a) elimination or reduction of risks identified and evaluated pursuant to paragraph 2 as
far as possible through adequate design and development of the high risk AI system;
(b) where appropriate, implementation of adequate mitigation and control measures in
relation to risks that cannot be eliminated;
(c) provision of adequate information pursuant to Article 13, in particular as regards the
risks referred to in paragraph 2, point (b) of this Article, and, where appropriate,
training to users.
With a view to eliminating or reducing risks related to the use of the high-risk AI system,
due consideration shall be given to the technical knowledge, experience, education,
training to be expected by the user and the environment in which the system is intended to
be used.
5. High-risk AI systems shall be tested in order to ensure that high-risk AI systems perform in
a manner that is consistent with their intended purpose and they are in compliance with the
requirements set out in this Chapter.
6. Testing procedures may include testing in real world conditions in accordance with Article
54a.
7. The testing of the high-risk AI systems shall be performed, as appropriate, at any point in
time throughout the development process, and, in any event, prior to the placing on the
market or the putting into service. Testing shall be made against preliminarily defined
metrics and probabilistic thresholds that are appropriate to the intended purpose of the
high-risk AI system.
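Illustrative note (non-normative): paragraph 7 requires testing against preliminarily defined metrics and probabilistic thresholds appropriate to the intended purpose. A minimal sketch of such a check is given below; the metric names and threshold values are hypothetical examples, not values prescribed by the Regulation.

```python
# Hypothetical, preliminarily defined thresholds for a given intended purpose.
THRESHOLDS = {
    "accuracy": 0.95,             # minimum acceptable accuracy
    "false_positive_rate": 0.02,  # maximum acceptable false positive rate
}

def passes_predefined_tests(measured: dict) -> bool:
    """Return True only if every preliminarily defined threshold is met."""
    return (measured["accuracy"] >= THRESHOLDS["accuracy"]
            and measured["false_positive_rate"]
                <= THRESHOLDS["false_positive_rate"])

# Example: results from a test run carried out prior to placing on the market.
print(passes_predefined_tests({"accuracy": 0.97, "false_positive_rate": 0.01}))
```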
8. The risk management system described in paragraphs 1 to 7 shall give specific
consideration to whether the high-risk AI system is likely to be accessed by or have an
impact on persons under the age of 18.
9. For providers of high-risk AI systems that are subject to requirements regarding internal
risk management processes under relevant sectorial Union law, the aspects described in
paragraphs 1 to 8 may be part of the risk management procedures established pursuant to
that law.
Article 10
Data and data governance
1. High-risk AI systems which make use of techniques involving the training of models with
data shall be developed on the basis of training, validation and testing data sets that meet
the quality criteria referred to in paragraphs 2 to 5.
2. Training, validation and testing data sets shall be subject to appropriate data governance
and management practices. Those practices shall concern in particular,
(a) the relevant design choices;
(b) data collection processes;
(c) relevant data preparation processing operations, such as annotation, labelling,
cleaning, enrichment and aggregation;
(d) the formulation of relevant assumptions, notably with respect to the information that
the data are supposed to measure and represent;
(e) a prior assessment of the availability, quantity and suitability of the data sets that are
needed;
(f) examination in view of possible biases that are likely to affect health and safety of
natural persons or lead to discrimination prohibited by Union law;
(g) the identification of any possible data gaps or shortcomings, and how those gaps and
shortcomings can be addressed.
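Illustrative note (non-normative): the practices listed in paragraph 2, points (a) to (g), lend themselves to being documented as a structured record per data set. The sketch below shows one possible shape for such a record; the field names are illustrative and not prescribed by the Regulation.

```python
from dataclasses import dataclass, field

@dataclass
class DatasetGovernanceRecord:
    """One possible documentation record for a training, validation or
    testing data set, mirroring Article 10(2), points (a) to (g)."""
    design_choices: str           # point (a): relevant design choices
    collection_process: str       # point (b): data collection processes
    preparation_operations: list  # point (c): annotation, labelling, cleaning...
    assumptions: str              # point (d): what the data measure and represent
    availability_assessment: str  # point (e): availability, quantity, suitability
    bias_examination: str         # point (f): examination for possible biases
    identified_gaps: list = field(default_factory=list)  # point (g)
```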
3. Training, validation and testing data sets shall be relevant, representative, and to the best
extent possible, free of errors and complete. They shall have the appropriate statistical
properties, including, where applicable, as regards the persons or groups of persons on
which the high-risk AI system is intended to be used. These characteristics of the data sets
may be met at the level of individual data sets or a combination thereof.
4. Training, validation and testing data sets shall take into account, to the extent required by
the intended purpose, the characteristics or elements that are particular to the specific
geographical, behavioural or functional setting within which the high-risk AI system is
intended to be used.
5. To the extent that it is strictly necessary for the purposes of ensuring bias monitoring,
detection and correction in relation to the high-risk AI systems, the providers of such
systems may process special categories of personal data referred to in Article 9(1) of
Regulation (EU) 2016/679, Article 10 of Directive (EU) 2016/680 and Article 10(1) of
Regulation (EU) 2018/1725, subject to appropriate safeguards for the fundamental rights
and freedoms of natural persons, including technical limitations on the re-use and use of
state-of-the-art security and privacy-preserving measures, such as pseudonymisation, or
encryption where anonymisation may significantly affect the purpose pursued.
6. For the development of high-risk AI systems not using techniques involving the training of
models, paragraphs 2 to 5 shall apply only to the testing data sets.
Article 11
Technical documentation
1. The technical documentation of a high-risk AI system shall be drawn up before that system
is placed on the market or put into service and shall be kept up to date.
The technical documentation shall be drawn up in such a way to demonstrate that the high-risk AI system complies with the requirements set out in this Chapter and provide national
competent authorities and notified bodies with all the necessary information in a clear and
comprehensive form to assess the compliance of the AI system with those requirements. It
shall contain, at a minimum, the elements set out in Annex IV or, in the case of SMEs,
including start-ups, any equivalent documentation meeting the same objectives, unless
deemed inappropriate by the competent authority.
2. Where a high-risk AI system related to a product, to which the legal acts listed in Annex II,
section A apply, is placed on the market or put into service one single technical
documentation shall be drawn up containing all the information set out in Annex IV as
well as the information required under those legal acts.
3. The Commission is empowered to adopt delegated acts in accordance with Article 73 to
amend Annex IV where necessary to ensure that, in the light of technical progress, the
technical documentation provides all the necessary information to assess the compliance of
the system with the requirements set out in this Chapter.
Article 12
Record-keeping
1. High-risk AI systems shall technically allow for the automatic recording of events (‘logs’)
over the duration of the life cycle of the system.
2. In order to ensure a level of traceability of the AI system’s functioning that is appropriate
to the intended purpose of the system, logging capabilities shall enable the recording of
events relevant for
(i) identification of situations that may result in the AI system presenting a risk within the
meaning of Article 65(1) or in a substantial modification;
(ii) facilitation of the post-market monitoring referred to in Article 61; and
(iii) monitoring of the operation of high-risk AI systems referred to in Article 29(4).
4. For high-risk AI systems referred to in paragraph 1, point (a) of Annex III, the logging
capabilities shall provide, at a minimum:
(a) recording of the period of each use of the system (start date and time and end date
and time of each use);
(b) the reference database against which input data has been checked by the system;
(c) the input data for which the search has led to a match;
(d) the identification of the natural persons involved in the verification of the results, as
referred to in Article 14(5).
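Illustrative note (non-normative): for systems referred to in point 1(a) of Annex III, paragraph 4 fixes a minimum set of logging fields. A sketch of a corresponding log entry is given below; the field names are illustrative.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class RemoteBiometricIdUseLog:
    """Minimum logging fields under Article 12(4) (illustrative names)."""
    use_start: datetime           # point (a): start date and time of each use
    use_end: datetime             # point (a): end date and time of each use
    reference_database: str       # point (b): database input data was checked against
    matched_input_reference: str  # point (c): input data that led to a match
    verifying_persons: tuple      # point (d): natural persons verifying results,
                                  # as referred to in Article 14(5)
```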
Article 13
Transparency and provision of information to users
1. High-risk AI systems shall be designed and developed in such a way to ensure that their
operation is sufficiently transparent with a view to achieving compliance with the relevant
obligations of the user and of the provider set out in Chapter 3 of this Title and enabling
users to understand and use the system appropriately.
2. High-risk AI systems shall be accompanied by instructions for use in an appropriate digital
format or otherwise that include concise, complete, correct and clear information that is
relevant, accessible and comprehensible to users.
3. The information referred to in paragraph 2 shall specify:
(a) the identity and the contact details of the provider and, where applicable, of its
authorised representative;
(b) the characteristics, capabilities and limitations of performance of the high-risk AI
system, including:
(i) its intended purpose, inclusive of the specific geographical, behavioural or
functional setting within which the high-risk AI system is intended to be used;
(ii) the level of accuracy, including its metrics, robustness and cybersecurity
referred to in Article 15 against which the high-risk AI system has been tested
and validated and which can be expected, and any known and foreseeable
circumstances that may have an impact on that expected level of accuracy,
robustness and cybersecurity;
(iii) any known or foreseeable circumstance, related to the use of the high-risk AI
system in accordance with its intended purpose, which may lead to risks to the
health and safety or fundamental rights referred to in Article 9(2);
(iv) when appropriate, its behaviour regarding specific persons or groups of persons
on which the system is intended to be used;
(v) when appropriate, specifications for the input data, or any other relevant
information in terms of the training, validation and testing data sets used,
taking into account the intended purpose of the AI system;
(vi) when appropriate, description of the expected output of the system.
(c) the changes to the high-risk AI system and its performance which have been pre-determined by the provider at the moment of the initial conformity assessment, if
any;
(d) the human oversight measures referred to in Article 14, including the technical
measures put in place to facilitate the interpretation of the outputs of AI systems by
the users;
(e) the computational and hardware resources needed, the expected lifetime of the high-risk AI system and any necessary maintenance and care measures, including their
frequency, to ensure the proper functioning of that AI system, including as regards
software updates;
(f) a description of the mechanism included within the AI system that allows users to
properly collect, store and interpret the logs, where relevant.
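Illustrative note (non-normative): nothing in this Article prescribes a format, but the information in paragraph 3 could additionally be provided as a machine-readable record accompanying the instructions for use. The sketch below maps each point to a field; all values are hypothetical placeholders.

```python
# Hypothetical machine-readable companion to the instructions for use.
instructions_for_use = {
    "provider": {"identity": "Example Provider Ltd.",           # point (a)
                 "contact": "compliance@example.eu"},
    "intended_purpose": "illustrative setting and use",         # point (b)(i)
    "accuracy_and_robustness": {"accuracy": 0.95},              # point (b)(ii)
    "known_risk_circumstances": ["illustrative circumstance"],  # point (b)(iii)
    "predetermined_changes": [],                                # point (c)
    "human_oversight_measures": ["output shown with confidence score"],  # point (d)
    "resources_and_maintenance": {"expected_lifetime_years": 5,
                                  "update_frequency": "quarterly"},      # point (e)
    "log_access_mechanism": "described in operator manual",     # point (f)
}
```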
Article 14
Human oversight
1. High-risk AI systems shall be designed and developed in such a way, including with
appropriate human-machine interface tools, that they can be effectively overseen by natural
persons during the period in which the AI system is in use.
2. Human oversight shall aim at preventing or minimising the risks to health, safety or
fundamental rights that may emerge when a high-risk AI system is used in accordance with
its intended purpose or under conditions of reasonably foreseeable misuse, in particular
when such risks persist notwithstanding the application of other requirements set out in this
Chapter.
3. Human oversight shall be ensured through either one or all of the following types of
measures:
(a) measures identified and built, when technically feasible, into the high-risk AI system
by the provider before it is placed on the market or put into service;
(b) measures identified by the provider before placing the high-risk AI system on the
market or putting it into service and that are appropriate to be implemented by the
user.
4. For the purpose of implementing paragraphs 1 to 3, the high-risk AI system shall be
provided to the user in such a way that natural persons to whom human oversight is
assigned are enabled, as appropriate and proportionate to the circumstances:
(a) to understand the capacities and limitations of the high-risk AI system and be able to
duly monitor its operation;
(b) to remain aware of the possible tendency of automatically relying or over-relying on
the output produced by a high-risk AI system (‘automation bias’);
(c) to correctly interpret the high-risk AI system’s output, taking into account for
example the interpretation tools and methods available;
(d) to decide, in any particular situation, not to use the high-risk AI system or otherwise
disregard, override or reverse the output of the high-risk AI system;
(e) to intervene on the operation of the high-risk AI system or interrupt the system
through a “stop” button or a similar procedure.
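Illustrative note (non-normative): point (e) presupposes a technical intervention path. A minimal sketch of a "stop" control through which an overseer can interrupt the system is given below; the class and method names are hypothetical.

```python
import threading

class HumanOversightControl:
    """Minimal sketch of an Article 14(4), point (e), intervention path."""
    def __init__(self) -> None:
        self._stopped = threading.Event()

    def stop(self) -> None:
        """The 'stop' button: halts the system on the overseer's decision."""
        self._stopped.set()

    def check_before_acting(self) -> None:
        """Called by the AI system before each operation or output."""
        if self._stopped.is_set():
            raise RuntimeError("Operation interrupted by human overseer")
```

In practice the interruption path would have to be wired into every stage of the system's operation, not only checked once.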
5. For high-risk AI systems referred to in point 1(a) of Annex III, the measures referred to in
paragraph 3 shall be such as to ensure that, in addition, no action or decision is taken by the
user on the basis of the identification resulting from the system unless this has been
separately verified and confirmed by at least two natural persons. The requirement for a
separate verification by at least two natural persons shall not apply to high risk AI systems
used for the purpose of law enforcement, migration, border control or asylum, in cases
where Union or national law considers the application of this requirement to be
disproportionate.
Article 15
Accuracy, robustness and cybersecurity
1. High-risk AI systems shall be designed and developed in such a way that they achieve, in
the light of their intended purpose, an appropriate level of accuracy, robustness and
cybersecurity, and perform consistently in those respects throughout their lifecycle.
2. The levels of accuracy and the relevant accuracy metrics of high-risk AI systems shall be
declared in the accompanying instructions of use.
3. High-risk AI systems shall be resilient as regards errors, faults or inconsistencies that may
occur within the system or the environment in which the system operates, in particular due
to their interaction with natural persons or other systems.
The robustness of high-risk AI systems may be achieved through technical redundancy
solutions, which may include backup or fail-safe plans.
High-risk AI systems that continue to learn after being placed on the market or put into
service shall be developed in such a way as to eliminate or reduce as far as possible the risk
of possibly biased outputs influencing input for future operations ('feedback loops') and to
ensure that any such feedback loops are duly addressed with appropriate mitigation measures.
4. High-risk AI systems shall be resilient as regards attempts by unauthorised third parties to
alter their use or performance by exploiting the system vulnerabilities.
The technical solutions aimed at ensuring the cybersecurity of high-risk AI systems shall
be appropriate to the relevant circumstances and the risks.
The technical solutions to address AI specific vulnerabilities shall include, where
appropriate, measures to prevent and control for attacks trying to manipulate the training
dataset (‘data poisoning’), inputs designed to cause the model to make a mistake
(‘adversarial examples’), or model flaws.
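Illustrative note (non-normative): one elementary probe in the spirit of the third subparagraph is to check whether small input perturbations (a crude stand-in for adversarial examples) change a model's decision. The sketch below uses a toy linear classifier; the model, tolerance and trial count are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=8)       # hypothetical trained model parameters

def predict(x: np.ndarray) -> int:
    """Toy binary classifier standing in for a high-risk AI system."""
    return int(weights @ x > 0.0)

def is_locally_stable(x: np.ndarray, eps: float = 0.01, trials: int = 100) -> bool:
    """True if no sampled perturbation within +/-eps flips the output."""
    baseline = predict(x)
    return all(predict(x + rng.uniform(-eps, eps, size=x.shape)) == baseline
               for _ in range(trials))

sample = rng.normal(size=8)
print("stable under small perturbations:", is_locally_stable(sample))
```

Random perturbations only lower-bound the problem; adversarial examples are crafted worst-case inputs, so a serious evaluation would use dedicated attack methods rather than noise.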
CHAPTER 3
OBLIGATIONS OF PROVIDERS AND USERS OF HIGH-RISK AI SYSTEMS AND
OTHER PARTIES
Article 16
Obligations of providers of high-risk AI systems
Providers of high-risk AI systems shall:
(a) ensure that their high-risk AI systems are compliant with the requirements set out in
Chapter 2 of this Title;
(aa) indicate their name, registered trade name or registered trade mark, the address at which
they can be contacted on the high-risk AI system or, where that is not possible, on its
packaging or its accompanying documentation, as applicable;
(b) have a quality management system in place which complies with Article 17;
(c) keep the documentation referred to in Article 18;
(d) when under their control, keep the logs automatically generated by their high-risk AI
systems as referred to in Article 20;
(e) ensure that the high-risk AI system undergoes the relevant conformity assessment
procedure as referred to in Article 43, prior to its placing on the market or putting into
service;
(f) comply with the registration obligations referred to in Article 51(1);
(g) take the necessary corrective actions as referred to in Article 21, if the high-risk AI system
is not in conformity with the requirements set out in Chapter 2 of this Title;
(h) inform the relevant national competent authority of the Member States in which they made
the AI system available or put it into service and, where applicable, the notified body of the
non-compliance and of any corrective actions taken;
(i) affix the CE marking to their high-risk AI systems to indicate the conformity with this
Regulation in accordance with Article 49;
(j) upon request of a national competent authority, demonstrate the conformity of the high-risk AI system with the requirements set out in Chapter 2 of this Title.
Article 17
Quality management system
1. Providers of high-risk AI systems shall put a quality management system in place that
ensures compliance with this Regulation. That system shall be documented in a systematic
and orderly manner in the form of written policies, procedures and instructions, and shall
include at least the following aspects:
(a) a strategy for regulatory compliance, including compliance with conformity
assessment procedures and procedures for the management of modifications to the
high-risk AI system;
(b) techniques, procedures and systematic actions to be used for the design, design
control and design verification of the high-risk AI system;
(c) techniques, procedures and systematic actions to be used for the development,
quality control and quality assurance of the high-risk AI system;
(d) examination, test and validation procedures to be carried out before, during and after
the development of the high-risk AI system, and the frequency with which they have
to be carried out;
(e) technical specifications, including standards, to be applied and, where the relevant
harmonised standards are not applied in full, the means to be used to ensure that the
high-risk AI system complies with the requirements set out in Chapter 2 of this Title;
(f) systems and procedures for data management, including data collection, data
analysis, data labelling, data storage, data filtration, data mining, data aggregation,
data retention and any other operation regarding the data that is performed before and
for the purposes of the placing on the market or putting into service of high-risk AI
systems;
(g) the risk management system referred to in Article 9;
(h) the setting-up, implementation and maintenance of a post-market monitoring system,
in accordance with Article 61;
(i) procedures related to the reporting of a serious incident in accordance with Article
62;
(j) the handling of communication with national competent authorities, competent
authorities, including sectoral ones, providing or supporting the access to data,
notified bodies, other operators, customers or other interested parties;
(k) systems and procedures for record keeping of all relevant documentation and
information;
(l) resource management, including security of supply related measures;
(m) an accountability framework setting out the responsibilities of the management and
other staff with regard to all aspects listed in this paragraph.
2. The implementation of aspects referred to in paragraph 1 shall be proportionate to the size
of the provider’s organisation.
2a. For providers of high-risk AI systems that are subject to obligations regarding quality
management systems under relevant sectorial Union law, the aspects described in
paragraph 1 may be part of the quality management systems pursuant to that law.
3. For providers that are financial institutions subject to requirements regarding their internal
governance, arrangements or processes under Union financial services legislation, the
obligation to put in place a quality management system with the exception of paragraph 1,
points (g), (h) and (i) shall be deemed to be fulfilled by complying with the rules on
internal governance arrangements or processes pursuant to the relevant Union financial
services legislation. In that context, any harmonised standards referred to in Article 40 of
this Regulation shall be taken into account.
Article 18
Documentation keeping
1. The provider shall, for a period ending 10 years after the AI system has been placed on the
market or put into service, keep at the disposal of the national competent authorities:
(a) the technical documentation referred to in Article 11;
(b) the documentation concerning the quality management system referred to in
Article 17;
(c) the documentation concerning the changes approved by notified bodies where
applicable;
(d) the decisions and other documents issued by the notified bodies where applicable;
(e) the EU declaration of conformity referred to in Article 48.
1a. Each Member State shall determine conditions under which the documentation referred to
in paragraph 1 remains at the disposal of the national competent authorities for the period
indicated in that paragraph for the cases when a provider or its authorised representative
established on its territory goes bankrupt or ceases its activity prior to the end of that
period.
2. Providers that are financial institutions subject to requirements regarding their internal
governance, arrangements or processes under Union financial services legislation shall
maintain the technical documentation as part of the documentation kept under the relevant
Union financial services legislation.
Article 19
Conformity assessment
1. Providers of high-risk AI systems shall ensure that their systems undergo the relevant
conformity assessment procedure in accordance with Article 43, prior to their placing on
the market or putting into service. Where the compliance of the AI systems with the
requirements set out in Chapter 2 of this Title has been demonstrated following that
conformity assessment, the providers shall draw up an EU declaration of conformity in
accordance with Article 48 and affix the CE marking of conformity in accordance with
Article 49.
2. [deleted]
Article 20
Automatically generated logs
1. Providers of high-risk AI systems shall keep the logs, referred to in Article 12(1),
automatically generated by their high-risk AI systems, to the extent such logs are under
their control by virtue of a contractual arrangement with the user or otherwise by law. They
shall keep them for a period of at least six months, unless provided otherwise in applicable
Union or national law, in particular in Union law on the protection of personal data.
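Illustrative note (non-normative): the six-month minimum retention in paragraph 1 translates into a simple retention rule. The sketch below treats "six months" as 183 days, which is an assumption, not an interpretation given by the Regulation.

```python
from datetime import datetime, timedelta, timezone

MIN_RETENTION = timedelta(days=183)  # assumed reading of "at least six months"

def may_delete(log_created: datetime, now: datetime) -> bool:
    """True once the minimum retention period has elapsed (and no other
    Union or national law requires longer retention)."""
    return now - log_created >= MIN_RETENTION

print(may_delete(datetime(2022, 1, 1, tzinfo=timezone.utc),
                 datetime(2022, 8, 1, tzinfo=timezone.utc)))   # True
```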
2. Providers that are financial institutions subject to requirements regarding their internal
governance, arrangements or processes under Union financial services legislation shall
maintain the logs automatically generated by their high-risk AI systems as part of the
documentation kept under the relevant financial service legislation.
Article 21
Corrective actions
Providers of high-risk AI systems which consider or have reason to consider that a high-risk AI
system which they have placed on the market or put into service is not in conformity with this
Regulation shall immediately investigate, where applicable, the causes in collaboration with the
reporting user and take the necessary corrective actions to bring that system into conformity, to
withdraw it or to recall it, as appropriate. They shall inform the distributors of the high-risk AI
system in question and, where applicable, the authorised representative and importers accordingly.
Article 22
Duty of information
Where the high-risk AI system presents a risk within the meaning of Article 65(1) and that risk is
known to the provider of the system, that provider shall immediately inform the national competent
authorities of the Member States in which it made the system available and, where applicable, the
notified body that issued a certificate for the high-risk AI system, in particular of the non-compliance and of any corrective actions taken.
Article 23
Cooperation with competent authorities
Providers of high-risk AI systems shall, upon request by a national competent authority, provide
that authority with all the information and documentation necessary to demonstrate the conformity
of the high-risk AI system with the requirements set out in Chapter 2 of this Title, in a language
which can be easily understood by the authority of the Member State concerned. Upon a reasoned
request from a national competent authority, providers shall also give that authority access to the
logs, referred to in Article 12(1), automatically generated by the high-risk AI system, to the extent
such logs are under their control by virtue of a contractual arrangement with the user or otherwise
by law.
Article 23a
Conditions for other persons to be subject to the obligations of a provider
1. Any natural or legal person shall be considered a provider of a new high-risk AI system for
the purposes of this Regulation and shall be subject to the obligations of the provider under
Article 16, in any of the following circumstances:
(a) they put their name or trademark on a high-risk AI system already placed on the
market or put into service, without prejudice to contractual arrangements stipulating
that the obligations are allocated otherwise;
(b) [deleted]
(c) they make a substantial modification to a high-risk AI system already placed on the
market or put into service;
(d) they modify the intended purpose of an AI system which is not high-risk and is
already placed on the market or put into service, in a way which makes the modified
system a high-risk AI system;
(e) they place on the market or put into service a general purpose AI system as a high-risk AI system or as a component of a high-risk AI system.
2. Where the circumstances referred to in paragraph 1, point (a) or (c), occur, the provider that
initially placed the high-risk AI system on the market or put it into service shall no longer be
considered a provider for the purposes of this Regulation.
3. For high-risk AI systems that are safety components of products to which the legal acts listed
in Annex II, section A apply, the manufacturer of those products shall be considered the
provider of the high-risk AI system and shall be subject to the obligations under Article 16
under either of the following scenarios:
(i) the high-risk AI system is placed on the market together with the product under the
name or trademark of the product manufacturer;
(ii) the high-risk AI system is put into service under the name or trademark of the
product manufacturer after the product has been placed on the market.
Article 24
[deleted]
Article 25
Authorised representatives
1. Prior to making their systems available on the Union market providers established outside
the Union shall, by written mandate, appoint an authorised representative which is
established in the Union.
2. The authorised representative shall perform the tasks specified in the mandate received
from the provider. For the purpose of this Regulation, the mandate shall empower the
authorised representative to carry out only the following tasks:
(-a) verify that the EU declaration of conformity and the technical documentation have
been drawn up and that an appropriate conformity assessment procedure has
been carried out by the provider;
(a) keep at the disposal of the national competent authorities and national authorities
referred to in Article 63(7), for a period ending 10 years after the high-risk AI system
has been placed on the market or put into service, the contact details of the provider
by which the authorised representative has been appointed, a copy of the EU
declaration of conformity, the technical documentation and, if applicable, the
certificate issued by the notified body;
(b) provide a national competent authority, upon a reasoned request, with all the
information and documentation, including that kept according to point (a), necessary
to demonstrate the conformity of a high-risk AI system with the requirements set out
in Chapter 2 of this Title, including access to the logs, referred to in Article 12(1),
automatically generated by the high-risk AI system to the extent such logs are under
the control of the provider by virtue of a contractual arrangement with the user or
otherwise by law;
(c) cooperate with national competent authorities, upon a reasoned request, on any
action the latter takes in relation to the high-risk AI system.
(d) comply with the registration obligations referred to in Article 51(1) and, if the
registration of the system is carried out by the provider itself, verify that the
information referred to in Annex VIII, Part II, 1 to 11, is correct.
The authorised representative shall terminate the mandate if it has sufficient reasons to consider that
the provider acts contrary to its obligations under this Regulation. In such a case, it shall also
immediately inform the market surveillance authority of the Member State in which it is
established, as well as, where applicable, the relevant notified body, about the termination of the
mandate and the reasons thereof.
The authorised representative shall be legally liable for defective AI systems on the same basis as,
and jointly and severally with, the provider in respect of its potential liability under Council
Directive 85/374/EEC.
Article 26
Obligations of importers
1. Before placing a high-risk AI system on the market, importers of such system shall ensure
that such a system is in conformity with this Regulation by verifying that:
(a) the relevant conformity assessment procedure referred to in Article 43 has been
carried out by the provider of that AI system;
(b) the provider has drawn up the technical documentation in accordance with Annex
IV;
(c) the system bears the required CE conformity marking and is accompanied by the EU
declaration of conformity and instructions of use;
(d) the authorised representative referred to in Article 25 has been established by the
provider.
2. Where an importer has sufficient reasons to consider that a high-risk AI system is not in
conformity with this Regulation, or is falsified, or accompanied by falsified
documentation, it shall not place that system on the market until that AI system has been
brought into conformity. Where the high-risk AI system presents a risk within the meaning
of Article 65(1), the importer shall inform the provider of the AI system, the authorised
representatives and the market surveillance authorities to that effect.
3. Importers shall indicate their name, registered trade name or registered trade mark, and the
address at which they can be contacted on the high-risk AI system or, where that is not
possible, on its packaging or its accompanying documentation, as applicable.
4. Importers shall ensure that, while a high-risk AI system is under their responsibility, where
applicable, storage or transport conditions do not jeopardise its compliance with the
requirements set out in Chapter 2 of this Title.
4a. Importers shall keep, for a period ending 10 years after the AI system has been placed on
the market or put into service, a copy of the certificate issued by the notified body, where
applicable, of the instructions for use and of the EU declaration of conformity.
5. Importers shall provide national competent authorities, upon a reasoned request, with all
necessary information and documentation, including that kept in accordance with
paragraph 4a, to demonstrate the conformity of a high-risk AI system with the requirements
set out in Chapter 2 of this Title in a language which can be easily understood by that
national competent authority. To this purpose they shall also ensure that the technical
documentation can be made available to those authorities.
5a. Importers shall cooperate with national competent authorities on any action those
authorities take in relation to an AI system, of which they are the importer.
Article 27
Obligations of distributors
1. Before making a high-risk AI system available on the market, distributors shall verify that
the high-risk AI system bears the required CE conformity marking, that it is accompanied
by a copy of the EU declaration of conformity and instructions of use, and that the provider
and the importer of the system, as applicable, have complied with their obligations set out
in Article 16, point (b) and Article 26(3) respectively.
2. Where a distributor considers or has reason to consider that a high-risk AI system is not in
conformity with the requirements set out in Chapter 2 of this Title, it shall not make the
high-risk AI system available on the market until that system has been brought into
conformity with those requirements. Furthermore, where the system presents a risk within
the meaning of Article 65(1), the distributor shall inform the provider or the importer of the
system, as applicable, to that effect.
3. Distributors shall ensure that, while a high-risk AI system is under their responsibility,
where applicable, storage or transport conditions do not jeopardise the compliance of the
system with the requirements set out in Chapter 2 of this Title.
4. A distributor that considers or has reason to consider that a high-risk AI system which it
has made available on the market is not in conformity with the requirements set out in
Chapter 2 of this Title shall take the corrective actions necessary to bring that system into
conformity with those requirements, to withdraw it or recall it or shall ensure that the
provider, the importer or any relevant operator, as appropriate, takes those corrective
actions. Where the high-risk AI system presents a risk within the meaning of Article 65(1),
the distributor shall immediately inform the national competent authorities of the Member
States in which it has made the product available to that effect, giving details, in particular,
of the non-compliance and of any corrective actions taken.
5. Upon a reasoned request from a national competent authority, distributors of high-risk AI
systems shall provide that authority with all the information and documentation regarding
its activities as described in paragraph 1 to 4.
5a. Distributors shall cooperate with national competent authorities on any action those
authorities take in relation to an AI system, of which they are the distributor.
Article 28
[deleted]
Article 29
Obligations of users of high-risk AI systems
1. Users of high-risk AI systems shall use such systems in accordance with the instructions of
use accompanying the systems, pursuant to paragraphs 2 and 5 of this Article.
1a. Users shall assign human oversight to natural persons who have the necessary competence,
training and authority.
2. The obligations in paragraph 1 and 1a are without prejudice to other user obligations under
Union or national law and to the user’s discretion in organising its own resources and
activities for the purpose of implementing the human oversight measures indicated by the
provider.
3. Without prejudice to paragraph 1, to the extent the user exercises control over the input
data, that user shall ensure that input data is relevant in view of the intended purpose of the
high-risk AI system.
4. Users shall implement human oversight and monitor the operation of the high-risk AI
system on the basis of the instructions of use. When they have reasons to consider that the
use in accordance with the instructions of use may result in the AI system presenting a risk
within the meaning of Article 65(1) they shall inform the provider or distributor and
suspend the use of the system. They shall also inform the provider or distributor when they
have identified any serious incident and interrupt the use of the AI system. In case the user
is not able to reach the provider, Article 62 shall apply mutatis mutandis. This obligation
shall not cover sensitive operational data of users of AI systems which are law enforcement
authorities.
For users that are financial institutions subject to requirements regarding their internal
governance, arrangements or processes under Union financial services legislation, the
monitoring obligation set out in the first subparagraph shall be deemed to be fulfilled by
complying with the rules on internal governance arrangements, processes and mechanisms
pursuant to the relevant financial service legislation.
5. Users of high-risk AI systems shall keep the logs, referred to in Article 12(1),
automatically generated by that high-risk AI system, to the extent such logs are under their
control. They shall keep them for a period of at least six months, unless provided otherwise
in applicable Union or national law, in particular in Union law on the protection of
personal data.
Users that are financial institutions subject to requirements regarding their internal
governance, arrangements or processes under Union financial services legislation shall
maintain the logs as part of the documentation kept pursuant to the relevant Union
financial service legislation.
5a. Users of high-risk AI systems that are public authorities, agencies or bodies, with the
exception of law enforcement, border control, immigration or asylum authorities, shall
comply with the registration obligations referred to in Article 51. When they find that the
system that they envisage to use has not been registered in the EU database referred to in
Article 60 they shall not use that system and shall inform the provider or the distributor.
6. Users of high-risk AI systems shall use the information provided under Article 13 to
comply with their obligation to carry out a data protection impact assessment under Article
35 of Regulation (EU) 2016/679 or Article 27 of Directive (EU) 2016/680, where
applicable.
6a. Users shall cooperate with national competent authorities on any action those authorities
take in relation to an AI system, of which they are the user.
CHAPTER 4
NOTIFYING AUTHORITIES AND NOTIFIED BODIES
Article 30
Notifying authorities
1. Each Member State shall designate or establish at least one notifying authority responsible
for setting up and carrying out the necessary procedures for the assessment, designation
and notification of conformity assessment bodies and for their monitoring.
2. Member States may decide that the assessment and monitoring referred to in paragraph 1
shall be carried out by a national accreditation body within the meaning of and in
accordance with Regulation (EC) No 765/2008.
3. Notifying authorities shall be established, organised and operated in such a way that no
conflict of interest arises with conformity assessment bodies and the objectivity and
impartiality of their activities are safeguarded.
4. Notifying authorities shall be organised in such a way that decisions relating to the
notification of conformity assessment bodies are taken by competent persons different
from those who carried out the assessment of those bodies.
5. Notifying authorities shall not offer or provide any activities that conformity assessment
bodies perform or any consultancy services on a commercial or competitive basis.
6. Notifying authorities shall safeguard the confidentiality of the information they obtain in
accordance with Article 70.
7. Notifying authorities shall have an adequate number of competent personnel at their
disposal for the proper performance of their tasks.
8. [deleted]
Article 31
Application of a conformity assessment body for notification
1. Conformity assessment bodies shall submit an application for notification to the notifying
authority of the Member State in which they are established.
2. The application for notification shall be accompanied by a description of the conformity
assessment activities, the conformity assessment module or modules and the AI systems
for which the conformity assessment body claims to be competent, as well as by an
accreditation certificate, where one exists, issued by a national accreditation body attesting
that the conformity assessment body fulfils the requirements laid down in Article 33. Any
valid document related to existing designations of the applicant notified body under any
other Union harmonisation legislation shall be added.
3. Where the conformity assessment body concerned cannot provide an accreditation
certificate, it shall provide the notifying authority with all the documentary evidence
necessary for the verification, recognition and regular monitoring of its compliance with
the requirements laid down in Article 33. For notified bodies which are designated under
any other Union harmonisation legislation, all documents and certificates linked to those
designations may be used to support their designation procedure under this Regulation, as
appropriate. The notified body shall update the documentation referred to in paragraph 2
and paragraph 3 whenever relevant changes occur, in order to enable the authority
responsible for notified bodies to monitor and verify continuous compliance with all the
requirements laid down in Article 33.
Article 32
Notification procedure
1. Notifying authorities may only notify conformity assessment bodies which have satisfied
the requirements laid down in Article 33.
2. Notifying authorities shall notify those bodies to the Commission and the other Member
States using the electronic notification tool developed and managed by the Commission.
3. The notification referred to in paragraph 2 shall include full details of the conformity
assessment activities, the conformity assessment module or modules and the AI systems
concerned and the relevant attestation of competence. Where a notification is not based on
an accreditation certificate as referred to in Article 31(2), the notifying authority shall
provide the Commission and the other Member States with documentary evidence which
attests to the conformity assessment body's competence and the arrangements in place to
ensure that that body will be monitored regularly and will continue to satisfy the
requirements laid down in Article 33.
4. The conformity assessment body concerned may perform the activities of a notified body
only where no objections are raised by the Commission or the other Member States within
two weeks of a notification by a notifying authority where it includes an accreditation
certificate referred to in Article 31(2), or within two months of a notification by the
notifying authority where it includes documentary evidence referred to in Article 31(3).
5. [deleted]
Article 33
Requirements relating to notified bodies
1. A notified body shall be established under national law and have legal personality.
2. Notified bodies shall satisfy the organisational, quality management, resources and process
requirements that are necessary to fulfil their tasks.
3. The organisational structure, allocation of responsibilities, reporting lines and operation of
notified bodies shall be such as to ensure that there is confidence in the performance by
and in the results of the conformity assessment activities that the notified bodies conduct.
4. Notified bodies shall be independent of the provider of a high-risk AI system in relation to
which it performs conformity assessment activities. Notified bodies shall also be
independent of any other operator having an economic interest in the high-risk AI system
that is assessed, as well as of any competitors of the provider.
5. Notified bodies shall be organised and operated so as to safeguard the independence,
objectivity and impartiality of their activities. Notified bodies shall document and
implement a structure and procedures to safeguard impartiality and to promote and apply
the principles of impartiality throughout their organisation, personnel and assessment
activities.
6. Notified bodies shall have documented procedures in place ensuring that their personnel,
committees, subsidiaries, subcontractors and any associated body or personnel of external
bodies respect the confidentiality of the information in accordance with Article 70 which
comes into their possession during the performance of conformity assessment activities,
except when disclosure is required by law. The staff of notified bodies shall be bound to
observe professional secrecy with regard to all information obtained in carrying out their
tasks under this Regulation, except in relation to the notifying authorities of the Member
State in which their activities are carried out.
7. Notified bodies shall have procedures for the performance of activities which take due
account of the size of an undertaking, the sector in which it operates, its structure, and the
degree of complexity of the AI system in question.
8. Notified bodies shall take out appropriate liability insurance for their conformity
assessment activities, unless liability is assumed by the Member State in which they are
located in accordance with national law or that Member State is itself directly responsible
for the conformity assessment.
9. Notified bodies shall be capable of carrying out all the tasks falling to them under this
Regulation with the highest degree of professional integrity and the requisite competence
in the specific field, whether those tasks are carried out by notified bodies themselves or on
their behalf and under their responsibility.
10. Notified bodies shall have sufficient internal competences to be able to effectively evaluate
the tasks conducted by external parties on their behalf. The notified body shall have
permanent availability of sufficient administrative, technical, legal and scientific personnel
who possess experience and knowledge relating to the relevant artificial intelligence
technologies, data and data computing and to the requirements set out in Chapter 2 of this
Title.
11. Notified bodies shall participate in coordination activities as referred to in Article 38. They
shall also take part directly or be represented in European standardisation organisations, or
ensure that they are aware and up to date in respect of relevant standards.
12. [deleted]
Article 33a
Presumption of conformity with requirements relating to notified bodies
Where a conformity assessment body demonstrates its conformity with the criteria laid down in the
relevant harmonised standards or parts thereof the references of which have been published in the
Official Journal of the European Union it shall be presumed to comply with the requirements set out
in Article 33 in so far as the applicable harmonised standards cover those requirements.
Article 34
Subsidiaries of and subcontracting by notified bodies
1. Where a notified body subcontracts specific tasks connected with the conformity
assessment or has recourse to a subsidiary, it shall ensure that the subcontractor or the
subsidiary meets the requirements laid down in Article 33 and shall inform the notifying
authority accordingly.
2. Notified bodies shall take full responsibility for the tasks performed by subcontractors or
subsidiaries wherever these are established.
3. Activities may be subcontracted or carried out by a subsidiary only with the agreement of
the provider.
4. The relevant documents concerning the assessment of the qualifications of the
subcontractor or the subsidiary and the work carried out by them under this Regulation
shall be kept at the disposal of the notifying authority for a period of 5 years from the
termination date of the subcontracting activity.
Article 34a
Operational obligations of notified bodies
1. Notified bodies shall verify the conformity of high-risk AI systems in accordance with the
conformity assessment procedures referred to in Article 43.
2. Notified bodies shall perform their activities while avoiding unnecessary burdens for
providers, and taking due account of the size of an undertaking, the sector in which it
operates, its structure and the degree of complexity of the high risk AI system in question.
In so doing, the notified body shall nevertheless respect the degree of rigour and the level
of protection required for the compliance of the high risk AI system with the requirements
of this Regulation.
3. Notified bodies shall make available and submit upon request all relevant documentation,
including the providers’ documentation, to the notifying authority referred to in Article 30
to allow that authority to conduct its assessment, designation, notification, monitoring
activities and to facilitate the assessment outlined in this Chapter.
Article 35
Identification numbers and lists of notified bodies designated under this Regulation
1. The Commission shall assign an identification number to notified bodies. It shall assign a
single number, even where a body is notified under several Union acts.
2. The Commission shall make publicly available the list of the bodies notified under this
Regulation, including the identification numbers that have been assigned to them and the
activities for which they have been notified. The Commission shall ensure that the list is
kept up to date.
Article 36
Changes to notifications
1. The notifying authority shall notify the Commission and the other Member States of any
relevant changes to the notification of a notified body via the electronic notification tool
referred to in Article 32(2).
2. The procedures described in Article 31 and 32 shall apply to extensions of the scope of the
notification. For changes to the notification other than extensions of its scope, the
procedures laid down in the following paragraphs shall apply.
Where a notified body decides to cease its conformity assessment activities it shall inform
the notifying authority and the providers concerned as soon as possible and in the case of a
planned cessation one year before ceasing its activities. The certificates may remain valid
for a temporary period of nine months after cessation of the notified body's activities on
condition that another notified body has confirmed in writing that it will assume
responsibilities for the AI systems covered by those certificates. The new notified body
shall complete a full assessment of the AI systems affected by the end of that period before
issuing new certificates for those systems. Where the notified body has ceased its activity,
the notifying authority shall withdraw the designation.
3. Where a notifying authority has sufficient reasons to consider that a notified body no
longer meets the requirements laid down in Article 33, or that it is failing to fulfil its
obligations, the notifying authority shall, provided that the notified body had the
opportunity to make its views known, restrict, suspend or withdraw notification as
appropriate, depending on the seriousness of the failure to meet those requirements or fulfil
those obligations. It shall immediately inform the Commission and the other Member
States accordingly.
4. Where its designation has been suspended, restricted, or fully or partially withdrawn, the
notified body shall inform the manufacturers concerned at the latest within 10 days.
5. In the event of restriction, suspension or withdrawal of a notification, the notifying
authority shall take appropriate steps to ensure that the files of the notified body concerned
are kept and make them available to notifying authorities in other Member States and to
market surveillance authorities at their request.
6. In the event of restriction, suspension or withdrawal of a designation, the notifying
authority shall:
a) assess the impact on the certificates issued by the notified body;
b) submit a report on its findings to the Commission and the other Member States
within three months of having notified the changes to the notification;
c) require the notified body to suspend or withdraw, within a reasonable period of time
determined by the authority, any certificates which were unduly issued in order to
ensure the conformity of AI systems on the market;
d) inform the Commission and the Member States about certificates of which it has
required their suspension or withdrawal;
e) provide the national competent authorities of the Member State in which the provider
has its registered place of business with all relevant information about the certificates
for which it has required suspension or withdrawal. That competent authority shall
take the appropriate measures, where necessary, to avoid a potential risk to health,
safety or fundamental rights.
7. With the exception of certificates unduly issued, and where a notification has been
suspended or restricted, the certificates shall remain valid in the following circumstances:
a) the notifying authority has confirmed, within one month of the suspension or
restriction, that there is no risk to health, safety or fundamental rights in relation to
certificates affected by the suspension or restriction, and the notifying authority has
outlined a timeline and actions anticipated to remedy the suspension or restriction; or
b) the notifying authority has confirmed that no certificates relevant to the suspension
will be issued, amended or re-issued during the course of the suspension or
restriction, and states whether the notified body has the capability of continuing to
monitor and remain responsible for existing certificates issued for the period of the
suspension or restriction. In the event that the authority responsible for notified
bodies determines that the notified body does not have the capability to support
existing certificates issued, the provider shall provide to the national competent
authorities of the Member State in which the provider of the system covered by the
certificate has its registered place of business, within three months of the suspension
or restriction, a written confirmation that another qualified notified body is
temporarily assuming the functions of the notified body to monitor and remain
responsible for the certificates during the period of suspension or restriction.
8. With the exception of certificates unduly issued, and where a designation has been
withdrawn, the certificates shall remain valid for a period of nine months in the following
circumstances:
a) where the national competent authority of the Member State in which the provider of
the AI system covered by the certificate has its registered place of business has
confirmed that there is no risk to health, safety and fundamental rights associated
with the systems in question; and
b) another notified body has confirmed in writing that it will assume immediate
responsibilities for those systems and will have completed assessment of them within
twelve months of the withdrawal of the designation.
In the circumstances referred to in the first subparagraph, the national competent authority
of the Member State in which the provider of the system covered by the certificate has its
place of business may extend the provisional validity of the certificates for further periods
of three months, which altogether shall not exceed twelve months.
The national competent authority or the notified body assuming the functions of the
notified body affected by the change of notification shall immediately inform the
Commission, the other Member States and the other notified bodies thereof.
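As an illustration only: the validity rules in paragraphs 7 and 8 above can be read as a small decision procedure. The following Python sketch captures one reading of paragraph 8; the function and field names are invented for the example and are not terms used in the Regulation.

    from dataclasses import dataclass

    @dataclass
    class WithdrawalContext:
        # Conditions in points (a) and (b) of the first subparagraph of paragraph 8.
        no_risk_confirmed: bool          # confirmed by the national competent authority
        successor_body_confirmed: bool   # written confirmation by another notified body

    def provisional_validity_months(ctx: WithdrawalContext, extensions: int = 0) -> int:
        # Nine months of validity only if both conditions hold; further
        # three-month extensions, altogether not exceeding twelve months.
        if not (ctx.no_risk_confirmed and ctx.successor_body_confirmed):
            return 0
        return min(9 + 3 * extensions, 12)

    # Example: both confirmations given, one extension granted -> 12 months.
    print(provisional_validity_months(WithdrawalContext(True, True), extensions=1))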
Article 37
Challenge to the competence of notified bodies
1. The Commission shall, where necessary, investigate all cases where there are reasons to
doubt whether a notified body complies with the requirements laid down in Article 33.
2. The notifying authority shall provide the Commission, on request, with all relevant
information relating to the notification of the notified body concerned.
3. The Commission shall ensure that all confidential information obtained in the course of its
investigations pursuant to this Article is treated confidentially in accordance with Article
70.
4. Where the Commission ascertains that a notified body does not meet or no longer meets
the requirements laid down in Article 33, it shall inform the notifying authority of the
reasons of such an ascertainment and request it to take the necessary corrective measures,
including the suspension, restriction or withdrawal of the designation if necessary. Where
the notifying authority fails to take the necessary corrective measures, the Commission
may, by means of implementing acts, suspend, restrict or withdraw the notification. That
implementing act shall be adopted in accordance with the examination procedure referred
to in Article 74(2).
Article 38
Coordination of notified bodies
1. The Commission shall ensure that, with regard to high-risk AI systems, appropriate
coordination and cooperation between notified bodies active in the conformity assessment
procedures pursuant to this Regulation are put in place and properly operated in the form
of a sectoral group of notified bodies.
2. The notifying authority shall ensure that the bodies notified by it participate in the work
of that group, directly or by means of designated representatives.
Article 39
Conformity assessment bodies of third countries
Conformity assessment bodies established under the law of a third country with which the Union
has concluded an agreement may be authorised to carry out the activities of notified bodies under
this Regulation, provided that they meet the requirements in Article 33.
CHAPTER 5
STANDARDS, CONFORMITY ASSESSMENT, CERTIFICATES, REGISTRATION
Article 40
Harmonised standards
1. High-risk AI systems or general purpose AI systems which are in conformity with harmonised
standards or parts thereof the references of which have been published in the Official Journal
of the European Union shall be presumed to be in conformity with the requirements set out in
Chapter 2 of this Title or, as applicable, with requirements set out in Article 4a and Article 4b,
to the extent those standards cover those requirements.
2. When issuing a standardisation request to European standardisation organisations in
accordance with Article 10 of Regulation 1025/2012, the Commission shall specify that
standards are coherent, clear and drafted in such a way that they aim to fulfil in particular the
following objectives:
a) ensure that AI systems placed on the market or put into service in the Union are safe
and respect Union values and strengthen the Union's open strategic autonomy;
b) promote investment and innovation in AI, including through increasing legal certainty, as
well as competitiveness and growth of the Union market;
c) enhance multistakeholder governance, representative of all relevant European
stakeholders (e.g. industry, SMEs, civil society, researchers);
d) contribute to strengthening global cooperation on standardisation in the field of AI that
is consistent with Union values and interests.
The Commission shall request the European standardisation organisations to provide
evidence of their best efforts to fulfil the above objectives.
Article 41
Common specifications
1. The Commission is empowered to adopt, after consulting the AI Board referred to in
Article 56, implementing acts in accordance with the examination procedure referred to in
Article 74(2) establishing common technical specifications for the requirements set out in
Chapter 2 of this Title or, as applicable, for the requirements set out in Article 4a and Article
4b, where the following conditions have been fulfilled:
(a) no reference to harmonised standards covering the relevant essential safety or
fundamental right concerns is published in the Official Journal of the European Union
in accordance with Regulation (EU) No 1025/2012;
(b) the Commission has requested, pursuant to Article 10(1) of Regulation 1025/2012, one
or more European standardisation organisations to draft a harmonised standard for the
requirements set out in Chapter 2 of this Title;
(c) the request referred to in point (b) has not been accepted by any of the European
standardisation organisations or the harmonised standards addressing that request are
not delivered within the deadline set in accordance with article 10(1) of Regulation
1025/2012 or those standards do not comply with the request.
1a. Before preparing a draft implementing act, the Commission shall inform the committee
referred to in Article 22 of Regulation (EU) No 1025/2012 that it considers that the
conditions in paragraph 1 are fulfilled.
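Read together, the three conditions in paragraph 1 are cumulative. A minimal Python sketch of that gate follows; the parameter names are invented paraphrases of points (a) to (c), not terms defined by the Regulation.

    def may_adopt_common_specifications(reference_published: bool,
                                        request_made: bool,
                                        request_accepted: bool,
                                        delivered_on_time: bool,
                                        complies_with_request: bool) -> bool:
        # (a) no harmonised standard reference published in the Official Journal;
        # (b) a standardisation request has been made under Regulation 1025/2012;
        # (c) the request was not accepted, the standard was not delivered within
        #     the deadline, or the standard does not comply with the request.
        return (not reference_published
                and request_made
                and (not request_accepted
                     or not delivered_on_time
                     or not complies_with_request))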
2. In the early preparation of the draft implementing act establishing the common
specification, the Commission shall fulfil the objectives referred to in Article 40(2) and
gather the views of relevant bodies or expert groups established under relevant sectorial
Union law. Based on that consultation, the Commission shall prepare the draft
implementing act.
3. High-risk AI systems or general purpose AI systems which are in conformity with the
common specifications referred to in paragraph 1 shall be presumed to be in conformity
with the requirements set out in Chapter 2 of this Title or, as applicable, with requirements
set out in Article 4a and Article 4b, to the extent those common specifications cover those
requirements.
4. When references of a harmonised standard are published in the Official Journal of the
European Union, implementing acts referred to in paragraph 1, which cover the
requirements set out in Chapter 2 of this Title or requirements set out in Article 4a and
Article 4b, shall be repealed, as applicable.
5. When a Member State considers that a common specification does not entirely satisfy the
requirements set out in Chapter 2 of this Title or requirements set out in Article 4a and
Article 4b, as applicable, it shall inform the Commission thereof with a detailed
explanation and the Commission shall assess that information and, if appropriate, amend
the implementing act establishing the common specification in question.
Article 42
Presumption of conformity with certain requirements
1. High-risk AI systems that have been trained and tested on data reflecting the specific
geographical, behavioural or functional setting within which they are intended to be used
shall be presumed to be in compliance with the respective requirements set out in Article
10(4).
2. High-risk AI systems or general purpose AI systems that have been certified or for which a
statement of conformity has been issued under a cybersecurity scheme pursuant to
Regulation (EU) 2019/881 of the European Parliament and of the Council33 and the
references of which have been published in the Official Journal of the European Union
shall be presumed to be in compliance with the cybersecurity requirements set out in
Article 15 of this Regulation in so far as the cybersecurity certificate or statement of
conformity or parts thereof cover those requirements.
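Both presumptions in this Article follow the same pattern: a verifiable fact about the system triggers a presumption of compliance with a specific requirement. The Python sketch below is one illustrative reading, with invented parameter names.

    def presumed_requirements(intended_setting_data: bool,
                              cybersecurity_cert_published: bool) -> set:
        # Requirements presumed to be met under Article 42 (illustrative only).
        presumed = set()
        if intended_setting_data:          # paragraph 1
            presumed.add("Article 10(4)")
        if cybersecurity_cert_published:   # paragraph 2
            presumed.add("Article 15")
        return presumed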
Article 43
Conformity assessment
1. For high-risk AI systems listed in point 1 of Annex III, where, in demonstrating the
compliance of a high-risk AI system with the requirements set out in Chapter 2 of this
Title, the provider has applied harmonised standards referred to in Article 40, or, where
applicable, common specifications referred to in Article 41, the provider shall opt for one
of the following procedures:
(a) the conformity assessment procedure based on internal control referred to in Annex
VI; or
(b) the conformity assessment procedure based on assessment of the quality
management system and assessment of the technical documentation, with the
involvement of a notified body, referred to in Annex VII.
Where, in demonstrating the compliance of a high-risk AI system with the requirements set
out in Chapter 2 of this Title, the provider has not applied or has applied only in part
harmonised standards referred to in Article 40, or where such harmonised standards do not
exist and common specifications referred to in Article 41 are not available, the provider
shall follow the conformity assessment procedure set out in Annex VII.
33 Regulation (EU) 2019/881 of the European Parliament and of the Council of 17 April 2019 on ENISA (the
European Union Agency for Cybersecurity) and on information and communications technology cybersecurity
certification and repealing Regulation (EU) No 526/2013 (Cybersecurity Act) (OJ L 151, 7.6.2019, p. 1).
For the purpose of the conformity assessment procedure referred to in Annex VII, the
provider may choose any of the notified bodies. However, when the system is intended to
be put into service by law enforcement, immigration or asylum authorities as well as EU
institutions, bodies or agencies, the market surveillance authority referred to in Article
63(5) or (6), as applicable, shall act as a notified body.
2. For high-risk AI systems referred to in points 2 to 8 of Annex III and for general purpose
AI systems referred to in Title Ia, providers shall follow the conformity assessment
procedure based on internal control as referred to in Annex VI, which does not provide for
the involvement of a notified body.
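Paragraphs 1 and 2 effectively define a routing rule from the type of high-risk AI system and the use of harmonised standards to a conformity assessment procedure. The following Python sketch is one illustrative reading; the return strings and parameter names are invented for the example.

    def conformity_procedure(annex_iii_point: int, standards_fully_applied: bool) -> str:
        # Annex III point 1: the provider chooses between Annex VI and Annex VII
        # when harmonised standards or common specifications are fully applied;
        # otherwise Annex VII applies. Points 2 to 8 and general purpose AI
        # systems (modelled here as point 0) follow Annex VI (internal control).
        if annex_iii_point == 1:
            if standards_fully_applied:
                return "Annex VI or Annex VII (provider's choice)"
            return "Annex VII"
        return "Annex VI"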
3. For high-risk AI systems, to which legal acts listed in Annex II, section A, apply, the
provider shall follow the relevant conformity assessment as required under those legal acts.
The requirements set out in Chapter 2 of this Title shall apply to those high-risk AI systems
and shall be part of that assessment. Points 4.3., 4.4., 4.5. and the fifth paragraph of point
4.6 of Annex VII shall also apply.
For the purpose of that assessment, notified bodies which have been notified under those
legal acts shall be entitled to control the conformity of the high-risk AI systems with the
requirements set out in Chapter 2 of this Title, provided that the compliance of those
notified bodies with requirements laid down in Article 33(4), (9) and (10) has been
assessed in the context of the notification procedure under those legal acts.
Where the legal acts listed in Annex II, section A, enable the manufacturer of the product
to opt out from a third-party conformity assessment, provided that that manufacturer has
applied all harmonised standards covering all the relevant requirements, that manufacturer
may make use of that option only if he has also applied harmonised standards or, where
applicable, common specifications referred to in Article 41, covering the requirements set
out in Chapter 2 of this Title.
4. [deleted]
5. The Commission is empowered to adopt delegated acts in accordance with Article 73 for
the purpose of updating Annexes VI and VII in light of technical progress.
6. The Commission is empowered to adopt delegated acts to amend paragraphs 1 and 2 in
order to subject high-risk AI systems referred to in points 2 to 8 of Annex III to the
conformity assessment procedure referred to in Annex VII or parts thereof. The
Commission shall adopt such delegated acts taking into account the effectiveness of the
conformity assessment procedure based on internal control referred to in Annex VI in
preventing or minimizing the risks to health and safety and protection of fundamental
rights posed by such systems as well as the availability of adequate capacities and
resources among notified bodies.
Article 44
Certificates
1. Certificates issued by notified bodies in accordance with Annex VII shall be drawn up in
a language which can be easily understood by the relevant authorities in the Member State
in which the notified body is established.
2. Certificates shall be valid for the period they indicate, which shall not exceed five years.
On application by the provider, the validity of a certificate may be extended for further
periods, each not exceeding five years, based on a re-assessment in accordance with the
applicable conformity assessment procedures. Any supplement to a certificate shall remain
valid as long as the certificate which it supplements is valid.
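The five-year ceiling in paragraph 2 applies to the initial period and to every extension alike. A small Python sketch of the date arithmetic follows, under the simplifying assumption that validity is expressed in whole years; the names are invented for the example.

    from datetime import date

    MAX_VALIDITY_YEARS = 5  # ceiling set by Article 44(2)

    def certificate_expiry(valid_from: date, years: int) -> date:
        # Rejects any indicated period exceeding the five-year ceiling; naive
        # year arithmetic that ignores 29 February for simplicity.
        if not 1 <= years <= MAX_VALIDITY_YEARS:
            raise ValueError("certificate validity shall not exceed five years")
        return valid_from.replace(year=valid_from.year + years)

    first_expiry = certificate_expiry(date(2025, 1, 1), 5)   # 2030-01-01
    # An extension presupposes a re-assessment and is again capped at five years.
    renewed_expiry = certificate_expiry(first_expiry, 5)     # 2035-01-01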
3. Where a notified body finds that an AI system no longer meets the requirements set out in
Chapter 2 of this Title, it shall, taking account of the principle of proportionality, suspend
or withdraw the certificate issued or impose any restrictions on it, unless compliance with
those requirements is ensured by appropriate corrective action taken by the provider of the
system within an appropriate deadline set by the notified body. The notified body shall
give reasons for its decision.
Article 45
Appeal against decisions of notified bodies
An appeal procedure against decisions of the notified bodies shall be available.
Article 46
Information obligations of notified bodies
1. Notified bodies shall inform the notifying authority of the following:
(a) any Union technical documentation assessment certificates, any supplements to those
certificates, quality management system approvals issued in accordance with the
requirements of Annex VII;
(b) any refusal, restriction, suspension or withdrawal of a Union technical
documentation assessment certificate or a quality management system approval
issued in accordance with the requirements of Annex VII;
(c) any circumstances affecting the scope of or conditions for notification;
(d) any request for information which they have received from market surveillance
authorities regarding conformity assessment activities;
(e) on request, conformity assessment activities performed within the scope of their
notification and any other activity performed, including cross-border activities and
subcontracting.
2. Each notified body shall inform the other notified bodies of:
(a) quality management system approvals which it has refused, suspended or withdrawn,
and, upon request, of quality system approvals which it has issued;
(b) EU technical documentation assessment certificates or any supplements thereto
which it has refused, withdrawn, suspended or otherwise restricted, and, upon
request, of the certificates and/or supplements thereto which it has issued.
3. Each notified body shall provide the other notified bodies carrying out similar conformity
assessment activities covering the same AI systems with relevant information on issues
relating to negative and, on request, positive conformity assessment results.
4. The obligations referred to in paragraphs 1 to 3 shall be complied with in accordance with
Article 70.
Article 47
Derogation from conformity assessment procedure
1. By way of derogation from Article 43 and upon a duly justified request, any market
surveillance authority may authorise the placing on the market or putting into service of
specific high-risk AI systems within the territory of the Member State concerned, for
exceptional reasons of public security or the protection of life and health of persons,
environmental protection and the protection of key industrial and infrastructural assets.
That authorisation shall be for a limited period of time while the necessary conformity
assessment procedures are being carried out, taking into account the exceptional reasons
justifying the derogation. The completion of those procedures shall be undertaken without
undue delay.
1a. In a duly justified situation of urgency for exceptional reasons of public security or in case
of specific, substantial and imminent threat to the life or physical safety of natural persons,
law enforcement authorities or civil protection authorities may put a specific high-risk AI
system into service without the authorisation referred to in paragraph 1 provided that such
authorisation is requested during or after the use without undue delay, and if such
authorisation is rejected, its use shall be stopped with immediate effect and all the results
and outputs of this use shall be immediately discarded.
2. The authorisation referred to in paragraph 1 shall be issued only if the market surveillance
authority concludes that the high-risk AI system complies with the requirements of Chapter
2 of this Title. The market surveillance authority shall inform the Commission and the
other Member States of any authorisation issued pursuant to paragraph 1. This obligation
shall not cover sensitive operational data in relation to the activities of law enforcement
authorities.
3. [deleted]
4. [deleted]
5. [deleted]
6. For high-risk AI systems related to products covered by Union harmonisation legislation
referred to in Annex II Section A, only the conformity assessment derogation procedures
established in that legislation shall apply.
Article 48
EU declaration of conformity
1. The provider shall draw up a written or electronically signed EU declaration of conformity
for each AI system and keep it at the disposal of the national competent authorities for 10
years after the AI system has been placed on the market or put into service. The EU
declaration of conformity shall identify the AI system for which it has been drawn up. A
copy of the EU declaration of conformity shall be submitted to the relevant national
competent authorities upon request.
2. The EU declaration of conformity shall state that the high-risk AI system in question meets
the requirements set out in Chapter 2 of this Title. The EU declaration of conformity shall
contain the information set out in Annex V and shall be translated into a language that can
be easily understood by the national competent authorities of the Member State(s) in which
the high-risk AI system is made available.
3. Where high-risk AI systems are subject to other Union harmonisation legislation which
also requires an EU declaration of conformity, a single EU declaration of conformity shall
be drawn up in respect of all Union legislations applicable to the high-risk AI system. The
declaration shall contain all the information required for identification of the Union
harmonisation legislation to which the declaration relates.
4. By drawing up the EU declaration of conformity, the provider shall assume responsibility
for compliance with the requirements set out in Chapter 2 of this Title. The provider shall
keep the EU declaration of conformity up-to-date as appropriate.
5. The Commission shall be empowered to adopt delegated acts in accordance with Article 73
for the purpose of updating the content of the EU declaration of conformity set out in
Annex V in order to introduce elements that become necessary in light of technical
progress.
Article 49
CE marking of conformity
1. The CE marking of conformity shall be subject to the general principles set out in Article
30 of Regulation (EC) No 765/2008.
2. The CE marking shall be affixed visibly, legibly and indelibly for high-risk AI systems.
Where that is not possible or not warranted on account of the nature of the high-risk AI
system, it shall be affixed to the packaging or to the accompanying documentation, as
appropriate.
3. Where applicable, the CE marking shall be followed by the identification number of the
notified body responsible for the conformity assessment procedures set out in Article 43.
The identification number shall also be indicated in any promotional material which
mentions that the high-risk AI system fulfils the requirements for CE marking.
Article 50
[deleted]
Article 51
Registration of relevant operators and of high-risk AI systems listed in Annex III
1. Before placing on the market or putting into service a high-risk AI system listed in Annex
III with the exception of high-risk AI systems referred to in Annex III, points 1, 6 and 7 in
the areas of law enforcement, migration, asylum and border control management, and high
risk AI systems referred to in Annex III point 2, the provider and where applicable, the
authorised representative shall register themselves in the EU database referred to in Article
60. The provider or, where applicable the authorised representative, shall also register their
systems in that database.
2. Before using a high-risk AI system listed in Annex III, users of high-risk AI systems that
are public authorities, agencies or bodies, or entities acting on their behalf, shall register
themselves in the EU database referred to in Article 60 and select the system that they
envisage using.
The obligations laid down in the previous subparagraph shall not apply to law
enforcement, border control, immigration or asylum authorities, agencies or bodies and
authorities, agencies or bodies using high-risk AI systems referred to Annex III point 2, as
well as to entities acting on their behalf.
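The registration duties in this Article turn on the Annex III point under which a system falls and, for some points, on the area of use. One illustrative Python reading, with invented parameter names, is the following.

    def provider_must_register(annex_iii_point: int, in_exempt_area: bool) -> bool:
        # Paragraph 1: registration before placing on the market or putting into
        # service, except point 2 systems and points 1, 6 and 7 systems in the
        # areas of law enforcement, migration, asylum and border control.
        if annex_iii_point == 2:
            return False
        if annex_iii_point in {1, 6, 7} and in_exempt_area:
            return False
        return True

    def public_user_must_register(is_public_authority: bool, is_exempt_authority: bool) -> bool:
        # Paragraph 2: public-sector users register and select the system, unless
        # covered by the carve-out in the second subparagraph.
        return is_public_authority and not is_exempt_authority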
TITLE IV
TRANSPARENCY OBLIGATIONS FOR PROVIDERS AND USERS OF
CERTAIN AI SYSTEMS
Article 52
Transparency obligations for providers and users of certain AI systems
1. Providers shall ensure that AI systems intended to interact with natural persons are
designed and developed in such a way that natural persons are informed that they are
interacting with an AI system, unless this is obvious from the point of view of a natural
person who is reasonably well-informed, observant and circumspect, taking into account
the circumstances and the context of use. This obligation shall not apply to AI systems
authorised by law to detect, prevent, investigate and prosecute criminal offences, subject to
appropriate safeguards for the rights and freedoms of third parties, unless those systems are
available for the public to report a criminal offence.
2. Users of a biometric categorisation system shall inform of the operation of the system the
natural persons exposed thereto. This obligation shall not apply to AI systems used for
biometric categorisation, which are permitted by law to detect, prevent and investigate
criminal offences, subject to appropriate safeguards for the rights and freedoms of third
parties.
2a. Users of an emotion recognition system shall inform of the operation of the system the
natural persons exposed thereto. This obligation shall not apply to AI systems used for
emotion recognition which are permitted by law to detect, prevent and investigate criminal
offences, subject to appropriate safeguards for the rights and freedoms of third parties.
3. Users of an AI system that generates or manipulates image, audio or video content that
appreciably resembles existing persons, objects, places or other entities or events and
would falsely appear to a person to be authentic or truthful (‘deep fake’), shall disclose that
the content has been artificially generated or manipulated.
However, the first subparagraph shall not apply where the use is authorised by law to
detect, prevent, investigate and prosecute criminal offences or where the content is part of
an evidently creative, satirical, artistic or fictional work or programme subject to
appropriate safeguards for the rights and freedoms of third parties.
3a. The information referred to in paragraphs 1 to 3 shall be provided to natural persons in a
clear and distinguishable manner at the latest at the time of the first interaction or exposure.
4. Paragraphs 1, 2, 2a, 3 and 3a shall not affect the requirements and obligations set out in
Title III of this Regulation and shall be without prejudice to other transparency obligations
for users of AI systems laid down in Union or national law.
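The disclosure duties of this Article all follow an obligation-unless-exception shape. A minimal Python sketch of paragraph 3 (the 'deep fake' duty), with invented parameter names, is given below.

    def deepfake_disclosure_required(authorised_by_law_for_offences: bool,
                                     evidently_creative_work: bool) -> bool:
        # First subparagraph of paragraph 3: disclosure is required, unless the
        # use is authorised by law for criminal offences or the content is part
        # of an evidently creative, satirical, artistic or fictional work.
        return not (authorised_by_law_for_offences or evidently_creative_work)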
TITLE V
MEASURES IN SUPPORT OF INNOVATION
Article 53
AI regulatory sandboxes
-1a. National competent authorities may establish AI regulatory sandboxes for the
development, training, testing and validation of innovative AI systems under the direct
supervision, guidance and support by the national competent authority, before those
systems are placed on the market or put into service. Such regulatory sandboxes may
include testing in real world conditions supervised by the national competent authorities.
-1b. [deleted]
-1c. Where appropriate, national competent authorities shall cooperate with other relevant
authorities and may allow for the involvement of other actors within the AI ecosystem.
-1d. This Article shall not affect other regulatory sandboxes established under national or Union
law, including in cases where the products or services that are tested in them are linked to
the use of innovative AI systems. Member States shall ensure an appropriate level of
cooperation between the authorities supervising those other sandboxes and the national
competent authorities.
1. [deleted]
1a. [deleted]
1b. The establishment of AI regulatory sandboxes under this Regulation shall aim to contribute
to one or more of the following objectives:
a) foster innovation and competitiveness and facilitate the development of an AI
ecosystem;
b) facilitate and accelerate access to the Union market for AI systems, in particular when
provided by small and medium enterprises (SMEs), including start-ups;
c) improve legal certainty and contribute to the sharing of best practices through
cooperation with the authorities involved in the AI regulatory sandbox with a view to
ensuring future compliance with this Regulation and, where appropriate, with other
Union and Member States legislation;
d) contribute to evidence-based regulatory learning.
2. [deleted]
2a. Access to the AI regulatory sandboxes shall be open to any provider or prospective
provider of an AI system who fulfils the eligibility and selection criteria referred to in
paragraph 6(a) and who has been selected by the national competent authorities following
the selection procedure referred to in paragraph 6(b). Providers or prospective providers
may also submit applications in partnership with users or any other relevant third parties.
Participation in the AI regulatory sandbox shall be limited to a period that is appropriate to
the complexity and scale of the project. This period may be extended by the national
competent authority.
Participation in the AI regulatory sandbox shall be based on a specific plan referred to in
paragraph 6 of this Article that shall be agreed between the participant(s) and the national
competent authority(ies), as applicable.
3. The participation in the AI regulatory sandboxes shall not affect the supervisory and
corrective powers of the authorities supervising the sandbox. Those authorities shall
exercise their supervisory powers in a flexible manner within the limits of the relevant
legislation, using their discretionary powers when implementing legal provisions to a
specific AI sandbox project, with the objective of supporting innovation in AI in the
Union.
Provided that the participant(s) respect the sandbox plan and the terms and conditions for
their participation as referred to in paragraph 6(c) and follow in good faith the guidance
given by the authorities, no administrative fines shall be imposed by the authorities for
infringement of applicable Union or Member State legislation relating to the AI system
supervised in the sandbox, including the provisions of this Regulation.
4. The participants remain liable under applicable Union and Member States liability
legislation for any damage caused in the course of their participation in an AI regulatory
sandbox.
4a. Upon request of the provider or prospective provider of the AI system, the national
competent authority shall provide, where applicable, a written proof of the activities
successfully carried out in the sandbox. The national competent authority shall also
provide an exit report detailing the activities carried out in the sandbox and the related
results and learning outcomes. Such written proof and exit report could be taken into
account by market surveillance authorities or notified bodies, as applicable, in the context
of conformity assessment procedures or market surveillance checks.
Subject to the confidentiality provisions in Article 70 and with the agreement of the
sandbox participants, the European Commission and the AI Board shall be authorised to
access the exit reports and shall take them into account, as appropriate, when exercising
their tasks under this Regulation. If both the participant and the national competent
authority explicitly agree to this, the exit report can be made publicly available through the
single information platform referred to in Article 55(3)(b).
4b. The AI regulatory sandboxes shall be designed and implemented in such a way that, where
relevant, they facilitate cross-border cooperation between the national competent
authorities.
5. National competent authorities shall make publicly available annual reports on the
implementation of the AI regulatory sandboxes, including good practices, lessons learnt
and recommendations on their setup and, where relevant, on the application of this
Regulation and other Union legislation supervised within the sandbox. Those annual
reports shall be submitted to the AI Board which shall make publicly available a summary
of all good practices, lessons learnt and recommendations. This obligation to make annual
reports publicly available shall not cover sensitive operational data in relation to the
activities of law enforcement, border control, immigration or asylum authorities. The
Commission and the AI Board shall, where appropriate, take the annual reports into
account when exercising their tasks under this Regulation.
5b. The Commission shall ensure that information about AI regulatory sandboxes, including
about those established under this Article, is available through the single information
platform referred to in Article 55(3)(b).
6. The modalities and the conditions for the establishment and operation of the AI regulatory
sandboxes under this Regulation shall be adopted through implementing acts in accordance
with the examination procedure referred to in Article 74(2).
The modalities and conditions shall to the best extent possible support flexibility for
national competent authorities to establish and operate their AI regulatory sandboxes,
foster innovation and regulatory learning and shall particularly take into account the
special circumstances and capacities of participating SMEs, including start-ups.
Those implementing acts shall include common main principles on the following issues:
a) eligibility and selection for participation in the AI regulatory sandbox;
b) procedure for the application, participation, monitoring, exiting from and
termination of the AI regulatory sandbox, including the sandbox plan and the exit
report;
c) the terms and conditions applicable to the participants.
7. When national competent authorities consider authorising testing in real world conditions
supervised within the framework of an AI regulatory sandbox established under this
Article, they shall specifically agree with the participants on the terms and conditions of
such testing and in particular on the appropriate safeguards with the view to protect
fundamental rights, health and safety. Where appropriate, they shall cooperate with other
national competent authorities with a view to ensuring consistent practices across the Union.
Article 54
Further processing of personal data for developing certain AI systems in the public interest in the
AI regulatory sandbox
1. In the AI regulatory sandbox personal data lawfully collected for other purposes may be
processed for the purposes of developing, testing and training of innovative AI systems in
the sandbox under the following cumulative conditions:
(a) the innovative AI systems shall be developed for safeguarding substantial public
interest by a public authority or another natural or legal person governed by public
law or by private law and in one or more of the following areas:
(i) [deleted]
(ii) public safety and health, including prevention, control and treatment of disease
and improvement of health care systems;
(iii) protection and improvement of the quality of the environment, including green
transition, climate change mitigation and adaptation;
(iv) energy sustainability, transport and mobility;
(v) efficiency and quality of public administration and public services;
(vi) cybersecurity and resilience of critical infrastructure.
(b) the data processed are necessary for complying with one or more of the requirements
referred to in Title III, Chapter 2 where those requirements cannot be effectively
fulfilled by processing anonymised, synthetic or other non-personal data;
(c) there are effective monitoring mechanisms to identify if any high risks to the rights
and freedoms of the data subjects, as referred to in Article 35 of Regulation (EU)
2016/679 and in Article 39 of Regulation (EU) 2018/1725, may arise during the
sandbox experimentation, as well as a response mechanism to promptly mitigate those
risks and, where necessary, stop the processing;
(d) any personal data to be processed in the context of the sandbox are in a functionally
separate, isolated and protected data processing environment under the control of the
participants and only authorised persons have access to that data;
(e) any personal data processed are not to be transmitted, transferred or otherwise
accessed by other parties that are not participants in the sandbox, unless such
disclosure occurs in compliance with Regulation (EU) 2016/679 or, where
applicable, Regulation (EU) 2018/1725, and all participants have agreed to it;
(f) any processing of personal data in the context of the sandbox shall not affect the
application of the rights of the data subjects as provided for under Union law on the
protection of personal data, in particular in Article 22 of Regulation (EU) 2016/679
and Article 24 of Regulation (EU) 2018/1725;
(g) any personal data processed in the context of the sandbox are protected by means of
appropriate technical and organisational measures and deleted once the participation
in the sandbox has terminated or the personal data has reached the end of its retention
period;
(h) the logs of the processing of personal data in the context of the sandbox are kept for
the duration of the participation in the sandbox, unless provided otherwise by Union
or national law;
(i) complete and detailed description of the process and rationale behind the training,
testing and validation of the AI system is kept together with the testing results as part
of the technical documentation in Annex IV;
(j) a short summary of the AI project developed in the sandbox, its objectives and
expected results, published on the website of the competent authorities. This
obligation shall not cover sensitive operational data in relation to the activities of law
enforcement, border control, immigration or asylum authorities.
1a. For the purpose of prevention, investigation, detection or prosecution of criminal offences
or the execution of criminal penalties, including the safeguarding against and the
prevention of threats to public security, under the control and responsibility of law
enforcement authorities, the processing of personal data in AI regulatory sandboxes shall
be based on a specific Member State or Union law and subject to the same cumulative
conditions as referred to in paragraph 1.
2. Paragraph 1 is without prejudice to Union or Member States laws laying down the basis for
the processing of personal data which is necessary for the purpose of developing, testing
and training of innovative AI systems or any other legal basis, in compliance with Union
law on the protection of personal data.
Article 54a
Testing of high-risk AI systems in real world conditions outside AI regulatory sandboxes
1. Testing of AI systems in real world conditions outside AI regulatory sandboxes may be
conducted by providers or prospective providers of high-risk AI systems listed in Annex
III, in accordance with the provisions of this Article and the real-world testing plan
referred to in this Article.
The detailed elements of the real-world testing plan shall be specified in implementing acts
adopted by the Commission in accordance with the examination procedure referred to in
Article 74(2).
This provision shall be without prejudice to Union or Member State legislation for the
testing in real world conditions of high-risk AI systems related to products covered by
legislation listed in Annex II.
2. Providers or prospective providers may conduct testing of high-risk AI systems referred to
in Annex III in real world conditions at any time before the placing on the market or
putting into service of the AI system on their own or in partnership with one or more
prospective users.
3. The testing of high-risk AI systems in real world conditions under this Article shall be
without prejudice to ethical review that may be required by national or Union law.
4. Providers or prospective providers may conduct the testing in real world conditions only
where all of the following conditions are met:
(a) the provider or prospective provider has drawn up a real-world testing plan and
submitted it to the market surveillance authority in the Member State(s) where the
testing in real world conditions is to be conducted;
(b) the market surveillance authority in the Member State(s) where the testing in real
world conditions is to be conducted has not objected to the testing within 30 days
after its submission;
(c) the provider or prospective provider with the exception of high-risk AI systems
referred to in Annex III, points 1, 6 and 7 in the areas of law enforcement, migration,
asylum and border control management, and high risk AI systems referred to in Annex
III point 2, has registered the testing in real world conditions in the EU database
referred to in Article 60(5a) with a Union-wide unique single identification number
and the information specified in Annex VIIIa;
(d) the provider or prospective provider conducting the testing in real world conditions is
established in the Union or it has appointed a legal representative for the purpose of
the testing in real world conditions who is established in the Union;
(e) data collected and processed for the purpose of the testing in real world conditions
shall not be transferred to countries outside the Union, unless the transfer and the
processing provides equivalent safeguards to those provided under Union law;
(f) the testing in real world conditions does not last longer than necessary to achieve its
objectives and in any case not longer than 12 months;
(g) persons belonging to vulnerable groups due to their age, physical or mental disability
are appropriately protected;
(h) [deleted]
(i) where a provider or prospective provider organises the testing in real world conditions
in cooperation with one or more prospective users, the latter have been informed of all
aspects of the testing that are relevant to their decision to participate, and given the
relevant instructions on how to use the AI system referred to in Article 13; the
provider or prospective provider and the user(s) shall conclude an agreement
specifying their roles and responsibilities with a view to ensuring compliance with the
provisions for testing in real world conditions under this Regulation and other
applicable Union and Member States legislation;
(j) the subjects of the testing in real world conditions have given informed consent in
accordance with Article 54b, or in the case of law enforcement, where the seeking of
informed consent would prevent the AI system from being tested, the testing itself and
the outcome of the testing in the real world conditions shall not have a negative effect
on the subject;
(k) the testing in real world conditions is effectively overseen by the provider or
prospective provider and user(s) with persons who are suitably qualified in the
relevant field and have the necessary capacity, training and authority to perform their
tasks;
(l) the predictions, recommendations or decisions of the AI system can be effectively
reversed or disregarded.
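Since the conditions in paragraph 4 are cumulative, a compliance check can be modelled as a simple all-of gate. The Python sketch below paraphrases the conditions as boolean flags; the field names are invented and condition (h) is omitted as deleted.

    from dataclasses import dataclass, fields

    @dataclass
    class RealWorldTestingConditions:
        plan_submitted: bool                # (a)
        no_objection_within_30_days: bool   # (b)
        registered_where_required: bool     # (c)
        union_establishment_or_rep: bool    # (d)
        transfer_safeguards: bool           # (e)
        within_twelve_months: bool          # (f)
        vulnerable_groups_protected: bool   # (g)
        user_agreement_in_place: bool       # (i)
        informed_consent_or_carveout: bool  # (j)
        effective_oversight: bool           # (k)
        outputs_reversible: bool            # (l)

    def testing_permitted(conditions: RealWorldTestingConditions) -> bool:
        # "only where all of the following conditions are met"
        return all(getattr(conditions, f.name) for f in fields(conditions))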
5. Any subject of the testing in real world conditions, or his or her legally designated
representative, as appropriate, may, without any resulting detriment and without having to
provide any justification, withdraw from the testing at any time by revoking his or her
informed consent. The withdrawal of the informed consent shall not affect the activities
already carried out and the use of data obtained based on the informed consent before its
withdrawal.
6. Any serious incident identified in the course of the testing in real world conditions shall be
reported to the national market surveillance authority in accordance with Article 62 of this
Regulation. The provider or prospective provider shall adopt immediate mitigation
measures or, failing that, suspend the testing in real world conditions until such mitigation
takes place or otherwise terminate it. The provider or prospective provider shall establish a
procedure for the prompt recall of the AI system upon such termination of the testing in
real world conditions.
7. Providers or prospective providers shall notify the national market surveillance authority in
the Member State(s) where the testing in real world conditions is to be conducted of the
suspension or termination of the testing in real world conditions and the final outcomes.
8. The provider and prospective provider shall be liable under applicable Union and Member
States liability legislation for any damage caused in the course of their participation in the
testing in real world conditions.
Article 54b
Informed consent to participate in testing in real world conditions outside AI regulatory sandboxes
1. For the purpose of testing in real world conditions under Article 54a, informed consent
shall be freely given by the subject of testing prior to his or her participation in such testing
and after having been duly informed with concise, clear, relevant, and understandable
information regarding:
(i) the nature and objectives of the testing in real world conditions and the possible
inconvenience that may be linked to his or her participation;
(ii) the conditions under which the testing in real world conditions is to be conducted,
including the expected duration of the subject's participation;
(iii) the subject's rights and guarantees regarding participation, in particular his or her
right to refuse to participate in and the right to withdraw from testing in real world
conditions at any time without any resulting detriment and without having to provide
any justification;
(iv) the modalities for requesting the reversal or the disregard of the predictions,
recommendations or decisions of the AI system;
(v) the Union-wide unique single identification number of the testing in real world
conditions in accordance with Article 54a(4), point (c), and the contact details of the provider
or its legal representative from whom further information can be obtained.
2. The informed consent shall be dated and documented and a copy shall be given to the
subject or his or her legal representative.
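The items listed in paragraph 1, together with the documentation duty in paragraph 2, suggest a simple consent record. The following Python sketch is illustrative only; every field name is invented rather than drawn from the Regulation.

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class InformedConsentRecord:
        nature_and_objectives: str      # point (i)
        conditions_and_duration: str    # point (ii)
        rights_and_guarantees: str      # point (iii), including withdrawal
        reversal_modalities: str        # point (iv)
        testing_id_number: str          # point (v): Union-wide identifier
        provider_contact: str           # point (v)
        consent_date: date              # paragraph 2: dated and documented
        copy_given_to_subject: bool     # paragraph 2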
Article 55
Support measures for operators, in particular SMEs, including start-ups
1. Member States shall undertake the following actions:
(a) provide SMEs, including start-ups, with priority access to the AI regulatory
sandboxes to the extent that they fulfil the eligibility and selection criteria;
(b) organise specific awareness raising and training activities about the application of
this Regulation tailored to the needs of the SMEs, including start-ups, and, as
appropriate, local public authorities;
(c) where appropriate, establish a dedicated channel for communication with SMEs,
including start-ups and, as appropriate, local public authorities to provide advice and
respond to queries about the implementation of this Regulation, including as regards
participation in AI regulatory sandboxes.
2. The specific interests and needs of the SME providers, including start-ups, shall be taken
into account when setting the fees for conformity assessment under Article 43, reducing
those fees proportionately to their size, market size and other relevant indicators.
3. The Commission shall undertake the following actions:
(a) upon request of the AI Board, provide standardised templates for the areas covered
by this Regulation;
(b) develop and maintain a single information platform providing easy to use
information in relation to this Regulation for all operators across the Union;
(c) organise appropriate communication campaigns to raise awareness about the
obligations arising from this Regulation;
(d) evaluate and promote the convergence of best practices in public procurement
procedures in relation to AI systems.
Article 55a
Derogations for specific operators
1. The obligations laid down in Article 17 of this Regulation shall not apply to microenterprises
as defined in Article 2(3) of the Annex to the Commission Recommendation 2003/361/EC
concerning the definition of micro, small and medium-sized enterprises, provided those
enterprises do not have partner enterprises or linked enterprises as defined in Article 3 of the
same Annex.
2. Paragraph 1 shall not be interpreted as exempting those operators from fulfilling any other
requirements and obligations laid down in this Regulation, including those established in
Articles 9, 61 and 62.
3. Requirements and obligations for general purpose AI systems laid down in Article 4b shall
not apply to micro, small and medium-sized enterprises, provided those enterprises do not
have partner enterprises or linked enterprises as defined in Article 3 of the Annex to the
Commission Recommendation 2003/361/EC concerning the definition of micro, small and
medium-sized enterprises.
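The derogations in this Article reduce to two membership tests on the enterprise. A hedged Python sketch follows; the parameter names are invented paraphrases of the conditions in paragraphs 1 and 3.

    def article_17_exempt(is_microenterprise: bool, has_partner_or_linked: bool) -> bool:
        # Paragraph 1: Article 17 obligations do not apply to microenterprises
        # (Recommendation 2003/361/EC) without partner or linked enterprises.
        return is_microenterprise and not has_partner_or_linked

    def article_4b_exempt(is_sme: bool, has_partner_or_linked: bool) -> bool:
        # Paragraph 3: Article 4b obligations for general purpose AI systems do
        # not apply to micro, small and medium-sized enterprises without partner
        # or linked enterprises.
        return is_sme and not has_partner_or_linked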
TITLE VI
GOVERNANCE
CHAPTER 1
EUROPEAN ARTIFICIAL INTELLIGENCE BOARD
Article 56
Establishment and structure of the European Artificial Intelligence Board
1. A ‘European Artificial Intelligence Board’ (the ‘Board’) is established.
2. The Board shall be composed of one representative per Member State. The European Data
Protection Supervisor shall participate as an observer. The Commission shall also attend
the Board’s meetings without taking part in the votes.
Other national and Union authorities, bodies or experts may be invited to the meetings by
the Board on a case by case basis, where the issues discussed are of relevance for them.
2a. Each representative shall be designated by their Member State for a period of 3 years,
renewable once.
2aa. Member States shall ensure that their representatives in the Board:
(i) have the relevant competences and powers in their Member State so as to contribute
actively to the achievement of the board’s tasks referred to in Article 58;
(ii) are designated as a single contact point vis-à-vis the Board and, where appropriate,
taking into account Member States’ needs, as a single contact point for stakeholders;
(iii) are empowered to facilitate consistency and coordination between national
competent authorities in their Member State as regards the implementation of this
Regulation, including through the collection of relevant data and information for the
purpose of fulfilling their tasks on the Board.
3. The designated representatives of the Member States shall adopt the Board’s rules of
procedure by a two-thirds majority.
The rules of procedure shall, in particular, lay down procedures for the selection process,
duration of mandate and specifications of the tasks of the Chair, the voting modalities, and
the organisation of the Board’s activities and its sub-groups.
The Board shall establish a standing subgroup serving as a platform for stakeholders to
advise the Board on all issues related to the implementation of this Regulation, including
on the preparation of implementing and delegated acts. To this purpose, organisations
representing the interests of the providers and users of AI systems, including SMEs and
start-ups, as well as civil society organisations, representatives of affected persons,
researchers, standardisation organisations, notified bodies, laboratories and testing and
experimentation facilities shall be invited to participate in this sub-group. The Board shall
establish two standing sub-groups to provide a platform for cooperation and exchange
among market surveillance authorities and notifying authorities on issues related to market
surveillance and notified bodies respectively.
The Board may establish other standing or temporary sub-groups as appropriate for the
purpose of examining specific issues. Where appropriate, stakeholders referred to in the
previous subparagraph may be invited to such sub-groups or to specific meetings of those
sub-groups in the capacity of observers.
3a. The Board shall be organised and operated so as to safeguard the objectivity and
impartiality of its activities.
4. The Board shall be chaired by one of the representatives of the Member States. Upon
request of the Chair, the Commission shall convene the meetings and prepare the agenda in
accordance with the tasks of the Board pursuant to this Regulation and its rules of
procedure. The Commission shall provide administrative and analytical support for the
activities of the Board pursuant to this Regulation.
Article 57
[deleted]
Article 58
Tasks of the Board
The Board shall advise and assist the Commission and the Member States in order to facilitate the
consistent and effective application of this Regulation. For this purpose, the Board may in particular:
(a) collect and share technical and regulatory expertise and best practices among Member
States;
(b) contribute to the harmonisation of administrative practices in the Member States, including
in relation to the derogation from the conformity assessment procedures referred to in
Article 47, the functioning of regulatory sandboxes and testing in real world conditions
referred to in Articles 53, 54 and 54a;
(c) upon the request of the Commission or on its own initiative, issue recommendations and
written opinions on any relevant matters related to the implementation of this Regulation
and to its consistent and effective application, including:
(i) on technical specifications or existing standards regarding the requirements set out in
Title III, Chapter 2,
(ii) on the use of harmonised standards or common specifications referred to in Articles
40 and 41,
(iii) on the preparation of guidance documents, including the guidelines concerning the
setting of administrative fines referred to in Article 71;
(d) advise the Commission on the potential need for amendment of Annex III in accordance with
Articles 4 and 7, taking into account relevant available evidence and the latest developments in
technology;
(e) advise the Commission during the preparation of delegated or implementing acts pursuant to
this Regulation;
(f) cooperate, as appropriate, with relevant EU bodies, expert groups and networks, in particular
in the fields of product safety, cybersecurity, competition, digital and media services,
financial services, cryptocurrencies, consumer protection, data and fundamental rights
protection;
(g) contribute and provide relevant advice to the Commission in the development of the guidance
referred to in Article 58a or request the development of such guidance;
(h) assist the work of market surveillance authorities and, in cooperation with and subject to
the agreement of the concerned market surveillance authorities, promote and support cross-border
market surveillance investigations, including with respect to the emergence of risks of
systemic nature that may stem from AI systems;
(i) contribute to the assessment of training needs for staff of Member States involved in
implementing this Regulation;
(j) advise the Commission in relation to international matters on artificial intelligence.
CHAPTER 1A
GUIDELINES FROM THE COMMISSION
Article 58a
Guidelines from the Commission on the implementation of this Regulation
1. Upon the request of the Member States or the Board, or on its own initiative, the Commission
shall issue guidelines on the practical implementation of this Regulation, and in particular on
(i) the application of the requirements referred to in Articles 8 - 15;
(ii) the prohibited practices referred to in Article 5;
(iii) the practical implementation of the provisions related to substantial modification;
(iv) the practical implementation of uniform conditions referred to in Article 6, paragraph 3,
including examples in relation to high risk AI systems referred to in Annex III;
(v) the practical implementation of transparency obligations laid down in Article 52;
(vi) the relationship of this Regulation with other relevant Union legislation, including as
regards consistency in their enforcement.
When issuing such guidelines, the Commission shall pay particular attention to the needs of SMEs
including start-ups, local public authorities and sectors most likely to be affected by this Regulation.
CHAPTER 2
NATIONAL COMPETENT AUTHORITIES
Article 59
Designation of national competent authorities
1. [deleted]
2. Each Member State shall establish or designate at least one notifying authority and at least
one market surveillance authority for the purpose of this Regulation as national competent
authorities. These national competent authorities shall be organised so as to safeguard the
principles of objectivity and impartiality of their activities and tasks. Provided that those
principles are respected, such activities and tasks may be performed by one or several
designated authorities, in accordance with the organisational needs of the Member State.
3. Member States shall inform the Commission of their designation or designations.
4. Member States shall ensure that national competent authorities are provided with adequate
financial resources, technical equipment and well qualified human resources to effectively
fulfil their tasks under this Regulation.
5. By [one year after entry into force of this Regulation] and thereafter six months before
the deadline referred to in Article 84(2), Member States shall inform the Commission of
the status of the financial resources, technical equipment and human resources of the
national competent authorities with an assessment of their adequacy. The Commission
shall transmit that information to the Board for discussion and possible recommendations.
6. The Commission shall facilitate the exchange of experience between national competent
authorities.
7. National competent authorities may provide advice on the implementation of this
Regulation, including tailored to SME providers, including start-ups. Whenever national
competent authorities intend to provide guidance and advice with regard to an AI system in
areas covered by other Union legislation, the competent national authorities under that
Union legislation shall be consulted, as appropriate. Member States may also establish one
central contact point for communication with operators.
8. When Union institutions, agencies and bodies fall within the scope of this Regulation, the
European Data Protection Supervisor shall act as the competent authority for their
supervision.
TITLE VII
EU DATABASE FOR HIGH-RISK AI SYSTEMS LISTED IN ANNEX III
Article 60
EU database for high-risk AI systems listed in Annex III
1. The Commission shall, in collaboration with the Member States, set up and maintain an EU
database containing information referred to in paragraph 2 concerning relevant operators
and high-risk AI systems listed in Annex III which are registered in accordance with
Articles 51 and 54a. When setting the functional specifications of such database, the
Commission shall consult the AI Board.
2. The data listed in Annex VIII, Part I, shall be entered into the EU database by the
providers, authorised representatives and relevant users, as applicable, upon their
registration. The data listed in Annex VIII, Part II, 1 to 11, shall be entered into the EU
database by the providers, or where applicable by the authorised representative, in
accordance with Article 51. The data referred to in Annex VIII, Part II, 12 shall be
automatically generated by the database based on the information provided by relevant
users pursuant to Article 51(2). The data listed in Annex VIIIa shall be entered into the
database by the prospective providers or providers in accordance with Article 54a.
3. [deleted]
4. The EU database shall contain no personal data, except for the information listed in Annex
VIII, and shall be without prejudice to Article 70.
5. The Commission shall be the controller of the EU database. It shall make available to
providers, prospective providers and users adequate technical and administrative support.
5a. Information contained in the EU database registered in accordance with Article 51 shall be
accessible to the public. The information registered in accordance with Article 54a shall be
accessible only to market surveillance authorities and the Commission, unless the
prospective provider or provider has given consent for making this information also
accessible to the public.
TITLE VIII
POST-MARKET MONITORING, INFORMATION SHARING, MARKET
SURVEILLANCE
CHAPTER 1
POST-MARKET MONITORING
Article 61
Post-market monitoring by providers and post-market monitoring plan for high-risk AI systems
1. Providers shall establish and document a post-market monitoring system in a manner that
is proportionate to the risks of the high-risk AI system.
2. In order to allow the provider to evaluate the compliance of AI systems with the
requirements set out in Title III, Chapter 2 throughout their life cycle, the post-market
monitoring system shall collect, document and analyse relevant data, which may be
provided by users or which may be collected through other sources on the performance of
high-risk AI systems. This obligation shall not cover sensitive operational data of users of
AI systems which are law enforcement authorities.
3. The post-market monitoring system shall be based on a post-market monitoring plan. The
post-market monitoring plan shall be part of the technical documentation referred to in
Annex IV. The Commission shall adopt an implementing act laying down detailed
provisions establishing a template for the post-market monitoring plan and the list of
elements to be included in the plan.
4. For high-risk AI systems covered by the legal acts referred to in Annex II, Section A,
where a post-market monitoring system and plan are already established under that
legislation, the post-market monitoring documentation as prepared under that legislation
shall be deemed sufficient, provided that the template referred to in paragraph 3 is used.
The first subparagraph shall also apply to high-risk AI systems referred to in point 5 of
Annex III placed on the market or put into service by financial institutions that are subject
to requirements regarding their internal governance, arrangements or processes under
Union financial services legislation.
CHAPTER 2
SHARING OF INFORMATION ON SERIOUS INCIDENTS
Article 62
Reporting of serious incidents
1. Providers of high-risk AI systems placed on the Union market shall report any serious
incident to the market surveillance authorities of the Member States where that incident
occurred.
Such notification shall be made immediately after the provider has established a causal link
between the AI system and the serious incident or the reasonable likelihood of such a link,
and, in any event, not later than 15 days after the provider becomes aware of the serious
incident.
2. Upon receiving a notification related to a serious incident referred to in Article 3(44)(c),
the relevant market surveillance authority shall inform the national public authorities or
bodies referred to in Article 64(3). The Commission shall develop dedicated guidance to
facilitate compliance with the obligations set out in paragraph 1. That guidance shall be
issued 12 months after the entry into force of this Regulation, at the latest.
3. For high-risk AI systems referred to in point 5 of Annex III which are placed on the market
or put into service by providers that are financial institutions that are subject to
requirements regarding their internal governance, arrangements or processes under Union
financial services legislation, the notification of serious incidents shall be limited to those
referred to in Article 3(44)(c).
4. For high-risk AI systems which are safety components of devices, or are themselves
devices, covered by Regulation (EU) 2017/745 and Regulation (EU) 2017/746 the
notification of serious incidents shall be limited to those referred to in Article 3(44)(c) and
be made to the national competent authority chosen for this purpose by the Member States
where that incident occurred.
CHAPTER 3
ENFORCEMENT
Article 63
Market surveillance and control of AI systems in the Union market
1. Regulation (EU) 2019/1020 shall apply to AI systems covered by this Regulation.
However, for the purpose of the effective enforcement of this Regulation:
(a) any reference to an economic operator under Regulation (EU) 2019/1020 shall be
understood as including all operators identified in Article 2 of this Regulation;
(b) any reference to a product under Regulation (EU) 2019/1020 shall be understood as
including all AI systems falling within the scope of this Regulation.
2. As part of their reporting obligations under Article 34(4) of Regulation (EU) 2019/1020,
the market surveillance authorities shall report to the Commission about the outcomes of
relevant market surveillance activities under this Regulation.
3. For high-risk AI systems, related to products to which legal acts listed in Annex II, section
A apply, the market surveillance authority for the purposes of this Regulation shall be the
authority responsible for market surveillance activities designated under those legal acts or,
in justified circumstances and provided that coordination is ensured, another relevant
authority identified by the Member State.
The procedures referred to in Articles 65, 66, 67 and 68 of this Regulation shall not apply
to AI systems related to products, to which legal acts listed in Annex II, section A apply,
when such legal acts already provide for procedures having the same objective. In such a
case, these sectoral procedures shall apply instead.
4. For high-risk AI systems placed on the market, put into service or used by financial
institutions regulated by Union legislation on financial services, the market surveillance
authority for the purposes of this Regulation shall be the relevant national authority
responsible for the financial supervision of those institutions under that legislation in so far
as the placement on the market, putting into service or the use of the AI system is in direct
connection with the provision of those financial services.
By way of derogation from the previous subparagraph, in justified circumstances and
provided that coordination is ensured, another relevant authority may be identified by the
Member State as market surveillance authority for the purposes of this Regulation.
National market surveillance authorities supervising credit institutions regulated under
Directive 2013/36/EU, which are participating in the Single Supervisory Mechanism
(SSM) established by Council Regulation (EU) No 1024/2013, should report, without delay, to
the European Central Bank any information identified in the course of their market
surveillance activities that may be of potential interest for the European Central Bank’s
prudential supervisory tasks as specified in that Regulation.
5. For high-risk AI systems listed in point 1(a) of Annex III, in so far as the systems are used
for law enforcement purposes, and in points 6, 7 and 8 of Annex III, Member States shall designate as
market surveillance authorities for the purposes of this Regulation either the national
authorities supervising the activities of the law enforcement, border control, immigration,
asylum or judicial authorities, or the competent data protection supervisory authorities
under Directive (EU) 2016/680 or Regulation (EU) 2016/679. Market surveillance activities
shall in no way affect the independence of judicial authorities or otherwise interfere with
their activities when acting in their judicial capacity.
6. Where Union institutions, agencies and bodies fall within the scope of this Regulation, the
European Data Protection Supervisor shall act as their market surveillance authority.
7. Member States shall facilitate the coordination between market surveillance authorities
designated under this Regulation and other relevant national authorities or bodies which
supervise the application of Union harmonisation legislation listed in Annex II or other
Union legislation that might be relevant for the high-risk AI systems referred to in Annex
III.
8. Without prejudice to powers provided under Regulation (EU) 2019/1020, and where
relevant and limited to what is necessary to fulfil their tasks, the market surveillance
authorities shall be granted full access by the provider to the documentation as well as the
training, validation and testing datasets used for the development of the high-risk AI
system, including, where appropriate and subject to security safeguards, through
application programming interfaces (‘API’) or other relevant technical means and tools
enabling remote access.
9. Market surveillance authorities shall be granted access to the source code of the high-risk
AI system upon a reasoned request and only when the following cumulative conditions are
fulfilled:
(a) access to source code is necessary to assess the conformity of a high-risk AI system
with the requirements set out in Title III, Chapter 2, and
(b) testing/auditing procedures and verifications based on the data and documentation
provided by the provider have been exhausted or proved insufficient.
10. Any information and documentation obtained by market surveillance authorities shall be
treated in compliance with the confidentiality obligations set out in Article 70.
11. Complaints to the relevant market surveillance authority can be submitted by any natural
or legal person having grounds to consider that there has been an infringement of the
provisions of this Regulation.
In accordance with Article 11(3)(e) and (7)(a) of Regulation (EU) 2019/1020, complaints
shall be taken into account for the purpose of conducting the market surveillance activities
and be handled in line with the dedicated procedures established therefor by the market
surveillance authorities.
Article 63a
Supervision of testing in real world conditions by market surveillance authorities
1. Market surveillance authorities shall have the competence and powers to ensure that testing
in real world conditions is in accordance with this Regulation.
2. Where testing in real world conditions is conducted for AI systems that are supervised
within an AI regulatory sandbox under Article 54, the market surveillance authorities shall
verify the compliance with the provisions of Article 54a as part of their supervisory role
for the AI regulatory sandbox. Those authorities may, as appropriate, allow the testing in
real world conditions to be conducted by the provider or prospective provider in derogation
from the conditions set out in Article 54a(4)(f) and (g).
3. Where a market surveillance authority has been informed by the prospective provider, the
provider or any third party of a serious incident or has other grounds for considering
that the conditions set out in Articles 54a and 54b are not met, it may take any of the
following decisions on its territory, as appropriate:
(a) suspend or terminate the testing in real world conditions;
(b) require the provider or prospective provider and user(s) to modify any aspect of the
testing in real world conditions.
4. Where a market surveillance authority has taken a decision referred to in paragraph 3 of
this Article or has issued an objection within the meaning of Article 54a(4)(b), the decision
or the objection shall indicate the grounds thereof and the modalities and conditions for the
provider or prospective provider to challenge the decision or objection.
5. Where applicable, where a market surveillance authority has taken a decision referred to in
paragraph 3 of this Article, it shall communicate the grounds therefor to the market
surveillance authorities of the other Member States in which the AI system has been tested
in accordance with the testing plan.
Article 64
Powers of authorities protecting fundamental rights
1. [deleted]
2. [deleted]
3. National public authorities or bodies which supervise or enforce the respect of obligations
under Union law protecting fundamental rights, including the right to non-discrimination,
in relation to the use of high-risk AI systems referred to in Annex III shall have the power
to request and access any documentation created or maintained under this Regulation when
access to that documentation is necessary for the fulfilment of the competences under their
mandate within the limits of their jurisdiction. The relevant public authority or body shall
inform the market surveillance authority of the Member State concerned of any such
request.
4. Within three months after the entry into force of this Regulation, each Member State shall
identify the public authorities or bodies referred to in paragraph 3 and make the list
publicly available. Member States shall notify the list to the Commission and all other
Member States and keep the list up to date.
5. Where the documentation referred to in paragraph 3 is insufficient to ascertain whether a
breach of obligations under Union law intended to protect fundamental rights has occurred,
the public authority or body referred to in paragraph 3 may make a reasoned request to the
market surveillance authority to organise testing of the high-risk AI system through
technical means. The market surveillance authority shall organise the testing with the close
involvement of the requesting public authority or body within reasonable time following
the request.
6. Any information and documentation obtained by the national public authorities or bodies
referred to in paragraph 3 pursuant to the provisions of this Article shall be treated in
compliance with the confidentiality obligations set out in Article 70.
Article 65
Procedure for dealing with AI systems presenting a risk at national level
1. AI systems presenting a risk shall be understood as products presenting a risk as defined in
Article 3, point 19 of Regulation (EU) 2019/1020, insofar as risks to the health or safety or
to fundamental rights of persons are concerned.
2. Where the market surveillance authority of a Member State has sufficient reasons to
consider that an AI system presents a risk as referred to in paragraph 1, it shall carry out
an evaluation of the AI system concerned in respect of its compliance with all the
requirements and obligations laid down in this Regulation. When risks to fundamental
rights are identified, the market surveillance authority shall also inform the relevant
national public authorities or bodies referred to in Article 64(3). The relevant operators
shall cooperate as necessary with the market surveillance authorities and the other national
public authorities or bodies referred to in Article 64(3).
Where, in the course of that evaluation, the market surveillance authority finds that the AI
system does not comply with the requirements and obligations laid down in this
Regulation, it shall without undue delay require the relevant operator to take all appropriate
corrective actions to bring the AI system into compliance, to withdraw the AI system from
the market, or to recall it, within a period it may prescribe.
The market surveillance authority shall inform the relevant notified body accordingly.
Article 18 of Regulation (EU) 2019/1020 shall apply to the measures referred to in the
second subparagraph.
3. Where the market surveillance authority considers that non-compliance is not restricted to
its national territory, it shall inform the Commission and the other Member States without
undue delay of the results of the evaluation and of the actions which it has required the
operator to take.
4. The operator shall ensure that all appropriate corrective action is taken in respect of all the
AI systems concerned that it has made available on the market throughout the Union.
5. Where the operator of an AI system does not take adequate corrective action within the
period referred to in paragraph 2, the market surveillance authority shall take all
appropriate provisional measures to prohibit or restrict the AI system being made
available on its national market, to withdraw the product from that market or to recall it.
That authority shall notify the Commission and the other Member States, without undue
delay, of those measures.
6. The notification referred to in paragraph 5 shall include all available details, in particular
the information necessary for the identification of the non-compliant AI system, the origin
of the AI system, the nature of the non-compliance alleged and the risk involved, the
nature and duration of the national measures taken and the arguments put forward by the
relevant operator. In particular, the market surveillance authorities shall indicate whether
the non-compliance is due to one or more of the following:
(-a) non-compliance with the prohibition of the artificial intelligence practices referred to
in Article 5;
(a) a failure of a high-risk AI system to meet requirements set out in Title III, Chapter 2;
(b) shortcomings in the harmonised standards or common specifications referred to in
Articles 40 and 41 conferring a presumption of conformity;
(c) non-compliance with provisions set out in Article 52;
(d) non-compliance of general purpose AI systems with the requirements and obligations
referred to in Article 4a.
7. The market surveillance authorities of the Member States other than the market
surveillance authority of the Member State initiating the procedure shall without undue
delay inform the Commission and the other Member States of any measures adopted and of
any additional information at their disposal relating to the non-compliance of the AI
system concerned, and, in the event of disagreement with the notified national measure, of
their objections.
8. Where, within three months of receipt of the notification referred to in paragraph 5, no
objection has been raised by either a Member State or the Commission in respect of a
provisional measure taken by a Member State, that measure shall be deemed justified. This
is without prejudice to the procedural rights of the concerned operator in accordance with
Article 18 of Regulation (EU) 2019/1020. The period referred to in the first sentence of
this paragraph shall be reduced to 30 days in the case of non-compliance with the
prohibition of the artificial intelligence practices referred to in Article 5.
9. The market surveillance authorities of all Member States shall then ensure that appropriate
restrictive measures are taken in respect of the AI system concerned, such as withdrawal of
the product from their market, without undue delay.
Article 66
Union safeguard procedure
1. Where, within three months of receipt of the notification referred to in Article 65(5), or 30
days in the case of non-compliance with the prohibition of the artificial intelligence
practices referred to in Article 5, objections are raised by a Member State against a
measure taken by another Member State, or where the Commission considers the measure
to be contrary to Union law, the Commission shall without undue delay enter into
consultation with the relevant Member State’s market surveillance authority and operator
or operators and shall evaluate the national measure. On the basis of the results of that
evaluation, the Commission shall decide whether the national measure is justified or not
within 9 months, or 60 days in the case of non-compliance with the prohibition of the
artificial intelligence practices referred to in Article 5, starting from the notification
referred to in Article 65(5). It shall notify such decision to the Member State
concerned. The Commission shall also inform all other Member States of such decision.
2. If the measure taken by the relevant Member State’s market surveillance authority is
considered justified by the Commission, the market surveillance authorities of all Member
States shall ensure that appropriate restrictive measures are taken in respect of the AI
system concerned, such as withdrawal of the AI system from their market without undue
delay, and shall inform the Commission accordingly. If the national measure is considered
unjustified by the Commission, the market surveillance authority of the Member State
concerned shall withdraw the measure and inform the Commission accordingly.
3. Where the national measure is considered justified and the non-compliance of the AI
system is attributed to shortcomings in the harmonised standards or common specifications
referred to in Articles 40 and 41 of this Regulation, the Commission shall apply the
procedure provided for in Article 11 of Regulation (EU) No 1025/2012.
Article 67
Compliant high-risk or general purpose AI systems which present a risk
1. Where, having performed an evaluation under Article 65, the market surveillance authority
of a Member State finds that although a high-risk or general purpose AI system is in
compliance with this Regulation, it presents a risk to the health or safety of persons or to
fundamental rights, it shall require the relevant operator to take all appropriate measures to
ensure that the AI system concerned, when placed on the market or put into service, no
longer presents that risk, to withdraw the AI system from the market or to recall it without
undue delay, within a period it may prescribe.
2. The provider or other relevant operators shall ensure that corrective action is taken in
respect of all the AI systems concerned that they have made available on the market
throughout the Union within the timeline prescribed by the market surveillance authority of
the Member State referred to in paragraph 1.
3. The Member State shall immediately inform the Commission and the other Member States.
That information shall include all available details, in particular the data necessary for the
identification of the AI system concerned, the origin and the supply chain of the AI system,
the nature of the risk involved and the nature and duration of the national measures taken.
4. The Commission shall without undue delay enter into consultation with the Member States
concerned and the relevant operator and shall evaluate the national measures taken. On the
basis of the results of that evaluation, the Commission shall decide whether the measure is
justified or not and, where necessary, propose appropriate measures.
5. The Commission shall address its decision to the Member States concerned, and inform all
other Member States.
Article 68
Formal non-compliance
1. Where the market surveillance authority of a Member State makes one of the following
findings, it shall require the relevant provider to put an end to the non-compliance
concerned, within a period it may prescribe:
(a) the conformity marking has been affixed in violation of Article 49;
(b) the conformity marking has not been affixed;
(c) the EU declaration of conformity has not been drawn up;
(d) the EU declaration of conformity has not been drawn up correctly;
(e) the identification number of the notified body, which is involved in the conformity
assessment procedure, where applicable, has not been affixed.
2. Where the non-compliance referred to in paragraph 1 persists, the Member State concerned
shall take all appropriate measures to restrict or prohibit the high-risk AI system being
made available on the market or ensure that it is recalled or withdrawn from the market.
Article 68a
Union testing facilities in the area of artificial intelligence
1. The Commission shall designate one or more Union testing facilities pursuant to Article 21
of Regulation (EU) 2019/1020 in the area of artificial intelligence.
2. Without prejudice to the activities of Union testing facilities referred to in Article 21(6) of
Regulation (EU) 2019/1020, Union testing facilities referred to in paragraph 1 shall also
provide independent technical or scientific advice at the request of the Board or market
surveillance authorities.
Article 68b
Central pool of independent experts
1. Upon request of the AI Board, the Commission shall, by means of an implementing act,
make provisions on the creation, maintenance and financing of a central pool of
independent experts to support the enforcement activities under this Regulation.
2. Experts shall be selected by the Commission and included in the central pool on the basis
of up-to-date scientific or technical expertise in the field of artificial intelligence, having
due regard to the technical areas covered by the requirements and obligations in this
Regulation and the activities of market surveillance authorities pursuant to Article 11 of
Regulation (EU) 2019/1020. The Commission shall determine the number of experts in the
pool in accordance with the required needs.
3. Experts may have the following tasks:
(a) provide advice to and support the work of market surveillance authorities, at their
request;
(b) support cross-border market surveillance investigations as referred to in Article 58(h),
without prejudice to the powers of market surveillance authorities;
(c) advise and support the Commission when carrying out its duties in the context of the
safeguard clause pursuant to Article 66.
4. The experts shall perform their tasks with impartiality and objectivity, and shall ensure the
confidentiality of information and data obtained in carrying out their tasks and activities.
Each expert shall draw up a declaration of interests, which shall be made publicly
available. The Commission shall establish systems and procedures to actively manage and
prevent potential conflicts of interest.
5. The Member States may be required to pay fees for the advice and support provided by the experts.
The structure and the level of fees as well as the scale and structure of recoverable costs
shall be adopted by the Commission by means of the implementing act referred to in
paragraph 1, taking into account the objectives of the adequate implementation of this
Regulation, cost-effectiveness and the necessity to ensure an effective access to experts by
all Member States.
6. The Commission shall facilitate timely access to the experts by the Member States, as
needed, and ensure that the combination of support activities carried out by Union testing
facilities pursuant to Article 68a and experts pursuant to this Article is efficiently organised
and provides the best possible added value.
TITLE IX
CODES OF CONDUCT
Article 69
Codes of conduct for voluntary application of specific requirements
1. The Commission and the Member States shall facilitate the drawing up of codes of
conduct intended to encourage the voluntary application to AI systems other than high-risk
AI systems of one or more of the requirements set out in Title III, Chapter 2 of this
Regulation to the best extent possible, taking into account the available technical solutions
allowing for the application of such requirements.
2. The Commission and the Member States shall facilitate the drawing up of codes of conduct
intended to encourage the voluntary application to all AI systems of specific requirements
related, for example, to environmental sustainability, including as regards energy-efficient
programming, accessibility for persons with a disability, stakeholders' participation in the
design and development of the AI systems and diversity of development teams on the basis
of clear objectives and key performance indicators to measure the achievement of those
objectives. The Commission and the Member States shall also facilitate, where appropriate,
the drawing up of codes of conduct applicable on a voluntary basis with regard to users'
obligations in relation to AI systems.
3. Codes of conduct applicable on a voluntary basis may be drawn up by individual providers
of AI systems or by organisations representing them or by both, including with the
involvement of users and any interested stakeholders and their representative
organisations, or, where appropriate, by users with regard to their obligations. Codes of
conduct may cover one or more AI systems taking into account the similarity of the
intended purpose of the relevant systems.
4. The Commission and the Member States shall take into account the specific interests and
needs of SME providers, including start-ups, when encouraging and facilitating the
drawing up of codes of conduct referred to in this Article.
TITLE X
CONFIDENTIALITY AND PENALTIES
Article 70
Confidentiality
1. National competent authorities, notified bodies, the Commission, the Board, and any other
natural or legal person involved in the application of this Regulation shall, in accordance
with Union or national law, put appropriate technical and organisational measures in place
to ensure the confidentiality of information and data obtained in carrying out their tasks
and activities in such a manner as to protect, in particular:
(a) intellectual property rights, and confidential business information or trade secrets of a
natural or legal person, including source code, except in the cases referred to in Article 5 of
Directive (EU) 2016/943 on the protection of undisclosed know-how and business information
(trade secrets) against their unlawful acquisition, use and disclosure;
(b) the effective implementation of this Regulation, in particular for the purpose of
inspections, investigations or audits;
(c) public and national security interests;
(d) integrity of criminal or administrative proceedings;
(e) the integrity of information classified in accordance with Union or national law.
2. Without prejudice to paragraph 1, information exchanged on a confidential basis between
the national competent authorities and between national competent authorities and the
Commission shall not be disclosed without the prior consultation of the originating
national competent authority and the user when high-risk AI systems referred to in points
1, 6 and 7 of Annex III are used by law enforcement, border control, immigration or
asylum authorities, when such disclosure would jeopardise public and national security
interests. This obligation to exchange information shall not cover sensitive operational
data in relation to the activities of law enforcement, border control, immigration or asylum
authorities.
When the law enforcement, immigration or asylum authorities are providers of high-risk
AI systems referred to in points 1, 6 and 7 of Annex III, the technical documentation
referred to in Annex IV shall remain within the premises of those authorities. Those
authorities shall ensure that the market surveillance authorities referred to in Article 63(5)
and (6), as applicable, can, upon request, immediately access the documentation or obtain a
copy thereof. Only staff of the market surveillance authority holding the appropriate level
of security clearance shall be allowed to access that documentation or any copy thereof.
3. Paragraphs 1 and 2 shall not affect the rights and obligations of the Commission, Member
States and their relevant authorities, as well as notified bodies, with regard to the exchange
of information and the dissemination of warnings, including in the context of cross-border
cooperation, nor the obligations of the parties concerned to provide information under
criminal law of the Member States.
Article 71
Penalties
1. In compliance with the terms and conditions laid down in this Regulation, Member States
shall lay down the rules on penalties, including administrative fines, applicable to
infringements of this Regulation and shall take all measures necessary to ensure that they
are properly and effectively implemented. The penalties provided for shall be effective,
proportionate, and dissuasive. They shall take into particular account the size and interests
of SME providers, including start-ups, and their economic viability. They shall also take
into account whether the use of the AI system is in the context of personal non-professional activity.
2. The Member States shall without delay notify the Commission of those rules and of those
measures and of any subsequent amendment affecting them.
3. Non-compliance with any of the prohibitions of the artificial intelligence practices referred
to in Article 5 shall be subject to administrative fines of up to 30 000 000 EUR or, if the
offender is a company, up to 6 % of its total worldwide annual turnover for the preceding
financial year, whichever is higher. In case of SMEs, including start-ups, these fines shall
be up to 3% of their worldwide annual turnover for the preceding financial year.
4. Infringements of the following provisions related to operators or notified bodies, shall be
subject to administrative fines of up to 20 000 000 EUR or, if the offender is a company,
up to 4 % of its total worldwide annual turnover for the preceding financial year,
whichever is higher:
-a) obligations of providers pursuant to Articles 4b and 4c;
a) obligations of providers pursuant to Article 16;
b) obligations for certain other persons pursuant to Article 23a;
c) obligations of authorised representatives pursuant to Article 25;
d) obligations of importers pursuant to Article 26;
e) obligations of distributors pursuant to Article 27;
f) obligations of users pursuant to Article 29, paragraphs 1 to 6a;
g) requirements and obligations of notified bodies pursuant to Articles 33, 34(1), 34(3),
34(4) and 34a;
h) transparency obligations for providers and users pursuant to Article 52.
In case of SMEs, including start-ups, these fines shall be up to 2% of their worldwide
annual turnover for the preceding financial year.
5. The supply of incorrect, incomplete or misleading information to notified bodies and
national competent authorities in reply to a request shall be subject to administrative fines
of up to 10 000 000 EUR or, if the offender is a company, up to 2 % of its total worldwide
annual turnover for the preceding financial year, whichever is higher. In case of SMEs,
including start-ups, these fines shall be up to 1% of their worldwide annual turnover for
the preceding financial year.
6. When deciding on the amount of the administrative fine in each individual case, all
relevant circumstances of the specific situation shall be taken into account and due regard
shall be given to the following:
(a) the nature, gravity and duration of the infringement and of its consequences;
(aa) the intentional or negligent character of the infringement;
(ab) any action taken by the operator in order to remedy the infringement and mitigate the
possible adverse effects of the infringement;
(b) whether administrative fines have been already applied by other market surveillance
authorities in other Member States to the same operator for the same infringement;
(ba) whether administrative fines have been already applied by other authorities to the
same operator for infringements of other Union or national law, when such
infringements result from the same activity or omission constituting a relevant
infringement of this Act;
(c) the size, the annual turnover and market share of the operator committing the
infringement;
(d) any other aggravating or mitigating factor applicable to the circumstances of the
case, such as financial benefits gained, or losses avoided, directly or indirectly, from
the infringement.
7. Each Member State shall lay down rules on whether and to what extent administrative
fines may be imposed on public authorities and bodies established in that Member State.
8. Depending on the legal system of the Member States, the rules on administrative fines may
be applied in such a manner that the fines are imposed by competent national courts or
other bodies as applicable in those Member States. The application of such rules in those
Member States shall have an equivalent effect.
9. The exercise by the market surveillance authority of its powers under this Article shall be
subject to appropriate procedural safeguards in accordance with Union and Member State
law, including effective judicial remedy and due process.
Article 72
Administrative fines on Union institutions, agencies and bodies
1. The European Data Protection Supervisor may impose administrative fines on Union
institutions, agencies and bodies falling within the scope of this Regulation. When deciding
whether to impose an administrative fine and deciding on the amount of the administrative
fine in each individual case, all relevant circumstances of the specific situation shall be
taken into account and due regard shall be given to the following:
(a) the nature, gravity and duration of the infringement and of its consequences;
(b) the cooperation with the European Data Protection Supervisor in order to remedy the
infringement and mitigate the possible adverse effects of the infringement, including
compliance with any of the measures previously ordered by the European Data
Protection Supervisor against the Union institution or agency or body concerned with
regard to the same subject matter;
(c) any similar previous infringements by the Union institution, agency or body.
2. Non-compliance with any of the prohibitions of the artificial intelligence practices referred
to in Article 5 shall be subject to administrative fines of up to 500 000 EUR.
3. Non-compliance of the AI system with any requirements or obligations under this
Regulation, other than those laid down in Articles 5 and 10, shall be subject to
administrative fines of up to 250 000 EUR.
4. Before taking decisions pursuant to this Article, the European Data Protection Supervisor
shall give the Union institution, agency or body which is the subject of the proceedings
conducted by the European Data Protection Supervisor the opportunity of being heard on
the matter regarding the possible infringement. The European Data Protection Supervisor
shall base his or her decisions only on elements and circumstances on which the parties
concerned have been able to comment. Complainants, if any, shall be associated closely
with the proceedings.
5. The rights of defence of the parties concerned shall be fully respected in the proceedings.
They shall be entitled to have access to the European Data Protection Supervisor’s file,
subject to the legitimate interest of individuals or undertakings in the protection of their
personal data or business secrets.
6. Funds collected by imposition of fines under this Article shall be the income of the general
budget of the Union.
TITLE XI
DELEGATION OF POWER AND COMMITTEE PROCEDURE
Article 73
Exercise of the delegation
1. The power to adopt delegated acts is conferred on the Commission subject to the
conditions laid down in this Article.
2. The delegation of power referred to in Article 7(1), Article 7(3), Article 11(3), Article
43(5) and (6) and Article 48(5) shall be conferred on the Commission for a period of five
years from [entering into force of the Regulation].
The Commission shall draw up a report in respect of the delegation of power not later than
nine months before the end of the five-year period. The delegation of power shall be tacitly
extended for periods of an identical duration, unless the European Parliament or the
Council opposes such extension not later than three months before the end of each period.
3. The delegation of power referred to in Article 7(1), Article 7(3), Article 11(3), Article
43(5) and (6) and Article 48(5) may be revoked at any time by the European Parliament or
by the Council. A decision of revocation shall put an end to the delegation of power
specified in that decision. It shall take effect the day following that of its publication in the
Official Journal of the European Union or at a later date specified therein. It shall not
affect the validity of any delegated acts already in force.
4. As soon as it adopts a delegated act, the Commission shall notify it simultaneously to the
European Parliament and to the Council.
5. Any delegated act adopted pursuant to Article 7(1), Article 7(3), Article 11(3), Article
43(5) and (6) and Article 48(5) shall enter into force only if no objection has been
expressed by either the European Parliament or the Council within a period of three
months of notification of that act to the European Parliament and the Council or if, before
the expiry of that period, the European Parliament and the Council have both informed the
Commission that they will not object. That period shall be extended by three months at the
initiative of the European Parliament or of the Council.
Article 74
Committee procedure
1. The Commission shall be assisted by a committee. That committee shall be a committee
within the meaning of Regulation (EU) No 182/2011.
2. Where reference is made to this paragraph, Article 5 of Regulation (EU) No 182/2011 shall
apply.
TITLE XII
FINAL PROVISIONS
Article 75
Amendment to Regulation (EC) No 300/2008
In Article 4(3) of Regulation (EC) No 300/2008, the following subparagraph is added:
“When adopting detailed measures related to technical specifications and procedures for approval
and use of security equipment concerning Artificial Intelligence systems in the meaning of
Regulation (EU) YYY/XX [on Artificial Intelligence] of the European Parliament and of the
Council*, the requirements set out in Chapter 2, Title III of that Regulation shall be taken into
account.”
__________
* Regulation (EU) YYY/XX [on Artificial Intelligence] (OJ …).”
Article 76
Amendment to Regulation (EU) No 167/2013
In Article 17(5) of Regulation (EU) No 167/2013, the following subparagraph is added:
“When adopting delegated acts pursuant to the first subparagraph concerning artificial intelligence
systems which are safety components in the meaning of Regulation (EU) YYY/XX [on Artificial
Intelligence] of the European Parliament and of the Council*, the requirements set out in Title III,
Chapter 2 of that Regulation shall be taken into account.
__________
* Regulation (EU) YYY/XX [on Artificial Intelligence] (OJ …).”
Article 77
Amendment to Regulation (EU) No 168/2013
In Article 22(5) of Regulation (EU) No 168/2013, the following subparagraph is added:
“When adopting delegated acts pursuant to the first subparagraph concerning Artificial Intelligence
systems which are safety components in the meaning of Regulation (EU) YYY/XX on [Artificial
Intelligence] of the European Parliament and of the Council*, the requirements set out in Title III,
Chapter 2 of that Regulation shall be taken into account.
__________
* Regulation (EU) YYY/XX [on Artificial Intelligence] (OJ …).”
Article 78
Amendment to Directive 2014/90/EU
In Article 8 of Directive 2014/90/EU, the following paragraph is added:
“4. For Artificial Intelligence systems which are safety components in the meaning of Regulation
(EU) YYY/XX [on Artificial Intelligence] of the European Parliament and of the Council*, when
carrying out its activities pursuant to paragraph 1 and when adopting technical specifications and
testing standards in accordance with paragraphs 2 and 3, the Commission shall take into account the
requirements set out in Title III, Chapter 2 of that Regulation.
__________
* Regulation (EU) YYY/XX [on Artificial Intelligence] (OJ …).”.
Article 79
Amendment to Directive (EU) 2016/797
In Article 5 of Directive (EU) 2016/797, the following paragraph is added:
“12. When adopting delegated acts pursuant to paragraph 1 and implementing acts pursuant to
paragraph 11 concerning Artificial Intelligence systems which are safety components in the
meaning of Regulation (EU) YYY/XX [on Artificial Intelligence] of the European Parliament and
of the Council*, the requirements set out in Title III, Chapter 2 of that Regulation shall be taken into
account.
__________
* Regulation (EU) YYY/XX [on Artificial Intelligence] (OJ …).”.
Article 80
Amendment to Regulation (EU) 2018/858
In Article 5 of Regulation (EU) 2018/858 the following paragraph is added:
“4. When adopting delegated acts pursuant to paragraph 3 concerning Artificial Intelligence
systems which are safety components in the meaning of Regulation (EU) YYY/XX [on Artificial
Intelligence] of the European Parliament and of the Council *, the requirements set out in Title III,
Chapter 2 of that Regulation shall be taken into account.
__________
* Regulation (EU) YYY/XX [on Artificial Intelligence] (OJ …).”.
Article 81
Amendment to Regulation (EU) 2018/1139
Regulation (EU) 2018/1139 is amended as follows:
(1) In Article 17, the following paragraph is added:
“3. Without prejudice to paragraph 2, when adopting implementing acts pursuant to paragraph 1
concerning Artificial Intelligence systems which are safety components in the meaning of
Regulation (EU) YYY/XX [on Artificial Intelligence] of the European Parliament and of the
Council*, the requirements set out in Title III, Chapter 2 of that Regulation shall be taken into
account.
__________
* Regulation (EU) YYY/XX [on Artificial Intelligence] (OJ …).”
(2) In Article 19, the following paragraph is added:
“4. When adopting delegated acts pursuant to paragraphs 1 and 2 concerning Artificial Intelligence
systems which are safety components in the meaning of Regulation (EU) YYY/XX [on Artificial
Intelligence], the requirements set out in Title III, Chapter 2 of that Regulation shall be taken into
account.”
(3) In Article 43, the following paragraph is added:
“4. When adopting implementing acts pursuant to paragraph 1 concerning Artificial Intelligence
systems which are safety components in the meaning of Regulation (EU) YYY/XX [on Artificial
Intelligence], the requirements set out in Title III, Chapter 2 of that Regulation shall be taken into
account.”
(4) In Article 47, the following paragraph is added:
“3. When adopting delegated acts pursuant to paragraphs 1 and 2 concerning Artificial Intelligence
systems which are safety components in the meaning of Regulation (EU) YYY/XX [on Artificial
Intelligence], the requirements set out in Title III, Chapter 2 of that Regulation shall be taken into
account.”
(5) In Article 57, the following paragraph is added:
“When adopting those implementing acts concerning Artificial Intelligence systems which are
safety components in the meaning of Regulation (EU) YYY/XX [on Artificial Intelligence], the
requirements set out in Title III, Chapter 2 of that Regulation shall be taken into account.”
(6) In Article 58, the following paragraph is added:
“3. When adopting delegated acts pursuant to paragraphs 1 and 2 concerning Artificial Intelligence
systems which are safety components in the meaning of Regulation (EU) YYY/XX [on Artificial
Intelligence], the requirements set out in Title III, Chapter 2 of that Regulation shall be taken into
account.”.
Article 82
Amendment to Regulation (EU) 2019/2144
In Article 11 of Regulation (EU) 2019/2144, the following paragraph is added:
“3. When adopting the implementing acts pursuant to paragraph 2, concerning artificial intelligence
systems which are safety components in the meaning of Regulation (EU) YYY/XX [on Artificial
Intelligence] of the European Parliament and of the Council*, the requirements set out in Title III,
Chapter 2 of that Regulation shall be taken into account.
__________
* Regulation (EU) YYY/XX [on Artificial Intelligence] (OJ …).”.
Article 83
AI systems already placed on the market or put into service
1. This Regulation shall not apply to the AI systems which are components of the large-scale
IT systems established by the legal acts listed in Annex IX that have been placed on the
market or put into service before [12 months after the date of application of this
Regulation referred to in Article 85(2)], unless the replacement or amendment of those
legal acts leads to a significant change in the design or intended purpose of the AI system
or AI systems concerned.
The requirements laid down in this Regulation shall be taken into account, where
applicable, in the evaluation of each large-scale IT system established by the legal acts
listed in Annex IX to be undertaken as provided for in those respective acts.
2. This Regulation shall apply to the high-risk AI systems, other than the ones referred to in
paragraph 1, that have been placed on the market or put into service before [date of
application of this Regulation referred to in Article 85(2)], only if, from that date, those
systems are subject to significant changes in their design or intended purpose.
Article 84
Evaluation and review
1. [deleted]
1b. The Commission shall assess the need for amendment of the list in Annex III every 24
months following the entry into force of this Regulation and until the end of the period of
the delegation of power. The findings of that assessment shall be presented to the European
Parliament and the Council.
2. By [three years after the date of application of this Regulation referred to in Article 85(2)]
and every four years thereafter, the Commission shall submit a report on the evaluation and
review of this Regulation to the European Parliament and to the Council. The reports shall
be made public.
3. The reports referred to in paragraph 2 shall devote specific attention to the following:
(a) the status of the financial resources, technical equipment and human resources of the
national competent authorities in order to effectively perform the tasks assigned to
them under this Regulation;
(b) the state of penalties, and notably administrative fines as referred to in Article 71(1),
applied by Member States to infringements of the provisions of this Regulation.
4. Within [three years after the date of application of this Regulation referred to in Article
85(2)] and every four years thereafter, where appropriate, the Commission shall evaluate
the impact and effectiveness of voluntary codes of conduct to foster the application of the
requirements set out in Title III, Chapter 2 for AI systems other than high-risk AI systems
and possibly other additional requirements for AI systems, including as regards
environmental sustainability.
5. For the purpose of paragraphs 1a to 4 the Board, the Member States and national
competent authorities shall provide the Commission with information on its request.
6. In carrying out the evaluations and reviews referred to in paragraphs 1a to 4 the
Commission shall take into account the positions and findings of the Board, of the
European Parliament, of the Council, and of other relevant bodies or sources.
7. The Commission shall, if necessary, submit appropriate proposals to amend this
Regulation, in particular taking into account developments in technology and in the light of
the state of progress in the information society.
Article 85
Entry into force and application
1. This Regulation shall enter into force on the twentieth day following that of its publication
in the Official Journal of the European Union.
2. This Regulation shall apply from [36 months following the entering into force of the
Regulation].
3. By way of derogation from paragraph 2:
(a) Title III, Chapter 4 and Title VI shall apply from [twelve months following the
entry into force of this Regulation];
(b) Article 71 shall apply from [twelve months following the entry into force of this
Regulation].
This Regulation shall be binding in its entirety and directly applicable in all Member States.
Done at Brussels,
For the European Parliament For the Council
The President The President
ANNEX I
[deleted]
ANNEX II
LIST OF UNION HARMONISATION LEGISLATION
Section A – List of Union harmonisation legislation based on the New Legislative Framework
1. Directive 2006/42/EC of the European Parliament and of the Council of 17 May 2006 on
machinery, and amending Directive 95/16/EC (OJ L 157, 9.6.2006, p. 24) [as repealed by
the Machinery Regulation];
2. Directive 2009/48/EC of the European Parliament and of the Council of 18 June 2009 on
the safety of toys (OJ L 170, 30.6.2009, p. 1);
3. Directive 2013/53/EU of the European Parliament and of the Council of 20 November
2013 on recreational craft and personal watercraft and repealing Directive 94/25/EC (OJ L
354, 28.12.2013, p. 90);
4. Directive 2014/33/EU of the European Parliament and of the Council of 26 February 2014
on the harmonisation of the laws of the Member States relating to lifts and safety
components for lifts (OJ L 96, 29.3.2014, p. 251);
5. Directive 2014/34/EU of the European Parliament and of the Council of 26 February 2014
on the harmonisation of the laws of the Member States relating to equipment and
protective systems intended for use in potentially explosive atmospheres (OJ L 96,
29.3.2014, p. 309);
6. Directive 2014/53/EU of the European Parliament and of the Council of 16 April 2014 on
the harmonisation of the laws of the Member States relating to the making available on the
market of radio equipment and repealing Directive 1999/5/EC (OJ L 153, 22.5.2014, p.
62);
7. Directive 2014/68/EU of the European Parliament and of the Council of 15 May 2014 on
the harmonisation of the laws of the Member States relating to the making available on the
market of pressure equipment (OJ L 189, 27.6.2014, p. 164);
8. Regulation (EU) 2016/424 of the European Parliament and of the Council of 9 March 2016
on cableway installations and repealing Directive 2000/9/EC (OJ L 81, 31.3.2016, p. 1);
9. Regulation (EU) 2016/425 of the European Parliament and of the Council of 9 March 2016
on personal protective equipment and repealing Council Directive 89/686/EEC (OJ L 81,
31.3.2016, p. 51);
10. Regulation (EU) 2016/426 of the European Parliament and of the Council of 9 March 2016
on appliances burning gaseous fuels and repealing Directive 2009/142/EC (OJ L 81,
31.3.2016, p. 99);
11. Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017
on medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and
Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and
93/42/EEC (OJ L 117, 5.5.2017, p. 1);
12. Regulation (EU) 2017/746 of the European Parliament and of the Council of 5 April 2017
on in vitro diagnostic medical devices and repealing Directive 98/79/EC and Commission
Decision 2010/227/EU (OJ L 117, 5.5.2017, p. 176).
Section B. List of other Union harmonisation legislation
1. Regulation (EC) No 300/2008 of the European Parliament and of the Council of 11 March
2008 on common rules in the field of civil aviation security and repealing Regulation (EC)
No 2320/2002 (OJ L 97, 9.4.2008, p. 72).
2. Regulation (EU) No 168/2013 of the European Parliament and of the Council of 15
January 2013 on the approval and market surveillance of two- or three-wheel vehicles and
quadricycles (OJ L 60, 2.3.2013, p. 52);
3. Regulation (EU) No 167/2013 of the European Parliament and of the Council of 5
February 2013 on the approval and market surveillance of agricultural and forestry
vehicles (OJ L 60, 2.3.2013, p. 1);
4. Directive 2014/90/EU of the European Parliament and of the Council of 23 July 2014 on
marine equipment and repealing Council Directive 96/98/EC (OJ L 257, 28.8.2014, p.
146);
5. Directive (EU) 2016/797 of the European Parliament and of the Council of 11 May 2016
on the interoperability of the rail system within the European Union (OJ L 138, 26.5.2016,
p. 44).
6. Regulation (EU) 2018/858 of the European Parliament and of the Council of 30 May 2018
on the approval and market surveillance of motor vehicles and their trailers, and of
systems, components and separate technical units intended for such vehicles, amending
Regulations (EC) No 715/2007 and (EC) No 595/2009 and repealing Directive 2007/46/EC
(OJ L 151, 14.6.2018, p. 1);
7. Regulation (EU) 2019/2144 of the European Parliament and of the Council of 27
November 2019 on type-approval requirements for motor vehicles and their trailers, and
systems, components and separate technical units intended for such vehicles, as regards
their general safety and the protection of vehicle occupants and vulnerable road users,
amending Regulation (EU) 2018/858 of the European Parliament and of the Council and
repealing Regulations (EC) No 78/2009, (EC) No 79/2009 and (EC) No 661/2009 of the
European Parliament and of the Council and Commission Regulations (EC) No 631/2009,
(EU) No 406/2010, (EU) No 672/2010, (EU) No 1003/2010, (EU) No 1005/2010, (EU) No
1008/2010, (EU) No 1009/2010, (EU) No 19/2011, (EU) No 109/2011, (EU) No 458/2011,
(EU) No 65/2012, (EU) No 130/2012, (EU) No 347/2012, (EU) No 351/2012, (EU) No
1230/2012 and (EU) 2015/166 (OJ L 325, 16.12.2019, p. 1);
8. Regulation (EU) 2018/1139 of the European Parliament and of the Council of 4 July 2018
on common rules in the field of civil aviation and establishing a European Union Aviation
Safety Agency, and amending Regulations (EC) No 2111/2005, (EC) No 1008/2008, (EU)
No 996/2010, (EU) No 376/2014 and Directives 2014/30/EU and 2014/53/EU of the
European Parliament and of the Council, and repealing Regulations (EC) No 552/2004 and
(EC) No 216/2008 of the European Parliament and of the Council and Council Regulation
(EEC) No 3922/91 (OJ L 212, 22.8.2018, p. 1), in so far as the design, production and
placing on the market of aircrafts referred to in points (a) and (b) of Article 2(1) thereof,
where it concerns unmanned aircraft and their engines, propellers, parts and equipment to
control them remotely, are concerned.
ANNEX III
HIGH-RISK AI SYSTEMS REFERRED TO IN ARTICLE 6(3)
In each of the areas listed under points 1-8, the AI systems specifically mentioned under each letter
are considered to be high-risk AI systems pursuant to Article 6(3):
1. Biometrics:
(a) Remote biometric identification systems.
2. Critical infrastructure:
(a) AI systems intended to be used as safety components in the management and
operation of critical digital infrastructure, road traffic and the supply of water, gas,
heating and electricity.
3. Education and vocational training:
(a) AI systems intended to be used to determine access, admission or to assign natural
persons to educational and vocational training institutions or programmes at all
levels;
(b) AI systems intended to be used to evaluate learning outcomes, including when those
outcomes are used to steer the learning process of natural persons in educational and
vocational training institutions or programmes at all levels.
4. Employment, workers management and access to self-employment:
(a) AI systems intended to be used for recruitment or selection of natural persons,
notably to place targeted job advertisements, to analyse and filter job applications,
and to evaluate candidates;
(b) AI intended to be used to make decisions on promotion and termination of work-related contractual relationships, to allocate tasks based on individual behavior or
personal traits or characteristics and to monitor and evaluate performance and
behavior of persons in such relationships.
5. Access to and enjoyment of essential private services and essential public services and
benefits:
(a) AI systems intended to be used by public authorities or on behalf of public
authorities to evaluate the eligibility of natural persons for essential public assistance
benefits and services, as well as to grant, reduce, revoke, or reclaim such benefits and
services;
(b) AI systems intended to be used to evaluate the creditworthiness of natural persons or
establish their credit score, with the exception of AI systems put into service by
providers that are micro and small-sized enterprises as defined in the Annex of
Commission Recommendation 2003/361/EC for their own use;
(c) AI systems intended to be used to dispatch, or to establish priority in the dispatching
of emergency first response services, including by firefighters and medical aid;
(d) AI systems intended to be used for risk assessment and pricing in relation to natural
persons in the case of life and health insurance with the exception of AI systems put
into service by providers that are micro and small-sized enterprises as defined in the
Annex of Commission Recommendation 2003/361/EC for their own use.
6. Law enforcement:
(a) AI systems intended to be used by law enforcement authorities or on their behalf to
assess the risk of a natural person for offending or reoffending or the risk for a
natural person to become a potential victim of criminal offences;
(b) AI systems intended to be used by law enforcement authorities or on their behalf as
polygraphs and similar tools or to detect the emotional state of a natural person;
(c) [deleted]
(d) AI systems intended to be used by law enforcement authorities or on their behalf to
evaluate the reliability of evidence in the course of investigation or prosecution of
criminal offences;
(e) AI systems intended to be used by law enforcement authorities or on their behalf to
predict the occurrence or reoccurrence of an actual or potential criminal offence
based on profiling of natural persons as referred to in Article 3(4) of Directive (EU)
2016/680 or to assess personality traits and characteristics or past criminal behaviour
of natural persons or groups;
(f) AI systems intended to be used by law enforcement authorities or on their behalf to
profile natural persons as referred to in Article 3(4) of Directive (EU) 2016/680 in
the course of detection, investigation or prosecution of criminal offences.
(g) [deleted]
7. Migration, asylum and border control management:
(a) AI systems intended to be used by competent public authorities or on their behalf as
polygraphs and similar tools or to detect the emotional state of a natural person;
(b) AI systems intended to be used by competent public authorities or on their behalf to
assess a risk, including a security risk, a risk of irregular migration, or a health risk,
posed by a natural person who intends to enter or has entered into the territory of a
Member State;
(c) [deleted]
(d) AI systems intended to be used by competent public authorities or on their behalf to
examine applications for asylum, visa and residence permits and associated
complaints with regard to the eligibility of the natural persons applying for a status.
8. Administration of justice and democratic processes:
(a) AI systems intended to be used by a judicial authority or on their behalf to interpret
facts or the law and to apply the law to a concrete set of facts.
ANNEX IV
TECHNICAL DOCUMENTATION referred to in Article 11(1)
The technical documentation referred to in Article 11(1) shall contain at least the following
information, as applicable to the relevant AI system:
1. A general description of the AI system including:
(a) its intended purpose, the person/s developing the system, the date and the version of
the system;
(b) how the AI system interacts or can be used to interact with hardware or software that
is not part of the AI system itself, where applicable;
(c) the versions of relevant software or firmware and any requirement related to version
update;
(d) the description of all forms in which the AI system is placed on the market or put
into service (e.g. software package embedded into hardware, downloadable, API
etc.);
(e) the description of hardware on which the AI system is intended to run;
(f) where the AI system is a component of products, photographs or illustrations
showing external features, marking and internal layout of those products;
(g) instructions of use for the user and, where applicable, installation instructions;
2. A detailed description of the elements of the AI system and of the process for its
development, including:
(a) the methods and steps performed for the development of the AI system, including,
where relevant, recourse to pre-trained systems or tools provided by third parties and
how these have been used, integrated or modified by the provider;
(b) the design specifications of the system, namely the general logic of the AI system
and of the algorithms; the key design choices including the rationale and assumptions
made, also with regard to persons or groups of persons on which the system is
intended to be used; the main classification choices; what the system is designed to
optimise for and the relevance of the different parameters; the description of the
expected output of the system; the decisions about any possible trade-off made
regarding the technical solutions adopted to comply with the requirements set out in
Title III, Chapter 2;
(c) the description of the system architecture explaining how software components build
on or feed into each other and integrate into the overall processing; the computational
resources used to develop, train, test and validate the AI system;
(d) where relevant, the data requirements in terms of datasheets describing the training
methodologies and techniques and the training data sets used, including a general
description of these data sets, information about their provenance, scope and main
characteristics; how the data was obtained and selected; labelling procedures (e.g. for
supervised learning), data cleaning methodologies (e.g. outliers detection);
(e) assessment of the human oversight measures needed in accordance with Article 14,
including an assessment of the technical measures needed to facilitate the
interpretation of the outputs of AI systems by the users, in accordance with Article
13(3)(d);
(f) where applicable, a detailed description of pre-determined changes to the AI system
and its performance, together with all the relevant information related to the technical
solutions adopted to ensure continuous compliance of the AI system with the
relevant requirements set out in Title III, Chapter 2;
(g) the validation and testing procedures used, including information about the validation
and testing data used and their main characteristics; metrics used to measure
accuracy, robustness, cybersecurity and compliance with other relevant requirements
set out in Title III, Chapter 2 as well as potentially discriminatory impacts; test logs
and all test reports dated and signed by the responsible persons, including with
regard to pre-determined changes as referred to under point (f).
3. Detailed information about the monitoring, functioning and control of the AI system, in
particular with regard to: its capabilities and limitations in performance, including the
degrees of accuracy for specific persons or groups of persons on which the system is
intended to be used and the overall expected level of accuracy in relation to its intended
purpose; the foreseeable unintended outcomes and sources of risks to health and safety,
fundamental rights and discrimination in view of the intended purpose of the AI system;
the human oversight measures needed in accordance with Article 14, including the
technical measures put in place to facilitate the interpretation of the outputs of AI systems
by the users; specifications on input data, as appropriate;
4. A detailed description of the risk management system in accordance with Article 9;
5. A description of relevant changes made by the provider to the system through its lifecycle;
6. A list of the harmonised standards applied in full or in part the references of which have
been published in the Official Journal of the European Union; where no such harmonised
standards have been applied, a detailed description of the solutions adopted to meet the
requirements set out in Title III, Chapter 2, including a list of other relevant standards and
technical specifications applied;
7. A copy of the EU declaration of conformity;
8. A detailed description of the system in place to evaluate the AI system performance in the
post-market phase in accordance with Article 61, including the post-market monitoring
plan referred to in Article 61(3).
ANNEX V
EU DECLARATION OF CONFORMITY
The EU declaration of conformity referred to in Article 48, shall contain all of the following
information:
1. AI system name and type and any additional unambiguous reference allowing
identification and traceability of the AI system;
2. Name and address of the provider or, where applicable, their authorised representative;
3. A statement that the EU declaration of conformity is issued under the sole responsibility of
the provider;
4. A statement that the AI system in question is in conformity with this Regulation and, if
applicable, with any other relevant Union legislation that provides for the issuing of an EU
declaration of conformity;
5. References to any relevant harmonised standards used or any other common specification
in relation to which conformity is declared;
6. Where applicable, the name and identification number of the notified body, a description
of the conformity assessment procedure performed and identification of the certificate
issued;
7. Place and date of issue of the declaration, name and function of the person who signed it as
well as an indication for, and on behalf of whom, that person signed, signature.
ANNEX VI
CONFORMITY ASSESSMENT PROCEDURE BASED ON INTERNAL CONTROL
1. The conformity assessment procedure based on internal control is the conformity
assessment procedure based on points 2 to 4.
2. The provider verifies that the established quality management system is in compliance with
the requirements of Article 17.
3. The provider examines the information contained in the technical documentation in order
to assess the compliance of the AI system with the relevant essential requirements set out
in Title III, Chapter 2.
4. The provider also verifies that the design and development process of the AI system and its
post-market monitoring as referred to in Article 61 is consistent with the technical
documentation.
ANNEX VII
CONFORMITY BASED ON ASSESSMENT OF QUALITY MANAGEMENT SYSTEM
AND ASSESSMENT OF TECHNICAL DOCUMENTATION
1. Introduction
Conformity based on assessment of quality management system and assessment of the
technical documentation is the conformity assessment procedure based on points 2 to 5.
2. Overview
The approved quality management system for the design, development and testing of AI
systems pursuant to Article 17 shall be examined in accordance with point 3 and shall be
subject to surveillance as specified in point 5. The technical documentation of the AI
system shall be examined in accordance with point 4.
3. Quality management system
3.1. The application of the provider shall include:
(a) the name and address of the provider and, if the application is lodged by the
authorised representative, their name and address as well;
(b) the list of AI systems covered under the same quality management system;
(c) the technical documentation for each AI system covered under the same quality
management system;
(d) the documentation concerning the quality management system which shall cover all
the aspects listed under Article 17;
(e) a description of the procedures in place to ensure that the quality management system
remains adequate and effective;
(f) a written declaration that the same application has not been lodged with any other
notified body.
3.2. The quality management system shall be assessed by the notified body, which shall
determine whether it satisfies the requirements referred to in Article 17.
The decision shall be notified to the provider or its authorised representative.
The notification shall contain the conclusions of the assessment of the quality management
system and the reasoned assessment decision.
3.3. The quality management system as approved shall continue to be implemented and
maintained by the provider so that it remains adequate and efficient.
3.4. Any intended change to the approved quality management system or the list of AI systems
covered by the latter shall be brought to the attention of the notified body by the provider.
The proposed changes shall be examined by the notified body, which shall decide whether
the modified quality management system continues to satisfy the requirements referred to
in point 3.2 or whether a reassessment is necessary.
The notified body shall notify the provider of its decision. The notification shall contain
the conclusions of the examination of the changes and the reasoned assessment decision.
4. Control of the technical documentation.
4.1. In addition to the application referred to in point 3, an application with a notified body of
their choice shall be lodged by the provider for the assessment of the technical
documentation relating to the AI system which the provider intends to place on the market
or put into service and which is covered by the quality management system referred to
under point 3.
4.2. The application shall include:
(a) the name and address of the provider;
(b) a written declaration that the same application has not been lodged with any other
notified body;
(c) the technical documentation referred to in Annex IV.
4.3. The technical documentation shall be examined by the notified body. Where relevant and
limited to what is necessary to fulfil their tasks, the notified body shall be granted full
access to the training, validation, and testing datasets used, including, where appropriate
and subject to security safeguards, through application programming interfaces (API) or
other relevant technical means and tools enabling remote access.
4.4. In examining the technical documentation, the notified body may require that the provider
supplies further evidence or carries out further tests so as to enable a proper assessment of
conformity of the AI system with the requirements set out in Title III, Chapter 2.
Whenever the notified body is not satisfied with the tests carried out by the provider, the
notified body shall directly carry out adequate tests, as appropriate.
4.5. Notified bodies shall be granted access to the source code of the AI system upon a
reasoned request and only when the following cumulative conditions are fulfilled:
a) Access to source code is necessary to assess the conformity of the high-risk AI system
with the requirements set out in Title III, Chapter 2, and
b) testing/auditing procedures and verifications based on the data and documentation
provided by the provider have been exhausted or proved insufficient.
4.6. The decision shall be notified to the provider or its authorised representative. The
notification shall contain the conclusions of the assessment of the technical documentation
and the reasoned assessment decision.
Where the AI system is in conformity with the requirements set out in Title III, Chapter 2,
an EU technical documentation assessment certificate shall be issued by the notified body.
The certificate shall indicate the name and address of the provider, the conclusions of the
examination, the conditions (if any) for its validity and the data necessary for the
identification of the AI system.
The certificate and its annexes shall contain all relevant information to allow the
conformity of the AI system to be evaluated, and to allow for control of the AI system
while in use, where applicable.
Where the AI system is not in conformity with the requirements set out in Title III, Chapter
2, the notified body shall refuse to issue an EU technical documentation assessment
certificate and shall inform the applicant accordingly, giving detailed reasons for its
refusal.
Where the AI system does not meet the requirement relating to the data used to train it, retraining of the AI system will be needed prior to the application for a new conformity
assessment. In this case, the reasoned assessment decision of the notified body refusing to
issue the EU technical documentation assessment certificate shall contain specific
considerations on the quality data used to train the AI system, notably on the reasons for
non-compliance.
4.7. Any change to the AI system that could affect the compliance of the AI system with the
requirements or its intended purpose shall be approved by the notified body which issued
the EU technical documentation assessment certificate. The provider shall inform such
notified body of its intention to introduce any of the above-mentioned changes or if it
becomes otherwise aware of the occurrence of such changes. The intended changes shall
be assessed by the notified body which shall decide whether those changes require a new
conformity assessment in accordance with Article 43(4) or whether they could be
addressed by means of a supplement to the EU technical documentation assessment
certificate. In the latter case, the notified body shall assess the changes, notify the provider
of its decision and, where the changes are approved, issue to the provider a supplement to
the EU technical documentation assessment certificate.
5. Surveillance of the approved quality management system.
5.1. The purpose of the surveillance carried out by the notified body referred to in Point 3 is to
make sure that the provider duly fulfils the terms and conditions of the approved quality
management system.
5.2. For assessment purposes, the provider shall allow the notified body to access the premises
where the design, development, testing of the AI systems is taking place. The provider
shall further share with the notified body all necessary information.
5.3. The notified body shall carry out periodic audits to make sure that the provider maintains
and applies the quality management system and shall provide the provider with an audit
report. In the context of those audits, the notified body may carry out additional tests of the
AI systems for which an EU technical documentation assessment certificate was issued.
ANNEX VIII
INFORMATION TO BE SUBMITTED UPON THE REGISTRATION OF OPERATORS
AND HIGH-RISK AI SYSTEMS IN ACCORDANCE WITH ARTICLE 51
Providers, authorised representatives and users that are public authorities, agencies or bodies shall
submit the information referred to in Part I. Providers or, when applicable, authorised
representatives shall ensure that the information on their high-risk AI systems referred to in Part II,
1 to 11 is complete, correct and kept up-to-date. Information laid down in II.12 shall be
automatically generated by the database.
Part I. Information related to operators (upon operators’ registration)
-1. Type of operator (provider, authorised representative or user);
1. Name, address and contact details of the provider;
2. Where submission of information is carried out by another person on behalf of the
operator, the name, address and contact details of that person;
Part II. Information related to the high-risk AI system
1. Name, address and contact details of the provider
2. Name, address and contact details of the authorised representative, where applicable;
3. AI system trade name and any additional unambiguous reference allowing identification
and traceability of the AI system;
4. Description of the intended purpose of the AI system;
5. Status of the AI system (on the market, or in service; no longer placed on the market/in
service, recalled);
6. Type, number and expiry date of the certificate issued by the notified body and the name or
identification number of that notified body, when applicable;
7. A scanned copy of the certificate referred to in point 6, when applicable;
8. Member States in which the AI system is or has been placed on the market, put into service
or made available in the Union;
9. A copy of the EU declaration of conformity referred to in Article 48;
10. Electronic instructions for use;
11. URL for additional information (optional).
12. Name, address and contact details of users
ANNEX VIIIa
INFORMATION TO BE SUBMITTED UPON THE REGISTRATION OF HIGH-RISK AI
SYSTEMS LISTED IN ANNEX III IN RELATION TO TESTING IN REAL WORLD CONDITIONS
IN ACCORDANCE WITH ARTICLE 54a
The following information shall be provided and thereafter kept up to date with regard to testing in
real world conditions to be registered in accordance with Article 54a:
1. Union-wide unique single identification number of the testing in real world conditions;
2. Name and contact details of the provider or prospective provider and users involved in the testing
in real world conditions;
3. A brief description of the AI system, its intended purpose and other information necessary for the
identification of the system;
4. A summary of the main characteristics of the plan for testing in real world conditions;
5. Information on the suspension or termination of the testing in real world conditions.
ANNEX IX
UNION LEGISLATION ON LARGE-SCALE IT SYSTEMS IN THE AREA OF FREEDOM,
SECURITY AND JUSTICE
1. Schengen Information System
(a) Regulation (EU) 2018/1860 of the European Parliament and of the Council of 28
November 2018 on the use of the Schengen Information System for the return of
illegally staying third-country nationals (OJ L 312, 7.12.2018, p. 1).
(b) Regulation (EU) 2018/1861 of the European Parliament and of the Council of 28
November 2018 on the establishment, operation and use of the Schengen Information
System (SIS) in the field of border checks, and amending the Convention
implementing the Schengen Agreement, and amending and repealing Regulation
(EC) No 1987/2006 (OJ L 312, 7.12.2018, p. 14)
(c) Regulation (EU) 2018/1862 of the European Parliament and of the Council of 28
November 2018 on the establishment, operation and use of the Schengen Information
System (SIS) in the field of police cooperation and judicial cooperation in criminal
matters, amending and repealing Council Decision 2007/533/JHA, and repealing
Regulation (EC) No 1986/2006 of the European Parliament and of the Council and
Commission Decision 2010/261/EU (OJ L 312, 7.12.2018, p. 56).
2. Visa Information System
(a) Proposal for a REGULATION OF THE EUROPEAN PARLIAMENT AND OF
THE COUNCIL amending Regulation (EC) No 767/2008, Regulation (EC) No
810/2009, Regulation (EU) 2017/2226, Regulation (EU) 2016/399, Regulation
XX/2018 [Interoperability Regulation], and Decision 2004/512/EC and repealing
Council Decision 2008/633/JHA - COM(2018) 302 final. To be updated once the
Regulation is adopted (April/May 2021) by the co-legislators.
3. Eurodac
(a) Amended proposal for a REGULATION OF THE EUROPEAN PARLIAMENT
AND OF THE COUNCIL on the establishment of 'Eurodac' for the comparison of
biometric data for the effective application of Regulation (EU) XXX/XXX
[Regulation on Asylum and Migration Management] and of Regulation (EU)
XXX/XXX [Resettlement Regulation], for identifying an illegally staying third-country national or stateless person and on requests for the comparison with Eurodac
data by Member States' law enforcement authorities and Europol for law
enforcement purposes and amending Regulations (EU) 2018/1240 and (EU)
2019/818 – COM(2020) 614 final.
4. Entry/Exit System
(a) Regulation (EU) 2017/2226 of the European Parliament and of the Council of 30
November 2017 establishing an Entry/Exit System (EES) to register entry and exit
data and refusal of entry data of third-country nationals crossing the external borders
of the Member States and determining the conditions for access to the EES for law
enforcement purposes, and amending the Convention implementing the Schengen
Agreement and Regulations (EC) No 767/2008 and (EU) No 1077/2011 (OJ L 327,
9.12.2017, p. 20).
5. European Travel Information and Authorisation System
(a) Regulation (EU) 2018/1240 of the European Parliament and of the Council of 12
September 2018 establishing a European Travel Information and Authorisation
System (ETIAS) and amending Regulations (EU) No 1077/2011, (EU) No 515/2014,
(EU) 2016/399, (EU) 2016/1624 and (EU) 2017/2226 (OJ L 236, 19.9.2018, p. 1).
(b) Regulation (EU) 2018/1241 of the European Parliament and of the Council of 12
September 2018 amending Regulation (EU) 2016/794 for the purpose of establishing
a European Travel Information and Authorisation System (ETIAS) (OJ L 236,
19.9.2018, p. 72).
6. European Criminal Records Information System on third-country nationals and stateless
persons
(a) Regulation (EU) 2019/816 of the European Parliament and of the Council of 17 April
2019 establishing a centralised system for the identification of Member States
holding conviction information on third-country nationals and stateless persons
(ECRIS-TCN) to supplement the European Criminal Records Information System
and amending Regulation (EU) 2018/1726 (OJ L 135, 22.5.2019, p. 1).
7. Interoperability
(a) Regulation (EU) 2019/817 of the European Parliament and of the Council of 20 May
2019 on establishing a framework for interoperability between EU information
systems in the field of borders and visa (OJ L 135, 22.5.2019, p. 27).
(b) Regulation (EU) 2019/818 of the European Parliament and of the Council of 20 May
2019 on establishing a framework for interoperability between EU information
systems in the field of police and judicial cooperation, asylum and migration (OJ L
135, 22.5.2019, p. 85).
______________" | 952a608c194794c90e8d41423d0154f1 | 30 | dianewan | 2023-05-23T23:45:12 | bad | [] |
this code is not working like i expect — how do i fix it?
resultWorkerErr := make(chan error)
defer close(resultWorkerErr)
go func() {
    defer cancel()
    resultWorkerErr <- b.resultWorker(ctx)
}()

err := b.worker(ctx)
cancel()
if err == nil {
    return <-resultWorkerErr
}
return multierror.Append(err, <-resultWorkerErr) | 2d3e4ac9c3977356a4cb64ed6e105a9e | 31 | dianewan | 2023-05-23T23:46:16 | good | [
"code"
] |
generate types for a typescript function

const load = value => {
  switch (type(value)) {
    case 'Number':
      return new Decimal(value);
    case 'Array':
      return map(load, value);
    case 'Object':
      return compose(
        reduce(applyRecursively(load), {}),
        toPairs
      )(value);
    default:
      return value;
  }
} | 77309a70372325126cf0864240145b90 | 32 | dianewan | 2023-05-23T23:47:18 | bad | [] |
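One shape an answer to the prompt above could take is sketched below. The prompt does not show its imports, so the sketch assumes Decimal is decimal.js and swaps the unshown Ramda-style helpers (type, map, compose, reduce, toPairs, applyRecursively) for plain TypeScript so the recursive mapping can be written as a conditional type; this is an illustration under those assumptions, not the one canonical typing.

import Decimal from "decimal.js"; // assumption: the Decimal used in the prompt

// Numbers become Decimals; arrays and objects are mapped recursively,
// everything else passes through unchanged.
type Loaded<T> =
  T extends number ? Decimal :
  T extends Array<infer U> ? Loaded<U>[] :
  T extends object ? { [K in keyof T]: Loaded<T[K]> } :
  T;

const load = <T>(value: T): Loaded<T> => {
  if (typeof value === "number") {
    return new Decimal(value) as Loaded<T>;
  }
  if (Array.isArray(value)) {
    return value.map(load) as Loaded<T>;
  }
  if (value !== null && typeof value === "object") {
    return Object.fromEntries(
      Object.entries(value).map(([k, v]) => [k, load(v)])
    ) as Loaded<T>;
  }
  return value as Loaded<T>;
};

// usage sketch: load({ a: 1, b: [2, 3] }) has type { a: Decimal; b: Decimal[] }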
Use the following information to write the Portable Device interface and the StandingFan class
PortableDevice Interface
The PortableDevice interface does not specify any behaviour. It allows a standing fan to be subtyped
as a PortableDevice.
The StandingFan class is a subclass of the Fan class and implements the PortableDevice interface
and the Device interface with the behaviour required for standing fans in Table 1. It cannot be
subclassed. IDs take the following pattern: SFAN1, SFAN2, SFAN3 for StandingFan objects.
This assignment requires you to write code for a cooling simulation involving rooms and
devices. Three types of devices can be used to cool a room: air conditioners, ceiling fans, and
standing fans. Air conditioners and ceiling fans cannot be removed from a room once they are added.
Standing fans however are portable devices that can be swapped between rooms only if they are not
in use. The table below shows the different properties of each device: portability, breeziness,
noisiness, and cooling power.
Here's Table 1 -
Device - Air Conditioner - Portable = no, Breeziness = 0, Noisiness = 0, Cools by = 5.
Device - Ceiling Fan- Portable = no, Breeziness = 2, Noisiness = 0, Cools by = 3.
Device - Standing Fan- Portable = Yes, Breeziness = 2, Noisiness = 2, Cools by = 1.
public class CeilingFan extends Fan {
    public CeilingFan() {
        super(0); // Ceiling fans are not very noisy
        setId("CFAN" + getFanCount()); // IDs follow the pattern CFAN1, CFAN2, etc.
        setBreezy(true);
        setNoisy(false);
        setCoolsBy(3);
        setBreeziness(2);
    }
}

public abstract class Fan implements Device {
    private static int fanCount = 0;
    private String id;
    private boolean isOn;
    private boolean isBreezy;
    private boolean isNoisy;
    private int coolsBy;
    private int breeziness;
    private int noisiness;

    public Fan(int noisiness) {
        fanCount++;
        this.id = Integer.toString(fanCount);
        this.isOn = false;
        this.isBreezy = true;
        this.isNoisy = true;
        this.coolsBy = 0;
        this.breeziness = 2;
        this.noisiness = noisiness;
    }

    @Override
    public String getID() {
        return id;
    }

    @Override
    public boolean isBreezy() {
        return isBreezy;
    }

    @Override
    public boolean isNoisy() {
        return isNoisy;
    }

    @Override
    public boolean isOn() {
        return isOn;
    }

    @Override
    public void turnOn() {
        isOn = true;
    }

    @Override
    public void turnOff() {
        isOn = false;
    }

    @Override
    public int coolsBy() {
        return coolsBy;
    }

    @Override
    public int getBreeziness() {
        return breeziness;
    }

    @Override
    public int getNoisiness() {
        return noisiness;
    }

    public void setBreezy(boolean isBreezy) {
        this.isBreezy = isBreezy;
    }

    public void setNoisy(boolean isNoisy) {
        this.isNoisy = isNoisy;
    }

    public void setCoolsBy(int coolsBy) {
        this.coolsBy = coolsBy;
    }

    public void setBreeziness(int breeziness) {
        this.breeziness = breeziness;
    }

    public static int getFanCount() {
        return fanCount;
    }

    public void setId(String id) {
        this.id = id;
    }
}

public interface Device {
    String getID();
    boolean isBreezy();
    boolean isNoisy();
    boolean isOn();
    void turnOn();
    void turnOff();
    int coolsBy();
    int getBreeziness();
    int getNoisiness();
} | 302b8631277ae2d2d54fe3e722ece625 | 33 | dianewan | 2023-05-23T23:48:05 | good | [
"code"
] |
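The assignment above asks for Java; purely to illustrate the required shape, here is a compact sketch with TypeScript standing in. It stubs the Device contract from the listing and flattens the Fan superclass away for brevity (the Java answer would instead extend the given Fan class); values follow Table 1 and the SFAN ID pattern. TypeScript has no final keyword, so the no-subclassing rule appears only as a comment, whereas in Java it would be a final class.

// Stub of the Device contract from the Java listing above, plus the two
// types the assignment asks for; TypeScript stands in for Java here.
interface Device {
  getID(): string;
  isBreezy(): boolean;
  isNoisy(): boolean;
  isOn(): boolean;
  turnOn(): void;
  turnOff(): void;
  coolsBy(): number;
  getBreeziness(): number;
  getNoisiness(): number;
}

// PortableDevice specifies no behaviour: a pure marker interface.
interface PortableDevice {}

let standingFanCount = 0; // stands in for a private static counter

// Not subclassable per the spec; in Java this would be a final class.
class StandingFan implements Device, PortableDevice {
  private id = `SFAN${++standingFanCount}`; // SFAN1, SFAN2, SFAN3, ...
  private on = false;
  getID() { return this.id; }
  isBreezy() { return true; }    // breeziness 2 (Table 1)
  isNoisy() { return true; }     // noisiness 2 (Table 1)
  isOn() { return this.on; }
  turnOn() { this.on = true; }
  turnOff() { this.on = false; }
  coolsBy() { return 1; }        // cools by 1 (Table 1)
  getBreeziness() { return 2; }
  getNoisiness() { return 2; }
}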
I'm currently trying to learn Rust, and I'm trying to make a sum function that takes in a generic types list as parameter, calculates the sum and gives it back. However when compiling the code I get the following error:
mismatched types
expected type parameter `T`, found `&T`
And this happens on the line that says `sum += &val`.
The code itself looks as follows:
use std::io;
use std::ops::AddAssign;

fn sum_list<T: AddAssign<T> + Default>(list: &[T]) -> T {
    let mut sum: T = T::default();
    for &val in list.iter() {
        sum += &val;
    }
    sum
}

fn main() {
    let num_list = vec![1,2,3,4,5];
    println!("Sum of list = {}", sum_list(&num_list));
}
How would I go around fixing this bug? | 7a717eb633dd939e53a660024dfd16b8 | 34 | dianewan | 2023-05-23T23:48:20 | good | [
"code"
] |
convert this typescript code to flutter
filesize(url: string, callback: any) {
  var xhr = new XMLHttpRequest();
  xhr.open("HEAD", url, true); // Notice "HEAD" instead of "GET",
                               // to get only the header
  xhr.onreadystatechange = function () {
    if (this.readyState == this.DONE) {
      callback(parseInt(xhr.getResponseHeader("Content-Length")));
    }
  };
  xhr.send();
} | cc01366aec8dd1d6ccee34f46d2f2ddf | 35 | dianewan | 2023-05-23T23:48:33 | good | [
"code"
] |
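For the prompt above, the callback-based XHR is the part that ports awkwardly to Flutter; restating the same HEAD-request logic as a Promise first tends to help, since it then maps almost line for line onto Dart's http package (http.head, then reading the content-length header). A minimal sketch using the standard fetch API:

// Same idea as the XHR version: issue a HEAD request so only headers are
// fetched, then read Content-Length; a Promise replaces the callback.
async function filesize(url: string): Promise<number> {
  const response = await fetch(url, { method: "HEAD" });
  const length = response.headers.get("Content-Length");
  if (length === null) {
    throw new Error("Content-Length header missing");
  }
  return parseInt(length, 10);
}

// usage sketch:
// filesize("https://example.com/file.bin").then(size => console.log(size));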