Dataset columns (name: type and observed range):

- status: stringclasses (1 value)
- repo_name: stringclasses (13 values)
- repo_url: stringclasses (13 values)
- issue_id: int64 (1 to 104k)
- updated_files: stringlengths (10 to 1.76k)
- title: stringlengths (4 to 369)
- body: stringlengths (0 to 254k, may be null)
- issue_url: stringlengths (38 to 55)
- pull_url: stringlengths (38 to 53)
- before_fix_sha: stringlengths (40)
- after_fix_sha: stringlengths (40)
- report_datetime: unknown
- language: stringclasses (5 values)
- commit_datetime: unknown

status | repo_name | repo_url | issue_id | updated_files | title | body | issue_url | pull_url | before_fix_sha | after_fix_sha | report_datetime | language | commit_datetime
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
closed | apache/airflow | https://github.com/apache/airflow | 29,531 | ["airflow/ti_deps/deps/prev_dagrun_dep.py", "tests/ti_deps/deps/test_prev_dagrun_dep.py"] | Dynamic task mapping does not always create mapped tasks | ### Apache Airflow version
2.5.1
### What happened
Same problem as https://github.com/apache/airflow/issues/28296, but seems to happen nondeterministically, and still happens when ignoring `depends_on_past=True`.
I've got a task that retrieves some filenames, which then creates dynamically mapped tasks to move the files, one per task.
I'm using a similar task across multiple DAGs. However, task mapping fails on some DAG runs: it inconsistently happens per DAG run, and some DAGs do not seem to be affected at all. These seem to be the DAGs where no task was ever mapped, so that the mapped task instance ended up in a Skipped state.
What happens is that multiple files will be found, but only a single dynamically mapped task will be created. This task never starts and has map_index of -1. It can be found under the "List instances, all runs" menu, but says "No Data found." under the "Mapped Tasks" tab.
![Screenshot 2023-02-14 at 13 29 15](https://user-images.githubusercontent.com/64646000/218742434-c132d3c1-8013-446f-8fd0-9b485506f43e.png)
![Screenshot 2023-02-14 at 13 29 25](https://user-images.githubusercontent.com/64646000/218742461-fb0114f6-6366-403b-841e-03b0657e3561.png)
When I press the "Run" button when the mapped task is selected, the following error appears:
```
Could not queue task instance for execution, dependencies not met: Previous Dagrun State: depends_on_past is true for this task's DAG, but the previous task instance has not run yet., Task has been mapped: The task has yet to be mapped!
```
The previous task _has_ run however. No errors appeared in my Airflow logs.
When I try to run the task with **Ignore All Deps** enabled, I get the error:
```
Could not queue task instance for execution, dependencies not met: Previous Dagrun State: depends_on_past is true for this task's DAG, but the previous task instance has not run yet., Task has been mapped: The task has yet to be mapped!
```
This last bit is a contradiction: the task cannot be both mapped and not mapped at the same time.
If the number of mapped tasks is 0 while in this erroneous state, the mapped tasks will not be marked as skipped as expected.
### What you think should happen instead
The mapped tasks should not get stuck with "no status".
The mapped tasks should be created and run successfully, or, in the case of a 0-length list output from the upstream task, they should be skipped.
### How to reproduce
Run the DAG below; if it runs successfully, clear several tasks out of order. This may not immediately reproduce the bug, but after some task clearing it always ends up in the faulty state described above for me.
```
from airflow import DAG
from airflow.decorators import task
import datetime as dt
from airflow.operators.python import PythonOperator
import random
@task
def get_filenames_kwargs():
return [
{"file_name": i}
for i in range(random.randint(0, 2))
]
def print_filename(file_name):
print(file_name)
with DAG(
dag_id="dtm_test_2",
start_date=dt.datetime(2023, 2, 10),
default_args={
"owner": "airflow",
"depends_on_past": True,
},
schedule="@daily",
) as dag:
get_filenames_task = get_filenames_kwargs.override(task_id="get_filenames_task")()
print_filename_task = PythonOperator.partial(
task_id="print_filename_task",
python_callable=print_filename,
).expand(op_kwargs=get_filenames_task)
```
### Operating System
Amazon Linux v2
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29531 | https://github.com/apache/airflow/pull/32397 | 685328e3572043fba6db432edcaacf8d06cf88d0 | 73bc49adb17957e5bb8dee357c04534c6b41f9dd | "2023-02-14T12:47:12Z" | python | "2023-07-23T23:53:52Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,515 | ["airflow/www/templates/airflow/task.html"] | Hide non-used docs attributes from Task Instance Detail | ### Description
Inside a BashOperator, I added a markdown snippet of documentation for the "Task Instance Details" of my Airflow nodes.
Now I can see my markdown, defined by the attribute "doc_md", but also
Attribute: bash_command
Attribute: doc
Attribute: doc_json
Attribute: doc_rst
Attribute: doc_yaml
I think it would look better if only the chosen doc type were shown in the Task Instance details, instead of listing the names of the other attributes with nothing added to them.
![screenshot](https://user-images.githubusercontent.com/23013638/218585618-f75d180c-6319-4cc5-a569-835af82b3e52.png)
### Use case/motivation
I would like to see only the doc attribute type that I chose to add in the task instance details, and hide all the other doc types.
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29515 | https://github.com/apache/airflow/pull/29545 | 655ffb835eb4c5343c3f2b4d37b352248f2768ef | f2f6099c5a2f3613dce0cc434a95a9479d748cf5 | "2023-02-13T22:10:31Z" | python | "2023-02-16T14:17:49Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,488 | ["airflow/providers/cncf/kubernetes/hooks/kubernetes.py", "airflow/providers/cncf/kubernetes/operators/pod.py", "airflow/providers/cncf/kubernetes/triggers/pod.py", "tests/providers/cncf/kubernetes/operators/test_pod.py", "tests/providers/cncf/kubernetes/triggers/test_pod.py", "tests/providers/google/cloud/operators/test_kubernetes_engine.py"] | KPO - deferrable - Invalid kube-config file. Expected key contexts in kube-config | ### Apache Airflow Provider(s)
cncf-kubernetes
### Versions of Apache Airflow Providers
5.2.0
### Apache Airflow version
2.5.1
### Operating System
linux 5.15.0-60-generic - Ubuntu 22.04
### Deployment
Docker-Compose
### Deployment details
_No response_
### What happened
**The exact same task in sync mode (normal) works, but in async (deferrable) mode it fails.**
airflow connection - kubernetes_default ->
```json
{
"conn_type": "kubernetes",
"extra": "{\"extra__kubernetes__in_cluster\": false, \"extra__kubernetes__kube_config_path\": \"/opt/airflow/include/.kube/config\", \"extra__kubernetes__namespace\": \"default\", \"extra__kubernetes__cluster_context\": \"kind-kind\", \"extra__kubernetes__disable_verify_ssl\": false, \"extra__kubernetes__disable_tcp_keepalive\": false}"
}
```
```python
from pendulum import today
from airflow import DAG
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import KubernetesPodOperator
dag = DAG(
dag_id="kubernetes_dag",
schedule_interval="0 0 * * *",
start_date=today("UTC").add(days=-1)
)
with dag:
cmd = "echo toto && sleep 10 && echo finish"
KubernetesPodOperator(
task_id="task-a",
namespace="default",
kubernetes_conn_id="kubernetes_default",
name="airflow-test-pod",
image="alpine:3.16.2",
cmds=["sh", "-c", cmd],
is_delete_operator_pod=True,
deferrable=True,
get_logs=True,
)
KubernetesPodOperator(
task_id="task-B",
namespace="default",
kubernetes_conn_id="kubernetes_default",
name="airflow-test-pod",
image="alpine:3.16.2",
cmds=["sh", "-c", cmd],
is_delete_operator_pod=True,
get_logs=True,
)
```
```log
[2023-02-12, 09:53:24 UTC] {taskinstance.py:1768} ERROR - Task failed with exception
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py", line 611, in execute_complete
raise AirflowException(event["message"])
airflow.exceptions.AirflowException: Invalid kube-config file. Expected key contexts in kube-config
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py", line 630, in execute_complete
self.post_complete_action(
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py", line 654, in post_complete_action
self.cleanup(
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py", line 673, in cleanup
raise AirflowException(
airflow.exceptions.AirflowException: Pod airflow-test-pod-vw8fxf25 returned a failure:
```
```
### What you think should happen instead
A deferrable KPO should work the same as a non-deferrable KPO and not fail.
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
![Screenshot from 2023-02-12 11-02-04](https://user-images.githubusercontent.com/10202690/218304576-df6b6524-1703-4ecf-8a51-825ff7155a06.png) | https://github.com/apache/airflow/issues/29488 | https://github.com/apache/airflow/pull/29498 | 155ef09721e5af9a8be8841eb0e690edbfe36188 | b5296b74361bfe2449033eca5f732c4a4377f6bb | "2023-02-12T10:00:17Z" | python | "2023-04-22T17:30:42Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,435 | ["airflow/decorators/base.py", "tests/decorators/test_python.py"] | TaskFlow API `multiple_outputs` inferral causes import errors when using TYPE_CHECKING | ### Apache Airflow version
2.5.1
### What happened
When using the TaskFlow API, I like to generally keep a good practice of adding type annotations in the TaskFlow functions so others reading the DAG and task code have better context around inputs/outputs, keep imports solely used for typing behind `typing.TYPE_CHECKING`, and utilize PEP 563 for forwarding annotation evaluations. Unfortunately, when using ~PEP 563 _and_ `TYPE_CHECKING`~ just TYPE_CHECKING, DAG import errors occur with a "NameError: <name> is not defined." exception.
### What you think should happen instead
Users should be free to use ~PEP 563 and~ `TYPE_CHECKING` when using the TaskFlow API and not hit DAG import errors along the way.
### How to reproduce
Using a straightforward use case of transforming a DataFrame, let's assume this toy example:
```py
from __future__ import annotations
from typing import TYPE_CHECKING, Any
from pendulum import datetime
from airflow.decorators import dag, task
if TYPE_CHECKING:
from pandas import DataFrame
@dag(start_date=datetime(2023, 1, 1), schedule=None)
def multiple_outputs():
@task()
def transform(df: DataFrame) -> dict[str, Any]:
...
transform()
multiple_outputs()
```
Add this DAG to your DAGS_FOLDER and the following import error should be observed:
<img width="641" alt="image" src="https://user-images.githubusercontent.com/48934154/217713685-ec29d5cc-4a48-4049-8dfa-56cbd76cddc3.png">
### Operating System
Debian GNU/Linux
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==6.2.0
apache-airflow-providers-apache-hive==5.1.1
apache-airflow-providers-apache-livy==3.2.0
apache-airflow-providers-celery==3.1.0
apache-airflow-providers-cncf-kubernetes==5.1.1
apache-airflow-providers-common-sql==1.3.3
apache-airflow-providers-databricks==4.0.0
apache-airflow-providers-dbt-cloud==2.3.1
apache-airflow-providers-elasticsearch==4.3.3
apache-airflow-providers-ftp==3.3.0
apache-airflow-providers-google==8.8.0
apache-airflow-providers-http==4.1.1
apache-airflow-providers-imap==3.1.1
apache-airflow-providers-microsoft-azure==5.1.0
apache-airflow-providers-postgres==5.4.0
apache-airflow-providers-redis==3.1.0
apache-airflow-providers-sftp==4.2.1
apache-airflow-providers-snowflake==4.0.2
apache-airflow-providers-sqlite==3.3.1
apache-airflow-providers-ssh==3.4.0
astronomer-providers==1.14.0
### Deployment
Astronomer
### Deployment details
OOTB local Airflow install with LocalExecutor built with the Astro CLI.
### Anything else
- This behavior/error was not observed using Airflow 2.4.3.
- As a workaround, `multiple_outputs` can be explicitly set on the TaskFlow function to skip the inferral (see the sketch below).
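To illustrate the workaround from the last bullet (a sketch based on the toy DAG above, not code from the eventual fix): passing `multiple_outputs` explicitly means the decorator never inspects the return annotation, so the `TYPE_CHECKING`-only import is not evaluated during DAG parsing.
```python
from __future__ import annotations

from typing import TYPE_CHECKING, Any

from pendulum import datetime

from airflow.decorators import dag, task

if TYPE_CHECKING:
    from pandas import DataFrame


@dag(start_date=datetime(2023, 1, 1), schedule=None)
def multiple_outputs_workaround():
    # Explicit value: the annotation-based inferral is skipped entirely.
    @task(multiple_outputs=True)
    def transform(df: DataFrame) -> dict[str, Any]:
        ...

    transform()


multiple_outputs_workaround()
```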
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29435 | https://github.com/apache/airflow/pull/29445 | f9e9d23457cba5d3e18b5bdb7b65ecc63735b65b | b1306065054b98a63c6d3ab17c84d42c2d52809a | "2023-02-09T03:55:48Z" | python | "2023-02-12T07:45:26Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,432 | ["airflow/models/mappedoperator.py", "tests/models/test_mappedoperator.py", "tests/test_utils/mock_operators.py"] | Jinja templating doesn't work with container_resources when using dymanic task mapping with Kubernetes Pod Operator | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
Google Cloud Composer Version - 2.1.5
Airflow Version - 2.4.3
We are trying to use dynamic task mapping with the Kubernetes Pod Operator. Our use case is to return the pod's CPU and memory requirements from a function that is included as a macro in the DAG.
Without dynamic task mapping it works perfectly, but when used with dynamic task mapping, the macro is not recognized.
container_resources is a templated field as per the [docs](https://airflow.apache.org/docs/apache-airflow-providers-cncf-kubernetes/stable/_api/airflow/providers/cncf/kubernetes/operators/kubernetes_pod/index.html#airflow.providers.cncf.kubernetes.operators.kubernetes_pod.KubernetesPodOperator); the feature was introduced in this [PR](https://github.com/apache/airflow/pull/27457).
We also tried toggling the boolean `render_template_as_native_obj`, but still no luck.
Providing below a trimmed version of our DAG to help reproduce the issue (the functions returning CPU and memory are trivial here, just to show the example).
### What you think should happen instead
It should have worked similar with or without dynamic task mapping.
### How to reproduce
Deployed the following DAG in Google Cloud Composer.
```
import datetime
import os
from airflow import models
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import (
KubernetesPodOperator,
)
from kubernetes.client import models as k8s_models
dvt_image = os.environ.get("DVT_IMAGE")
default_dag_args = {"start_date": datetime.datetime(2022, 1, 1)}
def pod_mem():
return "4000M"
def pod_cpu():
return "1000m"
with models.DAG(
"sample_dag",
schedule_interval=None,
default_args=default_dag_args,
render_template_as_native_obj=True,
user_defined_macros={
"pod_mem": pod_mem,
"pod_cpu": pod_cpu,
},
) as dag:
task_1 = KubernetesPodOperator(
task_id="task_1",
name="task_1",
namespace="default",
image=dvt_image,
cmds=["bash", "-cx"],
arguments=["echo hello"],
service_account_name="sa-k8s",
container_resources=k8s_models.V1ResourceRequirements(
limits={
"memory": "{{ pod_mem() }}",
"cpu": "{{ pod_cpu() }}",
}
),
startup_timeout_seconds=1800,
get_logs=True,
image_pull_policy="Always",
config_file="/home/airflow/composer_kube_config",
dag=dag,
)
task_2 = KubernetesPodOperator.partial(
task_id="task_2",
name="task_2",
namespace="default",
image=dvt_image,
cmds=["bash", "-cx"],
service_account_name="sa-k8s",
container_resources=k8s_models.V1ResourceRequirements(
limits={
"memory": "{{ pod_mem() }}",
"cpu": "{{ pod_cpu() }}",
}
),
startup_timeout_seconds=1800,
get_logs=True,
image_pull_policy="Always",
config_file="/home/airflow/composer_kube_config",
dag=dag,
).expand(arguments=[["echo hello"]])
task_1 >> task_2
```
task_1 (without dynamic task mapping) completes successfully, while task_2(with dynamic task mapping) fails.
Looking at the error logs, it failed while rendering the Pod spec since the calls to pod_cpu() and pod_mem() are unresolved.
Here is the traceback:
Exception when attempting to create Namespaced Pod: { "apiVersion": "v1", "kind": "Pod", "metadata": { "annotations": {}, "labels": { "dag_id": "sample_dag", "task_id": "task_2", "run_id": "manual__2023-02-08T183926.890852Z-eee90e4ee", "kubernetes_pod_operator": "True", "map_index": "0", "try_number": "2", "airflow_version": "2.4.3-composer", "airflow_kpo_in_cluster": "False" }, "name": "task-2-46f76eb0432d42ae9a331a6fc53835b3", "namespace": "default" }, "spec": { "affinity": {}, "containers": [ { "args": [ "echo hello" ], "command": [ "bash", "-cx" ], "env": [], "envFrom": [], "image": "us.gcr.io/ams-e2e-testing/edw-dvt-tool", "imagePullPolicy": "Always", "name": "base", "ports": [], "resources": { "limits": { "memory": "{{ pod_mem() }}", "cpu": "{{ pod_cpu() }}" } }, "volumeMounts": [] } ], "hostNetwork": false, "imagePullSecrets": [], "initContainers": [], "nodeSelector": {}, "restartPolicy": "Never", "securityContext": {}, "serviceAccountName": "sa-k8s", "tolerations": [], "volumes": [] } }
Traceback (most recent call last):
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/cncf/kubernetes/utils/pod_manager.py", line 143, in run_pod_async
resp = self._client.create_namespaced_pod(
File "/opt/python3.8/lib/python3.8/site-packages/kubernetes/client/api/core_v1_api.py", line 7356, in create_namespaced_pod
return self.create_namespaced_pod_with_http_info(namespace, body, **kwargs) # noqa: E501
File "/opt/python3.8/lib/python3.8/site-packages/kubernetes/client/api/core_v1_api.py", line 7455, in create_namespaced_pod_with_http_info
return self.api_client.call_api(
File "/opt/python3.8/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 348, in call_api
return self.__call_api(resource_path, method,
File "/opt/python3.8/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 180, in __call_api
response_data = self.request(
File "/opt/python3.8/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 391, in request
return self.rest_client.POST(url,
File "/opt/python3.8/lib/python3.8/site-packages/kubernetes/client/rest.py", line 275, in POST
return self.request("POST", url,
File "/opt/python3.8/lib/python3.8/site-packages/kubernetes/client/rest.py", line 234, in request
raise ApiException(http_resp=r)
kubernetes.client.exceptions.ApiException: (400)
Reason: Bad Request
HTTP response headers: HTTPHeaderDict({'Audit-Id': '1ef20c0b-6980-4173-b9cc-9af5b4792e86', 'Cache-Control': 'no-cache, private', 'Content-Type': 'application/json', 'X-Kubernetes-Pf-Flowschema-Uid': '1b263a21-4c75-4ef8-8147-c18780a13f0e', 'X-Kubernetes-Pf-Prioritylevel-Uid': '3cd4cda4-908c-4944-a422-5512b0fb88d6', 'Date': 'Wed, 08 Feb 2023 18:45:23 GMT', 'Content-Length': '256'})
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Pod in version \"v1\" cannot be handled as a Pod: quantities must match the regular expression '^([+-]?[0-9.]+)([eEinumkKMGTP]*[-+]?[0-9]*)$'","reason":"BadRequest","code":400}
```
### Operating System
Google Composer Kubernetes Cluster
### Versions of Apache Airflow Providers
_No response_
### Deployment
Composer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29432 | https://github.com/apache/airflow/pull/29451 | 43443eb539058b7b4756455f76b0e883186d9250 | 5eefd47771a19dca838c8cce40a4bc5c555e5371 | "2023-02-08T19:01:33Z" | python | "2023-02-13T08:48:47Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,428 | ["pyproject.toml"] | Require newer version of pypi/setuptools to remove security scan issue (CVE-2022-40897) | ### Description
Hi. My team is evaluating airflow, so I ran a security scan on it. It is flagging a Medium security issue with pypi/setuptools. See https://nvd.nist.gov/vuln/detail/CVE-2022-40897 for details. Is it possible to require a more recent version? Or perhaps airflow users are not vulnerable to this?
### Use case/motivation
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29428 | https://github.com/apache/airflow/pull/29465 | 9c6f83bb6f3e3b57ae0abbe9eb0582fcde265702 | 41dff9875bce4800495c9132b10a6c8bff900a7c | "2023-02-08T15:11:54Z" | python | "2023-02-11T16:03:14Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,423 | ["airflow/providers/amazon/aws/hooks/glue.py", "tests/providers/amazon/aws/hooks/test_glue.py"] | GlueJobOperator throws error after migration to newest version of Airflow | ### Apache Airflow version
2.5.1
### What happened
We were using GlueJobOperator with Airflow 2.3.3 (official Docker image) and it was working well; we didn't specify the script file location because it was inferred from the job name. After migration to 2.5.1 (official Docker image) the operator fails if `s3_bucket` and `script_location` are not specified. That's the error I see:
```
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/providers/amazon/aws/operators/glue.py", line 146, in execute
glue_job_run = glue_job.initialize_job(self.script_args, self.run_job_kwargs)
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/providers/amazon/aws/hooks/glue.py", line 155, in initialize_job
job_name = self.create_or_update_glue_job()
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/providers/amazon/aws/hooks/glue.py", line 300, in create_or_update_glue_job
config = self.create_glue_job_config()
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/providers/amazon/aws/hooks/glue.py", line 97, in create_glue_job_config
raise ValueError("Could not initialize glue job, error: Specify Parameter `s3_bucket`")
ValueError: Could not initialize glue job, error: Specify Parameter `s3_bucket`
```
### What you think should happen instead
I was expecting that after migration the operator would work the same way.
### How to reproduce
Create a dag with `GlueJobOperator` operator and do not use s3_bucket or script_location arguments
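A minimal sketch of such a task (the job name here is hypothetical, and the import path reflects recent amazon provider versions), relying only on the Glue job definition that already exists in AWS instead of passing `s3_bucket` or `script_location`:
```python
from airflow.providers.amazon.aws.operators.glue import GlueJobOperator

run_existing_glue_job = GlueJobOperator(
    task_id="run_existing_glue_job",
    job_name="my_existing_glue_job",  # hypothetical job already defined in AWS Glue
    # no s3_bucket / script_location: these used to be inferred from the job definition
)
```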
### Operating System
Linux
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==7.1.0
### Deployment
Docker-Compose
### Deployment details
`apache/airflow:2.5.1-python3.10` Docker image and official docker compose
### Anything else
I believe it was commit #27893 by @romibuzi that introduced this behaviour.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29423 | https://github.com/apache/airflow/pull/29659 | 9de301da2a44385f57be5407e80e16ee376f3d39 | 6c13f04365b916e938e3bea57e37fc80890b8377 | "2023-02-08T09:09:12Z" | python | "2023-02-22T00:00:18Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,422 | ["airflow/providers/amazon/aws/transfers/dynamodb_to_s3.py", "tests/providers/amazon/aws/transfers/test_dynamodb_to_s3.py"] | Multiple AWS connections support in DynamoDBToS3Operator | ### Description
I want to add support for a separate AWS connection for DynamoDB in `DynamoDBToS3Operator` in `apache-airflow-providers-amazon`, via an `aws_dynamodb_conn_id` constructor argument.
### Use case/motivation
Sometimes DynamoDB tables and S3 buckets live in different AWS accounts so to access both resources you need to assume a role in another account from one of them.
That role can be specified in AWS connection, thus we need to support two of them in this operator.
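A hypothetical usage sketch of the proposal (note: `aws_dynamodb_conn_id` is the new argument being proposed here and does not exist yet; the other arguments reflect the operator's current interface as I understand it):
```python
from airflow.providers.amazon.aws.transfers.dynamodb_to_s3 import DynamoDBToS3Operator

backup_table = DynamoDBToS3Operator(
    task_id="backup_table",
    dynamodb_table_name="my_table",
    s3_bucket_name="bucket-in-the-other-account",
    aws_conn_id="aws_s3_account",                 # credentials/role for the S3 side
    aws_dynamodb_conn_id="aws_dynamodb_account",  # proposed: separate DynamoDB credentials/role
)
```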
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29422 | https://github.com/apache/airflow/pull/29452 | 8691c4f98c6cd6d96e87737158a9be0f6a04b9ad | 3780b01fc46385809423bec9ef858be5be64b703 | "2023-02-08T08:58:26Z" | python | "2023-03-09T22:02:18Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,405 | ["airflow/api_connexion/openapi/v1.yaml", "airflow/www/static/js/types/api-generated.ts"] | Add pagination to get_log in the rest API | ### Description
Right now, the `get_log` endpoint at `/dags/{dag_id}/dagRuns/{dag_run_id}/taskInstances/{task_id}/logs/{task_try_number}` does not have any pagination and therefore we can be forced to load extremely large text blocks, which makes everything slow. (see the workaround fix we needed to do in the UI: https://github.com/apache/airflow/pull/29390)
In `task_log_reader`, we do have `log_pos` and `offset` (see [here](https://github.com/apache/airflow/blob/main/airflow/utils/log/log_reader.py#L80-L83)). It would be great to expose those parameters in the REST API in order to break apart task instance logs into more manageable pieces.
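As a rough illustration of what exposing those parameters could look like from a client's point of view (the `offset`/`log_pos` query parameters below are hypothetical; only the endpoint path and `full_content` exist today, as far as I know):
```python
import requests

resp = requests.get(
    "http://localhost:8080/api/v1/dags/my_dag/dagRuns/my_run/taskInstances/my_task/logs/1",
    params={"full_content": False, "offset": 0, "log_pos": 10000},  # offset/log_pos: proposed
    auth=("admin", "admin"),
    headers={"Accept": "application/json"},
)
chunk = resp.json()  # would contain only the requested slice of the log
```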
### Use case/motivation
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29405 | https://github.com/apache/airflow/pull/30729 | 7d02277ae13b7d1e6cea9e6c8ff0d411100daf77 | 7d62cbb97e1bc225f09e3cfac440aa422087a8a7 | "2023-02-07T16:10:57Z" | python | "2023-04-22T20:49:40Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,396 | ["airflow/providers/google/cloud/hooks/bigquery.py", "tests/providers/google/cloud/hooks/test_bigquery.py"] | BigQuery Hook list_rows method missing page_token return value | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
apache-airflow-providers-google==7.0.0
But the problem exists in all newer versions.
### Apache Airflow version
apache-airflow==2.3.2
### Operating System
Ubuntu 20.04.4 LTS
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### What happened
The `list_rows` method in the BigQuery Hook does not return the page_token value, which is necessary for paginating query results. Same problem with `get_datasets_list` method.
The documentation for the `get_datasets_list` method even states that the page_token parameter can be accessed:
```
:param page_token: Token representing a cursor into the datasets. If not passed,
the API will return the first page of datasets. The token marks the beginning of the
iterator to be returned and the value of the ``page_token`` can be accessed at
``next_page_token`` of the :class:`~google.api_core.page_iterator.HTTPIterator`.
```
but it doesn't return HTTPIterator. Instead, it converts the `HTTPIterator` to `list[DatasetListItem]` using `list(datasets)`, making it impossible to retrieve the original `HTTPIterator` and thus impossible to obtain the `next_page_token`.
### What you think should happen instead
The `list_rows` / `get_datasets_list` methods should return an `Iterator`, OR return both the list of rows/datasets and the page_token value, to allow users to retrieve multiple result pages. For backward compatibility, we could add a parameter like `return_iterator=True` or something similar.
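A hedged sketch of what using the proposed flag could look like (the `return_iterator` parameter is only a proposal; the existing `list_rows` arguments are used as I understand them):
```python
from airflow.providers.google.cloud.hooks.bigquery import BigQueryHook

hook = BigQueryHook(gcp_conn_id="google_cloud_default")
rows_iterator = hook.list_rows(
    dataset_id="my_dataset",
    table_id="my_table",
    max_results=1000,
    return_iterator=True,  # proposed, does not exist yet
)
first_page = list(rows_iterator)
next_token = rows_iterator.next_page_token  # None when there are no further pages
```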
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29396 | https://github.com/apache/airflow/pull/30543 | d9896fd96eb91a684a512a86924a801db53eb945 | 4703f9a0e589557f5176a6f466ae83fe52644cf6 | "2023-02-07T02:26:41Z" | python | "2023-04-08T17:01:57Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,393 | ["airflow/providers/amazon/aws/log/s3_task_handler.py", "tests/providers/amazon/aws/log/test_s3_task_handler.py"] | S3TaskHandler continuously returns "*** Falling back to local log" even if log_pos is provided when log not in s3 | ### Apache Airflow Provider(s)
amazon
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==7.1.0
### Apache Airflow version
2.5.1
### Operating System
Ubuntu 18.04
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### What happened
When looking at logs in the UI for a running task with remote S3 logging, the logs for the task are only uploaded to S3 after the task has completed. The `S3TaskHandler` falls back to the local logs stored on the worker in that case (by falling back to the `FileTaskHandler` behavior) and prepends the line `*** Falling back to local log` to those logs.
This is mostly fine, but for the new log streaming behavior, this means that `*** Falling back to local log` is returned from `/get_logs_with_metadata` on each call, even if there are no new logs.
### What you think should happen instead
I'd expect the falling back message only to be included in calls with no `log_pos` in the metadata or with a `log_pos` of `0`.
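A rough sketch of the guard I have in mind (purely illustrative, not the handler's actual code): only prepend the banner on the first chunk of a streamed read.
```python
from __future__ import annotations


def prepend_fallback_banner(log: str, metadata: dict | None) -> str:
    """Return the log chunk, adding the banner only on the first read."""
    if metadata and metadata.get("log_pos", 0) > 0:
        return log  # follow-up poll, the banner was already shown
    return "*** Falling back to local log\n" + log
```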
### How to reproduce
Start a task with `logging.remote_logging` set to `True` and `logging.remote_base_log_folder` set to `s3://something` and watch the logs while the task is running. You'll see `*** Falling back to local log` printed every few seconds.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29393 | https://github.com/apache/airflow/pull/29708 | 13098d5c35cf056c3ef08ea98a1970ee1a3e76f8 | 5e006d743d1ba3781acd8e053642f2367a8e7edc | "2023-02-06T20:33:08Z" | python | "2023-02-23T21:25:39Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,358 | ["airflow/models/baseoperator.py", "airflow/models/dag.py", "airflow/models/param.py"] | Cannot use TypedDict object when defining params | ### Apache Airflow version
2.5.1
### What happened
Context: I am attempting to use [TypedDict](https://docs.python.org/3/library/typing.html#typing.TypedDict) objects to maintain the keys used in DAG params in a single place, and check for key names across multiple DAGs that use the params.
This raises an error with `mypy` as `params` expects an `Optional[Dict]`. Due to the invariance of `Dict`, this does not accept `TypedDict` objects.
What happened: I passed a `TypedDict` to the `params` arg of `DAG` and got a TypeError.
### What you think should happen instead
`TypedDict` objects should be accepted by `DAG`, which should accept `Optional[Mapping[str, Any]]`.
Unless I'm mistaken, `params` are converted to a `ParamsDict` class and therefore the appropriate type hint is a generic `Mapping` type.
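A small self-contained illustration of the invariance point (meant to be checked with mypy; no Airflow imports involved): a `TypedDict` value is rejected where `Dict[Any, Any]` is expected, but accepted where `Mapping[str, Any]` is expected.
```python
from typing import Any, Dict, Mapping, Optional, TypedDict


class MyParams(TypedDict):
    str_param: str


def takes_dict(params: Optional[Dict[Any, Any]]) -> None:
    ...


def takes_mapping(params: Optional[Mapping[str, Any]]) -> None:
    ...


p: MyParams = {"str_param": "x"}
takes_dict(p)     # mypy error: incompatible type "MyParams"
takes_mapping(p)  # OK
```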
### How to reproduce
Steps to reproduce
```Python
from typing import TypedDict
from airflow import DAG
from airflow.models import Param
class ParamsTypedDict(TypedDict):
str_param: Param
params: ParamsTypedDict = {
"str_param": Param("", type="str")
}
with DAG(
dag_id="mypy-error-dag",
# The line below raises a mypy error
# Argument "params" to "DAG" has incompatible type "ParamsTypedDict"; expected "Optional[Dict[Any, Any]]" [arg-type]
params=params,
) as dag:
pass
```
### Operating System
Amazon Linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29358 | https://github.com/apache/airflow/pull/29782 | b6392ae5fd466fa06ca92c061a0f93272e27a26b | b069df9b0a792beca66b08d873a66d5640ddadb7 | "2023-02-03T14:40:04Z" | python | "2023-03-07T21:25:15Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,329 | ["airflow/example_dags/example_setup_teardown.py", "airflow/models/abstractoperator.py", "airflow/models/dag.py", "tests/models/test_dag.py", "tests/models/test_dagrun.py"] | Automatically clear setup/teardown when clearing a dependent task | null | https://github.com/apache/airflow/issues/29329 | https://github.com/apache/airflow/pull/30271 | f4c4b7748655cd11d2c297de38563b2e6b840221 | 0c2778f348f61f3bf08b840676d681e93a60f54a | "2023-02-02T15:44:26Z" | python | "2023-06-21T13:34:18Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,325 | ["airflow/providers/cncf/kubernetes/python_kubernetes_script.py", "airflow/utils/decorators.py", "tests/decorators/test_external_python.py", "tests/decorators/test_python_virtualenv.py", "tests/providers/cncf/kubernetes/decorators/test_kubernetes.py", "tests/providers/docker/decorators/test_docker.py"] | Ensure setup/teardown work on a previously decorated function (eg task.docker) | null | https://github.com/apache/airflow/issues/29325 | https://github.com/apache/airflow/pull/30216 | 3022e2ecbb647bfa0c93fbcd589d0d7431541052 | df49ad179bddcdb098b3eccbf9bb6361cfbafc36 | "2023-02-02T15:43:06Z" | python | "2023-03-24T17:01:34Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,323 | ["airflow/models/serialized_dag.py", "tests/models/test_serialized_dag.py"] | DAG dependencies graph not updating when deleting a DAG | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
On Airflow 2.4.2, the DAG dependencies graph shows deleted DAGs that used to have dependencies on currently existing DAGs.
### What you think should happen instead
Deleted DAGs should not appear on DAG Dependencies
### How to reproduce
Create a DAG with a dependency on another DAG, such as a sensor waiting on it.
Then remove the new DAG.
### Operating System
apache/airflow:2.4.2-python3.10
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29323 | https://github.com/apache/airflow/pull/29407 | 18347d36e67894604436f3ef47d273532683b473 | 02a2efeae409bddcfedafe273fffc353595815cc | "2023-02-02T15:22:37Z" | python | "2023-02-13T19:25:49Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,322 | ["airflow/www/utils.py", "airflow/www/views.py", "tests/www/test_utils.py"] | DAG list, sorting lost when switching page | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
Hi, I'm currently on Airflow 2.4.2
On /home, when sorting by DAG/Owner/Next Run and going to the next page, the sort resets.
Sorting only helps when I'm looking for the first or last entries; everything in the middle is unreachable.
### What you think should happen instead
The sorting should continue over the pagination
### How to reproduce
Sort by any sortable field on DagList and go to the next page
### Operating System
apache/airflow:2.4.2-python3.10
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29322 | https://github.com/apache/airflow/pull/29756 | c917c9de3db125cac1beb0a58ac81f56830fb9a5 | c8cd90fa92c1597300dbbad4366c2bef49ef6390 | "2023-02-02T15:19:51Z" | python | "2023-03-02T14:59:43Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,320 | ["airflow/api/common/experimental/get_task_instance.py", "airflow/cli/commands/task_command.py", "airflow/models/dagrun.py", "airflow/models/taskinstance.py", "airflow/serialization/pydantic/dag_run.py", "airflow/serialization/pydantic/taskinstance.py", "airflow/utils/log/logging_mixin.py", "airflow/www/views.py"] | AIP-44 Migrate TaskCommand._get_ti to Internal API | https://github.com/apache/airflow/blob/main/airflow/cli/commands/task_command.py#L145 | https://github.com/apache/airflow/issues/29320 | https://github.com/apache/airflow/pull/35312 | ab6e623cb1a75f54fc419cee66a16e3d8ff1adc2 | 1e1adc569f43494aabf3712b651956636c04df7f | "2023-02-02T15:10:45Z" | python | "2023-11-08T15:53:52Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,301 | ["airflow/providers/google/cloud/operators/bigquery.py", "tests/providers/google/cloud/operators/test_bigquery.py"] | BigQueryCreateEmptyTableOperator `exists_ok` parameter doesn't throw appropriate error when set to "False" | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
I'm using `apache-airflow-providers-google==8.2.0`, but it looks like the relevant code that's causing this to occur is still in use as of `8.8.0`.
### Apache Airflow version
2.3.2
### Operating System
Debian (from Docker image `apache/airflow:2.3.2-python3.10`)
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
Deployed on an EKS cluster via Helm.
### What happened
The first task in one of my DAGs is to create an empty BigQuery table using the `BigQueryCreateEmptyTableOperator` as follows:
```python
create_staging_table = BigQueryCreateEmptyTableOperator(
task_id="create_staging_table",
dataset_id="my_dataset",
table_id="tmp_table",
schema_fields=[
{"name": "field_1", "type": "TIMESTAMP", "mode": "NULLABLE"},
{"name": "field_2", "type": "INTEGER", "mode": "NULLABLE"},
{"name": "field_3", "type": "INTEGER", "mode": "NULLABLE"}
],
exists_ok=False
)
```
Note that `exists_ok=False` is set explicitly here, but it is also the default value.
This task exits with a `SUCCESS` status even when `my_dataset.tmp_table` already exists in a given BigQuery project. The task returns the following logs:
```
[2023-02-02, 05:52:29 UTC] {bigquery.py:875} INFO - Creating table
[2023-02-02, 05:52:29 UTC] {bigquery.py:901} INFO - Table my_dataset.tmp_table already exists.
[2023-02-02, 05:52:30 UTC] {taskinstance.py:1395} INFO - Marking task as SUCCESS. dag_id=my_fake_dag, task_id=create_staging_table, execution_date=20230202T044000, start_date=20230202T055229, end_date=20230202T055230
[2023-02-02, 05:52:30 UTC] {local_task_job.py:156} INFO - Task exited with return code 0
```
### What you think should happen instead
Setting `exists_ok=False` should raise an exception and exit the task with a `FAILED` status if the table being created already exists in BigQuery.
### How to reproduce
1. Deploy Airflow 2.3.2 running Python 3.10 in some capacity
2. Ensure `apache-airflow-providers-google==8.2.0` (or 8.8.0, as I don't believe the issue has been fixed) is installed on the deployment.
3. Set up a GCP project and create a BigQuery dataset.
4. Create an empty BigQuery table with a schema.
5. Create a DAG that uses the `BigQueryCreateEmptyTableOperator` to create a new BigQuery table.
6. Run the DAG from Step 5 on the Airflow instance deployed in Step 1.
7. Observe the task's status.
### Anything else
I believe the silent failure may be occurring [here](https://github.com/apache/airflow/blob/main/airflow/providers/google/cloud/operators/bigquery.py#L1377), as the `except` statement results in a log output, but doesn't actually raise an exception or change a state that would make the task fail.
If this is in fact the case, I'd be happy to submit a PR, but appreciate any input as to any error-handling standards/consistencies that this provider package maintains.
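For discussion, a rough sketch of the kind of guard I mean (illustrative only, not the provider's actual code; it assumes the hook call raises `Conflict` when the table already exists and `exists_ok=False` is passed through):
```python
from google.api_core.exceptions import Conflict


def create_empty_table_or_fail(bq_hook, dataset_id, table_id, schema_fields, exists_ok, log):
    try:
        bq_hook.create_empty_table(
            dataset_id=dataset_id,
            table_id=table_id,
            schema_fields=schema_fields,
            exists_ok=False,
        )
    except Conflict:
        if not exists_ok:
            raise  # propagate so the task ends up FAILED instead of SUCCESS
        log.info("Table %s.%s already exists.", dataset_id, table_id)
```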
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29301 | https://github.com/apache/airflow/pull/29394 | 228d79c1b3e11ecfbff5a27c900f9d49a84ad365 | a5adb87ab4ee537eb37ef31aba755b40f6f29a1e | "2023-02-02T06:30:16Z" | python | "2023-02-26T19:09:08Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,282 | ["airflow/providers/ssh/hooks/ssh.py", "airflow/providers/ssh/operators/ssh.py", "docs/apache-airflow-providers-ssh/connections/ssh.rst", "tests/providers/ssh/hooks/test_ssh.py", "tests/providers/ssh/operators/test_ssh.py"] | Ssh connection extra parameter conn_timeout doesn't work with ssh operator | ### Apache Airflow Provider(s)
ssh
### Versions of Apache Airflow Providers
apache-airflow-providers-ssh>=3.3.0
### Apache Airflow version
2.5.0
### Operating System
debian "11 (bullseye)"
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### What happened
I have an SSH operator task where the command can take a long time. In recent SSH provider versions (>=3.3.0) it stopped working, which I suspect is because of #27184. After this change the timeout appears to be 10 seconds, and once there is no output over SSH for 10 seconds I get the following error:
```
[2023-01-26, 11:49:57 UTC] {taskinstance.py:1772} ERROR - Task failed with exception
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/providers/ssh/operators/ssh.py", line 171, in execute
result = self.run_ssh_client_command(ssh_client, self.command, context=context)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/providers/ssh/operators/ssh.py", line 156, in run_ssh_client_command
exit_status, agg_stdout, agg_stderr = self.ssh_hook.exec_ssh_client_command(
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/providers/ssh/hooks/ssh.py", line 521, in exec_ssh_client_command
raise AirflowException("SSH command timed out")
airflow.exceptions.AirflowException: SSH command timed out
```
At first I thought that this was OK, since I could just set the `conn_timeout` extra parameter in my SSH connection. But then I noticed that this parameter from the connection is not used anywhere, so this doesn't work, and you have to modify your task code to set the needed value of this parameter in the SSH operator. What's more, even with modified task code it's not possible to achieve the previous behavior (when this parameter was not set), since the timeout is now set to 10 when you pass None as the value.
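For reference, this is the kind of task-code change I mean (a sketch; it assumes the provider version in use exposes `conn_timeout`/`cmd_timeout` on `SSHOperator`):
```python
from airflow.providers.ssh.operators.ssh import SSHOperator

long_sleep = SSHOperator(
    task_id="long_sleep",
    ssh_conn_id="ssh_localhost",
    command="sleep 15s",
    conn_timeout=60,
    cmd_timeout=3600,  # well above the new 10s default; has to live in the DAG code, not the connection
)
```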
### What you think should happen instead
I think it should be possible to pass the timeout parameter through the connection's extra field for the SSH operator (including a None value, meaning no timeout).
### How to reproduce
Add simple DAG with sleeping for more than 10 seconds, for example:
```python
# this DAG only works for SSH provider versions <=3.2.0
from airflow.models import DAG
from airflow.contrib.operators.ssh_operator import SSHOperator
from airflow.utils.dates import days_ago
from airflow.operators.dummy import DummyOperator
args = {
'owner': 'airflow',
'start_date': days_ago(2),
}
dag = DAG(
default_args=args,
dag_id="test_ssh",
max_active_runs=1,
catchup=False,
schedule_interval="@hourly"
)
task0 = SSHOperator(ssh_conn_id='ssh_localhost',
task_id="test_sleep",
command=f'sleep 15s',
dag=dag)
task0
```
Try configuring `ssh_localhost` connection to make the DAG work using extra conn_timeout or extra timeout (or other) parameters.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29282 | https://github.com/apache/airflow/pull/29347 | a21c17bc07c1eeb733eca889a02396fab401b215 | fd000684d05a993ade3fef38b683ef3cdfdfc2b6 | "2023-02-01T08:52:03Z" | python | "2023-02-19T18:51:51Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,267 | ["airflow/example_dags/example_python_decorator.py", "airflow/example_dags/example_python_operator.py", "airflow/example_dags/example_short_circuit_operator.py", "docs/apache-airflow/howto/operator/python.rst", "docs/conf.py", "docs/sphinx_design/static/custom.css", "setup.py", "tests/api_connexion/endpoints/test_task_instance_endpoint.py"] | Support tabs in docs | ### What do you see as an issue?
I suggest supporting tabs in the docs to improve the readability when demonstrating different ways to achieve the same things.
**Motivation**
We have multiple ways to achieve the same thing in Airflow, for example:
- TaskFlow API & "classic" operators
- CLI & REST API & API client
However, our docs currently do not consistently demonstrate different ways to use Airflow. For example, https://airflow.apache.org/docs/apache-airflow/stable/howto/operator/python.html demonstrates TaskFlow operators in some examples and classic operators in other examples. All cases covered can be supported by both the TaskFlow & classic operators.
In the case of https://airflow.apache.org/docs/apache-airflow/stable/howto/operator/python.html, I think a nice solution to demonstrate both approaches would be to use tabs. That way somebody who prefers the TaskFlow API can view all TaskFlow examples, and somebody who prefers the classic operators (we should give those a better name) can view only those examples.
**Possible implementation**
There is a package [sphinx-tabs](https://github.com/executablebooks/sphinx-tabs) for this. For the example above, having https://sphinx-tabs.readthedocs.io/en/latest/#group-tabs would be great because it enables you to view all examples of one "style" with a single click.
### Solving the problem
Install https://github.com/executablebooks/sphinx-tabs with the docs.
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29267 | https://github.com/apache/airflow/pull/36041 | f60d458dc08a5d5fbe5903fffca8f7b03009f49a | 58e264c83fed1ca42486302600288230b944ab06 | "2023-01-31T14:23:42Z" | python | "2023-12-06T08:44:18Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,258 | ["airflow/providers/google/cloud/hooks/compute_ssh.py", "tests/providers/google/cloud/hooks/test_compute_ssh.py", "tests/system/providers/google/cloud/compute/example_compute_ssh.py", "tests/system/providers/google/cloud/compute/example_compute_ssh_os_login.py", "tests/system/providers/google/cloud/compute/example_compute_ssh_parallel.py"] | ComputeEngineSSHHook on parallel runs in Composer gives banner Error reading SSH protocol banner | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
We are using ComputeEngineSSHHook for some of our Airflow DAGs in Cloud Composer.
Everything works fine when DAGs run one by one.
But when we introduce parallelism, with multiple tasks trying to connect to our GCE instance using ComputeEngineSSHHook at the same time, we experience intermittent errors like the one given below.
Since Cloud Composer has 3 retries by default, the issue sometimes resolves itself on the second or third attempt, but we would like to understand why it happens in the first place when multiple operators are generating keys and SSHing into the GCE instance at once.
We have tried setting the banner_timeout and expire_timeout parameters on the DAG task, but we still see this issue:
```python
create_transfer_run_directory = SSHOperator(
    task_id="create_transfer_run_directory",
    ssh_hook=ComputeEngineSSHHook(
        instance_name=GCE_INSTANCE,
        zone=GCE_ZONE,
        use_oslogin=True,
        use_iap_tunnel=False,
        use_internal_ip=True,
    ),
    conn_timeout=120,
    cmd_timeout=120,
    banner_timeout=120.0,
    command=f"sudo mkdir -p {transfer_run_directory}/"
    '{{ ti.xcom_pull(task_ids="load_config", key="transfer_id") }}',
    dag=dag,
)
```
```
[2023-01-31, 03:30:39 UTC] {compute_ssh.py:286} INFO - Importing SSH public key using OSLogin: [email protected]
[2023-01-31, 03:30:39 UTC] {compute_ssh.py:236} INFO - Opening remote connection to host: username=sa_115585236623848451866, hostname=10.128.0.29
[2023-01-31, 03:30:41 UTC] {transport.py:1874} ERROR - Exception (client): Error reading SSH protocol banner
[2023-01-31, 03:30:41 UTC] {transport.py:1872} ERROR - Traceback (most recent call last):
[2023-01-31, 03:30:41 UTC] {transport.py:1872} ERROR - File "/opt/python3.8/lib/python3.8/site-packages/paramiko/transport.py", line 2271, in _check_banner
[2023-01-31, 03:30:41 UTC] {transport.py:1872} ERROR - buf = self.packetizer.readline(timeout)
[2023-01-31, 03:30:41 UTC] {transport.py:1872} ERROR - File "/opt/python3.8/lib/python3.8/site-packages/paramiko/packet.py", line 380, in readline
[2023-01-31, 03:30:41 UTC] {transport.py:1872} ERROR - buf += self._read_timeout(timeout)
[2023-01-31, 03:30:41 UTC] {transport.py:1872} ERROR - File "/opt/python3.8/lib/python3.8/site-packages/paramiko/packet.py", line 609, in _read_timeout
[2023-01-31, 03:30:41 UTC] {transport.py:1872} ERROR - raise EOFError()
[2023-01-31, 03:30:41 UTC] {transport.py:1872} ERROR - EOFError
[2023-01-31, 03:30:41 UTC] {transport.py:1872} ERROR -
[2023-01-31, 03:30:41 UTC] {transport.py:1872} ERROR - During handling of the above exception, another exception occurred:
[2023-01-31, 03:30:41 UTC] {transport.py:1872} ERROR -
[2023-01-31, 03:30:41 UTC] {transport.py:1872} ERROR - Traceback (most recent call last):
[2023-01-31, 03:30:41 UTC] {transport.py:1872} ERROR - File "/opt/python3.8/lib/python3.8/site-packages/paramiko/transport.py", line 2094, in run
[2023-01-31, 03:30:41 UTC] {transport.py:1872} ERROR - self._check_banner()
[2023-01-31, 03:30:41 UTC] {transport.py:1872} ERROR - File "/opt/python3.8/lib/python3.8/site-packages/paramiko/transport.py", line 2275, in _check_banner
[2023-01-31, 03:30:41 UTC] {transport.py:1872} ERROR - raise SSHException(
[2023-01-31, 03:30:41 UTC] {transport.py:1872} ERROR - paramiko.ssh_exception.SSHException: Error reading SSH protocol banner
[2023-01-31, 03:30:41 UTC] {transport.py:1872} ERROR -
[2023-01-31, 03:30:41 UTC] {compute_ssh.py:258} INFO - Failed to connect. Waiting 0s to retry
[2023-01-31, 03:30:43 UTC] {transport.py:1874} INFO - Connected (version 2.0, client OpenSSH_8.9p1)
[2023-01-31, 03:30:43 UTC] {transport.py:1874} INFO - Authentication (publickey) failed.
[2023-01-31, 03:30:43 UTC] {compute_ssh.py:258} INFO - Failed to connect. Waiting 1s to retry
[2023-01-31, 03:30:47 UTC] {transport.py:1874} INFO - Connected (version 2.0, client OpenSSH_8.9p1)
[2023-01-31, 03:30:50 UTC] {transport.py:1874} INFO - Authentication (publickey) failed.
[2023-01-31, 03:30:50 UTC] {compute_ssh.py:258} INFO - Failed to connect. Waiting 6s to retry
[2023-01-31, 03:30:58 UTC] {transport.py:1874} INFO - Connected (version 2.0, client OpenSSH_8.9p1)
[2023-01-31, 03:30:58 UTC] {transport.py:1874} INFO - Authentication (publickey) failed.
[2023-01-31, 03:30:58 UTC] {taskinstance.py:1904} ERROR - Task failed with exception
Traceback (most recent call last):
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/ssh/operators/ssh.py", line 157, in execute
with self.get_ssh_client() as ssh_client:
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/ssh/operators/ssh.py", line 124, in get_ssh_client
return self.get_hook().get_conn()
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/google/cloud/hooks/compute_ssh.py", line 232, in get_conn
sshclient = self._connect_to_instance(user, hostname, privkey, proxy_command)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/google/cloud/hooks/compute_ssh.py", line 245, in _connect_to_instance
client.connect(
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/google/cloud/hooks/compute_ssh.py", line 50, in connect
return super().connect(*args, **kwargs)
File "/opt/python3.8/lib/python3.8/site-packages/paramiko/client.py", line 450, in connect
self._auth(
File "/opt/python3.8/lib/python3.8/site-packages/paramiko/client.py", line 781, in _auth
raise saved_exception
File "/opt/python3.8/lib/python3.8/site-packages/paramiko/client.py", line 681, in _auth
self._transport.auth_publickey(username, pkey)
File "/opt/python3.8/lib/python3.8/site-packages/paramiko/transport.py", line 1635, in auth_publickey
return self.auth_handler.wait_for_response(my_event)
File "/opt/python3.8/lib/python3.8/site-packages/paramiko/auth_handler.py", line 259, in wait_for_response
raise e
paramiko.ssh_exception.AuthenticationException: Authentication failed.
[2023-01-31, 03:30:58 UTC] {taskinstance.py:1408} INFO - Marking task as UP_FOR_RETRY. dag_id=run_data_transfer_configs_dag, task_id=create_transfer_run_directory, execution_date=20230131T033002, start_date=20230131T033035, end_date=20230131T033058
[2023-01-31, 03:30:58 UTC] {standard_task_runner.py:92} ERROR - Failed to execute job 1418 for task create_transfer_run_directory (Authentication failed.; 21885)
```
### What you think should happen instead
The SSH Hook operator should be able to seamlessly SSH into the GCE instance without any intermittent authentication issues
### How to reproduce
_No response_
### Operating System
Composer Kubernetes Cluster
### Versions of Apache Airflow Providers
Composer Version - 2.1.3
Airflow version - 2.3.4
### Deployment
Composer
### Deployment details
Kubernetes Cluster
GCE Compute Engine VM (Ubuntu)
### Anything else
Very random and intermittent
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29258 | https://github.com/apache/airflow/pull/32365 | df74553ec484ad729fcd75ccbc1f5f18e7f34dc8 | 0c894dbb24ad9ad90dcb10c81269ccc056789dc3 | "2023-01-31T03:43:49Z" | python | "2023-08-02T09:16:03Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,250 | ["airflow/providers/databricks/hooks/databricks.py", "tests/providers/databricks/hooks/test_databricks.py"] | Repair functionality in DatabricksRunNowOperator | ### Description
The Databricks jobs 2.1 API has the ability to repair failed or skipped tasks in a Databricks workflow without having to rerun successful tasks for a given workflow run. It would be nice to be able to leverage this functionality via airflow operators.
### Use case/motivation
The primary motivation is the ability to be more efficient and only have to rerun failed or skipped tasks rather than the entire workflow if only 1 out of 10 tasks fail.
**Repair run API:**
https://docs.databricks.com/dev-tools/api/latest/jobs.html#operation/JobsRunsRepairfail
@alexott for visibility
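For context, a rough sketch of what calling the repair endpoint looks like with the existing hook today (the exact payload fields and the internal `_do_api_call` helper are assumptions on my part; a first-class operator argument is what this issue is asking for):
```python
from airflow.providers.databricks.hooks.databricks import DatabricksHook

hook = DatabricksHook(databricks_conn_id="databricks_default")
response = hook._do_api_call(
    ("POST", "api/2.1/jobs/runs/repair"),
    {"run_id": 123456, "rerun_tasks": ["my_failed_task_key"]},
)
repair_id = response.get("repair_id")
```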
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29250 | https://github.com/apache/airflow/pull/30786 | 424fc17d49afd4175826a62aa4fe7aa7c5772143 | 9bebf85e24e352f9194da2f98e2bc66a5e6b972e | "2023-01-30T21:24:49Z" | python | "2023-04-22T21:21:14Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,227 | ["airflow/www/views.py", "tests/www/views/test_views_tasks.py"] | Calendar page doesn't load when using a timedelta DAG schedule | ### Apache Airflow version
2.5.1
### What happened
/calendar page give a problem, here is the capture
![屏幕截图 2023-01-30 093116](https://user-images.githubusercontent.com/19165258/215369479-9fc7de5c-f190-460c-9cf7-9ab27d8ac355.png)
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Operating System
Ubuntu 22.04.1 LTS
### Versions of Apache Airflow Providers
Distributor ID: Ubuntu
Description: Ubuntu 22.04.1 LTS
Release: 22.04
Codename: jammy
### Deployment
Other
### Deployment details
Distributor ID: Ubuntu
Description: Ubuntu 22.04.1 LTS
Release: 22.04
Codename: jammy
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29227 | https://github.com/apache/airflow/pull/29454 | 28126c12fbdd2cac84e0fbcf2212154085aa5ed9 | f837c0105c85d777ea18c88a9578eeeeac5f57db | "2023-01-30T01:32:44Z" | python | "2023-02-14T17:06:09Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,209 | ["airflow/providers/google/cloud/operators/bigquery_dts.py", "tests/providers/google/cloud/operators/test_bigquery_dts.py"] | BigQueryCreateDataTransferOperator will log AWS credentials when transferring from S3 | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
[apache-airflow-providers-google 8.6.0](https://airflow.apache.org/docs/apache-airflow-providers-google/8.6.0/)
### Apache Airflow version
2.5.0
### Operating System
Debian GNU/Linux 11 (bullseye)
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### What happened
When creating a transfer config that will move data from AWS S3, an access_key_id and secret_access_key are provided (see: https://cloud.google.com/bigquery/docs/s3-transfer).
These parameters are logged and exposed as XCom return_value.
### What you think should happen instead
At least the secret_access_key should be hidden or removed from the XCom return value.
### How to reproduce
```python
from airflow.providers.google.cloud.operators.bigquery_dts import BigQueryCreateDataTransferOperator

PROJECT_ID = "123"  # placeholder project id
# Placeholder values - substitute your own dataset, table, bucket and AWS credentials.
destination_dataset = "my_dataset"
display_name = "my_s3_transfer"
destination_table = "my_table"
data_path = "s3://my-bucket/path/*"
access_key_id = "AKIA..."
secret_access_key = "..."

TRANSFER_CONFIG = {
    "destination_dataset_id": destination_dataset,
    "display_name": display_name,
    "data_source_id": "amazon_s3",
    "schedule_options": {"disable_auto_scheduling": True},
    "params": {
        "destination_table_name_template": destination_table,
        "file_format": "PARQUET",
        "data_path": data_path,
        "access_key_id": access_key_id,
        "secret_access_key": secret_access_key,
    },
}

gcp_bigquery_create_transfer = BigQueryCreateDataTransferOperator(
    transfer_config=TRANSFER_CONFIG,
    project_id=PROJECT_ID,
    task_id="gcp_bigquery_create_transfer",
)
```
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29209 | https://github.com/apache/airflow/pull/29348 | 3dbcf99d20d47cde0debdd5faf9bd9b2ebde1718 | f51742d20b2e53bcd90a19db21e4e12d2a287677 | "2023-01-28T19:58:00Z" | python | "2023-02-20T23:06:50Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,199 | ["airflow/models/xcom_arg.py", "tests/decorators/test_python.py"] | TaskFlow AirflowSkipException causes downstream step to fail when multiple_outputs is true | ### Apache Airflow version
2.5.1
### What happened
Most of our code is based on the TaskFlow API, and we have many tasks that raise AirflowSkipException (or use BranchPythonOperator) on purpose to skip the next downstream task (with trigger_rule = none_failed_min_one_success).
These tasks expect a multiple-output XCom result (local_file_path, file sizes, record counts) from previous tasks, and this causes the following error:
`airflow.exceptions.XComNotFound: XComArg result from copy_from_data_lake_to_local_file at outbound_dag_AIR2070 with key="local_file_path" is not found!`
### What you think should happen instead
Considering the trigger rule "none_failed_min_one_success", we expect that the upstream task should be allowed to skip and that downstream tasks should still run without raising errors about missing XCom results.
### How to reproduce
This is an approximate example DAG based on an existing one.
```python
from os import path
import pendulum
from airflow import DAG
from airflow.decorators import task
from airflow.operators.python import BranchPythonOperator
DAG_ID = "testing_dag_AIR"
# PGP_OPERATION = None
PGP_OPERATION = "decrypt"
LOCAL_FILE_PATH = "/temp/example/example.csv"
with DAG(
dag_id=DAG_ID,
schedule='0 7-18 * * *',
start_date=pendulum.datetime(2022, 12, 15, 7, 0, 0),
) as dag:
@task(multiple_outputs=True, trigger_rule='none_failed_min_one_success')
def copy_from_local_file_to_data_lake(local_file_path: str, dest_dir_path: str):
destination_file_path = path.join(dest_dir_path, path.basename(local_file_path))
return {
"destination_file_path": destination_file_path,
"file_size": 100
}
@task(multiple_outputs=True, trigger_rule='none_failed_min_one_success')
def copy_from_data_lake_to_local_file(data_lake_file_path, local_dir_path):
local_file_path = path.join(local_dir_path, path.basename(data_lake_file_path))
return {
"local_file_path": local_file_path,
"file_size": 100
}
@task(multiple_outputs=True, task_id='get_pgp_file_info', trigger_rule='none_failed_min_one_success')
def get_pgp_file_info(file_path, operation):
import uuid
import os
src_file_name = os.path.basename(file_path)
src_file_dir = os.path.dirname(file_path)
run_id = str(uuid.uuid4())
if operation == "decrypt":
wait_pattern = f'*{src_file_name}'
else:
wait_pattern = f'*{src_file_name}.pgp'
target_path = 'datalake/target'
return {
'src_file_path': file_path,
'src_file_dir': src_file_dir,
'target_path': target_path,
'pattern': wait_pattern,
'guid': run_id
}
@task(multiple_outputs=True, task_id='return_src_path', trigger_rule='none_failed_min_one_success')
def return_src_path(src_file_path):
return {
'file_path': src_file_path,
'file_size': 100
}
@task(multiple_outputs=True, task_id='choose_result', trigger_rule='none_failed_min_one_success')
def choose_result(src_file_path, src_file_size, decrypt_file_path, decrypt_file_size):
import os
file_path = decrypt_file_path or src_file_path
file_size = decrypt_file_size or src_file_size
local_dir = os.path.dirname(file_path)
return {
'local_dir': local_dir,
'file_path': file_path,
'file_size': file_size,
'file_name': os.path.basename(file_path)
}
def switch_branch_func(pgp_operation):
if pgp_operation in ["decrypt", "encrypt"]:
return 'get_pgp_file_info'
else:
return 'return_src_path'
operation = PGP_OPERATION
local_file_path = LOCAL_FILE_PATH
check_need_to_decrypt = BranchPythonOperator(
task_id='branch_task',
python_callable=switch_branch_func,
op_args=(operation,))
pgp_file_info = get_pgp_file_info(local_file_path, operation)
data_lake_file = copy_from_local_file_to_data_lake(pgp_file_info['src_file_path'], pgp_file_info['target_path'])
decrypt_local_file = copy_from_data_lake_to_local_file(
data_lake_file['destination_file_path'], pgp_file_info['src_file_dir'])
src_result = return_src_path(local_file_path)
result = choose_result(src_result['file_path'], src_result['file_size'],
decrypt_local_file['local_file_path'], decrypt_local_file['file_size'])
check_need_to_decrypt >> [pgp_file_info, src_result]
pgp_file_info >> decrypt_local_file
[decrypt_local_file, src_result] >> result
```
### Operating System
Windows 10
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
docker-compose version: 3.7
Note: This also happens when it's deployed to one of our testing environments using official Airflow Helm Chart.
### Anything else
This issue is similar to [#24338](https://github.com/apache/airflow/issues/24338), which was solved by [#25661](https://github.com/apache/airflow/pull/25661), but this case is related to multiple_outputs being set to True.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29199 | https://github.com/apache/airflow/pull/32027 | 14eb1d3116ecef15be7be9a8f9d08757e74f981c | 79eac7687cf7c6bcaa4df2b8735efaad79a7fee2 | "2023-01-27T18:27:43Z" | python | "2023-06-21T09:55:57Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,198 | ["airflow/providers/snowflake/operators/snowflake.py"] | SnowflakeCheckOperator - The conn_id `None` isn't defined | ### Apache Airflow Provider(s)
snowflake
### Versions of Apache Airflow Providers
`apache-airflow-providers-snowflake==4.0.2`
### Apache Airflow version
2.5.1
### Operating System
Debian GNU/Linux 11 (bullseye)
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### What happened
After upgrading _apache-airflow-providers-snowflake_ from version **3.3.0** to **4.0.2**, SnowflakeCheckOperator tasks start to throw the following error:
```
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/providers/common/sql/operators/sql.py", line 179, in get_db_hook
return self._hook
File "/usr/local/lib/python3.9/functools.py", line 993, in __get__
val = self.func(instance)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/providers/common/sql/operators/sql.py", line 141, in _hook
conn = BaseHook.get_connection(self.conn_id)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/hooks/base.py", line 72, in get_connection
conn = Connection.get_connection_from_secrets(conn_id)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/models/connection.py", line 435, in get_connection_from_secrets
raise AirflowNotFoundException(f"The conn_id `{conn_id}` isn't defined")
airflow.exceptions.AirflowNotFoundException: The conn_id `None` isn't defined
```
### What you think should happen instead
_No response_
### How to reproduce
- Define a _Snowflake_ Connection with the name **snowflake_default**
- Create a Task similar to this:
```python
from airflow.providers.snowflake.operators.snowflake import SnowflakeCheckOperator

# Note: no conn_id is passed, so the operator should fall back to "snowflake_default".
my_task = SnowflakeCheckOperator(
    task_id='my_task',
    warehouse='warehouse',
    database='database',
    schema='schema',
    role='role',
    sql='select 1 from my_table'
)
```
- Run and check the error.
### Anything else
We can work around this by explicitly passing conn_id='snowflake_default' to the SnowflakeCheckOperator.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29198 | https://github.com/apache/airflow/pull/29211 | a72e28d6e1bc6ae3185b8b3971ac9de5724006e6 | 9b073119d401594b3575c6f7dc4a14520d8ed1d3 | "2023-01-27T18:24:51Z" | python | "2023-01-29T08:54:39Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,197 | ["airflow/www/templates/airflow/dag.html"] | Trigger DAG w/config raising error from task detail views | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
Version: 2.4.3 (migrated from 2.2.4)
The manual UI option "Trigger DAG w/config" raises an error _400 Bad Request - Invalid datetime: None_ from the "Task Instance Details", "Rendered Template", "Log" and "XCom" views. Note that the DAG is actually triggered, but the 400 error response is still raised.
### What you think should happen instead
No 400 error
### How to reproduce
1. Go to any DAG graph view
2. Select a Task > go to "Instance Details"
3. Select "Trigger DAG w/config"
4. Select Trigger
5. See error
### Operating System
PRETTY_NAME="Debian GNU/Linux 11 (bullseye)"
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other 3rd-party Helm chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29197 | https://github.com/apache/airflow/pull/29212 | 9b073119d401594b3575c6f7dc4a14520d8ed1d3 | 7315d6f38caa58e6b19054f3e8a20ed02df16a29 | "2023-01-27T18:07:01Z" | python | "2023-01-29T08:56:35Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,178 | ["airflow/api/client/local_client.py", "airflow/cli/cli_parser.py", "airflow/cli/commands/dag_command.py", "tests/api/client/test_local_client.py", "tests/cli/commands/test_dag_command.py"] | Add `output` format to missing cli commands | ### Description
I have noticed that for some commands, there is an option to get the output in json or yaml (as described in this PR from 2020 https://github.com/apache/airflow/issues/12699).
However, there are still some commands that do not support the `--output` argument, the most notable being `dags trigger`.
When triggering a dag, it is crucial to get the run_id that has been triggered, so the triggered dag run can be monitored by the calling party. However, the output from this command is hard to parse without resorting to (gasp!) regex:
```
[2023-01-26 11:03:41,038] {{__init__.py:42}} INFO - Loaded API auth backend: airflow.api.auth.backend.session
Created <DagRun sample_dag @ 2023-01-26T11:03:41+00:00: manual__2023-01-26T11:03:41+00:00, state:queued, queued_at: 2023-01-26 11:03:41.412394+00:00. externally triggered: True>
```
As you can see, extracting the run_id `manual__2023-01-26T11:03:41+00:00` is not easy from the above output.
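For illustration, getting the run_id today requires scraping the output, e.g. with something like the following (this regex is just one possible workaround, not part of the original report):
```python
import re
import subprocess

# Run the CLI and scrape the run_id out of its human-readable output.
output = subprocess.run(
    ["airflow", "dags", "trigger", "sample_dag"],
    capture_output=True, text=True, check=True,
).stdout

match = re.search(r"manual__\S+?(?=,)", output)
run_id = match.group(0) if match else None
print(run_id)
```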
From what I can see [in the code](https://github.com/apache/airflow/blob/main/airflow/cli/cli_parser.py#L1156), `ARG_OUTPUT` is not added to the `dag_trigger` command.
### Use case/motivation
At my company we want to be able to trigger DAGs in another Airflow environment (MWAA) and wait for their completion before proceeding with the calling DAG.
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29178 | https://github.com/apache/airflow/pull/29224 | ffdc696942d96a14a5ee0279f950e3114817055c | 60fc40791121b19fe379e4216529b2138162b443 | "2023-01-26T11:05:20Z" | python | "2023-02-19T15:15:56Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,177 | ["airflow/providers/apache/livy/hooks/livy.py", "airflow/providers/http/hooks/http.py", "airflow/providers/http/operators/http.py", "tests/providers/http/hooks/test_http.py"] | SimpleHttpOperator not working with loginless auth_type | ### Apache Airflow Provider(s)
http
### Versions of Apache Airflow Providers
apache-airflow-providers-http==4.1.1
### Apache Airflow version
2.5.0
### Operating System
Ubuntu 20.04.5 LTS (Focal Fossa)
### Deployment
Virtualenv installation
### Deployment details
Reproduced on a local deployment inside WSL on virtualenv - not related to specific deployment.
### What happened
SimpleHttpOperator supports passing in auth_type. However, [this auth_type is only initialized if a login is provided](https://github.com/astronomer/airflow-provider-sample/blob/main/sample_provider/hooks/sample_hook.py#L64-L65).
In our setup we are using Kerberos authentication. This authentication relies on a Kerberos sidecar with a keytab, not on a user-password pair in the connection string. However, this would also be an issue with any other implementation that does not rely on a username passed in the connection string.
We tried some other auth providers (`HTTPSPNEGOAuth` from [requests_gssapi](https://pypi.org/project/requests-gssapi/) and `HTTPKerberosAuth` from [requests_kerberos](https://pypi.org/project/requests-kerberos/)). We noticed that requests_kerberos is already used in Airflow in some other places for Kerberos support, hence we settled on the latter.
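For illustration, a minimal sketch of the kind of setup that hits this (connection id and endpoint are placeholders):
```python
from airflow.providers.http.operators.http import SimpleHttpOperator
from requests_kerberos import HTTPKerberosAuth

# "kerberized_api" is a placeholder connection with no login/password stored,
# which is exactly the case where the passed auth_type is currently ignored.
call_api = SimpleHttpOperator(
    task_id="call_api",
    http_conn_id="kerberized_api",
    endpoint="api/v1/health",
    method="GET",
    auth_type=HTTPKerberosAuth,
)
```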
### What you think should happen instead
A suggestion is to initialize the passed `auth_type` also if no login is present.
### How to reproduce
A branch demonstrating possible fix:
https://github.com/apache/airflow/commit/7d341f081f0160ed102c06b9719582cb463b538c
### Anything else
The linked branch is a quick-and-dirty solution, but maybe the code could be refactored in another way? Support for `**kwargs` could also be useful, but I wanted to keep the changes as minimal as possible.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29177 | https://github.com/apache/airflow/pull/29206 | 013490edc1046808c651c600db8f0436b40f7423 | c44c7e1b481b7c1a0d475265835a23b0f507506c | "2023-01-26T08:28:39Z" | python | "2023-03-20T13:52:02Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,175 | ["airflow/providers/redis/provider.yaml", "docs/apache-airflow-providers-redis/index.rst", "generated/provider_dependencies.json", "tests/system/providers/redis/__init__.py", "tests/system/providers/redis/example_redis_publish.py"] | Support for Redis Time series in Airflow common packages | ### Description
The current Redis API version is quite old. I need to implement a DAG that uses the time series feature, so please upgrade to a version that supports it. BTW, I was able to manually update my Redis worker and it now works. Can this be added to the next release, please?
### Use case/motivation
Time series in Redis is a growing area that needs support in Airflow.
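For illustration, the kind of task this would enable, assuming a newer redis-py (4.x) client and a Redis server with the RedisTimeSeries module (key name and connection id are placeholders):
```python
from airflow.decorators import task
from airflow.providers.redis.hooks.redis import RedisHook


@task
def record_temperature(value: float):
    # RedisHook.get_conn() returns a redis-py client; .ts() needs redis-py >= 4.x
    # and the RedisTimeSeries module enabled on the server.
    client = RedisHook(redis_conn_id="redis_default").get_conn()
    client.ts().add("sensor:temperature", "*", value)
```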
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29175 | https://github.com/apache/airflow/pull/31279 | df3569cf489ce8ef26f5b4d9d9c3826d3daad5f2 | 94cad11b439e0ab102268e9e7221b0ab9d98e0df | "2023-01-26T03:42:51Z" | python | "2023-05-16T13:11:18Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,150 | ["docs/apache-airflow/howto/docker-compose/index.rst"] | trigger process missing from Airflow docker docs | ### What do you see as an issue?
The section [`Fetching docker-compose.yaml`](https://github.com/apache/airflow/blob/main/docs/apache-airflow/howto/docker-compose/index.rst#fetching-docker-composeyaml) claims to describe all the process definitions that the docker-compose file contains, but fails to mention the `airflow-trigger` process.
### Solving the problem
We need to include the `airflow-trigger` process in the list of processes that the docker-compose file contains.
### Anything else
None
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29150 | https://github.com/apache/airflow/pull/29203 | f8c1410a0b0e62a1c4b67389d9cfb80cc024058d | 272f358fd6327468fcb04049ef675a5cf939b93e | "2023-01-25T08:50:27Z" | python | "2023-01-30T09:52:43Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,137 | ["airflow/decorators/sensor.py"] | Fix access to context in functions decorated by task.sensor | ### Description
Hello, I am a new Airflow user. I am requesting a feature in which the airflow context (containing task instance, etc.) be available inside of functions decorated by `airflow.decorators.task.sensor`.
### Use case/motivation
I have noticed that when using the `airflow.decorators.task` decorator, one can access items from the context (such as the task instance) by using `**kwargs` or keyword arguments in the decorated function. But I have discovered that the same is not true for the `airflow.decorators.task.sensor` decorator. I'm not sure if this is a bug or intentional, but it would be very useful to be able to access the context normally from functions decorated by `task.sensor`.
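For illustration, this is the kind of usage being requested (the sensor body and names are hypothetical; today the context is not passed in):
```python
from airflow.decorators import task


@task.sensor(poke_interval=30, timeout=300, mode="reschedule")
def wait_for_partition(**context):
    # The request is for the Airflow context (task instance, logical date, ...)
    # to be available here, just as it is for functions decorated with @task.
    ti = context["ti"]
    print(ti.task_id, context["logical_date"])
    return True
```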
I believe this may have been an oversight. The `DecoratedSensorOperator` class is a child class of `PythonSensor`:
https://github.com/apache/airflow/blob/1fbfd312d9d7e28e66f6ba5274421a96560fb7ba/airflow/decorators/sensor.py#L28
This `DecoratedSensorOperator` class overrides `poke`, but does not incorporate the passed in `Context` object before calling the decorated function:
https://github.com/apache/airflow/blob/1fbfd312d9d7e28e66f6ba5274421a96560fb7ba/airflow/decorators/sensor.py#L60-L61
This is in contrast to the `PythonSensor`, whose `poke` method merges the context with the existing `op_kwargs`:
https://github.com/apache/airflow/blob/1fbfd312d9d7e28e66f6ba5274421a96560fb7ba/airflow/sensors/python.py#L68-L77
This seems like an easy fix, and I'd be happy to submit a pull request. But I figured I'd start with a feature request since I'm new to the open source community.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29137 | https://github.com/apache/airflow/pull/29146 | 0a4184e34c1d83ad25c61adc23b838e994fc43f1 | 2d3cc504db8cde6188c1503675a698c74404cf58 | "2023-01-24T20:19:59Z" | python | "2023-02-20T00:20:08Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,128 | ["docs/apache-airflow-providers-ftp/index.rst"] | [Doc] Link to examples how to use FTP provider is incorrect | ### What do you see as an issue?
Hi.
I tried to use the FTP provider (https://airflow.apache.org/docs/apache-airflow-providers-ftp/stable/connections/ftp.html#howto-connection-ftp), but the link to the "Example DAGs" is incorrect and GitHub responds with a 404.
### Solving the problem
Please update the "Example DAGs" link here, and check the same link in other providers.
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29128 | https://github.com/apache/airflow/pull/29134 | 33ba242d7eb8661bf936a9b99a8cad4a74b29827 | 1fbfd312d9d7e28e66f6ba5274421a96560fb7ba | "2023-01-24T12:07:45Z" | python | "2023-01-24T19:24:26Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,125 | ["airflow/models/dag.py", "airflow/models/dagrun.py", "tests/models/test_dag.py", "tests/models/test_dagrun.py"] | Ensure teardown failure with on_failure_fail_dagrun=True fails the DagRun, and not otherwise | null | https://github.com/apache/airflow/issues/29125 | https://github.com/apache/airflow/pull/30398 | fc4166127a1d2099d358fee1ea10662838cf9cf3 | db359ee2375dd7208583aee09b9eae00f1eed1f1 | "2023-01-24T11:08:45Z" | python | "2023-05-08T10:58:30Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,113 | ["docs/apache-airflow-providers-sqlite/operators.rst"] | sqlite conn id unclear | ### What do you see as an issue?
The SQLite connection doc here https://airflow.apache.org/docs/apache-airflow-providers-sqlite/stable/operators.html is unclear.
SQLite does not use username, password, port, or schema, so these need to be removed from the docs. Furthermore, it is unclear how to construct a connection string for SQLite, since the docs for constructing a connection string here https://airflow.apache.org/docs/apache-airflow/stable/howto/connection.html assume that all of these fields are given.
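As an example of the guidance that is missing, something along these lines could be documented (connection name and path are illustrative, and this reflects my reading of how the SQLite connection works rather than the current docs):
```python
from airflow.models.connection import Connection

# For SQLite only the host field (the path to the database file) is meaningful;
# login, password, port and schema are unused.
conn = Connection(conn_id="sqlite_custom", conn_type="sqlite", host="/tmp/my_database.db")
print(conn.get_uri())  # the URI form that could be exported as AIRFLOW_CONN_SQLITE_CUSTOM
```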
### Solving the problem
Remove the unused arguments from the SQLite connection documentation, and make it clearer how to construct a connection to SQLite.
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29113 | https://github.com/apache/airflow/pull/29139 | d23033cff8a25e5f71d01cb513c8ec1d21bbf491 | ec7674f111177c41c02e5269ad336253ed9c28b4 | "2023-01-23T17:44:59Z" | python | "2023-05-01T20:34:12Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,112 | ["airflow/utils/log/file_task_handler.py"] | "Operation not permitted" error when chmod on log folder | ### Official Helm Chart version
1.7.0 (latest released)
### Apache Airflow version
2.5.1
### Kubernetes Version
1.24.6
### Helm Chart configuration
```yaml
executor: "KubernetesExecutor"  # however, the same issue happens with LocalExecutor
logs:
  persistence:
    enabled: true
    size: 50Gi
    storageClassName: azurefile-csi
```
### Docker Image customizations
Using airflow-2.5.1-python3.10 as a base image.
Custom shared libraries are copied into a folder under /opt/airflow/company.
DAGs are copied to /opt/airflow/dags.
### What happened
After migrating from Airflow 2.4.3 to 2.5.1 we started getting the error below. There were no other changes to the custom image. No task can run because of this error:
```console
Traceback (most recent call last):
File "/home/airflow/.local/bin/airflow", line 8, in <module>
sys.exit(main())
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/__main__.py", line 39, in main
args.func(args)
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/cli/cli_parser.py", line 52, in command
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/utils/cli.py", line 108, in wrapper
return f(*args, **kwargs)
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/cli/commands/task_command.py", line 384, in task_run
ti.init_run_context(raw=args.raw)
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/models/taskinstance.py", line 2414, in init_run_context
self._set_context(self)
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/utils/log/logging_mixin.py", line 77, in _set_context
set_context(self.log, context)
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/utils/log/logging_mixin.py", line 213, in set_context
flag = cast(FileTaskHandler, handler).set_context(value)
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/utils/log/file_task_handler.py", line 71, in set_context
local_loc = self._init_file(ti)
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/utils/log/file_task_handler.py", line 382, in _init_file
self._prepare_log_folder(Path(full_path).parent)
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/utils/log/file_task_handler.py", line 358, in _prepare_log_folder
directory.chmod(mode)
File "/usr/local/lib/python3.10/pathlib.py", line 1191, in chmod
self._accessor.chmod(self, mode, follow_symlinks=follow_symlinks)
PermissionError: [Errno 1] Operation not permitted: '/opt/airflow/logs/dag_id=***/run_id=manual__2023-01-22T02:59:43.752407+00:00/task_id=***'
```
### What you think should happen instead
It seems that Airflow attempts to change the log folder permissions but is not permitted to do so.
I get the same error when executing the command manually (the folder path is confirmed to exist):
```console
chmod 511 '/opt/airflow/logs/dag_id=***/run_id=manual__2023-01-22T02:59:43.752407+00:00/task_id=***'
chmod: changing permissions of '/opt/airflow/logs/dag_id=***/run_id=scheduled__2023-01-23T15:30:00+00:00/task_id=***': Operation not permitted
```
### How to reproduce
My understanding is that this error happens before any custom code is executed.
### Anything else
The error happens every time; we are unable to start any DAG while using Airflow 2.5.1. Exactly the same configuration works with 2.5.0 and 2.4.3.
The same image and configuration work fine when running locally using docker-compose.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29112 | https://github.com/apache/airflow/pull/30123 | f5ed6ae67d0788ea2a737d781b27fbcae1e8e8af | b87cbc388bae281e553da699212ebfc6bb723eea | "2023-01-23T17:44:10Z" | python | "2023-03-15T20:44:37Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,109 | ["airflow/providers/google/cloud/hooks/dataproc.py", "airflow/providers/google/cloud/operators/dataproc.py", "tests/providers/google/cloud/hooks/test_dataproc.py", "tests/providers/google/cloud/operators/test_dataproc.py"] | [Google Cloud] DataprocCreateBatchOperator returns incorrect results and does not reattach | ### Apache Airflow version
main (development)
### What happened
The provider operator for Google Cloud Dataproc Batches has two bugs:
1. The running [operator](https://github.com/apache/airflow/blob/main/airflow/providers/google/cloud/operators/dataproc.py#L2123-L2124) returns successful even if the job transitions to State.CANCELLED or State.CANCELLING
2. It [attempts](https://github.com/apache/airflow/blob/main/airflow/providers/google/cloud/operators/dataproc.py#L2154) to 'reattach' to a potentially running job if it AlreadyExists, but it sends the wrong type since 'result' is a Batch and needs Operation
### What you think should happen instead
A new hook that polls for batch job completion. There is precedent for it in traditional dataproc with 'wait_for_job'.
### How to reproduce
Use the Breeze environment and a DAG that runs DataprocCreateBatchOperator. Allow the first instance to start.
Use the gcloud CLI to cancel the job.
`gcloud dataproc batches cancel <batch_id> --project <project_id> --region <region>`
Observe that the task completes successfully after a 3-5 minute timeout, even though the job was cancelled.
Run the task again with the same batch_id. Observe the ValueError where it expects Operation but receives Batch
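For reference, a minimal sketch of the operator usage referenced in the first step (project, region and batch details are placeholders):
```python
from airflow.providers.google.cloud.operators.dataproc import DataprocCreateBatchOperator

create_batch = DataprocCreateBatchOperator(
    task_id="create_batch",
    project_id="my-project",
    region="us-central1",
    batch_id="my-batch-id",
    batch={
        "pyspark_batch": {"main_python_file_uri": "gs://my-bucket/job.py"},
    },
)
```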
### Operating System
Darwin 5806 21.6.0 Darwin Kernel Version 21.6.0: Mon Aug 22 20:17:10 PDT 2022; root:xnu-8020.140.49~2/RELEASE_X86_64 x86_64
### Versions of Apache Airflow Providers
Same as dev (main) version.
### Deployment
Other Docker-based deployment
### Deployment details
Observable in the Breeze environment, when running against real Google Infrastructure.
### Anything else
Every time.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29109 | https://github.com/apache/airflow/pull/29136 | a770edfac493f3972c10a43e45bcd0e7cfaea65f | 7e3a9fc8586d0e6d9eddbf833a75280e68050da8 | "2023-01-23T16:05:19Z" | python | "2023-02-20T20:34:58Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,105 | ["airflow/www/static/js/graph.js"] | graph disappears during run time when using branch_task and a dynamic classic operator | ### Apache Airflow version
2.5.1
### What happened
When using a dynamically mapped task that gets its expand data from XCom after a branch_task, the graph doesn't render.
It reappears once the DAG run is finished.
This was tried with both BashOperator and KubernetesPodOperator.
The developer console in the browser shows the error:
```
Uncaught TypeError: Cannot read properties of undefined (reading 'length')
at z (graph.1c0596dfced26c638bfe.js:2:17499)
at graph.1c0596dfced26c638bfe.js:2:17654
at Array.map (<anonymous>)
at z (graph.1c0596dfced26c638bfe.js:2:17646)
at graph.1c0596dfced26c638bfe.js:2:26602
at graph.1c0596dfced26c638bfe.js:2:26655
at graph.1c0596dfced26c638bfe.js:2:26661
at graph.1c0596dfced26c638bfe.js:2:222
at graph.1c0596dfced26c638bfe.js:2:227
z @ graph.1c0596dfced26c638bfe.js:2
(anonymous) @ graph.1c0596dfced26c638bfe.js:2
z @ graph.1c0596dfced26c638bfe.js:2
(anonymous) @ graph.1c0596dfced26c638bfe.js:2
(anonymous) @ graph.1c0596dfced26c638bfe.js:2
(anonymous) @ graph.1c0596dfced26c638bfe.js:2
(anonymous) @ graph.1c0596dfced26c638bfe.js:2
(anonymous) @ graph.1c0596dfced26c638bfe.js:2
```
The grid view renders fine.
### What you think should happen instead
The graph should be rendered.
### How to reproduce
```python
@dag('branch_dynamic', schedule_interval=None, default_args=default_args, catchup=False)
def branch_dynamic_flow():
@branch_task
def choose_path():
return 'b'
@task
def a():
print('a')
@task
def get_args():
return ['echo 1', 'echo 2']
b = BashOperator.partial(task_id="b").expand(bash_command=get_args())
path = choose_path()
path >> a()
path >> b
```
### Operating System
red hat
### Versions of Apache Airflow Providers
apache-airflow-providers-cncf-kubernetes | 5.1.1 | Kubernetes
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29105 | https://github.com/apache/airflow/pull/29042 | b2825e11852890cf0b0f4d0bcaae592311781cdf | 33ba242d7eb8661bf936a9b99a8cad4a74b29827 | "2023-01-23T14:55:28Z" | python | "2023-01-24T15:27:44Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,100 | ["airflow/www/static/js/dag/details/Dag.tsx", "airflow/www/static/js/dag/details/dagRun/index.tsx", "airflow/www/static/js/dag/details/taskInstance/Logs/LogBlock.tsx", "airflow/www/static/js/dag/details/taskInstance/index.tsx", "airflow/www/static/js/dag/grid/index.tsx", "airflow/www/static/js/utils/useOffsetHeight.tsx"] | Unnecessary scrollbars in grid view | ### Apache Airflow version
2.5.0
### What happened
Compare the same DAG grid view in 2.4.3: (everything is scrolled using the "main" scrollbar of the window)
![image](https://user-images.githubusercontent.com/3342974/213983669-c5a701f1-a4d8-4d02-b29b-caf5f9c9a2db.png)
and in 2.5.0 (and 2.5.1) (left and right side of the grid have their own scrollbars):
![image](https://user-images.githubusercontent.com/3342974/213983866-b9b60533-87b4-4f1e-b68b-e5062b7f86c2.png)
It was much more ergonomic previously when only the main scrollbar was used.
I think the relevant change was in #27560, where `maxHeight={offsetHeight}` was added to some places.
Is this the intended way the grid view should look, or did this happen by accident?
I tried to look around in the developer tools and it seems like removing the `max-height` from this element restores the old look: `div#react-container div div.c-1rr4qq7 div.c-k008qs div.c-19srwsc div.c-scptso div.c-l7cpmp`. Well it does for the left side of the grid view. Similar change has to be done for some other divs also.
![image](https://user-images.githubusercontent.com/3342974/213984637-106cf7ed-b776-48ec-90e8-991d8ad1b315.png)
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Operating System
Linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29100 | https://github.com/apache/airflow/pull/29367 | 1b18a501fe818079e535838fa4f232b03365fc75 | 643d736ebb32c488005b3832c2c3f226a77900b2 | "2023-01-23T07:19:18Z" | python | "2023-02-05T23:15:03Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,091 | ["airflow/providers/amazon/aws/hooks/glue.py", "airflow/providers/amazon/aws/operators/glue.py"] | Incorrect type annotation for `num_of_dpus` in GlueJobOperator/GlueJobHook | ### Apache Airflow Provider(s)
amazon
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==7.1.0
### Apache Airflow version
2.2.2
### Operating System
macOS Ventura 13.1
### Deployment
Docker-Compose
### Deployment details
_No response_
### What happened
When calling GlueJobOperator and passing
`create_job_kwargs={"Command": {"Name": "pythonshell"}}`, I need to specify MaxCapacity. Based on the code [here](https://github.com/apache/airflow/blob/main/airflow/providers/amazon/aws/hooks/glue.py#L127) that is equal to _num_of_dpus_, and that parameter is an integer, as stated [here](https://github.com/apache/airflow/blob/main/airflow/providers/amazon/aws/hooks/glue.py#L68).
Because I want to use pythonshell, AWS Glue allows setting values between 0.0625 and 1, and that cannot be achieved with an integer.
`When you specify a Python shell job (JobCommand.Name="pythonshell"), you can allocate either 0.0625 or 1 DPU. The default is 0.0625 DPU.`
I tried to pass _MaxCapacity_ in `create_job_kwargs={"Command": {"Name": "pythonshell"}, "MaxCapacity": 0.0625}`, but it throws an error.
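For reference, a sketch of the call described above (job name, script location and role are placeholders):
```python
from airflow.providers.amazon.aws.operators.glue import GlueJobOperator

glue_job = GlueJobOperator(
    task_id="glue_pythonshell_job",
    job_name="my-pythonshell-job",
    script_location="s3://my-bucket/scripts/job.py",
    iam_role_name="my-glue-role",
    create_job_kwargs={"Command": {"Name": "pythonshell"}, "MaxCapacity": 0.0625},
)
```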
### What you think should happen instead
I think the _num_of_dpus_ parameter should be a float (double), or _MaxCapacity_ should be allowed to be set as a float when pythonshell is selected in Command -> Name.
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29091 | https://github.com/apache/airflow/pull/29176 | e1a14ae9ee6ba819763776156a49e9df3fe80ee9 | 44024564cb3dd6835b0375d61e682efc1acd7d2c | "2023-01-21T21:24:37Z" | python | "2023-01-27T10:41:28Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,070 | ["airflow/providers/ftp/operators/ftp.py", "airflow/providers/sftp/operators/sftp.py", "tests/providers/ftp/operators/test_ftp.py"] | FTP operator has logic in __init__ | ### Body
Similarly to SFTP (fixed in https://github.com/apache/airflow/pull/29068), the logic in `__init__` should be moved to `execute`.
#29068 provides a blueprint for that.
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/29070 | https://github.com/apache/airflow/pull/29073 | 8eb348911f2603feba98787d79b88bbd84bd17be | 2b7071c60022b3c483406839d3c0ef734db5daad | "2023-01-20T19:31:08Z" | python | "2023-01-21T00:29:53Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,049 | ["airflow/models/taskinstance.py", "tests/models/test_cleartasks.py"] | Recursively cleared external task sensors using reschedule mode instantly time out if previous run is older than sensor timeout | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
Using Airflow 2.3.3, when recursively clearing downstream tasks any cleared external task sensors in other DAGs which are using reschedule mode will instantly fail with an `AirflowSensorTimeout` exception if the previous run is older than the sensor's timeout.
### What you think should happen instead
The recursively cleared external task sensors should run normally, waiting for the cleared upstream task to complete, retrying up to the configured number of times and within the configured sensor timeout counting from the point in time when the sensor was cleared.
### How to reproduce
1. Load the following DAGs:
```python
from datetime import datetime, timedelta, timezone
from time import sleep
from airflow.decorators import task
from airflow.models import DAG
from airflow.sensors.external_task import ExternalTaskMarker, ExternalTaskSensor
from airflow.utils import timezone
default_args = {
'start_date': datetime.now(timezone.utc).replace(second=0, microsecond=0),
'retries': 2,
'retry_delay': timedelta(seconds=10),
}
with DAG('parent_dag', schedule_interval='* * * * *', catchup=False, default_args=default_args) as parent_dag:
@task(task_id='parent_task')
def parent_sleep():
sleep(10)
parent_task = parent_sleep()
child_dag__wait_for_parent_task = ExternalTaskMarker(
task_id='child_dag__wait_for_parent_task',
external_dag_id='child_dag',
external_task_id='wait_for_parent_task',
)
parent_task >> child_dag__wait_for_parent_task
with DAG('child_dag', schedule_interval='* * * * *', catchup=False, default_args=default_args) as child_dag:
wait_for_parent_task = ExternalTaskSensor(
task_id='wait_for_parent_task',
external_dag_id='parent_dag',
external_task_id='parent_task',
mode='reschedule',
poke_interval=15,
timeout=60,
)
@task(task_id='child_task')
def child_sleep():
sleep(10)
child_task = child_sleep()
wait_for_parent_task >> child_task
```
2. Enable the `parent_dag` and `child_dag` DAGs and wait for them to automatically run (they're scheduled to run every minute).
3. Wait for at least one additional minute (because the sensor timeout is configured to be one minute).
4. Clear the earliest `parent_dag.parent_task` task instance with the "Downstream" and "Recursive" options enabled.
5. When the cleared `child_dag.wait_for_parent_task` task tries to run it will immediately fail with an `AirflowSensorTimeout` exception.
### Operating System
Debian 10.13
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
This appears to be due to a bug in `airflow.models.taskinstance.clear_task_instances()` where [it only increments the task instance's `max_tries` property if the task is found in the DAG passed in](https://github.com/apache/airflow/blob/2.3.3/airflow/models/taskinstance.py#L219-L223), but when recursively clearing tasks that won't work properly for tasks in downstream DAGs, because all task instances to be recursively cleared are passed to `clear_task_instances()` with [the DAG of the initial task being cleared](https://github.com/apache/airflow/blob/2.3.3/airflow/models/dag.py#L1905).
When a cleared task instance for a sensor using reschedule mode doesn't have its `max_tries` property incremented that causes the [logic in `BaseSensorOperator.execute()`](https://github.com/apache/airflow/blob/2.3.3/airflow/sensors/base.py#L247-L264) to incorrectly choose an older `first_try_number` value, calculate the sensor run duration as the total time passed since that previous run, and fail with an `AirflowSensorTimeout` exception if that inflated run duration exceeds the sensor timeout.
While I tested this in Airflow 2.3.3 because that's what my company is running, I also looked at the current `main` branch code and this appears to still be a problem in the latest version.
IMO the best solution would be to change `airflow.models.taskinstance.clear_task_instances()` to make an effort to get the associated DAGs for all the task instances being cleared so their associated tasks can be read and their `max_tries` property can be incremented correctly.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29049 | https://github.com/apache/airflow/pull/29065 | 7074167d71c93b69361d24c1121adc7419367f2a | 0d2e6dce709acebdb46288faef17d322196f29a2 | "2023-01-19T21:46:25Z" | python | "2023-04-14T17:17:38Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,036 | ["airflow/providers/amazon/aws/transfers/sql_to_s3.py"] | Top level code imports in AWS transfer | ### Apache Airflow Provider(s)
amazon
### Versions of Apache Airflow Providers
_No response_
### Apache Airflow version
2.3.3
### Operating System
MacOs/Linux
### Deployment
Docker-Compose
### Deployment details
_No response_
### What happened
The sql_to_s3.py transfer has top-level Python imports, which are considered a bad practice:
https://github.com/apache/airflow/blob/be31214dcf14db39b7a5f422ca272cdc13e08268/airflow/providers/amazon/aws/transfers/sql_to_s3.py#L26
According to the [official docs](https://airflow.apache.org/docs/apache-airflow/2.3.3/best-practices.html#top-level-python-code):
```python
import numpy as np # <-- THIS IS A VERY BAD IDEA! DON'T DO THAT!
```
All imports that are not related to DAG structure and creation should be moved to callable functions, such as the `execute` method.
This causes timeout errors while filling the `DagBag`:
```
File "/opt/airflow/dags/mydag.py", line 6, in <module>
from airflow.providers.amazon.aws.transfers.sql_to_s3 import SqlToS3Operator
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/amazon/aws/transfers/sql_to_s3.py", line 25, in <module>
import pandas as pd
File "/home/airflow/.local/lib/python3.7/site-packages/pandas/__init__.py", line 50, in <module>
from pandas.core.api import (
File "/home/airflow/.local/lib/python3.7/site-packages/pandas/core/api.py", line 48, in <module>
from pandas.core.groupby import (
File "/home/airflow/.local/lib/python3.7/site-packages/pandas/core/groupby/__init__.py", line 1, in <module>
from pandas.core.groupby.generic import (
File "/home/airflow/.local/lib/python3.7/site-packages/pandas/core/groupby/generic.py", line 73, in <module>
from pandas.core.frame import DataFrame
File "/home/airflow/.local/lib/python3.7/site-packages/pandas/core/frame.py", line 193, in <module>
from pandas.core.series import Series
File "/home/airflow/.local/lib/python3.7/site-packages/pandas/core/series.py", line 141, in <module>
import pandas.plotting
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 724, in exec_module
File "<frozen importlib._bootstrap_external>", line 859, in get_code
File "<frozen importlib._bootstrap_external>", line 917, in get_data
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/utils/timeout.py", line 68, in handle_timeout
raise AirflowTaskTimeout(self.error_message)
airflow.exceptions.AirflowTaskTimeout: DagBag import timeout for /opt/airflow/dags/mydag.py after 30.0s.
Please take a look at these docs to improve your DAG import time:
* https://airflow.apache.org/docs/apache-airflow/2.3.3/best-practices.html#top-level-python-code
* https://airflow.apache.org/docs/apache-airflow/2.3.3/best-practices.html#reducing-dag-complexity, PID: 7
```
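A minimal sketch of the kind of change being asked for, i.e. deferring the heavy import to execution time (illustrative only, not the actual provider code):
```python
from airflow.models import BaseOperator


class LazyPandasOperator(BaseOperator):
    """Illustrative only: defer the heavy import so DAG parsing stays cheap."""

    def execute(self, context):
        import pandas as pd  # only paid when the task actually runs

        return pd.DataFrame({"ok": [True]}).to_json()
```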
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29036 | https://github.com/apache/airflow/pull/29045 | af0bbe62a5fc26bac189acd9039f5bbc83c2d429 | 62825678b3100b0e0ea3b4e14419d259a36ba074 | "2023-01-19T11:51:48Z" | python | "2023-01-30T23:37:20Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,013 | ["airflow/jobs/scheduler_job.py"] | Metrics dagrun.duration.failed.<dag_id> not updated when the dag run failed due to timeout | ### Apache Airflow version
2.5.0
### What happened
When the DAG is configured with the `dagrun_timeout` [parameter](https://airflow.apache.org/docs/apache-airflow/stable/_api/airflow/models/dag/index.html#airflow.models.dag.DAG) and the DAG run fails because of a timeout, the metric `dagrun.duration.failed.<dag_id>` is not emitted.
### What you think should happen instead
According to the [doc](https://airflow.apache.org/docs/apache-airflow/stable/logging-monitoring/metrics.html#timers), the metric `dagrun.duration.failed.<dag_id>` should capture `Milliseconds taken for a DagRun to reach failed state`. Therefore it should capture all kinds of DAG run failures, including failures caused by a DAG-level timeout.
### How to reproduce
Set the `dagrun_timeout` [parameter](https://airflow.apache.org/docs/apache-airflow/stable/_api/airflow/models/dag/index.html#airflow.models.dag.DAG) (e.g. `dagrun_timeout=timedelta(seconds=5)`), then set up a BashOperator task that runs longer than the dagrun_timeout (e.g. `bash_command='sleep 120'`).
Then check the metrics: dagrun.duration.failed.<dag_id> does not capture this DAG run that failed due to the timeout.
### Operating System
Ubuntu 22.04.1 LTS
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==7.1.0
apache-airflow-providers-common-sql==1.3.3
apache-airflow-providers-ftp==3.3.0
apache-airflow-providers-http==4.1.1
apache-airflow-providers-imap==3.1.1
apache-airflow-providers-postgres==5.4.0
apache-airflow-providers-sqlite==3.3.1
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
According to the [doc](https://airflow.apache.org/docs/apache-airflow/stable/logging-monitoring/metrics.html#timers), the metric `dagrun.duration.failed.<dag_id>` should capture `Milliseconds taken for a DagRun to reach failed state`. However, if the DAG run failed because of a DAG-run-level timeout, the metric does not capture the failed DAG run.
I dug into the Airflow code and figured out the reason.
The timer `dagrun.duration.failed.{self.dag_id}` is emitted in the method `_emit_duration_stats_for_finished_state`. [code](https://github.com/apache/airflow/blob/2.5.0/airflow/models/dagrun.py#L880-L894)
```
def _emit_duration_stats_for_finished_state(self):
if self.state == State.RUNNING:
return
if self.start_date is None:
self.log.warning("Failed to record duration of %s: start_date is not set.", self)
return
if self.end_date is None:
self.log.warning("Failed to record duration of %s: end_date is not set.", self)
return
duration = self.end_date - self.start_date
if self.state == State.SUCCESS:
Stats.timing(f"dagrun.duration.success.{self.dag_id}", duration)
elif self.state == State.FAILED:
Stats.timing(f"dagrun.duration.failed.{self.dag_id}", duration)
```
The function `_emit_duration_stats_for_finished_state` is only called in the `update_state()` method of the `DagRun` class. [code](https://github.com/apache/airflow/blob/2.5.0/airflow/models/dagrun.py#L650-L677) If the `update_state()` method is not called, then `_emit_duration_stats_for_finished_state` is never used.
```
if self._state == DagRunState.FAILED or self._state == DagRunState.SUCCESS:
msg = (
"DagRun Finished: dag_id=%s, execution_date=%s, run_id=%s, "
"run_start_date=%s, run_end_date=%s, run_duration=%s, "
"state=%s, external_trigger=%s, run_type=%s, "
"data_interval_start=%s, data_interval_end=%s, dag_hash=%s"
)
self.log.info(
msg,
self.dag_id,
self.execution_date,
self.run_id,
self.start_date,
self.end_date,
(self.end_date - self.start_date).total_seconds()
if self.start_date and self.end_date
else None,
self._state,
self.external_trigger,
self.run_type,
self.data_interval_start,
self.data_interval_end,
self.dag_hash,
)
session.flush()
self._emit_true_scheduling_delay_stats_for_finished_state(finished_tis)
self._emit_duration_stats_for_finished_state()
```
When a DAG run times out, the scheduler job only calls `set_state()`. [code](https://github.com/apache/airflow/blob/2.5.0/airflow/jobs/scheduler_job.py#L1280-L1312)
```
if (
dag_run.start_date
and dag.dagrun_timeout
and dag_run.start_date < timezone.utcnow() - dag.dagrun_timeout
):
dag_run.set_state(DagRunState.FAILED)
unfinished_task_instances = (
session.query(TI)
.filter(TI.dag_id == dag_run.dag_id)
.filter(TI.run_id == dag_run.run_id)
.filter(TI.state.in_(State.unfinished))
)
for task_instance in unfinished_task_instances:
task_instance.state = TaskInstanceState.SKIPPED
session.merge(task_instance)
session.flush()
self.log.info("Run %s of %s has timed-out", dag_run.run_id, dag_run.dag_id)
active_runs = dag.get_num_active_runs(only_running=False, session=session)
# Work out if we should allow creating a new DagRun now?
if self._should_update_dag_next_dagruns(dag, dag_model, active_runs):
dag_model.calculate_dagrun_date_fields(dag, dag.get_run_data_interval(dag_run))
callback_to_execute = DagCallbackRequest(
full_filepath=dag.fileloc,
dag_id=dag.dag_id,
run_id=dag_run.run_id,
is_failure_callback=True,
processor_subdir=dag_model.processor_subdir,
msg="timed_out",
)
dag_run.notify_dagrun_state_changed()
return callback_to_execute
```
From the above code, we can see that when the DAG run times out, only the `set_state()` method is called. The `update_state()` method is never called, and that is why the metric `dagrun.duration.failed.{self.dag_id}` is not emitted.
Please fix this bug so that the timer `dagrun.duration.failed.<dag_id>` captures DAG runs that fail due to a DAG-level timeout.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29013 | https://github.com/apache/airflow/pull/29076 | 9dedf81fa18e57755aa7d317f08f0ea8b6c7b287 | ca9a59b3e8c08286c8efd5ca23a509f9178a3cc9 | "2023-01-18T12:25:00Z" | python | "2023-01-21T03:31:43Z" |
closed | apache/airflow | https://github.com/apache/airflow | 29,002 | ["airflow/providers/cncf/kubernetes/utils/pod_manager.py", "tests/providers/cncf/kubernetes/utils/test_pod_manager.py"] | KubernetesPodOperator xcom push failure | ### Apache Airflow version
2.5.0
### What happened
The KubernetesPodOperator fails to push its XCom value.
After upgrading Airflow from 2.2.4 to 2.5.0 (and apache-airflow-providers-cncf-kubernetes from 3.0.2 to 5.0.0), pushing XCom values from the KubernetesPodOperator stopped working.
### What you think should happen instead
Example of log before the upgrade
```
[2023-01-17 06:57:54,357] {pod_launcher.py:313} INFO - Running command... cat /airflow/xcom/return.json
[2023-01-17 06:57:54,398] {pod_launcher.py:313} INFO - Running command... kill -s SIGINT 1
[2023-01-17 06:57:55,012] {pod_launcher.py:186} INFO - ["No non-accuracy metrics changed more than 10.0% between 2023-01-15 and 2023-01-16\n"]
```
and after the upgrade
```
[2023-01-18T07:12:32.784+0900] {pod_manager.py:368} INFO - Checking if xcom sidecar container is started.
[2023-01-18T07:12:32.804+0900] {pod_manager.py:372} INFO - The xcom sidecar container is started.
[2023-01-18T07:12:32.845+0900] {pod_manager.py:407} INFO - Running command... if [ -s /airflow/xcom/return.json ]; then cat /airflow/xcom/return.json; else echo __airflow_xcom_result_empty__; fi
[2023-01-18T07:12:32.895+0900] {pod_manager.py:407} INFO - Running command... kill -s SIGINT 1
[2023-01-18T07:12:33.405+0900] {kubernetes_pod.py:431} INFO - Result file is empty.
```
Looking at other timestamps in the log file, it appears that the xcom sidecar is run before the pod finishes, instead of waiting until the end.
### How to reproduce
I used `airflow.providers.cncf.kubernetes.operators.kubernetes_pod.KubernetesPodOperator` with `get_logs=False`.
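A minimal sketch of that setup (the image and command are placeholders, not the actual job):
```python
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import KubernetesPodOperator

push_xcom = KubernetesPodOperator(
    task_id="push_xcom",
    name="push-xcom",
    image="python:3.10-slim",  # placeholder image/command
    cmds=["bash", "-cx"],
    arguments=["mkdir -p /airflow/xcom && echo '[\"hello\"]' > /airflow/xcom/return.json"],
    do_xcom_push=True,
    get_logs=False,
)
```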
### Operating System
Debian GNU/Linux 11 (bullseye) (based on official airflow image)
### Versions of Apache Airflow Providers
apache-airflow-providers-cncf-kubernetes 3.0.2 and 5.0.0
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
Deployed on GKE using the official helm chart
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/29002 | https://github.com/apache/airflow/pull/29052 | 4a9e1e8a1fcf76c0bd9e2c501b0da0466223f6ac | 1e81a98cc69344a35c50b00e2d25a6d48a9bded2 | "2023-01-18T01:08:36Z" | python | "2023-03-07T13:41:59Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,973 | ["airflow/models/xcom_arg.py", "tests/models/test_taskinstance.py"] | Dynamic Task Mapping skips tasks before upstream has started | ### Apache Airflow version
2.5.0
### What happened
In some cases we are seeing dynamically mapped tasks being skipped before upstream tasks have started and before the dynamic count for the task can be calculated. We see this both locally with the `LocalExecutor` and on our cluster with the `KubernetesExecutor`.
To trigger the issue we need multiple dynamic tasks merging into an upstream task; see the images below for an example. If there is no merging, the tasks run as expected. The tasks also need to not know the number of dynamic tasks that will be created at DAG start, for example by chaining in another dynamic task's output.
![screenshot_2023-01-16_at_14-57-23_test_skip_-_graph_-_airflow](https://user-images.githubusercontent.com/1442084/212699549-8bfc80c6-02c7-4187-8dad-91020c94616f.png)
![screenshot_2023-01-16_at_14-56-44_test_skip_-_graph_-_airflow](https://user-images.githubusercontent.com/1442084/212699551-428c7efd-d044-472c-8fc3-92c9b146a6da.png)
If the DAG, task, or upstream tasks are cleared the skipped task runs as expected.
The issue exists both on airflow 2.4.x & 2.5.0.
Happy to help debug this further & answer any questions!
### What you think should happen instead
The tasks should run after upstream tasks are done.
### How to reproduce
The following code is able to reproduce the issue on our side:
```python
from datetime import datetime
from airflow import DAG
from airflow.decorators import task
from airflow.utils.task_group import TaskGroup
from airflow.operators.empty import EmptyOperator
# Only one chained tasks results in only 1 of the `skipped_tasks` skipping.
# Add in extra tasks results in both `skipped_tasks` skipping, but
# no earlier tasks are ever skipped.
CHAIN_TASKS = 1
@task()
def add(x, y):
return x, y
with DAG(
dag_id="test_skip",
schedule=None,
start_date=datetime(2023, 1, 13),
) as dag:
init = EmptyOperator(task_id="init_task")
final = EmptyOperator(task_id="final")
for i in range(2):
with TaskGroup(f"task_group_{i}") as tg:
chain_task = [i]
for j in range(CHAIN_TASKS):
chain_task = add.partial(x=j).expand(y=chain_task)
skipped_task = (
add.override(task_id="skipped").partial(x=i).expand(y=chain_task)
)
# Task isn't skipped if final (merging task) is removed.
init >> tg >> final
```
### Operating System
MacOS
### Versions of Apache Airflow Providers
This can be reproduced without any extra providers installed.
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28973 | https://github.com/apache/airflow/pull/30641 | 8cfc0f6332c45ca750bc2317ea1e283aaf2ac5bd | 5f2628d36cb8481ee21bd79ac184fd8fdce3e47d | "2023-01-16T14:18:41Z" | python | "2023-04-22T19:00:34Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,951 | ["airflow/providers/docker/operators/docker.py", "tests/providers/docker/decorators/test_docker.py", "tests/providers/docker/operators/test_docker.py"] | Add a way to skip Docker Operator task | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
Airflow 2.3.3
Raising the `AirflowSkipException` in the source code, using the `DockerOperator`, is supposed to mark the task as skipped, according to the [docs](https://airflow.apache.org/docs/apache-airflow/stable/concepts/tasks.html#special-exceptions). However, what happens is that the task is marked as failed with the logs showing `ERROR - Task failed with exception`.
### What you think should happen instead
Tasks should be marked as skipped, not failed.
### How to reproduce
Raise the `AirflowSkipException` in the Python source code while using the `DockerOperator`, for example as in the sketch below.
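A hedged reproduction sketch using the TaskFlow Docker decorator (the image name is a placeholder; it would need Airflow installed inside it so the exception class can be imported in the container):
```python
import pendulum
from airflow import DAG
from airflow.decorators import task

with DAG(
    dag_id="docker_skip_repro",  # hypothetical DAG
    start_date=pendulum.datetime(2023, 1, 1, tz="UTC"),
    schedule_interval=None,
    catchup=False,
):
    @task.docker(image="my-image-with-airflow:latest")  # placeholder image
    def skippable_job():
        from airflow.exceptions import AirflowSkipException

        # Expected: the task ends up "skipped"; observed: it is marked "failed".
        raise AirflowSkipException("Nothing to do for this run")

    skippable_job()
```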
### Operating System
Ubuntu 20.04.5 LTS (GNU/Linux 5.4.0-125-generic x86_64)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28951 | https://github.com/apache/airflow/pull/28996 | bc5cecc0db27cb8684c238b36ad12c7217d0c3ca | 3a7bfce6017207218889b66976dbee1ed84292dc | "2023-01-15T11:36:04Z" | python | "2023-01-18T21:04:14Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,933 | ["airflow/providers/cncf/kubernetes/decorators/kubernetes.py", "airflow/providers/cncf/kubernetes/python_kubernetes_script.jinja2", "tests/providers/cncf/kubernetes/decorators/test_kubernetes.py"] | @task.kubernetes TaskFlow decorator fails with IndexError and is unable to receive input | ### Apache Airflow Provider(s)
cncf-kubernetes
### Versions of Apache Airflow Providers
apache-airflow-providers-cncf-kubernetes 5.0.0
### Apache Airflow version
2.5.0
### Operating System
Debian 11
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### What happened
When passing arguments (either args or kwargs) to a @task.kubernetes decorated function, the following exception occurs:
Task Logs:
```
[2023-01-13, 22:05:40 UTC] {kubernetes_pod.py:621} INFO - Building pod k8s-airflow-pod-5c285c340fdf4e-81721f4662e247e793f497ada2f1ce55 with labels: {'dag_id': 'test_k8s_input_1673647477', 'task_id': 'k8s_with_input', 'run_id': 'backfill__2023-01-01T0000000000-c16e0472d', 'kubernetes_pod_operator': 'True', 'try_number': '1'}
[2023-01-13, 22:05:40 UTC] {kubernetes_pod.py:404} INFO - Found matching pod k8s-airflow-pod-5c285c340fdf4e-81721f4662e247e793f497ada2f1ce55 with labels {'airflow_kpo_in_cluster': 'True', 'airflow_version': '2.5.0', 'dag_id': 'test_k8s_input_1673647477', 'kubernetes_pod_operator': 'True', 'run_id': 'backfill__2023-01-01T0000000000-c16e0472d', 'task_id': 'k8s_with_input', 'try_number': '1'}
[2023-01-13, 22:05:40 UTC] {kubernetes_pod.py:405} INFO - `try_number` of task_instance: 1
[2023-01-13, 22:05:40 UTC] {kubernetes_pod.py:406} INFO - `try_number` of pod: 1
[2023-01-13, 22:05:40 UTC] {pod_manager.py:189} WARNING - Pod not yet started: k8s-airflow-pod-5c285c340fdf4e-81721f4662e247e793f497ada2f1ce55
[2023-01-13, 22:05:41 UTC] {pod_manager.py:189} WARNING - Pod not yet started: k8s-airflow-pod-5c285c340fdf4e-81721f4662e247e793f497ada2f1ce55
[2023-01-13, 22:05:42 UTC] {pod_manager.py:189} WARNING - Pod not yet started: k8s-airflow-pod-5c285c340fdf4e-81721f4662e247e793f497ada2f1ce55
[2023-01-13, 22:05:43 UTC] {pod_manager.py:189} WARNING - Pod not yet started: k8s-airflow-pod-5c285c340fdf4e-81721f4662e247e793f497ada2f1ce55
[2023-01-13, 22:05:44 UTC] {pod_manager.py:237} INFO - + python -c 'import base64, os;x = os.environ["__PYTHON_SCRIPT"];f = open("/tmp/script.py", "w"); f.write(x); f.close()'
[2023-01-13, 22:05:44 UTC] {pod_manager.py:237} INFO - + python /tmp/script.py
[2023-01-13, 22:05:44 UTC] {pod_manager.py:237} INFO - Traceback (most recent call last):
[2023-01-13, 22:05:44 UTC] {pod_manager.py:237} INFO - File "/tmp/script.py", line 14, in <module>
[2023-01-13, 22:05:44 UTC] {pod_manager.py:237} INFO - with open(sys.argv[1], "rb") as file:
[2023-01-13, 22:05:44 UTC] {pod_manager.py:237} INFO - IndexError: list index out of range
[2023-01-13, 22:05:44 UTC] {kubernetes_pod.py:499} INFO - Deleting pod: k8s-airflow-pod-5c285c340fdf4e-81721f4662e247e793f497ada2f1ce55
[2023-01-13, 22:05:44 UTC] {taskinstance.py:1772} ERROR - Task failed with exception
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/providers/cncf/kubernetes/decorators/kubernetes.py", line 104, in execute
return super().execute(context)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/decorators/base.py", line 217, in execute
return_value = super().execute(context)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py", line 465, in execute
self.cleanup(
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py", line 489, in cleanup
raise AirflowException(
airflow.exceptions.AirflowException: Pod k8s-airflow-pod-5c285c340fdf4e-81721f4662e247e793f497ada2f1ce55 returned a failure:
```
### What you think should happen instead
The Kubernetes (`@task.kubernetes`) decorator should properly receive input. The [Python command invoked here](https://github.com/apache/airflow/blob/2.5.0/airflow/providers/cncf/kubernetes/decorators/kubernetes.py#L75) does not pass the input. Contrast this with the [Docker version of the decorator](https://github.com/apache/airflow/blob/2.5.0/airflow/providers/docker/decorators/docker.py#L105), which does properly pass the pickled input.
### How to reproduce
Create a dag:
```py
import os
from airflow import DAG
from airflow.decorators import task
DEFAULT_TASK_ARGS = {
"owner": "gcp-data-platform",
"start_date": "2022-12-16",
"retries": 0,
}
@task.kubernetes(
image="python:3.8-slim-buster",
namespace=os.getenv("AIRFLOW__KUBERNETES_EXECUTOR__NAMESPACE"),
in_cluster=False,
)
def k8s_with_input(val: str) -> str:
import datetime
print(f"Got val: {val}")
return val
with DAG(
schedule_interval="@daily",
max_active_runs=1,
max_active_tasks=5,
catchup=False,
dag_id="test_oom_dag",
default_args=DEFAULT_TASK_ARGS,
) as dag:
output = k8s_with_input.override(task_id="k8s_with_input")("a")
```
Run and observe failure:
<img width="907" alt="image" src="https://user-images.githubusercontent.com/9200263/212427952-15466317-4e61-4b71-9971-2cdedba4f7ba.png">
Task logs above.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28933 | https://github.com/apache/airflow/pull/28942 | 73c8e7df0be8b254e3727890b51ca0f76308e6b5 | 9a5c3e0ac0b682d7f2c51727a56e06d68bc9f6be | "2023-01-13T22:08:52Z" | python | "2023-02-18T17:42:11Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,919 | ["airflow/api/auth/backend/kerberos_auth.py", "docs/apache-airflow/administration-and-deployment/security/api.rst"] | Airflow API kerberos authentication error | ### Apache Airflow version
2.5.0
### What happened
I configured AUTH_DB authentication for the webserver and Kerberos authentication for the API. The webserver works well.
Requesting any API endpoint returns error 500. I can see that the Kerberos authentication step succeeds, but the authorization step fails.
The 'user' object (which at this point is just a string) doesn't have such an attribute.
Request error
```
янв 13 13:54:14 nginx-test airflow[238738]: [2023-01-13 13:54:14,923] {app.py:1741} ERROR - Exception on /api/v1/dags [GET]
янв 13 13:54:14 nginx-test airflow[238738]: Traceback (most recent call last):
янв 13 13:54:14 nginx-test airflow[238738]: File "/usr/local/lib/python3.8/dist-packages/flask/app.py", line 2525, in wsgi_app
янв 13 13:54:14 nginx-test airflow[238738]: response = self.full_dispatch_request()
янв 13 13:54:14 nginx-test airflow[238738]: File "/usr/local/lib/python3.8/dist-packages/flask/app.py", line 1822, in full_dispatch_request
янв 13 13:54:14 nginx-test airflow[238738]: rv = self.handle_user_exception(e)
янв 13 13:54:14 nginx-test airflow[238738]: File "/usr/local/lib/python3.8/dist-packages/flask/app.py", line 1820, in full_dispatch_request
янв 13 13:54:14 nginx-test airflow[238738]: rv = self.dispatch_request()
янв 13 13:54:14 nginx-test airflow[238738]: File "/usr/local/lib/python3.8/dist-packages/flask/app.py", line 1796, in dispatch_request
янв 13 13:54:14 nginx-test airflow[238738]: return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
янв 13 13:54:14 nginx-test airflow[238738]: File "/usr/local/lib/python3.8/dist-packages/connexion/decorators/decorator.py", line 68, in wrapper
янв 13 13:54:14 nginx-test airflow[238738]: response = function(request)
янв 13 13:54:14 nginx-test airflow[238738]: File "/usr/local/lib/python3.8/dist-packages/connexion/decorators/uri_parsing.py", line 149, in wrapper
янв 13 13:54:14 nginx-test airflow[238738]: response = function(request)
янв 13 13:54:14 nginx-test airflow[238738]: File "/usr/local/lib/python3.8/dist-packages/connexion/decorators/validation.py", line 399, in wrapper
янв 13 13:54:14 nginx-test airflow[238738]: return function(request)
янв 13 13:54:14 nginx-test airflow[238738]: File "/usr/local/lib/python3.8/dist-packages/connexion/decorators/response.py", line 112, in wrapper
янв 13 13:54:14 nginx-test airflow[238738]: response = function(request)
янв 13 13:54:14 nginx-test airflow[238738]: File "/usr/local/lib/python3.8/dist-packages/connexion/decorators/parameter.py", line 120, in wrapper
янв 13 13:54:14 nginx-test airflow[238738]: return function(**kwargs)
янв 13 13:54:14 nginx-test airflow[238738]: File "/usr/local/lib/python3.8/dist-packages/airflow/api_connexion/security.py", line 50, in decorated
янв 13 13:54:14 nginx-test airflow[238738]: if appbuilder.sm.check_authorization(permissions, kwargs.get("dag_id")):
янв 13 13:54:14 nginx-test airflow[238738]: File "/usr/local/lib/python3.8/dist-packages/airflow/www/security.py", line 715, in check_authorization
янв 13 13:54:14 nginx-test airflow[238738]: can_access_all_dags = self.has_access(*perm)
янв 13 13:54:14 nginx-test airflow[238738]: File "/usr/local/lib/python3.8/dist-packages/airflow/www/security.py", line 419, in has_access
янв 13 13:54:14 nginx-test airflow[238738]: if (action_name, resource_name) in user.perms:
янв 13 13:54:14 nginx-test airflow[238738]: AttributeError: 'str' object has no attribute 'perms'
янв 13 13:54:14 nginx-test airflow[238738]: 127.0.0.1 - - [13/Jan/2023:13:54:14 +0300] "GET /api/v1/dags HTTP/1.1" 500 1561 "-" "curl/7.68.0"
```
Starting airflow-webserver log (no errors)
```
янв 13 13:38:51 nginx-test airflow[238502]: ____________ _____________
янв 13 13:38:51 nginx-test airflow[238502]: ____ |__( )_________ __/__ /________ __
янв 13 13:38:51 nginx-test airflow[238502]: ____ /| |_ /__ ___/_ /_ __ /_ __ \_ | /| / /
янв 13 13:38:51 nginx-test airflow[238502]: ___ ___ | / _ / _ __/ _ / / /_/ /_ |/ |/ /
янв 13 13:38:51 nginx-test airflow[238502]: _/_/ |_/_/ /_/ /_/ /_/ \____/____/|__/
янв 13 13:38:51 nginx-test airflow[238502]: Running the Gunicorn Server with:
янв 13 13:38:51 nginx-test airflow[238502]: Workers: 4 sync
янв 13 13:38:51 nginx-test airflow[238502]: Host: 0.0.0.0:10000
янв 13 13:38:51 nginx-test airflow[238502]: Timeout: 120
янв 13 13:38:51 nginx-test airflow[238502]: Logfiles: - -
янв 13 13:38:51 nginx-test airflow[238502]: Access Logformat:
янв 13 13:38:51 nginx-test airflow[238502]: =================================================================
янв 13 13:38:51 nginx-test airflow[238502]: [2023-01-13 13:38:51,209] {webserver_command.py:431} INFO - Received signal: 15. Closing gunicorn.
янв 13 13:38:51 nginx-test airflow[238519]: [2023-01-13 13:38:51 +0300] [238519] [WARNING] Worker with pid 238525 was terminated due to signal 15
янв 13 13:38:51 nginx-test airflow[238519]: [2023-01-13 13:38:51 +0300] [238519] [WARNING] Worker with pid 238523 was terminated due to signal 15
янв 13 13:38:51 nginx-test airflow[238519]: [2023-01-13 13:38:51 +0300] [238519] [WARNING] Worker with pid 238526 was terminated due to signal 15
янв 13 13:38:51 nginx-test airflow[238519]: [2023-01-13 13:38:51 +0300] [238519] [WARNING] Worker with pid 238524 was terminated due to signal 15
янв 13 13:38:51 nginx-test airflow[238519]: [2023-01-13 13:38:51 +0300] [238519] [INFO] Shutting down: Master
янв 13 13:38:52 nginx-test systemd[1]: airflow-webserver.service: Succeeded.
янв 13 13:38:52 nginx-test systemd[1]: Stopped Airflow webserver daemon.
янв 13 13:38:52 nginx-test systemd[1]: Started Airflow webserver daemon.
янв 13 13:38:54 nginx-test airflow[238732]: /usr/local/lib/python3.8/dist-packages/airflow/api/auth/backend/kerberos_auth.py:50 DeprecationWarning: '_request_ctx_stack' is dep>
янв 13 13:38:54 nginx-test airflow[238732]: [2023-01-13 13:38:54,393] {kerberos_auth.py:78} INFO - Kerberos: hostname nginx-test.mycompany
янв 13 13:38:54 nginx-test airflow[238732]: [2023-01-13 13:38:54,393] {kerberos_auth.py:88} INFO - Kerberos init: airflow nginx-test.mycompany
янв 13 13:38:54 nginx-test airflow[238732]: [2023-01-13 13:38:54,394] {kerberos_auth.py:93} INFO - Kerberos API: server is airflow/nginx-test.mycompany@MYCOMPANY>
янв 13 13:38:56 nginx-test airflow[238732]: [2023-01-13 13:38:56 +0300] [238732] [INFO] Starting gunicorn 20.1.0
янв 13 13:38:56 nginx-test airflow[238732]: [2023-01-13 13:38:56 +0300] [238732] [INFO] Listening at: http://0.0.0.0:10000 (238732)
янв 13 13:38:56 nginx-test airflow[238732]: [2023-01-13 13:38:56 +0300] [238732] [INFO] Using worker: sync
янв 13 13:38:56 nginx-test airflow[238735]: [2023-01-13 13:38:56 +0300] [238735] [INFO] Booting worker with pid: 238735
янв 13 13:38:57 nginx-test airflow[238736]: [2023-01-13 13:38:57 +0300] [238736] [INFO] Booting worker with pid: 238736
янв 13 13:38:57 nginx-test airflow[238737]: [2023-01-13 13:38:57 +0300] [238737] [INFO] Booting worker with pid: 238737
янв 13 13:38:57 nginx-test airflow[238738]: [2023-01-13 13:38:57 +0300] [238738] [INFO] Booting worker with pid: 238738
```
I tried to skip the rights check by commenting out the problem lines and returning True from the has_access function (and, if I remember correctly, from one more function in security.py), and that made it work. But it was just a hack to confirm where the problem is.
### What you think should happen instead
It should return the correct JSON answer with code 200.
### How to reproduce
1. webserver_config.py: default
2. airflow.cfg changed lines:
```
[core]
security = kerberos
[api]
auth_backends = airflow.api.auth.backend.kerberos_auth,airflow.api.auth.backend.session
[kerberos]
ccache = /tmp/airflow_krb5_ccache
principal = airflow/nginx-test.mycompany
reinit_frequency = 3600
kinit_path = kinit
keytab = /root/airflow/airflow2.keytab
forwardable = True
include_ip = True
[webserver]
base_url = http://localhost:10000
web_server_port = 10000
```
3. Create a keytab file with the airflow principal
4. Log in as a domain user and make a request, for example with curl (a Python equivalent is sketched after this list):
curl --verbose --negotiate -u : http://nginx-test.mycompany:10000/api/v1/dags
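The same request can also be made from Python (this assumes the third-party `requests-kerberos` package, which is not part of Airflow, and a valid ticket obtained with kinit):
```python
import requests
from requests_kerberos import OPTIONAL, HTTPKerberosAuth

# Needs a valid Kerberos ticket in the local credential cache (kinit as the domain user).
response = requests.get(
    "http://nginx-test.mycompany:10000/api/v1/dags",
    auth=HTTPKerberosAuth(mutual_authentication=OPTIONAL),
)
print(response.status_code)  # 500 on the affected setup instead of the expected 200
print(response.text)
```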
### Operating System
Ubuntu. VERSION="20.04.5 LTS (Focal Fossa)"
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28919 | https://github.com/apache/airflow/pull/29054 | 80dbfbc7ad8f63db8565baefa282bc01146803fe | 135aef30be3f9b8b36556f3ff5e0d184b0f74f22 | "2023-01-13T11:27:58Z" | python | "2023-01-20T16:05:38Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,912 | ["docs/apache-airflow/start.rst"] | quick start fails: DagRun for example_bash_operator with run_id or execution_date of '2015-01-01' not found | ### Apache Airflow version
2.5.0
### What happened
I follow the [quick start guide](https://airflow.apache.org/docs/apache-airflow/stable/start.html)
When I execute `airflow tasks run example_bash_operator runme_0 2015-01-01` I got the following error:
```
[2023-01-13 15:50:42,493] {dagbag.py:538} INFO - Filling up the DagBag from /root/airflow/dags
[2023-01-13 15:50:42,761] {taskmixin.py:205} WARNING - Dependency <Task(_PythonDecoratedOperator): prepare_email>, send_email already registered for DAG: example_dag_decorator
[2023-01-13 15:50:42,761] {taskmixin.py:205} WARNING - Dependency <Task(EmailOperator): send_email>, prepare_email already registered for DAG: example_dag_decorator
[2023-01-13 15:50:42,830] {taskmixin.py:205} WARNING - Dependency <Task(BashOperator): create_entry_group>, delete_entry_group already registered for DAG: example_complex
[2023-01-13 15:50:42,830] {taskmixin.py:205} WARNING - Dependency <Task(BashOperator): delete_entry_group>, create_entry_group already registered for DAG: example_complex
[2023-01-13 15:50:42,831] {taskmixin.py:205} WARNING - Dependency <Task(BashOperator): create_entry_gcs>, delete_entry already registered for DAG: example_complex
[2023-01-13 15:50:42,831] {taskmixin.py:205} WARNING - Dependency <Task(BashOperator): delete_entry>, create_entry_gcs already registered for DAG: example_complex
[2023-01-13 15:50:42,831] {taskmixin.py:205} WARNING - Dependency <Task(BashOperator): create_tag>, delete_tag already registered for DAG: example_complex
[2023-01-13 15:50:42,831] {taskmixin.py:205} WARNING - Dependency <Task(BashOperator): delete_tag>, create_tag already registered for DAG: example_complex
[2023-01-13 15:50:42,852] {taskmixin.py:205} WARNING - Dependency <Task(_PythonDecoratedOperator): print_the_context>, log_sql_query already registered for DAG: example_python_operator
[2023-01-13 15:50:42,852] {taskmixin.py:205} WARNING - Dependency <Task(_PythonDecoratedOperator): log_sql_query>, print_the_context already registered for DAG: example_python_operator
[2023-01-13 15:50:42,853] {taskmixin.py:205} WARNING - Dependency <Task(_PythonDecoratedOperator): print_the_context>, log_sql_query already registered for DAG: example_python_operator
[2023-01-13 15:50:42,853] {taskmixin.py:205} WARNING - Dependency <Task(_PythonDecoratedOperator): log_sql_query>, print_the_context already registered for DAG: example_python_operator
[2023-01-13 15:50:42,854] {taskmixin.py:205} WARNING - Dependency <Task(_PythonDecoratedOperator): print_the_context>, log_sql_query already registered for DAG: example_python_operator
[2023-01-13 15:50:42,854] {taskmixin.py:205} WARNING - Dependency <Task(_PythonDecoratedOperator): log_sql_query>, print_the_context already registered for DAG: example_python_operator
[2023-01-13 15:50:42,855] {taskmixin.py:205} WARNING - Dependency <Task(_PythonDecoratedOperator): print_the_context>, log_sql_query already registered for DAG: example_python_operator
[2023-01-13 15:50:42,855] {taskmixin.py:205} WARNING - Dependency <Task(_PythonDecoratedOperator): log_sql_query>, print_the_context already registered for DAG: example_python_operator
[2023-01-13 15:50:42,855] {example_python_operator.py:90} WARNING - The virtalenv_python example task requires virtualenv, please install it.
[2023-01-13 15:50:43,608] {tutorial_taskflow_api_virtualenv.py:29} WARNING - The tutorial_taskflow_api_virtualenv example DAG requires virtualenv, please install it.
/root/miniconda3/lib/python3.7/site-packages/airflow/models/dag.py:3524 RemovedInAirflow3Warning: Param `schedule_interval` is deprecated and will be removed in a future release. Please use `schedule` instead.
Traceback (most recent call last):
File "/root/miniconda3/bin/airflow", line 8, in <module>
sys.exit(main())
File "/root/miniconda3/lib/python3.7/site-packages/airflow/__main__.py", line 39, in main
args.func(args)
File "/root/miniconda3/lib/python3.7/site-packages/airflow/cli/cli_parser.py", line 52, in command
return func(*args, **kwargs)
File "/root/miniconda3/lib/python3.7/site-packages/airflow/utils/cli.py", line 108, in wrapper
return f(*args, **kwargs)
File "/root/miniconda3/lib/python3.7/site-packages/airflow/cli/commands/task_command.py", line 384, in task_run
ti, _ = _get_ti(task, args.map_index, exec_date_or_run_id=args.execution_date_or_run_id, pool=args.pool)
File "/root/miniconda3/lib/python3.7/site-packages/airflow/utils/session.py", line 75, in wrapper
return func(*args, session=session, **kwargs)
File "/root/miniconda3/lib/python3.7/site-packages/airflow/cli/commands/task_command.py", line 163, in _get_ti
session=session,
File "/root/miniconda3/lib/python3.7/site-packages/airflow/cli/commands/task_command.py", line 118, in _get_dag_run
) from None
airflow.exceptions.DagRunNotFound: DagRun for example_bash_operator with run_id or execution_date of '2023-11-01' not found
```
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Operating System
Ubuntu 18.04.3 LTS
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28912 | https://github.com/apache/airflow/pull/28949 | c57c23dce39992eafcf86dc08a1938d7d407803f | a4f6f3d6fe614457ff95ac803fd15e9f0bd38d27 | "2023-01-13T07:55:02Z" | python | "2023-01-15T21:01:08Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,910 | ["airflow/providers/amazon/aws/operators/ecs.py"] | Misnamed param in EcsRunTaskOperator | ### What do you see as an issue?
In the `EcsRunTaskOperator`, one of the params in the docstring is `region_name`, but it should be `region`:
https://github.com/apache/airflow/blob/2.5.0/airflow/providers/amazon/aws/operators/ecs.py#L281
### Solving the problem
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28910 | https://github.com/apache/airflow/pull/29562 | eb46eeb33d58436aa5860f2f0031fad3dea3ce3b | cadab59e8df90588b07cf8d9ee3ce13f9a79f656 | "2023-01-13T01:21:52Z" | python | "2023-02-16T03:13:56Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,891 | ["chart/templates/pgbouncer/pgbouncer-deployment.yaml", "chart/values.schema.json", "chart/values.yaml"] | Pgbouncer metrics exporter restarts | ### Official Helm Chart version
1.6.0
### Apache Airflow version
2.4.2
### Kubernetes Version
1.21
### Helm Chart configuration
Nothing really specific
### Docker Image customizations
_No response_
### What happened
From time to time we have pg_bouncer metrics exporter that fails its healthcheck.
When it fails its healtchecks three times in a row, pgbouncer stop being reachable and drops all the ongoing connection.
Is it possible to make the pgbouncer healtcheck configurable at least the timeout parameter of one second that seems really short?
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28891 | https://github.com/apache/airflow/pull/29752 | d0fba865aed1fc21d82f0a61cddb1fa0bd4b7d0a | 44f89c6db115d91aba91955fde42475d1a276628 | "2023-01-12T15:18:28Z" | python | "2023-02-27T18:20:30Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,888 | ["airflow/www/app.py", "tests/www/views/test_views_base.py"] | `webserver.instance_name` shows markup text in `<title>` tag | ### Apache Airflow version
2.5.0
### What happened
https://github.com/apache/airflow/pull/20888 enables the use of markup to style the `webserver.instance_name`.
However, if the instance name has HTML code, this will also be reflected in the `<title>` tag, as shown in the screenshot below.
![image](https://user-images.githubusercontent.com/562969/212091882-d33bb0f7-75c2-4c92-bd4f-4bc7ba6be8db.png)
This is not a pretty behaviour.
### What you think should happen instead
Ideally, if `webserver.instance_name_has_markup = True`, then the text inside the `<title>` tag should be stripped of HTML code.
For example:
- Set `webserver.instance_name` to some text with markup, like `<b style="color: red">title</b>`
- Set `webserver.instance_name_has_markup` to `true`
This is what the `<title>` tag should look like:
```html
<title>DAGs - title</title>
```
Instead of:
```html
<title>DAGs - <b style="color: red">title</b></title>
```
### How to reproduce
- Airflow version 2.3+, which is [when this change has been introduced](https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#instance-name-has-markup)
- Set `webserver.instance_name` to some text with markup, like `<b style="color: red">title</b>`
- Set `webserver.instance_name_has_markup` to `true`
### Operating System
Doesn't matter
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28888 | https://github.com/apache/airflow/pull/28894 | 696b91fafe4a557f179098e0609eb9d9dcb73f72 | 971e3226dc3ca43900f0b79c42afffb14c59d691 | "2023-01-12T14:32:55Z" | python | "2023-03-16T11:34:39Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,884 | ["airflow/providers/microsoft/azure/hooks/wasb.py", "tests/providers/microsoft/azure/hooks/test_wasb.py"] | Azure Blob storage exposes crendentials in UI | ### Apache Airflow version
Other Airflow 2 version (please specify below)
2.3.3
### What happened
Azure Blob Storage exposes credentials in the UI
<img width="1249" alt="Screenshot 2023-01-12 at 14 00 05" src="https://user-images.githubusercontent.com/35199552/212072943-adca75c4-2226-4251-9446-e8f18fb22081.png">
### What you think should happen instead
_No response_
### How to reproduce
Create an Azure Blob Storage connection, then click the edit button on that connection.
### Operating System
Debian
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28884 | https://github.com/apache/airflow/pull/28914 | 6f4544cfbdfa3cabb3faaeea60a651206cd84e67 | 3decb189f786781bb0dfb3420a508a4a2a22bd8b | "2023-01-12T13:01:24Z" | python | "2023-01-13T15:02:59Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,847 | ["airflow/www/static/js/callModal.js", "airflow/www/templates/airflow/dag.html", "airflow/www/views.py"] | Graph UI: Add Filter Downstream & Filter DownStream & Upstream | ### Description
Currently Airflow has a `Filter Upstream` view/option inside the graph view (as documented [here](https://docs.astronomer.io/learn/airflow-ui#graph-view) under `Filter Upstream`).
<img width="682" alt="image" src="https://user-images.githubusercontent.com/9246654/211711759-670a1180-7f90-4ecd-84b0-2f3b290ff477.png">
It would be great if there were also the options
1. `Filter Downstream` &
2. `Filter Downstream & Upstream`
### Use case/motivation
Sometimes it is useful to view downstream tasks, and both downstream & upstream tasks, when reviewing DAGs. This feature would make viewing those as easy as viewing upstream tasks is today.
### Related issues
I found nothing with a quick search
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28847 | https://github.com/apache/airflow/pull/29226 | 624520db47f736af820b4bc834a5080111adfc96 | a8b2de9205dd805ee42cf6b0e15e7e2805752abb | "2023-01-11T03:35:33Z" | python | "2023-02-03T15:04:32Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,830 | ["airflow/providers/amazon/aws/transfers/dynamodb_to_s3.py", "airflow/providers/amazon/aws/waiters/README.md", "airflow/providers/amazon/aws/waiters/dynamodb.json", "docs/apache-airflow-providers-amazon/transfer/dynamodb_to_s3.rst", "tests/providers/amazon/aws/transfers/test_dynamodb_to_s3.py", "tests/providers/amazon/aws/waiters/test_custom_waiters.py", "tests/system/providers/amazon/aws/example_dynamodb_to_s3.py"] | Export DynamoDB table to S3 with PITR | ### Description
Airflow provides the Amazon DynamoDB to Amazon S3 transfer operator, documented below.
https://airflow.apache.org/docs/apache-airflow-providers-amazon/stable/operators/transfer/dynamodb_to_s3.html
Most data engineers build their "export DynamoDB data to S3" pipelines around an export within the point-in-time recovery (PITR) window:
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/dynamodb.html#DynamoDB.Client.export_table_to_point_in_time
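Today that means calling boto3 directly from a Python task, roughly like this sketch (table ARN, bucket, and region are placeholders, not my real pipeline):
```python
from datetime import datetime, timezone

import boto3
from airflow.decorators import task

@task
def export_table_to_point_in_time():
    # Plain boto3 call that I would like to replace with a native
    # apache-airflow-providers-amazon operator.
    client = boto3.client("dynamodb", region_name="eu-west-1")  # placeholder region
    client.export_table_to_point_in_time(
        TableArn="arn:aws:dynamodb:eu-west-1:111122223333:table/my-table",  # placeholder
        S3Bucket="my-export-bucket",                                        # placeholder
        S3Prefix="ddb-exports/my-table",
        ExportTime=datetime(2023, 1, 9, tzinfo=timezone.utc),
        ExportFormat="DYNAMODB_JSON",
    )
```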
I would appreciate it if Airflow offered this as a native feature.
### Use case/motivation
My daily batch job exports its data with the PITR option. All of the tasks are written with apache-airflow-providers-amazon except the "export_table_to_point_in_time" task.
The "export_table_to_point_in_time" task currently uses a plain Python operator wrapping the boto3 call sketched above. I would like to be able to unify that task under the apache-airflow-providers-amazon library as well.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28830 | https://github.com/apache/airflow/pull/31142 | 71c26276bcd3ddd5377d620e6b8baef30b72eaa0 | cd3fa33e82922e01888d609ed9c24b9c2dadfa27 | "2023-01-10T13:44:29Z" | python | "2023-05-09T23:56:29Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,825 | ["airflow/api_connexion/endpoints/dag_run_endpoint.py", "airflow/api_connexion/schemas/dag_run_schema.py", "tests/api_connexion/endpoints/test_dag_run_endpoint.py"] | Bad request when triggering dag run with `note` in payload | ### Apache Airflow version
2.5.0
### What happened
Specifying a `note` in the payload (as mentioned [in the doc](https://airflow.apache.org/docs/apache-airflow/2.5.0/stable-rest-api-ref.html#operation/post_dag_run)) when triggering a new dag run yields a 400 Bad Request.
(Git Version: .release:2.5.0+fa2bec042995004f45b914dd1d66b466ccced410)
### What you think should happen instead
As far as I understand the documentation, I should be able to set a note for this DAG run, but that is not the case.
### How to reproduce
This is a local airflow, using default credentials and default setup when following [this guide](https://airflow.apache.org/docs/apache-airflow/stable/howto/docker-compose/index.html#)
DAG:
<details>
```
import airflow
from airflow import DAG
import logging
from airflow.operators.python import PythonOperator
from airflow.operators.dummy import DummyOperator
from datetime import timedelta
logger = logging.getLogger("airflow.task")
default_args = {
"owner": "airflow",
"depends_on_past": False,
"retries": 0,
"retry_delay": timedelta(minutes=5),
}
def log_body(**context):
logger.info(f"Body: {context['dag_run'].conf}")
with DAG(
"my-validator",
default_args=default_args,
schedule_interval=None,
start_date=airflow.utils.dates.days_ago(0),
catchup=False
) as dag:
(
PythonOperator(
task_id="abcde",
python_callable=log_body,
provide_context=True
)
>> DummyOperator(
task_id="todo"
)
)
```
</details>
Request:
<details>
```
curl --location --request POST '0.0.0.0:8080/api/v1/dags/my-validator/dagRuns' \
--header 'Authorization: Basic YWlyZmxvdzphaXJmbG93' \
--header 'Content-Type: application/json' \
--data-raw '{
"conf": {
"key":"value"
},
"note": "test"
}'
```
</details>
Response:
<details>
```
{
"detail": "{'note': ['Unknown field.']}",
"status": 400,
"title": "Bad Request",
"type": "https://airflow.apache.org/docs/apache-airflow/2.5.0/stable-rest-api-ref.html#section/Errors/BadRequest"
}
```
</details>
Removing the `note` key returns 200... with a null `note`!
<details>
```
{
"conf": {
"key": "value"
},
"dag_id": "my-validator",
"dag_run_id": "manual__2023-01-10T10:45:26.102802+00:00",
"data_interval_end": "2023-01-10T10:45:26.102802+00:00",
"data_interval_start": "2023-01-10T10:45:26.102802+00:00",
"end_date": null,
"execution_date": "2023-01-10T10:45:26.102802+00:00",
"external_trigger": true,
"last_scheduling_decision": null,
"logical_date": "2023-01-10T10:45:26.102802+00:00",
"note": null,
"run_type": "manual",
"start_date": null,
"state": "queued"
}
```
</details>
### Operating System
Ubuntu 20.04.5 LTS
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
This happens every time.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28825 | https://github.com/apache/airflow/pull/29228 | e626131563efb536f325a35c78585b74d4482ea3 | b94f36bf563f5c8372086cec63b74eadef638ef8 | "2023-01-10T10:53:02Z" | python | "2023-02-01T19:37:39Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,812 | ["airflow/providers/databricks/hooks/databricks.py", "airflow/providers/databricks/operators/databricks.py", "tests/providers/databricks/operators/test_databricks.py"] | DatabricksSubmitRunOperator Get failed for Multi Task Databricks Job Run | ### Apache Airflow Provider(s)
databricks
### Versions of Apache Airflow Providers
We are running DatabricksSubmitRunOperator to run a multi-task Databricks job, and I have tried most versions of the Airflow Databricks provider. When the Databricks job fails, DatabricksSubmitRunOperator raises the error below. This happens because the operator calls the get-output API with the job run id instead of the individual task run ids.
Error:
```console
File "/home/ubuntu/.venv/airflow/lib/python3.8/site-packages/airflow/providers/databricks/hooks/databricks_base.py", line 355, in _do_api_call
for attempt in self._get_retry_object():
File "/home/ubuntu/.venv/airflow/lib/python3.8/site-packages/tenacity/__init__.py", line 382, in __iter__
do = self.iter(retry_state=retry_state)
File "/home/ubuntu/.venv/airflow/lib/python3.8/site-packages/tenacity/__init__.py", line 349, in iter
return fut.result()
File "/usr/lib/python3.8/concurrent/futures/_base.py", line 437, in result
return self.__get_result()
File "/usr/lib/python3.8/concurrent/futures/_base.py", line 389, in __get_result
raise self._exception
File "/home/ubuntu/.venv/airflow/lib/python3.8/site-packages/airflow/providers/databricks/hooks/databricks_base.py", line 365, in _do_api_call
response.raise_for_status()
File "/home/ubuntu/.venv/airflow/lib/python3.8/site-packages/requests/models.py", line 960, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 400 Client Error: Bad Request for url:
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/ubuntu/.venv/airflow/lib/python3.8/site-packages/airflow/providers/databricks/operators/databricks.py", line 375, in execute
_handle_databricks_operator_execution(self, hook, self.log, context)
File "/home/ubuntu/.venv/airflow/lib/python3.8/site-packages/airflow/providers/databricks/operators/databricks.py", line 90, in _handle_databricks_operator_execution
run_output = hook.get_run_output(operator.run_id)
File "/home/ubuntu/.venv/airflow/lib/python3.8/site-packages/airflow/providers/databricks/hooks/databricks.py", line 280, in get_run_output
run_output = self._do_api_call(OUTPUT_RUNS_JOB_ENDPOINT, json)
File "/home/ubuntu/.venv/airflow/lib/python3.8/site-packages/airflow/providers/databricks/hooks/databricks_base.py", line 371, in _do_api_call
raise AirflowException(
airflow.exceptions.AirflowException: Response: b'{"error_code":"INVALID_PARAMETER_VALUE","message":"Retrieving the output of runs with multiple tasks is not supported. Please retrieve the output of each individual task run instead."}', Status Code: 400
[2023-01-10, 05:15:12 IST] {taskinstance.py} INFO - Marking task as FAILED. dag_id=experiment_metrics_store_experiment_4, task_id=, execution_date=20230109T180804, start_date=20230109T180810, end_date=20230109T181512
[2023-01-10, 05:15:13 IST] {warnings.py} WARNING - /home/ubuntu/.venv/airflow/lib/python3.8/site-packages/airflow/utils/email.py:119: PendingDeprecationWarning: Fetching SMTP credentials from configuration variables will be deprecated in a future release. Please set credentials using a connection instead.
send_mime_email(e_from=mail_from, e_to=recipients, mime_msg=msg, conn_id=conn_id, dryrun=dryrun)
```
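A hypothetical reproduction sketch (connection id, cluster spec, and notebook paths are placeholders); the error shows up once the submitted run contains more than one task and that run fails:
```python
import pendulum
from airflow import DAG
from airflow.providers.databricks.operators.databricks import DatabricksSubmitRunOperator

with DAG(
    dag_id="databricks_multi_task_repro",  # hypothetical DAG
    start_date=pendulum.datetime(2023, 1, 1, tz="UTC"),
    schedule_interval=None,
    catchup=False,
):
    DatabricksSubmitRunOperator(
        task_id="submit_multi_task_run",
        databricks_conn_id="databricks_default",
        json={
            "run_name": "multi-task-run",
            # Two tasks make this a multi-task run, so runs/get-output rejects the parent run id.
            "tasks": [
                {
                    "task_key": "task_a",
                    "notebook_task": {"notebook_path": "/Shared/task_a"},  # placeholder
                    "new_cluster": {
                        "spark_version": "11.3.x-scala2.12",
                        "node_type_id": "i3.xlarge",
                        "num_workers": 1,
                    },
                },
                {
                    "task_key": "task_b",
                    "depends_on": [{"task_key": "task_a"}],
                    "notebook_task": {"notebook_path": "/Shared/task_b"},  # placeholder
                    "new_cluster": {
                        "spark_version": "11.3.x-scala2.12",
                        "node_type_id": "i3.xlarge",
                        "num_workers": 1,
                    },
                },
            ],
        },
    )
```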
### Apache Airflow version
2.3.2
### Operating System
macOS
### Deployment
Other
### Deployment details
_No response_
### What happened
_No response_
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28812 | https://github.com/apache/airflow/pull/25427 | 87a0bd969b5bdb06c6e93236432eff6d28747e59 | 679a85325a73fac814c805c8c34d752ae7a94312 | "2023-01-09T19:20:39Z" | python | "2022-08-03T10:42:42Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,806 | ["airflow/providers/google/cloud/transfers/sql_to_gcs.py", "tests/providers/google/cloud/transfers/test_sql_to_gcs.py"] | BaseSQLToGCSOperator no longer returns at least one file even if empty | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
apache-airflow-providers-google==8.6.0
### Apache Airflow version
2.5.0
### Operating System
Debian GNU/Linux
### Deployment
Astronomer
### Deployment details
_No response_
### What happened
PR `Expose SQL to GCS Metadata (https://github.com/apache/airflow/pull/24382)` made a breaking change [here](https://github.com/apache/airflow/blob/3eee33ac8cb74cfbb08bce9090e9c601cf98da44/airflow/providers/google/cloud/transfers/sql_to_gcs.py#L286) that results in no files being returned when there are no data rows (empty table) rather than a single empty file as in the past.
### What you think should happen instead
I would like to preserve the original behavior of having at least one file returned even if it is empty. Or to make that behavior optional via a new parameter.
The original behavior can be implemented with the following code change:
FROM:
```
if file_to_upload["file_row_count"] > 0:
yield file_to_upload
```
TO:
```
if file_no == 0 or file_to_upload["file_row_count"] > 0:
yield file_to_upload
```
### How to reproduce
Create a DAG that uses BaseSQLToGCSOperator with a SQL command that references an empty SQL table or returns no rows. The `execute` method will not write any files.
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28806 | https://github.com/apache/airflow/pull/28959 | 7f2b065ccd01071cff8f298b944d81f3ff3384b5 | 5350be2194250366536db7f78b88dc8e49c9620e | "2023-01-09T16:56:12Z" | python | "2023-01-19T17:10:36Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,803 | ["airflow/datasets/manager.py", "airflow/jobs/scheduler_job.py", "docs/apache-airflow/administration-and-deployment/logging-monitoring/metrics.rst"] | statsd metric for dataset count | ### Description
A count of datasets that are currently registered/declared in an Airflow deployment.
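A rough sketch of what emitting such a metric could look like (the metric name and where it would be emitted from are assumptions):
```python
from airflow.models.dataset import DatasetModel
from airflow.stats import Stats
from airflow.utils.session import provide_session

@provide_session
def emit_dataset_count(session=None):
    # Hypothetical gauge, e.g. emitted periodically from a scheduler loop.
    Stats.gauge("dataset.count", session.query(DatasetModel).count())
```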
### Use case/motivation
Would be nice to see how deployments are adopting datasets.
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28803 | https://github.com/apache/airflow/pull/28907 | 5d84b59554c93fd22e92b46a1061b40b899a8dec | 7689592c244111b24bc52e7428c5a3bb80a4c2d6 | "2023-01-09T14:51:24Z" | python | "2023-01-18T09:35:12Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,789 | ["airflow/cli/cli_parser.py", "setup.cfg"] | Add colors in help outputs of Airfow CLI commands | ### Body
Following up after https://github.com/apache/airflow/pull/22613#issuecomment-1374530689 - it seems that there is a new [rich-argparse](https://github.com/hamdanal/rich-argparse) project that might give us this option without rewriting Airflow's argument parsing to click (click has a number of possible performance issues that might impact Airflow's speed of CLI command parsing).
This seems to be a rather easy thing to do (just adding the formatter class for argparse).
It would be nice if someone implemented and tested it (also for CLI performance), for example along the lines of the sketch below.
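A minimal sketch of the idea (standalone argparse, not the actual Airflow CLI parser wiring):
```python
import argparse

from rich_argparse import RichHelpFormatter

# Illustrative standalone parser showing the formatter class that the Airflow CLI
# parsers could opt into for colored --help output.
parser = argparse.ArgumentParser(
    prog="airflow",
    description="Airflow command line interface (illustrative only)",
    formatter_class=RichHelpFormatter,
)
parser.add_argument("--verbose", action="store_true", help="Enable verbose output")
parser.print_help()
```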
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/28789 | https://github.com/apache/airflow/pull/29116 | c310fb9255ba458b2842315f65f59758b76df9d5 | fdac67b3a5350ab4af79fd98612592511ca5f3fc | "2023-01-07T23:05:56Z" | python | "2023-02-08T11:04:12Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,785 | ["airflow/api_internal/endpoints/rpc_api_endpoint.py", "airflow/dag_processing/manager.py"] | AIP-44 Migrate DagFileProcessorManager.clear_nonexistent_import_errors to Internal API | https://github.com/apache/airflow/blob/main/airflow/dag_processing/manager.py#L773 | https://github.com/apache/airflow/issues/28785 | https://github.com/apache/airflow/pull/28976 | ca9a59b3e8c08286c8efd5ca23a509f9178a3cc9 | 09b3a29972430e5749d772359692fe4a9d528e48 | "2023-01-07T20:06:27Z" | python | "2023-01-21T03:33:18Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,778 | ["Dockerfile", "scripts/docker/clean-logs.sh"] | Script "clean-logs.sh" has an unexpected burst behavior | ### Apache Airflow version
2.5.0
### What happened
I've noticed that my Airflow Scheduler logs are full of the following message:
Trimming airflow logs to 15 days.
Trimming airflow logs to 15 days.
...
My deployment uses the Helm chart, so it's probably specific to the Docker related assets.
This script has a loop where every 900 seconds it will delete old log files. However, on every activation, the part where it prints the
log message and deletes the file is burst a few hundred times in less than a second. The cycle repeats.
### What you think should happen instead
It should only print one log message (and delete files once) on every cycle.
### How to reproduce
Just comment out the lines regarding the actual file deletion and run it in any bash shell. It should get triggered after 15 minutes.
### Operating System
openSUSE Tumbleweed
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28778 | https://github.com/apache/airflow/pull/28780 | 207f65b542a8aa212f04a9d252762643cfd67a74 | 4b1a36f833b77d3f0bec78958d1fb9f360b7b11b | "2023-01-07T04:03:33Z" | python | "2023-01-16T17:05:34Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,772 | ["airflow/utils/json.py", "airflow/www/utils.py", "airflow/www/views.py", "tests/www/test_utils.py"] | DAG Run List UI Breaks when a non-JSON serializable value is added to dag_run.conf | ### Apache Airflow version
2.5.0
### What happened
When accessing `dag_run.conf` via a task's context, I was able to add a value that is non-JSON serializable. When I tried to access the Dag Run List UI (`/dagrun/list/`) or the Dag's Grid View, I was met with these error messages respectively:
**Dag Run List UI**
```
Ooops!
Something bad has happened.
Airflow is used by many users, and it is very likely that others had similar problems and you can easily find
a solution to your problem.
Consider following these steps:
* gather the relevant information (detailed logs with errors, reproduction steps, details of your deployment)
* find similar issues using:
* [GitHub Discussions](https://github.com/apache/airflow/discussions)
* [GitHub Issues](https://github.com/apache/airflow/issues)
* [Stack Overflow](https://stackoverflow.com/questions/tagged/airflow)
* the usual search engine you use on a daily basis
* if you run Airflow on a Managed Service, consider opening an issue using the service support channels
* if you tried and have difficulty with diagnosing and fixing the problem yourself, consider creating a [bug report](https://github.com/apache/airflow/issues/new/choose).
Make sure however, to include all relevant details and results of your investigation so far.
```
**Grid View**
```
Auto-refresh Error
<!DOCTYPE html> <html lang="en"> <head> <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/css/bootstrap.min.css"> </head> <body> <div class="container"> <h1> Ooops! </h1> <div> <pre> Something bad has happened. Airflow is used by many users, and it is very likely that others had similar problems and you can easily find a solution to your problem. Consider following these steps: * gather the relevant information (detailed logs
```
I was able to push the same value to XCom with `AIRFLOW__CORE__ENABLE_XCOM_PICKLING=True`, and the XCom List UI (`/xcom/list/`) did **not** throw an error.
In the postgres instance I am using for the Airflow DB, both `dag_run.conf` & `xcom.value` have `BYTEA` types.
### What you think should happen instead
Since we are able to add (and commit) a non-JSON serializable value into a Dag Run's conf, the UI should not break when trying to load this value. We could also ensure that one DAG Run's conf does not break the List UI for all Dag Runs (across all DAGs), and the DAG's Grid View.
### How to reproduce
- Set `AIRFLOW__CORE__ENABLE_XCOM_PICKLING=True`
- Trigger this DAG:
```
import datetime
from airflow.decorators import dag, task
from airflow.models.xcom import XCom
@dag(
schedule_interval=None,
start_date=datetime.datetime(2023, 1, 1),
)
def ui_issue():
@task()
def update_conf(**kwargs):
dag_conf = kwargs["dag_run"].conf
dag_conf["non_json_serializable_value"] = b"1234"
print(dag_conf)
@task()
def push_to_xcom(**kwargs):
dag_conf = kwargs["dag_run"].conf
print(dag_conf)
XCom.set(key="dag_conf", value=dag_conf, dag_id=kwargs["ti"].dag_id, task_id=kwargs["ti"].task_id, run_id=kwargs["ti"].run_id)
return update_conf() >> push_to_xcom()
dag = ui_issue()
```
- View both the Dag Runs and XCom lists in the UI.
- The DAG Run List UI should break, and the XCom List UI should show a value of `{'non_json_serializable_value': b'1234'}` for `ui_issue.push_to_xcom`.
### Operating System
Debian Bullseye
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
The XCom List UI was able to render this value. We could extend this capability to the DAG Run List UI.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28772 | https://github.com/apache/airflow/pull/28777 | 82c5a5f343d2310822f7bb0d316efa0abe9d4a21 | 8069b500e8487675df0472b4a5df9081dcfa9d6c | "2023-01-06T19:10:49Z" | python | "2023-04-03T08:46:06Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,766 | ["airflow/cli/commands/connection_command.py", "tests/cli/commands/test_connection_command.py"] | Cannot create connection without defining host using CLI | ### Apache Airflow version
2.5.0
### What happened
In order to send logs to an S3 bucket after a task finishes, I added a connection to Airflow using the CLI.
```airflow connections add connection_id_1 --conn-uri aws://s3/?region_name=eu-west-1&endpoint_url=https%3A%2F%2Fs3.eu-west-1.amazonaws.com```
Then I got a logging warning saying:
[2023-01-06T13:28:39.585+0000] {logging_mixin.py:137} WARNING - <string>:8 DeprecationWarning: Host s3 specified in the connection is not used. Please, set it on extra['endpoint_url'] instead
Instead I was trying to remove the host from the `conn-uri` I provided but every attempt to create a connection failed (list of my attempts below):
```airflow connections add connection_id_1 --conn-uri aws://?region_name=eu-west-1&endpoint_url=https%3A%2F%2Fs3.eu-west-1.amazonaws.com```
```airflow connections add connection_id_1 --conn-uri aws:///?region_name=eu-west-1&endpoint_url=https%3A%2F%2Fs3.eu-west-1.amazonaws.com```
### What you think should happen instead
I believe there are 2 options:
1. Allow creating a connection without defining a host
or
2. Remove the warning log
### How to reproduce
Create an S3 connection using CLI:
```airflow connections add connection_id_1 --conn-uri aws://s3/?region_name=eu-west-1&endpoint_url=https%3A%2F%2Fs3.eu-west-1.amazonaws.com```
### Operating System
Linux - official airflow image from docker hub apache/airflow:slim-2.5.0
### Versions of Apache Airflow Providers
```
apache-airflow-providers-cncf-kubernetes | 5.0.0 | Kubernetes
apache-airflow-providers-common-sql | 1.3.1 | Common SQL Provider
apache-airflow-providers-databricks | 4.0.0 | Databricks
apache-airflow-providers-ftp | 3.2.0 | File Transfer Protocol (FTP)
apache-airflow-providers-hashicorp | 3.2.0 | Hashicorp including Hashicorp Vault
apache-airflow-providers-http | 4.1.0 | Hypertext Transfer Protocol (HTTP)
apache-airflow-providers-imap | 3.1.0 | Internet Message Access Protocol (IMAP)
apache-airflow-providers-postgres | 5.3.1 | PostgreSQL
apache-airflow-providers-sqlite | 3.3.1 | SQLite
```
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
This log message is printed every other minute, so it is pretty annoying.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28766 | https://github.com/apache/airflow/pull/28922 | c5ee4b8a3a2266ef98b379ee28ed68ff1b59ac5f | d8b84ce0e6d36850cd61b1ce37840c80aaec0116 | "2023-01-06T13:43:51Z" | python | "2023-01-13T21:41:11Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,756 | ["airflow/configuration.py", "tests/core/test_configuration.py"] | All Airflow Configurations set via Environment Variable are masked when `expose_config` is set as `non-sensitive-only` | ### Apache Airflow version
2.5.0
### What happened
In [Airflow 2.4.0](https://github.com/apache/airflow/blob/main/RELEASE_NOTES.rst#airflow-240-2022-09-19), a new feature added an option to mask sensitive data on the UI configuration page ([PR](https://github.com/apache/airflow/pull/25346)). I have set `AIRFLOW__WEBSERVER__EXPOSE_CONFIG` to `NON-SENSITIVE-ONLY`.
The feature only works partially: the `airflow.cfg` file display correctly marks only the [sensitive configurations](https://github.com/apache/airflow/blob/2.5.0/airflow/configuration.py#L149-L160) as `< hidden >`. However, the `Running Configuration` table below the file display marks every configuration value set via environment variables as `< hidden >`, which I believe is unintended.
I did not change `airflow.cfg`, so the value here displays the default of `False` as expected.
![Screen Shot 2023-01-05 at 1 39 11 PM](https://user-images.githubusercontent.com/5952735/210891805-1a5f6a6b-1afe-4d05-b03d-61ac583441fc.png)
The value for `expose_config` I expect to be shown as `NON-SENSITIVE-ONLY` but it shown as `< hidden >`.
![Screen Shot 2023-01-05 at 1 39 27 PM](https://user-images.githubusercontent.com/5952735/210891803-dba826d4-2d3c-4781-aeae-43c46e31fa89.png)
### What you think should happen instead
As mentioned previously, the value for `expose_config` I expect to be shown as `NON-SENSITIVE-ONLY`.
Only the [sensitive variables](https://github.com/apache/airflow/blob/2.5.0/airflow/configuration.py#L149-L160) should be set as `< hidden >`.
### How to reproduce
Set an Airflow configuration option through an environment variable and check the Configuration page.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28756 | https://github.com/apache/airflow/pull/28802 | 9a7f07491e603123182adfd5706fbae524e33c0d | 0a8d0ab56689c341e65a36c0287c9d635bae1242 | "2023-01-05T22:46:30Z" | python | "2023-01-09T16:43:51Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,751 | ["airflow/providers/google/cloud/operators/cloud_base.py", "tests/providers/google/cloud/operators/test_cloud_base.py"] | KubernetesExecutor leaves failed pods due to deepcopy issue with Google providers | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
With Airflow 2.3 and 2.4 there appears to be a bug in the KubernetesExecutor when used in conjunction with the Google airflow providers. This bug does not affect Airflow 2.2 due to the pip version requirements.
The bug specifically presents itself when using nearly any Google provider operator. During the pod lifecycle, all is well until the executor in the pod starts to clean up following a successful run. Airflow itself still sees the task marked as a success, but in Kubernetes, while the task is finishing up after reporting its status, it actually crashes and silently puts the pod into a Failed state:
```
Traceback (most recent call last):
File "/home/airflow/.local/bin/airflow", line 8, in <module>
sys.exit(main())
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/__main__.py", line 39, in main
args.func(args)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/cli/cli_parser.py", line 52, in command
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/utils/cli.py", line 103, in wrapper
return f(*args, **kwargs)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/cli/commands/task_command.py", line 382, in task_run
_run_task_by_selected_method(args, dag, ti)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/cli/commands/task_command.py", line 189, in _run_task_by_selected_method
_run_task_by_local_task_job(args, ti)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/cli/commands/task_command.py", line 247, in _run_task_by_local_task_job
run_job.run()
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/base_job.py", line 247, in run
self._execute()
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/local_task_job.py", line 137, in _execute
self.handle_task_exit(return_code)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/local_task_job.py", line 168, in handle_task_exit
self._run_mini_scheduler_on_child_tasks()
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/utils/session.py", line 75, in wrapper
return func(*args, session=session, **kwargs)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/local_task_job.py", line 253, in _run_mini_scheduler_on_child_tasks
partial_dag = task.dag.partial_subset(
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/models/dag.py", line 2188, in partial_subset
dag.task_dict = {
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/models/dag.py", line 2189, in <dictcomp>
t.task_id: _deepcopy_task(t)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/models/dag.py", line 2186, in _deepcopy_task
return copy.deepcopy(t, memo)
File "/usr/local/lib/python3.9/copy.py", line 153, in deepcopy
y = copier(memo)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/models/baseoperator.py", line 1163, in __deepcopy__
setattr(result, k, copy.deepcopy(v, memo))
File "/usr/local/lib/python3.9/copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/usr/local/lib/python3.9/copy.py", line 264, in _reconstruct
y = func(*args)
File "/usr/local/lib/python3.9/enum.py", line 384, in __call__
return cls.__new__(cls, value)
File "/usr/local/lib/python3.9/enum.py", line 702, in __new__
raise ve_exc
ValueError: <object object at 0x7f570181a3c0> is not a valid _MethodDefault
```
Based on a quick look, it appears to be related to the default argument Google uses in its operators, which happens to be an Enum member and fails during a deepcopy at the end of the task.
Example operator that is affected: https://github.com/apache/airflow/blob/403ed7163f3431deb7fc21108e1743385e139907/airflow/providers/google/cloud/hooks/dataproc.py#L753
Reference to the Google Python API core which has the Enum causing the problem: https://github.com/googleapis/python-api-core/blob/main/google/api_core/gapic_v1/method.py#L31
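To illustrate the mechanism, here is a minimal sketch (illustrative only, not the provider code): an Enum member whose value is a bare `object()` sentinel cannot survive `copy.deepcopy` on the Python 3.9 shown in the traceback, because deepcopy reconstructs the member by calling the Enum class with a fresh copy of that sentinel.
```python
# Minimal mechanism sketch; mirrors the shape of google.api_core's sentinel, not its actual code.
import copy
import enum


class _MethodDefault(enum.Enum):
    _DEFAULT_VALUE = object()


DEFAULT = _MethodDefault._DEFAULT_VALUE

# deepcopy copies the object() value and then calls _MethodDefault(<new object>),
# which is not a valid member -> ValueError, matching the traceback above.
copy.deepcopy(DEFAULT)
```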
### What you think should happen instead
Kubernetes pods should succeed, be marked as `Completed`, and then be gracefully terminated.
### How to reproduce
Use any `apache-airflow-providers-google` >= 7.0.0, which requires `google-api-core` >= 2.2.2, and run a DAG with a task that uses any of the Google operators that have `_MethodDefault` as a default argument.
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==6.0.0
apache-airflow-providers-apache-hive==5.0.0
apache-airflow-providers-celery==3.0.0
apache-airflow-providers-cncf-kubernetes==4.4.0
apache-airflow-providers-common-sql==1.3.1
apache-airflow-providers-docker==3.2.0
apache-airflow-providers-elasticsearch==4.2.1
apache-airflow-providers-ftp==3.1.0
apache-airflow-providers-google==8.4.0
apache-airflow-providers-grpc==3.0.0
apache-airflow-providers-hashicorp==3.1.0
apache-airflow-providers-http==4.0.0
apache-airflow-providers-imap==3.0.0
apache-airflow-providers-microsoft-azure==4.3.0
apache-airflow-providers-mysql==3.2.1
apache-airflow-providers-odbc==3.1.2
apache-airflow-providers-postgres==5.2.2
apache-airflow-providers-presto==4.2.0
apache-airflow-providers-redis==3.0.0
apache-airflow-providers-sendgrid==3.0.0
apache-airflow-providers-sftp==4.1.0
apache-airflow-providers-slack==6.0.0
apache-airflow-providers-sqlite==3.2.1
apache-airflow-providers-ssh==3.2.0
### Deployment
Other 3rd-party Helm chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28751 | https://github.com/apache/airflow/pull/29518 | ec31648be4c2fc4d4a7ef2bd23be342ca1150956 | 5a632f78eb6e3dcd9dc808e73b74581806653a89 | "2023-01-05T17:31:57Z" | python | "2023-03-04T22:44:18Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,746 | ["airflow/www/utils.py", "tests/test_utils/www.py", "tests/www/views/conftest.py", "tests/www/views/test_views_home.py"] | UIAlert returns AttributeError: 'NoneType' object has no attribute 'roles' when specifying AUTH_ROLE_PUBLIC | ### Apache Airflow version
2.5.0
### What happened
When adding a [role-based UIAlert following these docs](https://airflow.apache.org/docs/apache-airflow/stable/howto/customize-ui.html#add-custom-alert-messages-on-the-dashboard), I received the below stacktrace:
```
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.9/site-packages/flask/app.py", line 2525, in wsgi_app
response = self.full_dispatch_request()
File "/home/airflow/.local/lib/python3.9/site-packages/flask/app.py", line 1822, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/home/airflow/.local/lib/python3.9/site-packages/flask/app.py", line 1820, in full_dispatch_request
rv = self.dispatch_request()
File "/home/airflow/.local/lib/python3.9/site-packages/flask/app.py", line 1796, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/www/auth.py", line 47, in decorated
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/www/views.py", line 780, in index
dashboard_alerts = [
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/www/views.py", line 781, in <listcomp>
fm for fm in settings.DASHBOARD_UIALERTS if fm.should_show(get_airflow_app().appbuilder.sm)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/www/utils.py", line 820, in should_show
user_roles = {r.name for r in securitymanager.current_user.roles}
AttributeError: 'NoneType' object has no attribute 'roles'
```
On further inspection, I realized this is happening because my webserver_config.py has this specification:
```py
# Uncomment and set to desired role to enable access without authentication
AUTH_ROLE_PUBLIC = 'Viewer'
```
When we set AUTH_ROLE_PUBLIC to a role like Viewer, [this line](https://github.com/apache/airflow/blob/ad7f8e09f8e6e87df2665abdedb22b3e8a469b49/airflow/www/utils.py#L828) raises an exception because `securitymanager.current_user` is None.
Relevant code snippet:
```py
def should_show(self, securitymanager) -> bool:
    """Determine if the user should see the message based on their role membership"""
    if self.roles:
        user_roles = {r.name for r in securitymanager.current_user.roles}
        if not user_roles.intersection(set(self.roles)):
            return False
    return True
```
### What you think should happen instead
If we detect that the securitymanager.current_user is None, we should not attempt to get its `roles` attribute.
Instead, we can check to see if the AUTH_ROLE_PUBLIC is set in webserver_config.py which will tell us if a public role is being used. If it is, we can assume that because the current_user is None, the current_user's role is the public role.
In code, this might look like this:
```py
def should_show(self, securitymanager) -> bool:
    """Determine if the user should see the message based on their role membership"""
    if self.roles:
        user_roles = set()
        if hasattr(securitymanager.current_user, "roles"):
            user_roles = {r.name for r in securitymanager.current_user.roles}
        elif "AUTH_ROLE_PUBLIC" in securitymanager.appbuilder.get_app.config:
            # Give anonymous user public role
            user_roles = set([securitymanager.appbuilder.get_app.config["AUTH_ROLE_PUBLIC"]])
        if not user_roles.intersection(set(self.roles)):
            return False
    return True
```
Expected result on the webpage:
<img width="1440" alt="image" src="https://user-images.githubusercontent.com/9200263/210823778-4c619b75-40a3-4caa-9a2c-073651da7f0d.png">
### How to reproduce
Start breeze:
```
breeze --python 3.7 --backend postgres start-airflow
```
After the webserver, triggerer, and scheduler are started, modify webserver_config.py to uncomment AUTH_ROLE_PUBLIC and add airflow_local_settings.py:
```bash
cd $AIRFLOW_HOME
# Uncomment AUTH_ROLE_PUBLIC
vi webserver_config.py
mkdir -p config
# Add sample airflow_local_settings.py below
vi config/airflow_local_settings.py
```
```py
from airflow.www.utils import UIAlert
DASHBOARD_UIALERTS = [
    UIAlert("Role based alert", category="warning", roles=["Viewer"]),
]
```
Restart the webserver and navigate to airflow. You should see this page:
<img width="1440" alt="image" src="https://user-images.githubusercontent.com/9200263/210820838-e74ffc23-7b6b-42dc-85f1-29ab8b0ee3d5.png">
### Operating System
Debian 11
### Versions of Apache Airflow Providers
2.5.0
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
Locally
### Anything else
This problem only occurs if you add a role-based UIAlert and are using AUTH_ROLE_PUBLIC.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28746 | https://github.com/apache/airflow/pull/28781 | 1e9c8e52fda95a0a30b3ae298d5d3adc1971ed45 | f17e2ba48b59525655a92e04684db664a672918f | "2023-01-05T15:55:51Z" | python | "2023-01-10T05:51:53Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,745 | ["chart/templates/logs-persistent-volume-claim.yaml", "chart/values.schema.json", "chart/values.yaml"] | annotations in logs pvc | ### Official Helm Chart version
1.7.0 (latest released)
### Apache Airflow version
2.5.0
### Kubernetes Version
v1.22.8+d48376b
### Helm Chart configuration
_No response_
### Docker Image customisations
_No response_
### What happened
When creating the DAGs PVC, it is possible to inject annotations into the object.
### What you think should happen instead
It should also be possible to inject annotations into the logs PVC.
### How to reproduce
_No response_
### Anything else
We are using annotations on PVCs to disable the creation of backup snapshots provided by our company platform (OpenShift).
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28745 | https://github.com/apache/airflow/pull/29270 | 6ef5ba9104f5a658b003f8ade274f19d7ec1b6a9 | 5835b08e8bc3e11f4f98745266d10bbae510b258 | "2023-01-05T13:22:16Z" | python | "2023-02-20T22:57:35Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,731 | ["airflow/providers/common/sql/hooks/sql.py", "airflow/providers/exasol/hooks/exasol.py", "airflow/providers/exasol/operators/exasol.py", "tests/providers/exasol/hooks/test_sql.py", "tests/providers/exasol/operators/test_exasol.py", "tests/providers/exasol/operators/test_exasol_sql.py"] | AttributeError: 'ExaStatement' object has no attribute 'description' | ### Apache Airflow Provider(s)
exasol
### Versions of Apache Airflow Providers
apache-airflow-providers-common-sql==1.3.1
apache-airflow-providers-exasol==4.1.1
### Apache Airflow version
2.5.0
### Operating System
Rocky Linux 8.7 (like RHEL 8.7)
### Deployment
Other Docker-based deployment
### Deployment details
- Docker Images built using Python 3.9 and recommended constraints https://raw.githubusercontent.com/apache/airflow/constraints-2.5.0/constraints-3.9.txt
- Deployment to AWS ECS
### What happened
After upgrading from Airflow 2.4.3 to 2.5.0, the ExasolOperator stopped working, even when executing simple SQL statements.
See the log snippet below for details.
It looks like the Exasol hook fails due to a missing attribute.
It seems likely the issue was introduced by a refactoring of the Exasol hook to use the common DBApiHook: https://github.com/apache/airflow/pull/28009/commits
### What you think should happen instead
_No response_
### How to reproduce
Any execution of ExasolOperator in Airflow built with the mentioned constraints should show the issue.
### Anything else
```console
[2023-01-04, 15:31:33 CET] {exasol.py:176} INFO - Running statement: EXECUTE SCRIPT mohn_fw.update_select_to_date_for_area('CORE'), parameters: None
[2023-01-04, 15:31:33 CET] {taskinstance.py:1772} ERROR - Task failed with exception
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/airflow/providers/common/sql/operators/sql.py", line 255, in execute
output = hook.run(
File "/usr/local/lib/python3.9/site-packages/airflow/providers/exasol/hooks/exasol.py", line 178, in run
result = handler(cur)
File "/usr/local/lib/python3.9/site-packages/airflow/providers/common/sql/hooks/sql.py", line 62, in fetch_all_handler
if cursor.description is not None:
AttributeError: 'ExaStatement' object has no attribute 'description'
[2023-01-04, 15:31:33 CET] {taskinstance.py:1322} INFO - Marking task as UP_FOR_RETRY. dag_id=MOH_DWH_DAILY_CORE, task_id=update_select_to_date_for_area, execution_date=20221225T210000, start_date=20230104T143132, end_date=20230104T143133
[2023-01-04, 15:31:33 CET] {standard_task_runner.py:100} ERROR - Failed to execute job 46137 for task update_select_to_date_for_area ('ExaStatement' object has no attribute 'description'; 7245)
```
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28731 | https://github.com/apache/airflow/pull/28744 | c0b2fcff24184aa0c5beb9c0d06ce7d67b5c5b7e | 9a7f07491e603123182adfd5706fbae524e33c0d | "2023-01-04T15:25:40Z" | python | "2023-01-09T16:20:07Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,691 | ["airflow/providers/amazon/aws/utils/waiter.py", "tests/providers/amazon/aws/utils/test_waiter.py"] | Fix custom waiter function in AWS provider package | ### Apache Airflow Provider(s)
amazon
### Versions of Apache Airflow Providers
_No response_
### Apache Airflow version
2.5.0
### Operating System
MacOS
### Deployment
Virtualenv installation
### Deployment details
_No response_
### What happened
Discussed in #28294
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28691 | https://github.com/apache/airflow/pull/28753 | 2b92c3c74d3259ebac714f157c525836f0af50f0 | ce188e509389737b3c0bdc282abea2425281c2b7 | "2023-01-03T14:34:10Z" | python | "2023-01-05T22:09:24Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,680 | ["airflow/providers/amazon/aws/operators/batch.py", "tests/providers/amazon/aws/operators/test_batch.py"] | Improve AWS Batch hook and operator | ### Description
AWS Batch hook and operator do not support the boto3 parameter shareIdentifier, which is required to submit jobs to specific types of queues.
### Use case/motivation
I wish that the AWS Batch hook and operator supported submitting jobs to queues that require the shareIdentifier parameter.
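For context, a hedged sketch of the underlying boto3 call that the operator would need to expose (`shareIdentifier` is a real `submit_job` parameter; the names and values below are placeholders):
```python
# Hedged sketch of the plain boto3 call; queue/job names are placeholders.
import boto3

client = boto3.client("batch")
client.submit_job(
    jobName="example-job",
    jobQueue="fair-share-queue",        # a queue with a fair-share scheduling policy
    jobDefinition="example-job-def:1",
    shareIdentifier="team-a",           # required by such queues, not exposed by the operator today
)
```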
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28680 | https://github.com/apache/airflow/pull/30829 | bd542fdf51ad9550e5c4348f11e70b5a6c9adb48 | 612676b975a2ff26541bb2581fbdf2befc6c3de9 | "2023-01-02T14:47:23Z" | python | "2023-04-28T22:04:16Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,670 | ["airflow/providers/telegram/CHANGELOG.rst", "airflow/providers/telegram/hooks/telegram.py", "airflow/providers/telegram/provider.yaml", "docs/spelling_wordlist.txt", "generated/provider_dependencies.json", "tests/providers/telegram/hooks/test_telegram.py"] | Support telegram-bot v20+ | ### Body
Currently our Telegram integration uses the v13 python-telegram-bot library. On 1 January 2023 a new, backwards-incompatible version of python-telegram-bot was released: https://pypi.org/project/python-telegram-bot/20.0/#history and, at least as reported by MyPy and our test-suite failures, telegram-bot v20 needs some changes to work.
A transition guide that might be helpful is here: https://github.com/python-telegram-bot/python-telegram-bot/wiki/Transition-guide-to-Version-20.0
In the meantime we limit python-telegram-bot to < 20.0.0.
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/28670 | https://github.com/apache/airflow/pull/28953 | 68412e166414cbf6228385e1e118ec0939857496 | 644cea14fff74d34f823b5c52c9dbf5bad33bd52 | "2023-01-02T06:58:45Z" | python | "2023-02-23T03:24:13Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,662 | ["airflow/providers/apache/beam/operators/beam.py"] | BeamRunGoPipelineOperator: temp dir with Go file from GCS is removed before starting the pipeline | ### Apache Airflow Provider(s)
apache-beam
### Versions of Apache Airflow Providers
apache-airflow-providers-apache-beam==4.1.0
apache-airflow-providers-google==8.6.0
### Apache Airflow version
2.5.0
### Operating System
macOS 13.1
### Deployment
Virtualenv installation
### Deployment details
_No response_
### What happened
When using the `BeamRunGoPipelineOperator` with a `go_file` on GCS, the object is downloaded to a temporary directory; however, the directory with the file has already been removed by the time it is needed, i.e. when executing `go mod init` and starting the pipeline.
### What you think should happen instead
The `BeamRunGoPipelineOperator.execute` method enters into a `tempfile.TemporaryDirectory` context manager using [with](https://github.com/apache/airflow/blob/2.5.0/airflow/providers/apache/beam/operators/beam.py#L588) when downloading the `go_file` from GCS to the local filesystem. On completion of the context, this temporary directory is removed. `BeamHook.start_go_pipeline`, which uses the file, is called outside of the context however, which means the file no longer exists when `go mod init` is called.
A suggested solution is to use the `enter_context` method of the existing `ExitStack` to also enter into the TemporaryDirectory context manager. This allows the go_file to still exist when it is time to initialize the go module and start the pipeline:
```python
with ExitStack() as exit_stack:
    if self.go_file.lower().startswith("gs://"):
        gcs_hook = GCSHook(self.gcp_conn_id, self.delegate_to)
        tmp_dir = exit_stack.enter_context(tempfile.TemporaryDirectory(prefix="apache-beam-go"))
        tmp_gcs_file = exit_stack.enter_context(
            gcs_hook.provide_file(object_url=self.go_file, dir=tmp_dir)
        )
        self.go_file = tmp_gcs_file.name
        self.should_init_go_module = True
```
### How to reproduce
The problem can be reproduced by creating a DAG which uses the `BeamRunGoPipelineOperator` and passing a `go_file` with a GS URI:
```python
import pendulum
from airflow import DAG
from airflow.providers.apache.beam.operators.beam import BeamRunGoPipelineOperator
with DAG(
    dag_id="beam_go_dag",
    start_date=pendulum.today("UTC"),
) as dag:
    BeamRunGoPipelineOperator(
        task_id="beam_go_pipeline",
        go_file="gs://my-bucket/main.go"
    )
```
### Anything else
Relevant logs:
```
[2023-01-01T12:41:06.155+0100] {taskinstance.py:1303} INFO - Executing <Task(BeamRunGoPipelineOperator): beam_go_pipeline> on 2023-01-01 00:00:00+00:00
[2023-01-01T12:41:06.411+0100] {taskinstance.py:1510} INFO - Exporting the following env vars:
AIRFLOW_CTX_DAG_OWNER=airflow
AIRFLOW_CTX_DAG_ID=beam_go_dag
AIRFLOW_CTX_TASK_ID=beam_go_pipeline
AIRFLOW_CTX_EXECUTION_DATE=2023-01-01T00:00:00+00:00
AIRFLOW_CTX_TRY_NUMBER=1
AIRFLOW_CTX_DAG_RUN_ID=backfill__2023-01-01T00:00:00+00:00
[2023-01-01T12:41:06.430+0100] {base.py:73} INFO - Using connection ID 'google_cloud_default' for task execution.
[2023-01-01T12:41:06.441+0100] {credentials_provider.py:323} INFO - Getting connection using `google.auth.default()` since no key file is defined for hook.
[2023-01-01T12:41:08.701+0100] {gcs.py:323} INFO - File downloaded to /var/folders/1_/7h5npt456j5f063tq7ngyxdw0000gn/T/apache-beam-gosmk3lv_4/tmp6j9g5090main.go
[2023-01-01T12:41:08.704+0100] {process_utils.py:179} INFO - Executing cmd: go mod init main
[2023-01-01T12:41:08.712+0100] {taskinstance.py:1782} ERROR - Task failed with exception
Traceback (most recent call last):
File "/Users/johannaojeling/repo/johannaojeling/airflow/airflow/providers/google/cloud/hooks/gcs.py", line 402, in provide_file
yield tmp_file
File "/Users/johannaojeling/repo/johannaojeling/airflow/airflow/providers/apache/beam/operators/beam.py", line 621, in execute
self.beam_hook.start_go_pipeline(
File "/Users/johannaojeling/repo/johannaojeling/airflow/airflow/providers/apache/beam/hooks/beam.py", line 339, in start_go_pipeline
init_module("main", working_directory)
File "/Users/johannaojeling/repo/johannaojeling/airflow/airflow/providers/google/go_module_utils.py", line 37, in init_module
execute_in_subprocess(go_mod_init_cmd, cwd=go_module_path)
File "/Users/johannaojeling/repo/johannaojeling/airflow/airflow/utils/process_utils.py", line 168, in execute_in_subprocess
execute_in_subprocess_with_kwargs(cmd, cwd=cwd)
File "/Users/johannaojeling/repo/johannaojeling/airflow/airflow/utils/process_utils.py", line 180, in execute_in_subprocess_with_kwargs
with subprocess.Popen(
File "/Users/johannaojeling/.pyenv/versions/3.10.6/lib/python3.10/subprocess.py", line 969, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "/Users/johannaojeling/.pyenv/versions/3.10.6/lib/python3.10/subprocess.py", line 1845, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: '/var/folders/1_/7h5npt456j5f063tq7ngyxdw0000gn/T/apache-beam-gosmk3lv_4'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/johannaojeling/repo/johannaojeling/airflow/airflow/providers/apache/beam/operators/beam.py", line 584, in execute
with ExitStack() as exit_stack:
File "/Users/johannaojeling/.pyenv/versions/3.10.6/lib/python3.10/contextlib.py", line 576, in __exit__
raise exc_details[1]
File "/Users/johannaojeling/.pyenv/versions/3.10.6/lib/python3.10/contextlib.py", line 561, in __exit__
if cb(*exc_details):
File "/Users/johannaojeling/.pyenv/versions/3.10.6/lib/python3.10/contextlib.py", line 153, in __exit__
self.gen.throw(typ, value, traceback)
File "/Users/johannaojeling/repo/johannaojeling/airflow/airflow/providers/google/cloud/hooks/gcs.py", line 399, in provide_file
with NamedTemporaryFile(suffix=file_name, dir=dir) as tmp_file:
File "/Users/johannaojeling/.pyenv/versions/3.10.6/lib/python3.10/tempfile.py", line 502, in __exit__
self.close()
File "/Users/johannaojeling/.pyenv/versions/3.10.6/lib/python3.10/tempfile.py", line 509, in close
self._closer.close()
File "/Users/johannaojeling/.pyenv/versions/3.10.6/lib/python3.10/tempfile.py", line 446, in close
unlink(self.name)
FileNotFoundError: [Errno 2] No such file or directory: '/var/folders/1_/7h5npt456j5f063tq7ngyxdw0000gn/T/apache-beam-gosmk3lv_4/tmp6j9g5090main.go'
[2023-01-01T12:41:08.829+0100] {taskinstance.py:1321} INFO - Marking task as FAILED. dag_id=beam_go_dag, task_id=beam_go_pipeline, execution_date=20230101T000000, start_date=20230101T114106, end_date=20230101T114108
[...]
```
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28662 | https://github.com/apache/airflow/pull/28664 | 675af73ceb5bc8b03d46a7cd903a73f9b8faba6f | 8da678ccd2e5a30f9c2d22c7526b7a238c185d2f | "2023-01-01T15:27:59Z" | python | "2023-01-03T09:08:09Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,658 | ["tests/jobs/test_local_task_job.py"] | Fix Quarantine tests | ### Body
We have several tests marked in the code base with `@pytest.mark.quarantined`
It means that the tests are flaky and if fail in CI it does not fail the build.
The goal is to fix the tests and make them stable.
This task is to gather all of them under the same issue instead of dedicated issue per test.
- [x] [TestImpersonation](https://github.com/apache/airflow/blob/bfcae349b88fd959e32bfacd027a5be976fe2132/tests/core/test_impersonation_tests.py#L117)
- [x] [TestImpersonationWithCustomPythonPath](https://github.com/apache/airflow/blob/bfcae349b88fd959e32bfacd027a5be976fe2132/tests/core/test_impersonation_tests.py#L181)
- [x] [test_exception_propagation](https://github.com/apache/airflow/blob/76f81cd4a7433b7eeddb863b2ae6ee59176cf816/tests/jobs/test_local_task_job.py#L772)
- [x] [test_localtaskjob_maintain_heart_rate](https://github.com/apache/airflow/blob/76f81cd4a7433b7eeddb863b2ae6ee59176cf816/tests/jobs/test_local_task_job.py#L402)
- [x] [test_exception_propagation](https://github.com/apache/airflow/blob/4d0fd8ef6adc35f683c7561f05688a65fd7451f4/tests/executors/test_celery_executor.py#L103)
- [x] [test_process_sigterm_works_with_retries](https://github.com/apache/airflow/blob/65010fda091242870a410c65478eae362899763b/tests/jobs/test_local_task_job.py#L770)
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/28658 | https://github.com/apache/airflow/pull/29087 | 90ce88bf34b2337f89eed67e41092f53bf24e9c1 | a6e21bc6ce428eadf44f62b05aeea7bbd3447a7b | "2022-12-31T15:37:37Z" | python | "2023-01-25T22:49:37Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,637 | ["docs/helm-chart/index.rst"] | version 2.4.1 migration job "run-airflow-migrations" run once only when deploy via helm or flux/kustomization | ### Official Helm Chart version
1.7.0 (latest released)
### Apache Airflow version
2.4.1
### Kubernetes Version
v4.5.4
### Helm Chart configuration
_No response_
### Docker Image customisations
_No response_
### What happened
manually copied from [the Q & A 27992 migration job](https://github.com/apache/airflow/discussions/27992) (the button create issue from discussion did not work)
I found my migration job would not run a second time (the first run, when the default Airflow was deployed onto Kubernetes, had no issues). I then started to apply changes to the values.yaml file, such as **switching the database to Azure PostgreSQL**, but the new values did not take effect; see the screenshots.
Of course my Kubernetes debugging skills are not strong, so I would need extra help if extra info is needed.
![image](https://user-images.githubusercontent.com/11322886/209687297-7d83e4aa-9096-467e-851a-2557928da2b6.png)
![image](https://user-images.githubusercontent.com/11322886/209687323-fc853fcc-438c-4bea-8182-793dac722cae.png)
![image](https://user-images.githubusercontent.com/11322886/209687349-5c043188-3393-49b2-a73f-a997e55d6c3c.png)
```
database:
sql_alchemy_conn_secret: airflow-postgres-redis
sql_alchemy_connect_args:
{
"keepalives": 1,
"keepalives_idle": 30,
"keepalives_interval": 5,
"keepalives_count": 5,
}
postgresql:
enabled: false
pgbouncer:
enabled: false
# Airflow database & redis config
data:
metadataSecretName: airflow-postgres-redis
```
Checking the pod again, it is still waiting for the migrations:
![image](https://user-images.githubusercontent.com/11322886/209689640-fdeed08d-19b3-43d5-a736-466cf36237ba.png)
And below was the first success, at the initial installation (which did not use an external DB):
```
kubectl describe job airflow-airflow-run-airflow-migrations
Name: airflow-airflow-run-airflow-migrations
Namespace: airflow
Selector: controller-uid=efdc3c7b-5172-4841-abcf-17e055fa6e2e
Labels: app.kubernetes.io/managed-by=Helm
chart=airflow-1.7.0
component=run-airflow-migrations
helm.toolkit.fluxcd.io/name=airflow
helm.toolkit.fluxcd.io/namespace=airflow
heritage=Helm
release=airflow-airflow
tier=airflow
Annotations: batch.kubernetes.io/job-tracking:
meta.helm.sh/release-name: airflow-airflow
meta.helm.sh/release-namespace: airflow
Parallelism: 1
Completions: 1
Completion Mode: NonIndexed
Start Time: Tue, 27 Dec 2022 14:21:50 +0100
Completed At: Tue, 27 Dec 2022 14:22:29 +0100
Duration: 39s
Pods Statuses: 0 Active (0 Ready) / 1 Succeeded / 0 Failed
Pod Template:
Labels: component=run-airflow-migrations
controller-uid=efdc3c7b-5172-4841-abcf-17e055fa6e2e
job-name=airflow-airflow-run-airflow-migrations
release=airflow-airflow
tier=airflow
Service Account: airflow-airflow-migrate-database-job
Containers:
run-airflow-migrations:
Image: apache/airflow:2.4.1
Port: <none>
Host Port: <none>
Args:
bash
-c
exec \
airflow db upgrade
Environment:
PYTHONUNBUFFERED: 1
AIRFLOW__CORE__FERNET_KEY: <set to the key 'fernet-key' in secret 'airflow-airflow-fernet-key'> Optional: false
AIRFLOW__CORE__SQL_ALCHEMY_CONN: <set to the key 'connection' in secret 'airflow-airflow-airflow-metadata'> Optional: false
AIRFLOW__DATABASE__SQL_ALCHEMY_CONN: <set to the key 'connection' in secret 'airflow-airflow-airflow-metadata'> Optional: false
AIRFLOW_CONN_AIRFLOW_DB: <set to the key 'connection' in secret 'airflow-airflow-airflow-metadata'> Optional: false
AIRFLOW__WEBSERVER__SECRET_KEY: <set to the key 'webserver-secret-key' in secret 'airflow-airflow-webserver-secret-key'> Optional: false
AIRFLOW__CELERY__BROKER_URL: <set to the key 'connection' in secret 'airflow-airflow-broker-url'> Optional: false
Mounts:
/opt/airflow/airflow.cfg from config (ro,path="airflow.cfg")
Volumes:
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: airflow-airflow-airflow-config
Optional: false
Events: <none>
```
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Anything else
My further experiments tell me the jobs only ever run once.
More independent tests could be done with a bit of help, such as determining what kind of changes will trigger the migration job to run.
See the Helm release history below: the first installation worked, and I could not make the third release succeed even though the values are 100% correct. So **the short description of the bug/issue is: HelmRelease in combination with `flux` has issues with the DB migration jobs (they only run once, even if that run is successful), which makes it a blocker for further upgrades.**
```
REVISION UPDATED STATUS CHART APP VERSION DESCRIPTION
1 Wed Dec 28 02:22:42 2022 superseded airflow-1.7.0 2.4.1 Install complete
2 Wed Dec 28 02:43:25 2022 deployed airflow-1.7.0 2.4.1 Upgrade complete
```
See below the equivalent values; even trying to disable the DB migration did not make Flux work with it.
```
createUserJob:
useHelmHooks: false
migrateDatabaseJob:
useHelmHooks: false
config:
webserver:
expose_config: 'non-sensitive-only'
postgresql:
enabled: false
pgbouncer:
enabled: true
# The maximum number of connections to PgBouncer
maxClientConn: 100
# The maximum number of server connections to the metadata database from PgBouncer
metadataPoolSize: 10
# The maximum number of server connections to the result backend database from PgBouncer
resultBackendPoolSize: 5
# Airflow database & redis config
data:
metadataSecretName: airflow-postgres-redis
# to generate strong secret: python3 -c 'import secrets; print(secrets.token_hex(16))'
webserverSecretKeySecretName: airflow-webserver-secret
```
And see below the two jobs:
```
$ kubectl describe job -n airflow
Name: airflow-airflow-create-user
Namespace: airflow
Selector: controller-uid=8b09e28b-ba3a-4cee-b20f-693a3aa15363
Labels: app.kubernetes.io/managed-by=Helm
chart=airflow-1.7.0
component=create-user-job
helm.toolkit.fluxcd.io/name=airflow
helm.toolkit.fluxcd.io/namespace=airflow
heritage=Helm
release=airflow-airflow
tier=airflow
Annotations: batch.kubernetes.io/job-tracking:
meta.helm.sh/release-name: airflow-airflow
meta.helm.sh/release-namespace: airflow
Parallelism: 1
Completions: 1
Completion Mode: NonIndexed
Start Time: Wed, 28 Dec 2022 03:22:46 +0100
Completed At: Wed, 28 Dec 2022 03:24:32 +0100
Duration: 106s
Pods Statuses: 0 Active (0 Ready) / 1 Succeeded / 0 Failed
Pod Template:
Labels: component=create-user-job
controller-uid=8b09e28b-ba3a-4cee-b20f-693a3aa15363
job-name=airflow-airflow-create-user
release=airflow-airflow
tier=airflow
Service Account: airflow-airflow-create-user-job
Containers:
create-user:
Image: apache/airflow:2.4.1
Port: <none>
Host Port: <none>
Args:
bash
-c
exec \
airflow users create "$@"
--
-r
Admin
-u
admin
-e
[email protected]
-f
admin
-l
user
-p
admin
Environment:
AIRFLOW__CORE__FERNET_KEY: <set to the key 'fernet-key' in secret 'airflow-airflow-fernet-key'> Optional: false
AIRFLOW__CORE__SQL_ALCHEMY_CONN: <set to the key 'connection' in secret 'airflow-postgres-redis'> Optional: false
AIRFLOW__DATABASE__SQL_ALCHEMY_CONN: <set to the key 'connection' in secret 'airflow-postgres-redis'> Optional: false
AIRFLOW_CONN_AIRFLOW_DB: <set to the key 'connection' in secret 'airflow-postgres-redis'> Optional: false
AIRFLOW__WEBSERVER__SECRET_KEY: <set to the key 'webserver-secret-key' in secret 'airflow-airflow-webserver-secret-key'> Optional: false
AIRFLOW__CELERY__BROKER_URL: <set to the key 'connection' in secret 'airflow-airflow-broker-url'> Optional: false
Mounts:
/opt/airflow/airflow.cfg from config (ro,path="airflow.cfg")
Volumes:
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: airflow-airflow-airflow-config
Optional: false
Events: <none>
Name: airflow-airflow-run-airflow-migrations
Namespace: airflow
Selector: controller-uid=5da8c81f-7920-4eaf-9d7a-58a48c740bdc
Labels: app.kubernetes.io/managed-by=Helm
chart=airflow-1.7.0
component=run-airflow-migrations
helm.toolkit.fluxcd.io/name=airflow
helm.toolkit.fluxcd.io/namespace=airflow
heritage=Helm
release=airflow-airflow
tier=airflow
Annotations: batch.kubernetes.io/job-tracking:
meta.helm.sh/release-name: airflow-airflow
meta.helm.sh/release-namespace: airflow
Parallelism: 1
Completions: 1
Completion Mode: NonIndexed
Start Time: Wed, 28 Dec 2022 03:22:46 +0100
Completed At: Wed, 28 Dec 2022 03:23:07 +0100
Duration: 21s
Pods Statuses: 0 Active (0 Ready) / 1 Succeeded / 0 Failed
Pod Template:
Labels: component=run-airflow-migrations
controller-uid=5da8c81f-7920-4eaf-9d7a-58a48c740bdc
job-name=airflow-airflow-run-airflow-migrations
release=airflow-airflow
tier=airflow
Service Account: airflow-airflow-migrate-database-job
Containers:
run-airflow-migrations:
Image: apache/airflow:2.4.1
Port: <none>
Host Port: <none>
Args:
bash
-c
exec \
airflow db upgrade
Environment:
PYTHONUNBUFFERED: 1
AIRFLOW__CORE__FERNET_KEY: <set to the key 'fernet-key' in secret 'airflow-airflow-fernet-key'> Optional: false
AIRFLOW__CORE__SQL_ALCHEMY_CONN: <set to the key 'connection' in secret 'airflow-postgres-redis'> Optional: false
AIRFLOW__DATABASE__SQL_ALCHEMY_CONN: <set to the key 'connection' in secret 'airflow-postgres-redis'> Optional: false
AIRFLOW_CONN_AIRFLOW_DB: <set to the key 'connection' in secret 'airflow-postgres-redis'> Optional: false
AIRFLOW__WEBSERVER__SECRET_KEY: <set to the key 'webserver-secret-key' in secret 'airflow-airflow-webserver-secret-key'> Optional: false
AIRFLOW__CELERY__BROKER_URL: <set to the key 'connection' in secret 'airflow-airflow-broker-url'> Optional: false
Mounts:
/opt/airflow/airflow.cfg from config (ro,path="airflow.cfg")
Volumes:
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: airflow-airflow-airflow-config
Optional: false
Events: <none>
```
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28637 | https://github.com/apache/airflow/pull/29078 | 30ad26e705f50442f05dd579990372196323fc86 | 6c479437b1aedf74d029463bda56b42950278287 | "2022-12-29T10:27:55Z" | python | "2023-01-27T20:58:56Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,615 | ["airflow/dag_processing/processor.py", "airflow/models/dagbag.py", "tests/models/test_dagbag.py"] | AIP-44 Migrate Dagbag.sync_to_db to internal API. | This method is used in DagFileProcessor.process_file - it may be easier to migrate all it's internal calls instead of the whole method. | https://github.com/apache/airflow/issues/28615 | https://github.com/apache/airflow/pull/29188 | 05242e95bbfbaf153e4ae971fc0d0a5314d5bdb8 | 5c15b23023be59a87355c41ab23a46315cca21a5 | "2022-12-27T20:09:25Z" | python | "2023-03-12T10:02:57Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,614 | ["airflow/api_internal/endpoints/rpc_api_endpoint.py", "airflow/api_internal/internal_api_call.py", "airflow/models/dag.py", "tests/api_internal/test_internal_api_call.py"] | AIP-44 Migrate DagModel.get_paused_dag_ids to Internal API | null | https://github.com/apache/airflow/issues/28614 | https://github.com/apache/airflow/pull/28693 | f114c67c03a9b4257cc98bb8a970c6aed8d0c673 | ad738198545431c1d10619f8e924d082bf6a3c75 | "2022-12-27T20:09:14Z" | python | "2023-01-20T19:08:18Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,613 | ["airflow/api_internal/endpoints/rpc_api_endpoint.py", "airflow/models/trigger.py"] | AIP-44 Migrate Trigger class to Internal API | null | https://github.com/apache/airflow/issues/28613 | https://github.com/apache/airflow/pull/29099 | 69babdcf7449c95fea7fe3b9055c677b92a74298 | ee0a56a2caef0ccfb42406afe57b9d2169c13a01 | "2022-12-27T20:09:03Z" | python | "2023-02-20T21:26:11Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,612 | ["airflow/api_internal/endpoints/rpc_api_endpoint.py", "airflow/models/xcom.py"] | AIP-44 Migrate XCom get*/clear* to Internal API | null | https://github.com/apache/airflow/issues/28612 | https://github.com/apache/airflow/pull/29083 | 9bc48747ddbd609c2bd3baa54a5d0472e9fdcbe4 | a1ffb26e5bcf4547e3b9e494cf7ccd24af30c2e6 | "2022-12-27T20:08:50Z" | python | "2023-01-22T19:19:01Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,510 | [".pre-commit-config.yaml", "STATIC_CODE_CHECKS.rst", "airflow/cli/commands/info_command.py", "scripts/ci/pre_commit/pre_commit_check_provider_yaml_files.py", "scripts/in_container/run_provider_yaml_files_check.py"] | Add pre-commit/test to verify extra links refer to existed classes | ### Body
We had an issue where an extra-link class (`AIPlatformConsoleLink`) was removed in a [PR](https://github.com/apache/airflow/pull/26836) without removing the class from the `provider.yaml` extra links; this resulted in a webserver exception, as shown in https://github.com/apache/airflow/pull/28449
**The Task:**
Add validation that classes of extra-links in provider.yaml are importable
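A minimal sketch of the requested check (assuming the provider.yaml has already been parsed into a dict; `import_string` is the existing Airflow helper, which raises `ImportError` for missing classes):
```python
# Minimal sketch of the requested validation; provider_yaml is assumed to be the parsed provider.yaml.
from airflow.utils.module_loading import import_string


def validate_extra_links(provider_yaml: dict) -> None:
    for class_path in provider_yaml.get("extra-links", []):
        import_string(class_path)  # raises ImportError if the class does not exist
```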
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/28510 | https://github.com/apache/airflow/pull/28516 | 7ccbe4e7eaa529641052779a89e34d54c5a20f72 | e47c472e632effbfe3ddc784788a956c4ca44122 | "2022-12-20T22:35:11Z" | python | "2022-12-22T02:25:08Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,483 | ["airflow/www/static/css/main.css"] | Issues with Custom Menu Items on Smaller Windows | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
We take advantage of the custom menu items that Flask AppBuilder offers to provide a variety of dropdown menus with custom DAG filters. We've noticed two things:
1. When you have too many dropdown menu items in a single category, several menu items are unreachable when using the Airflow UI on a small screen:
<img width="335" alt="Screenshot 2022-12-19 at 6 34 24 PM" src="https://user-images.githubusercontent.com/40223998/208548419-f9d1ff57-6cad-4a40-bc58-dbf20148a92a.png">
2. When you have too many menu categories, multiple rows of dropdown menus are displayed, but they cover some other components.
<img width="1077" alt="Screenshot 2022-12-19 at 6 32 05 PM" src="https://user-images.githubusercontent.com/40223998/208548222-44e50717-9040-4899-be06-d503a8c0f69a.png">
### What you think should happen instead
1. When you have too many dropdown menu items in a single category, there should be a scrollbar.
2. When you have too many menu categories and multiple rows of dropdown menus are displayed, the menu shouldn't cover the DAG import errors or any other part of the UI.
### How to reproduce
1. Add a bunch of menu items under the same category in a custom plugin and resize your window smaller
2. Add a large number of menu item categories in a custom plugin and resize your window smaller.
### Operating System
Debian GNU/Linux 10 (buster)
### Versions of Apache Airflow Providers
2.4.3
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
I'm happy to make a PR for this; I just don't have the frontend context. If someone can point me in the right direction, that'd be great.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28483 | https://github.com/apache/airflow/pull/28561 | ea3be1a602b3e109169c6e90e555a418e2649f9a | 2aa52f4ce78e1be7f34b0995d40be996b4826f26 | "2022-12-19T23:40:01Z" | python | "2022-12-30T01:50:45Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,468 | ["airflow/providers/amazon/CHANGELOG.rst", "airflow/providers/amazon/aws/transfers/sql_to_s3.py", "airflow/providers/amazon/provider.yaml", "airflow/providers/apache/hive/hooks/hive.py", "generated/provider_dependencies.json"] | Make pandas an optional dependency for amazon provider | ### Apache Airflow Provider(s)
amazon
### Versions of Apache Airflow Providers
_No response_
### Apache Airflow version
latest
### Operating System
any
### Deployment
Other
### Deployment details
_No response_
### What happened
First of all, apologies if this is not the right section to post a GitHub issue; I looked for provider-specific feature requests but couldn't find such a section.
We use the AWS provider at my company to interact with AWS services from Airflow. We are using Poetry to build the testing environment for our DAGs.
However, the build times are quite long, and the reason is building pandas, which is a [dependency](https://github.com/apache/airflow/blob/main/airflow/providers/amazon/provider.yaml#L62) of the Amazon provider.
By checking the provider's code, it seems pandas is used in a small minority of functions inside the provider:
```
./aws/transfers/hive_to_dynamodb.py:93: data = hive.get_pandas_df(self.sql, schema=self.schema)
```
and
```
./aws/transfers/sql_to_s3.py:159: data_df = sql_hook.get_pandas_df(sql=self.query, parameters=self.parameters)
```
Forcing every Airflow AWS user who does not use Hive or does not need to turn SQL results into an S3 file to install pandas is a bit cumbersome.
### What you think should happen instead
Given how heavy the package is and how little of it is used in the Amazon provider, pandas should be an optional dependency.
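A minimal sketch of one common way to achieve this (an assumption about the approach, not the provider's actual implementation): move the import into the code paths that need it and fail with a helpful message otherwise.
```python
# Hedged sketch of a lazy import guard; the provider may solve this differently (e.g. via an extra).
def _require_pandas():
    try:
        import pandas as pd
    except ImportError as exc:
        raise ImportError(
            "pandas is required for this transfer operator; please install it separately."
        ) from exc
    return pd
```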
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28468 | https://github.com/apache/airflow/pull/28505 | bc7feda66ed7bb2f2940fa90ef26ff90dd7a8c80 | d9ae90fc6478133767e29774920ed797175146bc | "2022-12-19T15:58:50Z" | python | "2022-12-21T08:59:52Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,465 | ["airflow/providers/jenkins/hooks/jenkins.py", "docs/apache-airflow-providers-jenkins/connections.rst", "tests/providers/jenkins/hooks/test_jenkins.py"] | Airflow 2.2.4 Jenkins Connection - unable to set as the hook expects to be | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
Hello team,
I am trying to use the `JenkinsJobTriggerOperator` v3.1.0 on an Airflow instance version 2.2.4.
Checking the documentation regarding how to set up the connection and the hook in order to use `https` instead of the default `http`, I see https://airflow.apache.org/docs/apache-airflow-providers-jenkins/3.1.0/connections.html
```
Extras (optional)
Specify whether you want to use http or https scheme by entering true to use https or false for http in extras. Default is http.
```
Unfortunately, in the Airflow UI the `Extra` field of a connection only accepts a JSON-like object, so anything other than a dictionary fails to update the extra options for that connection.
Checking in more detail what the [Jenkins hook](https://airflow.apache.org/docs/apache-airflow-providers-jenkins/3.1.0/_modules/airflow/providers/jenkins/hooks/jenkins.html#JenkinsHook.conn_name_attr) does:
```
self.connection = connection
connection_prefix = "http"
# connection.extra contains info about using https (true) or http (false)
if to_boolean(connection.extra):
    connection_prefix = "https"
url = f"{connection_prefix}://{connection.host}:{connection.port}"
```
where the `connection.extra` cannot be a simple true/false string!
### What you think should happen instead
Either we should get `http` or `https` from the `Schema` field,
or we should update the [JenkinsHook](https://airflow.apache.org/docs/apache-airflow-providers-jenkins/stable/_modules/airflow/providers/jenkins/hooks/jenkins.html#JenkinsHook.default_conn_name) to read the https flag from the provided dictionary, e.g.:
`if to_boolean(connection.extra.https)`
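A hedged sketch of what that second option could look like inside the hook, reading the parsed extras via `Connection.extra_dejson` (an existing property); the `use_https` key name is an assumption:
```python
# Hedged sketch, not the actual fix: read the https flag from the extras dictionary.
from airflow.utils.strings import to_boolean  # same helper the hook already uses

extras = connection.extra_dejson or {}  # `connection` is the hook's Connection object
use_https = to_boolean(str(extras.get("use_https", False)))
connection_prefix = "https" if use_https else "http"
url = f"{connection_prefix}://{connection.host}:{connection.port}"
```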
### How to reproduce
_No response_
### Operating System
macos Monterey 12.6.2
### Versions of Apache Airflow Providers
```
pip freeze | grep apache-airflow-providers
apache-airflow-providers-celery==2.1.0
apache-airflow-providers-common-sql==1.3.1
apache-airflow-providers-ftp==2.0.1
apache-airflow-providers-http==2.0.3
apache-airflow-providers-imap==2.2.0
apache-airflow-providers-jenkins==3.1.0
apache-airflow-providers-sqlite==2.1.0
```
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28465 | https://github.com/apache/airflow/pull/30301 | f7d5b165fcb8983bd82a852dcc5088b4b7d26a91 | 1f8bf783b89d440ecb3e6db536c63ff324d9fc62 | "2022-12-19T14:43:00Z" | python | "2023-03-25T19:37:53Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,452 | ["airflow/providers/docker/operators/docker_swarm.py", "tests/providers/docker/operators/test_docker_swarm.py"] | TaskInstances do not succeed when using enable_logging=True option in DockerSwarmOperator | ### Apache Airflow Provider(s)
docker
### Versions of Apache Airflow Providers
apache-airflow-providers-celery==3.1.0
apache-airflow-providers-docker==3.3.0
### Apache Airflow version
2.5.0
### Operating System
centos 7
### Deployment
Other Docker-based deployment
### Deployment details
Running an a docker-swarm cluster deployed locally.
### What happened
Same issue as https://github.com/apache/airflow/issues/13675
With enable_logging=True the DAG never completes and stays in the running state.
When using DockerSwarmOperator together with the default enable_logging=True option, tasks do not succeed and stay in the running state. When checking the Docker service logs I can clearly see that the container ran and ended successfully. Airflow, however, does not recognize that the container finished and keeps the task in the running state.
### What you think should happen instead
DAG should complete.
### How to reproduce
Docker-compose deployment:
```console
curl -LfO 'https://airflow.apache.org/docs/apache-airflow/2.5.0/docker-compose.yaml'
docker compose up airflow-init
docker compose up -d
```
DAG code:
```python
from airflow import DAG
from docker.types import Mount, SecretReference
from airflow.providers.docker.operators.docker_swarm import DockerSwarmOperator
from datetime import timedelta
from airflow.utils.dates import days_ago
from airflow.models import Variable
# Setup default args for the job
default_args = {
    'owner': 'airflow',
    'start_date': days_ago(2),
    'retries': 0
}

# Create the DAG
dag = DAG(
    'test_dag',  # DAG ID
    default_args=default_args,
    schedule_interval='0 0 * * *',
    catchup=False
)

# # Create the DAG object
with dag as dag:
    docker_swarm_task = DockerSwarmOperator(
        task_id="job_run",
        image="<any image>",
        execution_timeout=timedelta(minutes=5),
        command="<specific code>",
        api_version='auto',
        tty=True,
        enable_logging=True
    )
```
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28452 | https://github.com/apache/airflow/pull/35677 | 3bb5978e63f3be21a5bb7ae89e7e3ce9d06a4ab8 | 882108862dcaf08e7f5da519b3d186048d4ec7f9 | "2022-12-19T03:51:53Z" | python | "2023-12-06T22:07:43Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,441 | ["airflow/providers/google/cloud/transfers/gcs_to_bigquery.py", "tests/providers/google/cloud/transfers/test_gcs_to_bigquery.py"] | GCSToBigQueryOperator fails when schema_object is specified without schema_fields | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
apache-airflow 2.5.0
apache-airflow-providers-apache-beam 4.1.0
apache-airflow-providers-cncf-kubernetes 5.0.0
apache-airflow-providers-google 8.6.0
apache-airflow-providers-grpc 3.1.0
### Apache Airflow version
2.5.0
### Operating System
Debian 11
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
KubernetesExecutor
### What happened
GCSToBigQueryOperator allows multiple ways to specify the schema of the BigQuery table:
1. Setting autodetect == True
1. Setting schema_fields directly with autodetect == False
1. Setting a schema_object and optionally a schema_object_bucket with autodetect == False
This third method seems to be broken in the latest provider version (8.6.0) and will always result in this error:
```
[2022-12-16, 21:06:18 UTC] {taskinstance.py:1772} ERROR - Task failed with exception
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/providers/google/cloud/transfers/gcs_to_bigquery.py", line 395, in execute
self.configuration = self._check_schema_fields(self.configuration)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/providers/google/cloud/transfers/gcs_to_bigquery.py", line 524, in _check_schema_fields
raise RuntimeError(
RuntimeError: Table schema was not found. Set autodetect=True to automatically set schema fields from source objects or pass schema_fields explicitly
```
The reason is that [this block](https://github.com/apache/airflow/blob/25bdbc8e6768712bad6043618242eec9c6632618/airflow/providers/google/cloud/transfers/gcs_to_bigquery.py#L318-L320), guarded by `if self.schema_object and self.source_format != "DATASTORE_BACKUP":`, fails to set self.schema_fields; it only sets the local variable schema_fields. When self._check_schema_fields is subsequently called [here](https://github.com/apache/airflow/blob/25bdbc8e6768712bad6043618242eec9c6632618/airflow/providers/google/cloud/transfers/gcs_to_bigquery.py#L395), we enter the [first block](https://github.com/apache/airflow/blob/25bdbc8e6768712bad6043618242eec9c6632618/airflow/providers/google/cloud/transfers/gcs_to_bigquery.py#L523-L528) because autodetect is False and schema_fields is not set.
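A hedged sketch of the kind of one-line fix this implies inside `execute()` (paraphrased; not the actual provider diff): assign the downloaded schema to the instance attribute instead of a local variable.
```python
# Hedged, paraphrased sketch of the block inside execute(); the key change is assigning to self.schema_fields.
if self.schema_object and self.source_format != "DATASTORE_BACKUP":
    blob = gcs_hook.download(self.schema_object_bucket, self.schema_object)
    self.schema_fields = json.loads(blob.decode("utf-8"))
```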
### What you think should happen instead
No error should be raised if autodetect is set to False and a valid schema_object is provided
### How to reproduce
1. Create a simple BigQuery table with a single column col1:
```sql
CREATE TABLE `my-project.my_dataset.test_gcs_to_bigquery` (col1 INT);
```
2. Upload a JSON schema for this table to a bucket (e.g., data/schemas/table.json)
3. Upload a simple CSV for the source file to load to a bucket (e.g., data/source/file.csv)
4. Run the following command:
```py
gcs_to_bigquery = GCSToBigQueryOperator(
    task_id="gcs_to_bigquery",
    destination_project_dataset_table="my-project.my_dataset.test_gcs_to_bigquery",
    bucket="my_bucket_name",
    create_disposition="CREATE_IF_NEEDED",
    write_disposition="WRITE_TRUNCATE",
    source_objects=["data/source/file.csv"],
    source_format="CSV",
    autodetect=False,
    schema_object="data/schemas/table.json",
)
```
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28441 | https://github.com/apache/airflow/pull/28444 | 032a542feeb617d1f92580b97fa0ad3cdca09d63 | 9eacf607be109eb6ab80f7e27d234a17fb128ae0 | "2022-12-18T13:48:28Z" | python | "2022-12-20T06:14:29Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,393 | ["airflow/providers/google/provider.yaml"] | Webserver reports "ImportError: Module "airflow.providers.google.cloud.operators.mlengine" does not define a "AIPlatformConsoleLink" attribute/class" | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
apache-airflow 2.5.0
apache-airflow-providers-apache-beam 4.1.0
apache-airflow-providers-cncf-kubernetes 5.0.0
apache-airflow-providers-google 8.6.0
apache-airflow-providers-grpc 3.1.0
### Apache Airflow version
2.5.0
### Operating System
Debian 11
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
KubernetesExecutor
### What happened
We are seeing this stacktrace on our webserver when a task is clicked:
```
10.253.8.251 - - [15/Dec/2022:18:32:58 +0000] "GET /object/next_run_datasets/recs_ranking_purchase_ranker_dag HTTP/1.1" 200 2 "https://web.airflow.etsy-syseng-gke-prod.etsycloud.com/dags/recs_ranking_purchase_ranker_dag/code" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36"
raise ImportError(f'Module "{module_path}" does not define a "{class_name}" attribute/class')
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/utils/module_loading.py", line 38, in import_string
imported_class = import_string(class_name)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/providers_manager.py", line 275, in _sanity_check
Traceback (most recent call last):
During handling of the above exception, another exception occurred:
AttributeError: module 'airflow.providers.google.cloud.operators.mlengine' has no attribute 'AIPlatformConsoleLink'
return getattr(module, class_name)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/utils/module_loading.py", line 36, in import_string
Traceback (most recent call last):
[2022-12-15 18:32:58,068] {providers_manager.py:243} WARNING - Exception when importing 'airflow.providers.google.cloud.operators.mlengine.AIPlatformConsoleLink' from 'apache-airflow-providers-google' package
ImportError: Module "airflow.providers.google.cloud.operators.mlengine" does not define a "AIPlatformConsoleLink" attribute/class
```
### What you think should happen instead
These errors should not appear.
### How to reproduce
Start webserver anew, navigate to a dag, click on a task, and tail webserver logs
### Anything else
[This YAML file](https://github.com/apache/airflow/blob/providers-google/8.6.0/airflow/providers/google/provider.yaml#L968) is being utilized as config which then results in the import error here: https://github.com/apache/airflow/blob/providers-google/8.6.0/airflow/providers_manager.py#L885-L891
```
extra-links:
- airflow.providers.google.cloud.operators.bigquery.BigQueryConsoleLink
- airflow.providers.google.cloud.operators.bigquery.BigQueryConsoleIndexableLink
- airflow.providers.google.cloud.operators.mlengine.AIPlatformConsoleLink
```
We should remove `AIPlatformConsoleLink` from the provider's `extra-links`, as that class was removed as of apache-airflow-providers-google 8.5.0.
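As a quick sanity check, here is a small snippet (run in an environment with apache-airflow and the google provider installed; the list simply mirrors the YAML above) that verifies which configured extra-link paths still import cleanly:
```python
from airflow.utils.module_loading import import_string

extra_links = [
    "airflow.providers.google.cloud.operators.bigquery.BigQueryConsoleLink",
    "airflow.providers.google.cloud.operators.bigquery.BigQueryConsoleIndexableLink",
    "airflow.providers.google.cloud.operators.mlengine.AIPlatformConsoleLink",
]

for path in extra_links:
    try:
        import_string(path)
        print(f"OK: {path}")
    except ImportError as exc:  # the stale mlengine entry ends up here on 8.6.0
        print(f"BROKEN: {path} -> {exc}")
```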
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28393 | https://github.com/apache/airflow/pull/28449 | b213f4fd2627bb2a2a4c96fe2845471db430aa5d | 7950fb9711384f8ac4609fc19f319edb17e296ef | "2022-12-15T22:04:26Z" | python | "2022-12-21T05:29:56Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,391 | ["airflow/cli/commands/task_command.py", "airflow/executors/kubernetes_executor.py", "airflow/www/views.py"] | Manual task trigger fails for kubernetes executor with psycopg2 InvalidTextRepresentation error | ### Apache Airflow version
main (development)
### What happened
Manual task trigger fails for kubernetes executor with the following error. Manual trigger of dag works without any issue.
```
[2022-12-15 20:05:38,442] {app.py:1741} ERROR - Exception on /run [POST]
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1900, in _execute_context
self.dialect.do_execute(
File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/engine/default.py", line 736, in do_execute
cursor.execute(statement, parameters)
psycopg2.errors.InvalidTextRepresentation: invalid input syntax for integer: "manual"
LINE 3: ...ate = 'queued' AND task_instance.queued_by_job_id = 'manual'
^
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.10/site-packages/flask/app.py", line 2525, in wsgi_app
response = self.full_dispatch_request()
File "/home/airflow/.local/lib/python3.10/site-packages/flask/app.py", line 1822, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/home/airflow/.local/lib/python3.10/site-packages/flask/app.py", line 1820, in full_dispatch_request
rv = self.dispatch_request()
File "/home/airflow/.local/lib/python3.10/site-packages/flask/app.py", line 1796, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/www/auth.py", line 47, in decorated
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/www/decorators.py", line 125, in wrapper
return f(*args, **kwargs)
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/utils/session.py", line 75, in wrapper
return func(*args, session=session, **kwargs)
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/www/views.py", line 1896, in run
executor.start()
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/executors/kubernetes_executor.py", line 586, in start
self.clear_not_launched_queued_tasks()
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/utils/session.py", line 75, in wrapper
return func(*args, session=session, **kwargs)
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/executors/kubernetes_executor.py", line 510, in clear_not_launched_queued_tasks
queued_tis: list[TaskInstance] = query.all()
File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/orm/query.py", line 2773, in all
return self._iter().all()
File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter
result = self.session.execute(
File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 1714, in execute
result = conn._execute_20(statement, params or {}, execution_options)
File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1705, in _execute_20
return meth(self, args_10style, kwargs_10style, execution_options)
File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/sql/elements.py", line 334, in _execute_on_connection
return connection._execute_clauseelement(
File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1572, in _execute_clauseelement
ret = self._execute_context(
File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1943, in _execute_context
self._handle_dbapi_exception(
File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 2124, in _handle_dbapi_exception
util.raise_(
File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/util/compat.py", line 211, in raise_
raise exception
File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1900, in _execute_context
self.dialect.do_execute(
File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/engine/default.py", line 736, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.DataError: (psycopg2.errors.InvalidTextRepresentation) invalid input syntax for integer: "manual"
LINE 3: ...ate = 'queued' AND task_instance.queued_by_job_id = 'manual'
                                                               ^
```
### What you think should happen instead
It should be possible to trigger the task manually from the UI.
### How to reproduce
Deploy the main branch with the Kubernetes executor and a Postgres database.
### Operating System
ubuntu 20
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
Python version: 3.10.9
Airflow version: 2.6.0.dev0
helm.sh/chart=postgresql-10.5.3
### Anything else
The issue is caused by this check:
https://github.com/apache/airflow/blob/b263dbcb0f84fd9029591d1447a7c843cb970f15/airflow/executors/kubernetes_executor.py#L505-L507
There is a similar check in `celery_executor`, but I believe it is not called at task-instance execution time; also, since it is wrapped in a try/except, the exception there is not visible:
https://github.com/apache/airflow/blob/b263dbcb0f84fd9029591d1447a7c843cb970f15/airflow/executors/celery_executor.py#L394-L412
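A standalone sketch of the guard idea (assumed function and filter names, not the executor's real code): only apply the `queued_by_job_id` filter when the id is an actual integer, since UI-triggered runs pass the literal string `"manual"` while the column is an integer in Postgres.
```python
from __future__ import annotations


def queued_ti_filters(dag_id: str, job_id: int | str | None) -> dict:
    """Build filters for 'queued' task instances, skipping non-integer job ids."""
    filters = {"state": "queued", "dag_id": dag_id}
    if isinstance(job_id, int):  # real scheduler job ids are integers
        filters["queued_by_job_id"] = job_id
    return filters


print(queued_ti_filters("example_dag", 42))        # includes queued_by_job_id
print(queued_ti_filters("example_dag", "manual"))  # filter omitted, no type error
```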
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28391 | https://github.com/apache/airflow/pull/28394 | be0e35321f0bbd7d21c75096cad45dbe20c2359a | 9510043546d1ac8ac56b67bafa537e4b940d68a4 | "2022-12-15T20:37:26Z" | python | "2023-01-24T15:18:45Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,381 | ["Dockerfile.ci", "airflow/www/extensions/init_views.py", "airflow/www/package.json", "airflow/www/templates/swagger-ui/index.j2", "airflow/www/webpack.config.js", "airflow/www/yarn.lock", "setup.cfg"] | CVE-2019-17495 for swagger-ui | ### Apache Airflow version
2.5.0
### What happened
This issue https://github.com/apache/airflow/issues/18383 still isn't closed. It seems that the underlying swagger-ui bundle has been abandoned by its maintainer, and we should instead point the swagger UI bundle to this version, which is kept up to date:
edit : it seems like this might not be coming from the swagger_ui_bundle any more but instead perhaps from connexion. I'm not familiar with python dependencies, so forgive me if I'm mis-reporting this.
There are CVE scanner tools that notifies https://github.com/advisories/GHSA-c427-hjc3-wrfw using the apache/airflow:2.1.4
The python deps include swagger-ui-2.2.10 and swagger-ui-3.30.0 as part of the bundle. It is already included at ~/.local/lib/python3.6/site-packages/swagger_ui_bundle
swagger-ui-2.2.10 swagger-ui-3.30.0
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Operating System
any
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28381 | https://github.com/apache/airflow/pull/28788 | 35a8ffc55af220b16ea345d770f80f698dcae3fb | 35ad16dc0f6b764322b1eb289709e493fbbb0ae0 | "2022-12-15T13:50:45Z" | python | "2023-01-10T10:24:17Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,356 | ["airflow/config_templates/default_webserver_config.py"] | CSRF token should be expire with session | ### Apache Airflow version
2.5.0
### What happened
In the default configuration, the CSRF token [expires in one hour](https://pythonhosted.org/Flask-WTF/config.html#forms-and-csrf). This setting leads to frequent errors in the UI – for no good reason.
### What you think should happen instead
A short expiration date for the CSRF token is not the right value in my view and I [agree with this answer](https://security.stackexchange.com/a/56520/22108) that the CSRF token should basically never expire, instead pegging itself to the current session.
That is, the CSRF token should last as long as the current session. The easiest way to accomplish this is by generating the CSRF token from the session id.
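As a stop-gap, one possible workaround (a sketch for a custom `webserver_config.py`, not an official recommendation) is to let Flask-WTF tie the token's lifetime to the session by disabling the fixed time limit:
```python
# webserver_config.py (excerpt)
WTF_CSRF_ENABLED = True
# With Flask-WTF, a time limit of None keeps the CSRF token valid for the life
# of the session instead of expiring it after one hour.
WTF_CSRF_TIME_LIMIT = None
```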
### How to reproduce
_No response_
### Operating System
Linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28356 | https://github.com/apache/airflow/pull/28730 | 04306f18b0643dfed3ed97863bbcf24dc50a8973 | 543e9a592e6b9dc81467c55169725e192fe95e89 | "2022-12-14T10:21:12Z" | python | "2023-01-10T23:25:29Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,328 | ["airflow/executors/kubernetes_executor.py"] | Scheduler pod hang when K8s API call fail | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
Airflow version: `2.3.4`
I have deployed Airflow with the official Helm chart on K8s with the `KubernetesExecutor`. Sometimes the scheduler hangs when calling the K8s API. The log:
``` bash
ERROR - Exception when executing Executor.end
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/jobs/scheduler_job.py", line 752, in _execute
self._run_scheduler_loop()
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/jobs/scheduler_job.py", line 842, in _run_scheduler_loop
self.executor.heartbeat()
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/executors/base_executor.py", line 171, in heartbeat
self.sync()
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/executors/kubernetes_executor.py", line 649, in sync
next_event = self.event_scheduler.run(blocking=False)
File "/usr/local/lib/python3.8/sched.py", line 151, in run
action(*argument, **kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/event_scheduler.py", line 36, in repeat
action(*args, **kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/executors/kubernetes_executor.py", line 673, in _check_worker_pods_pending_timeout
for pod in pending_pods().items:
File "/home/airflow/.local/lib/python3.8/site-packages/kubernetes/client/api/core_v1_api.py", line 15697, in list_namespaced_pod
return self.list_namespaced_pod_with_http_info(namespace, **kwargs) # noqa: E501
File "/home/airflow/.local/lib/python3.8/site-packages/kubernetes/client/api/core_v1_api.py", line 15812, in list_namespaced_pod_with_http_info
return self.api_client.call_api(
File "/home/airflow/.local/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 348, in call_api
return self.__call_api(resource_path, method,
File "/home/airflow/.local/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 180, in __call_api
response_data = self.request(
File "/home/airflow/.local/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 373, in request
return self.rest_client.GET(url,
File "/home/airflow/.local/lib/python3.8/site-packages/kubernetes/client/rest.py", line 240, in GET
return self.request("GET", url,
File "/home/airflow/.local/lib/python3.8/site-packages/kubernetes/client/rest.py", line 213, in request
r = self.pool_manager.request(method, url,
File "/home/airflow/.local/lib/python3.8/site-packages/urllib3/request.py", line 74, in request
return self.request_encode_url(
File "/home/airflow/.local/lib/python3.8/site-packages/urllib3/request.py", line 96, in request_encode_url
return self.urlopen(method, url, **extra_kw)
File "/home/airflow/.local/lib/python3.8/site-packages/urllib3/poolmanager.py", line 376, in urlopen
response = conn.urlopen(method, u.request_uri, **kw)
File "/home/airflow/.local/lib/python3.8/site-packages/urllib3/connectionpool.py", line 815, in urlopen
return self.urlopen(
File "/home/airflow/.local/lib/python3.8/site-packages/urllib3/connectionpool.py", line 703, in urlopen
httplib_response = self._make_request(
File "/home/airflow/.local/lib/python3.8/site-packages/urllib3/connectionpool.py", line 386, in _make_request
self._validate_conn(conn)
File "/home/airflow/.local/lib/python3.8/site-packages/urllib3/connectionpool.py", line 1042, in _validate_conn
conn.connect()
File "/home/airflow/.local/lib/python3.8/site-packages/urllib3/connection.py", line 358, in connect
self.sock = conn = self._new_conn()
File "/home/airflow/.local/lib/python3.8/site-packages/urllib3/connection.py", line 174, in _new_conn
conn = connection.create_connection(
File "/home/airflow/.local/lib/python3.8/site-packages/urllib3/util/connection.py", line 85, in create_connection
sock.connect(sa)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/jobs/scheduler_job.py", line 182, in _exit_gracefully
sys.exit(os.EX_OK)
SystemExit: 0
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/jobs/scheduler_job.py", line 773, in _execute
self.executor.end()
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/executors/kubernetes_executor.py", line 823, in end
self._flush_task_queue()
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/executors/kubernetes_executor.py", line 776, in _flush_task_queue
self.log.debug('Executor shutting down, task_queue approximate size=%d', self.task_queue.qsize())
File "<string>", line 2, in qsize
File "/usr/local/lib/python3.8/multiprocessing/managers.py", line 835, in _callmethod
kind, result = conn.recv()
File "/usr/local/lib/python3.8/multiprocessing/connection.py", line 250, in recv
buf = self._recv_bytes()
File "/usr/local/lib/python3.8/multiprocessing/connection.py", line 414, in _recv_bytes
buf = self._recv(4)
File "/usr/local/lib/python3.8/multiprocessing/connection.py", line 379, in _recv
chunk = read(handle, remaining)
ConnectionResetError: [Errno 104] Connection reset by peer
```
Then the executor process was killed and the pod was still running. But the scheduler does not work.
After restarting, the scheduler worked usually.
### What you think should happen instead
When the error occurs, the executor needs to auto restart or the scheduler should be killed.
### How to reproduce
_No response_
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28328 | https://github.com/apache/airflow/pull/28685 | 57a889de357b269ae104b721e2a4bb78b929cea9 | a3de721e2f084913e853aff39d04adc00f0b82ea | "2022-12-13T07:49:50Z" | python | "2023-01-03T11:53:52Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,296 | ["airflow/ti_deps/deps/prev_dagrun_dep.py", "tests/models/test_dagrun.py"] | Dynamic task mapping does not correctly handle depends_on_past | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
Using Airflow 2.4.2.
I've got a task that retrieves some filenames, which then creates dynamically mapped tasks to move the files, one per task.
I'm using a similar task across multiple DAGs. However, task mapping fails on some DAG runs: it inconsistently happens per DAG run, and some DAGs do not seem to be affected at all. These seem to be the DAGs where no task was ever mapped, so that the mapped task instance ended up in a Skipped state.
What happens is that multiple files will be found, but only a single dynamically mapped task will be created. This task never starts and has map_index of -1. It can be found under the "List instances, all runs" menu, but says "No Data found." under the "Mapped Tasks" tab.
When I press the "Run" button when the mapped task is selected, the following error appears:
```
Could not queue task instance for execution, dependencies not met: Previous Dagrun State: depends_on_past is true for this task's DAG, but the previous task instance has not run yet., Task has been mapped: The task has yet to be mapped!
```
The previous task *has* run however. No errors appeared in my Airflow logs.
### What you think should happen instead
The appropriate number of task instances should be created; they should correctly resolve the `depends_on_past` check and then proceed to run correctly.
### How to reproduce
This DAG reliably reproduces the error for me. The first set of mapped tasks succeeds, the subsequent ones do not.
```python
from airflow import DAG
from airflow.decorators import task
import datetime as dt
from airflow.operators.python import PythonOperator
@task
def get_filenames_kwargs():
return [
{"file_name": i}
for i in range(10)
]
def print_filename(file_name):
print(file_name)
with DAG(
dag_id="dtm_test",
start_date=dt.datetime(2022, 12, 10),
default_args={
"owner": "airflow",
"depends_on_past": True,
},
schedule="@daily",
) as dag:
get_filenames_task = get_filenames_kwargs.override(task_id="get_filenames_task")()
print_filename_task = PythonOperator.partial(
task_id="print_filename_task",
python_callable=print_filename,
).expand(op_kwargs=get_filenames_task)
# Perhaps redundant
get_filenames_task >> print_filename_task
```
### Operating System
Amazon Linux 2
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other Docker-based deployment
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28296 | https://github.com/apache/airflow/pull/28379 | a62840806c37ef87e4112c0138d2cdfd980f1681 | 8aac56656d29009dbca24a5948c2a2097043f4f3 | "2022-12-12T07:36:52Z" | python | "2022-12-15T16:43:52Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,272 | ["airflow/providers/amazon/aws/sensors/s3.py", "tests/providers/amazon/aws/sensors/test_s3_key.py"] | S3KeySensor 'bucket_key' instantiates as a nested list when rendered as a templated_field | ### Apache Airflow Provider(s)
amazon
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==6.2.0
### Apache Airflow version
2.5.0
### Operating System
Red Hat Enterprise Linux Server 7.6 (Maipo)
### Deployment
Virtualenv installation
### Deployment details
Simple virtualenv deployment
### What happened
`bucket_key` is a template_field in S3KeySensor, which means that it is expected to be rendered as a templated field.
The supported types for the attribute are both 'str' and 'list'. There is also a [conditional operation in the __init__ function](https://github.com/apache/airflow/blob/main/airflow/providers/amazon/aws/sensors/s3.py#L89) of the class that relies on the type of the input data and converts the attribute to a list of strings. If a list of str is passed in through a Jinja template, **self.bucket_key** ends up as a _**doubly-nested list of strings**_ rather than a list of strings.
This is because, when used as a template_field, the input value of **bucket_key** can only be a string representing the template; these template_fields are then converted to their corresponding values when the task instance is created.
Example log from __init__ function:
` scheduler | DEBUG | type: <class 'list'> | val: ["{{ ti.xcom_pull(task_ids='t1') }}"]`
Example log from poke function:
`poke | DEBUG | type: <class 'list'> | val: [["s3://test_bucket/test_key1", "s3://test_bucket/test_key2"]]`
This leads to the poke function throwing an [exception](https://github.com/apache/airflow/blob/main/airflow/providers/amazon/aws/hooks/s3.py#L172), as each individual key needs to be a string value to parse the URL but is instead passed as a list (since self.bucket_key is a nested list).
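A simplified, standalone illustration of how the double nesting arises (not the provider's actual code; names are placeholders):
```python
template = "{{ ti.xcom_pull(task_ids='get_list_of_str') }}"

# At __init__ time the template string is wrapped in a list:
bucket_key = [template] if isinstance(template, str) else template

# At render time each element is replaced by its rendered (native) value:
rendered = ["s3://test_bucket/test_key1", "s3://test_bucket/test_key2"]
bucket_key = [rendered if item == template else item for item in bucket_key]

print(bucket_key)
# [['s3://test_bucket/test_key1', 's3://test_bucket/test_key2']]  <- doubly nested
```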
### What you think should happen instead
Instead of putting the input value of **bucket_key** in a list, we should store the value as-is upon initialization of the class, and just conditionally check the type of the attribute within the poke function.
[def \_\_init\_\_](https://github.com/apache/airflow/blob/main/airflow/providers/amazon/aws/sensors/s3.py#L89)
`self.bucket_key = bucket_key`
(which will store the input value correctly as a str or a list when the task instance is created and the template fields are rendered)
[def poke](https://github.com/apache/airflow/blob/main/airflow/providers/amazon/aws/sensors/s3.py#L127)
```
def poke(self, context: Context):
if isinstance(self.bucket_key, str):
        return self._check_key(self.bucket_key)
else:
return all(self._check_key(key) for key in self.bucket_key)
```
### How to reproduce
1. Use a template field as the bucket_key attribute in S3KeySensor
2. Pass a list of strings as the rendered template input value for the bucket_key attribute in the S3KeySensor task. (e.g. as an XCOM or Variable pulled value)
Example:
```
from airflow import DAG
from airflow.decorators import task
from airflow.providers.amazon.aws.sensors.s3 import S3KeySensor

with DAG(
...
render_template_as_native_obj=True,
) as dag:
@task(task_id="get_list_of_str", do_xcom_push=True)
def get_list_of_str():
return ["s3://test_bucket/test_key1", "s3://test_bucket/test_key1"]
t = get_list_of_str()
op = S3KeySensor(task_id="s3_key_sensor", bucket_key="{{ ti.xcom_pull(task_ids='get_list_of_str') }}")
t >> op
```
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28272 | https://github.com/apache/airflow/pull/28340 | 9d9b15989a02042a9041ff86bc7e304bb06caa15 | 381160c0f63a15957a631da9db875f98bb8e9d64 | "2022-12-09T20:17:11Z" | python | "2022-12-14T07:47:46Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,271 | ["airflow/api_internal/endpoints/rpc_api_endpoint.py", "airflow/models/variable.py"] | AIP-44 Migrate Variable to Internal API | Link: https://github.com/apache/airflow/blob/main/airflow/models/variable.py
Methods to migrate:
- val
- set
- delete
- update
Note that get_variable_from_secrets should still be executed locally.
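A very rough sketch of the expected shape of the change (the decorator stacking and signature here are assumptions based on the AIP-44 pattern, not the final implementation):
```python
from airflow.api_internal.internal_api_call import internal_api_call
from airflow.utils.session import NEW_SESSION, provide_session


class Variable:
    @staticmethod
    @internal_api_call
    @provide_session
    def set(key: str, value, description=None, session=NEW_SESSION) -> None:
        # Runs locally, or is forwarded to the Internal API when AIP-44 mode is enabled.
        ...
```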
It may be better to first close https://github.com/apache/airflow/issues/28267 | https://github.com/apache/airflow/issues/28271 | https://github.com/apache/airflow/pull/28795 | 9c3cd3803f0c4c83b1f8220525e1ac42dd676549 | bea49094be3e9d84243383017ca7d21dda62f329 | "2022-12-09T20:09:08Z" | python | "2023-01-23T11:21:14Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,270 | ["airflow/api_internal/endpoints/rpc_api_endpoint.py", "airflow/dag_processing/manager.py", "tests/api_internal/endpoints/test_rpc_api_endpoint.py", "tests/api_internal/test_internal_api_call.py", "tests/dag_processing/test_manager.py"] | AIP-44 Migrate DagFileProcessorManager._deactivate_stale_dags to Internal API | null | https://github.com/apache/airflow/issues/28270 | https://github.com/apache/airflow/pull/28476 | c18dbe963ad87c03d49e95dfe189b765cc18fbec | 29a26a810ee8250c30f8ba0d6a72bc796872359c | "2022-12-09T19:55:02Z" | python | "2023-01-25T21:26:58Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,268 | ["airflow/api_internal/endpoints/rpc_api_endpoint.py", "airflow/dag_processing/processor.py", "airflow/utils/log/logging_mixin.py", "tests/dag_processing/test_processor.py"] | AIP-44 Migrate DagFileProcessor.manage_slas to Internal API | null | https://github.com/apache/airflow/issues/28268 | https://github.com/apache/airflow/pull/28502 | 7e2493e3c8b2dbeb378dba4e40110ab1e4ad24da | 0359a42a3975d0d7891a39abe4395bdd6f210718 | "2022-12-09T19:54:41Z" | python | "2023-01-23T20:54:25Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,267 | ["airflow/api_internal/internal_api_call.py", "airflow/cli/commands/internal_api_command.py", "airflow/cli/commands/scheduler_command.py", "airflow/www/app.py", "tests/api_internal/test_internal_api_call.py"] | AIP-44 Provide information to internal_api_call decorator about the running component | Scheduler/Webserver should never use Internal API, so calling any method decorated with internal_api_call should still execute them locally | https://github.com/apache/airflow/issues/28267 | https://github.com/apache/airflow/pull/28783 | 50b30e5b92808e91ad9b6b05189f560d58dd8152 | 6046aef56b12331b2bb39221d1935b2932f44e93 | "2022-12-09T19:53:23Z" | python | "2023-02-15T01:37:16Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,266 | [".pre-commit-config.yaml", "airflow/cli/cli_parser.py", "airflow/cli/commands/internal_api_command.py", "airflow/www/extensions/init_views.py", "tests/cli/commands/test_internal_api_command.py"] | AIP-44 Implement standalone internal-api component | https://github.com/apache/airflow/pull/27892 added Internal API as part of Webserver.
We need to introduce an `airflow internal-api` CLI command that starts the Internal API as an independent component. | https://github.com/apache/airflow/issues/28266 | https://github.com/apache/airflow/pull/28425 | 760c52949ac41ffa7a2357aa1af0cdca163ddac8 | 367e8f135c2354310b67b3469317f15cec68dafa | "2022-12-09T19:51:08Z" | python | "2023-01-20T18:19:19Z"
closed | apache/airflow | https://github.com/apache/airflow | 28,242 | ["airflow/cli/commands/role_command.py", "airflow/www/extensions/init_appbuilder.py"] | Airflow CLI to list roles is slow | ### Apache Airflow version
2.5.0
### What happened
We're currently running a suboptimal setup where database connectivity is laggy, 125ms roundtrip.
This has interesting consequences. For example, `airflow roles list` is really slow. It turns out that it issues a lot of individual queries.
### What you think should happen instead
Ideally, listing roles should be a single (perhaps complex) query.
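The usual fix for this kind of N+1 pattern in SQLAlchemy is eager loading. A hedged sketch of the idea (the model and relationship names are placeholders, not Airflow's actual FAB models):
```python
from sqlalchemy.orm import joinedload


def list_roles_one_query(session, role_model):
    # Fetch roles and their related permissions in a single round trip instead of
    # issuing one extra query per role.
    return session.query(role_model).options(joinedload(role_model.permissions)).all()
```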
### How to reproduce
We're using py-spy to sample program execution:
```bash
$ py-spy record -o spy.svg -i --rate 250 --nonblocking airflow roles list
```
Now, to see the bad behavior, the database should incur significant latency.
### Operating System
Linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28242 | https://github.com/apache/airflow/pull/28244 | 2f5c77b0baa0ab26d2c51fa010850653ded80a46 | e24733662e95ad082e786d4855066cd4d36015c9 | "2022-12-08T22:18:08Z" | python | "2022-12-09T12:47:16Z" |
closed | apache/airflow | https://github.com/apache/airflow | 28,227 | ["airflow/utils/sqlalchemy.py", "tests/utils/test_sqlalchemy.py"] | Scheduler error: 'V1PodSpec' object has no attribute '_ephemeral_containers' | ### Apache Airflow version
2.5.0
### What happened
After upgrading 2.2.5 -> 2.5.0, the scheduler is failing with this error:
```
AttributeError: 'V1PodSpec' object has no attribute '_ephemeral_containers'
```
I tried the following, with no luck:
```
airflow dags reserialize
```
Full Traceback:
```
Traceback (most recent call last):
File "/home/airflow/.local/bin/airflow", line 8, in <module>
sys.exit(main())
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/__main__.py", line 39, in main
args.func(args)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/cli/cli_parser.py", line 52, in command
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/utils/cli.py", line 108, in wrapper
return f(*args, **kwargs)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/cli/commands/scheduler_command.py", line 73, in scheduler
_run_scheduler_job(args=args)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/cli/commands/scheduler_command.py", line 43, in _run_scheduler_job
job.run()
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/base_job.py", line 247, in run
self._execute()
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 759, in _execute
self._run_scheduler_loop()
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 889, in _run_scheduler_loop
num_finished_events = self._process_executor_events(session=session)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 705, in _process_executor_events
self.executor.send_callback(request)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/executors/celery_kubernetes_executor.py", line 213, in send_callback
self.callback_sink.send(request)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/utils/session.py", line 75, in wrapper
return func(*args, session=session, **kwargs)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/callbacks/database_callback_sink.py", line 34, in send
db_callback = DbCallbackRequest(callback=callback, priority_weight=10)
File "<string>", line 4, in __init__
File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/orm/state.py", line 480, in _initialize_instance
manager.dispatch.init_failure(self, args, kwargs)
File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__
compat.raise_(
File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/util/compat.py", line 207, in raise_
raise exception
File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/orm/state.py", line 477, in _initialize_instance
return manager.original_init(*mixed[1:], **kwargs)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/models/db_callback_request.py", line 46, in __init__
self.callback_data = callback.to_json()
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/callbacks/callback_requests.py", line 91, in to_json
val = BaseSerialization.serialize(self.__dict__, strict=True)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/serialization/serialized_objects.py", line 407, in serialize
{str(k): cls.serialize(v, strict=strict) for k, v in var.items()}, type_=DAT.DICT
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/serialization/serialized_objects.py", line 407, in <dictcomp>
{str(k): cls.serialize(v, strict=strict) for k, v in var.items()}, type_=DAT.DICT
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/serialization/serialized_objects.py", line 450, in serialize
return cls._encode(cls.serialize(var.__dict__, strict=strict), type_=DAT.SIMPLE_TASK_INSTANCE)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/serialization/serialized_objects.py", line 407, in serialize
{str(k): cls.serialize(v, strict=strict) for k, v in var.items()}, type_=DAT.DICT
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/serialization/serialized_objects.py", line 407, in <dictcomp>
{str(k): cls.serialize(v, strict=strict) for k, v in var.items()}, type_=DAT.DICT
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/serialization/serialized_objects.py", line 407, in serialize
{str(k): cls.serialize(v, strict=strict) for k, v in var.items()}, type_=DAT.DICT
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/serialization/serialized_objects.py", line 407, in <dictcomp>
{str(k): cls.serialize(v, strict=strict) for k, v in var.items()}, type_=DAT.DICT
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/serialization/serialized_objects.py", line 412, in serialize
json_pod = PodGenerator.serialize_pod(var)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/kubernetes/pod_generator.py", line 411, in serialize_pod
return api_client.sanitize_for_serialization(pod)
File "/home/airflow/.local/lib/python3.9/site-packages/kubernetes/client/api_client.py", line 241, in sanitize_for_serialization
return {key: self.sanitize_for_serialization(val)
File "/home/airflow/.local/lib/python3.9/site-packages/kubernetes/client/api_client.py", line 241, in <dictcomp>
return {key: self.sanitize_for_serialization(val)
File "/home/airflow/.local/lib/python3.9/site-packages/kubernetes/client/api_client.py", line 237, in sanitize_for_serialization
obj_dict = {obj.attribute_map[attr]: getattr(obj, attr)
File "/home/airflow/.local/lib/python3.9/site-packages/kubernetes/client/api_client.py", line 239, in <dictcomp>
if getattr(obj, attr) is not None}
File "/home/airflow/.local/lib/python3.9/site-packages/kubernetes/client/models/v1_pod_spec.py", line 397, in ephemeral_containers
return self._ephemeral_containers
AttributeError: 'V1PodSpec' object has no attribute '_ephemeral_containers'
```
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Operating System
Debian 11 (bullseye)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
AWS EKS
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/28227 | https://github.com/apache/airflow/pull/28454 | dc06bb0e26a0af7f861187e84ce27dbe973b731c | 27f07b0bf5ed088c4186296668a36dc89da25617 | "2022-12-08T15:44:30Z" | python | "2022-12-26T07:56:13Z" |