content (string, 85–101k chars) | title (string, 0–150 chars) | question (string, 15–48k chars) | answers (sequence) | answers_scores (sequence) | non_answers (sequence) | non_answers_scores (sequence) | tags (sequence) | name (string, 35–137 chars)
---|---|---|---|---|---|---|---|---
Q:
django.db.utils.InterfaceError: (0, '') when using django model
I am getting a django.db.utils.InterfaceError: (0, '') error in Django.
I googled around and found that this error is related to the Django MySQL connection.
What I have done is just this:
from django.core.management.base import BaseCommand

from ...models import Issue


class Command(BaseCommand):
    def handle(self, *args, **options):
        print("dbconnection test:")
        obj = Issue.objects.get(id=1)
        print(obj.id)
        exit()
Some articles show a solution with closing the connection:
cursor = connection.cursor()
cursor.execute(query)
cursor.close()
but I don't even get the chance to call connection.close().
The problem happens here, in /usr/local/lib/python3.6/site-packages/MySQLdb/connections.py:
def query(self, query):
    # Since _mysql releases GIL while querying, we need immutable buffer.
    if isinstance(query, bytearray):
        query = bytes(query)
    _mysql.connection.query(self, query)
I would really appreciate any help. Thank you very much.
I added CONN_MAX_AGE: None to the DB settings, but in vain.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        ....
        'HOST': env('DATABASE_HOST'),
        'PORT': env('DATABASE_PORT'),
        'OPTIONS': {
            'charset': 'utf8mb4',
            'init_command': "SET sql_mode='STRICT_TRANS_TABLES'"
        },
        'CONN_MAX_AGE': None  ## add here
    }
}
This is the stack trace:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/django/db/backends/utils.py", line 86, in _execute
return self.cursor.execute(sql, params)
File "/usr/local/lib/python3.6/site-packages/django/db/backends/mysql/base.py", line 74, in execute
return self.cursor.execute(query, args)
File "/usr/local/lib/python3.6/site-packages/MySQLdb/cursors.py", line 250, in execute
self.errorhandler(self, exc, value)
File "/usr/local/lib/python3.6/site-packages/MySQLdb/connections.py", line 50, in defaulterrorhandler
raise errorvalue
File "/usr/local/lib/python3.6/site-packages/MySQLdb/cursors.py", line 247, in execute
res = self._query(query)
File "/usr/local/lib/python3.6/site-packages/MySQLdb/cursors.py", line 412, in _query
rowcount = self._do_query(q)
File "/usr/local/lib/python3.6/site-packages/MySQLdb/cursors.py", line 375, in _do_query
db.query(q)
File "/usr/local/lib/python3.6/site-packages/MySQLdb/connections.py", line 276, in query
_mysql.connection.query(self, query)
_mysql_exceptions.InterfaceError: (0, '')
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "manage.py", line 22, in <module>
main()
File "manage.py", line 19, in main
execute_from_command_line(sys.argv)
File "/usr/local/lib/python3.6/site-packages/django/core/management/__init__.py", line 401, in execute_from_command_line
utility.execute()
File "/usr/local/lib/python3.6/site-packages/django/core/management/__init__.py", line 395, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/usr/local/lib/python3.6/site-packages/django/core/management/base.py", line 328, in run_from_argv
self.execute(*args, **cmd_options)
File "/usr/local/lib/python3.6/site-packages/django/core/management/base.py", line 369, in execute
output = self.handle(*args, **options)
File "/code/tweet/management/commands/handle_tweet.py", line 521, in handle
twitterApi.search_tweet(keyword)
File "/code/tweet/management/commands/handle_tweet.py", line 329, in search_tweet
cnt = self.tagByAi()
File "/code/tweet/management/commands/handle_tweet.py", line 103, in tagByAi
crowded = Issue.objects.get(id=382)
File "/usr/local/lib/python3.6/site-packages/django/db/models/manager.py", line 82, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/django/db/models/query.py", line 411, in get
num = len(clone)
File "/usr/local/lib/python3.6/site-packages/django/db/models/query.py", line 258, in __len__
self._fetch_all()
File "/usr/local/lib/python3.6/site-packages/django/db/models/query.py", line 1261, in _fetch_all
self._result_cache = list(self._iterable_class(self))
File "/usr/local/lib/python3.6/site-packages/django/db/models/query.py", line 57, in __iter__
results = compiler.execute_sql(chunked_fetch=self.chunked_fetch, chunk_size=self.chunk_size)
File "/usr/local/lib/python3.6/site-packages/django/db/models/sql/compiler.py", line 1137, in execute_sql
cursor.execute(sql, params)
File "/usr/local/lib/python3.6/site-packages/django/db/backends/utils.py", line 100, in execute
return super().execute(sql, params)
File "/usr/local/lib/python3.6/site-packages/django/db/backends/utils.py", line 68, in execute
return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)
File "/usr/local/lib/python3.6/site-packages/django/db/backends/utils.py", line 77, in _execute_with_wrappers
return executor(sql, params, many, context)
File "/usr/local/lib/python3.6/site-packages/django/db/backends/utils.py", line 86, in _execute
return self.cursor.execute(sql, params)
File "/usr/local/lib/python3.6/site-packages/django/db/utils.py", line 90, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/usr/local/lib/python3.6/site-packages/django/db/backends/utils.py", line 86, in _execute
return self.cursor.execute(sql, params)
File "/usr/local/lib/python3.6/site-packages/django/db/backends/mysql/base.py", line 74, in execute
return self.cursor.execute(query, args)
File "/usr/local/lib/python3.6/site-packages/MySQLdb/cursors.py", line 250, in execute
self.errorhandler(self, exc, value)
File "/usr/local/lib/python3.6/site-packages/MySQLdb/connections.py", line 50, in defaulterrorhandler
raise errorvalue
File "/usr/local/lib/python3.6/site-packages/MySQLdb/cursors.py", line 247, in execute
res = self._query(query)
File "/usr/local/lib/python3.6/site-packages/MySQLdb/cursors.py", line 412, in _query
rowcount = self._do_query(q)
File "/usr/local/lib/python3.6/site-packages/MySQLdb/cursors.py", line 375, in _do_query
db.query(q)
File "/usr/local/lib/python3.6/site-packages/MySQLdb/connections.py", line 276, in query
_mysql.connection.query(self, query)
django.db.utils.InterfaceError: (0, '')
My environment is here:
absl-py 0.9.0
asgiref 3.2.7
astor 0.8.1
boto3 1.12.28
botocore 1.15.28
cachetools 4.0.0
certifi 2019.11.28
chardet 3.0.4
cycler 0.10.0
Django 3.0.1
django-environ 0.4.5
django-extensions 2.2.6
django-filter 2.2.0
django-mysql 3.3.0
djangorestframework 3.11.0
docutils 0.15.2
gast 0.2.2
gensim 3.8.1
google-api-core 1.16.0
google-auth 1.11.3
google-cloud-core 1.3.0
google-cloud-storage 1.26.0
google-pasta 0.2.0
google-resumable-media 0.5.0
googleapis-common-protos 1.51.0
grpcio 1.27.2
h5py 2.10.0
idna 2.9
jmespath 0.9.5
Keras 2.3.1
Keras-Applications 1.0.8
Keras-Preprocessing 1.1.0
kiwisolver 1.1.0
Markdown 3.2.1
matplotlib 3.0.3
mecab-python3 0.996.3
mysqlclient 1.3.13
neologdn 0.4
numpy 1.16.2
oauthlib 3.1.0
opt-einsum 3.2.0
pandas 0.24.2
pandas-schema 0.3.5
pandocfilters 1.4.2
pip 20.0.2
protobuf 3.11.3
pyasn1 0.4.8
pyasn1-modules 0.2.8
pyparsing 2.4.6
python-dateutil 2.8.1
pytz 2019.3
PyYAML 5.3.1
requests 2.23.0
requests-oauthlib 1.3.0
rsa 4.0
s3transfer 0.3.3
scikit-learn 0.20.3
scipy 1.4.1
setuptools 45.2.0
six 1.14.0
smart-open 1.10.0
sqlparse 0.3.1
tensorboard 1.15.0
tensorflow 1.15.2
tensorflow-estimator 1.15.1
tensorflow-hub 0.7.0
termcolor 1.1.0
urllib3 1.25.8
uWSGI 2.0.17
Werkzeug 1.0.0
wheel 0.34.2
wrapt 1.12.1
A:
My code was very similar to what is in the question:
class Command(BaseCommand):
    help = "Check order"

    def add_arguments(self, parser):
        parser.add_argument("--order-no", nargs="?", type=str)

    def handle(self, *args, **options):
        order = Orders.objects.get(order_no=options["order_no"])
        print(order)
I figured out by accident that the connection is randomly closed every now and then, and that this causes the InterfaceError. In my case, probably some other section of the code uses models at the global level, or maybe django.setup() loads a module that creates a connection to the database, or perhaps the connection is kept in memory and re-used between consecutive Django calls. (No idea so far; I need to dig more.)
But the solution to it was pretty simple: add a transaction context manager.
It will look like this:
from django.db import transaction


class Command(BaseCommand):
    help = "Check order"

    def add_arguments(self, parser):
        parser.add_argument("--order-no", nargs="?", type=str)

    def handle(self, *args, **options):
        with transaction.atomic():
            order = Orders.objects.get(order_no=options["order_no"])
            print(order)
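A related option worth noting: Django also ships a close_old_connections() helper (importable from django.db) that drops connections which have become unusable or have exceeded CONN_MAX_AGE. A minimal sketch under the same setup (Orders is the hypothetical model from this answer):
from django.core.management.base import BaseCommand
from django.db import close_old_connections


class Command(BaseCommand):
    def handle(self, *args, **options):
        # Drop any stale/unusable connection; Django then lazily
        # opens a fresh one on the next ORM call.
        close_old_connections()
        order = Orders.objects.get(order_no=options["order_no"])  # Orders: hypothetical model
        print(order)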
A:
Django connects to MySQL lazily. If connection.connection is None, it means you have not connected to MySQL yet.
If the connection is closed and you try to access the cursor, you will get an InterfaceError. You can close the database connection so that when you use ORM methods Django will reconnect and open a new connection.
I have created a helper reconnect() which we are using in production.
def reconnect():
    from django.db import connections
    from logging import getLogger

    closed = []
    for alias in list(connections):
        conn = connections[alias]
        if conn.connection and not conn.is_usable():
            conn.close()
            del connections[alias]
            closed.append(alias)
    getLogger(__name__).warning('Closing unusable connections: %s', closed)
I am further attaching a few helpful examples on keeping database connections healthy:
https://www.programcreek.com/python/example/100987/django.db.connection.is_usable
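A sketch of how such a helper might be used in a long-running management command (the names Issue and keywords are illustrative, and reconnect() is assumed to be importable from your project):
def handle(self, *args, **options):
    for keyword in keywords:  # hypothetical batch of work
        # Close unusable connections before each batch; Django
        # re-opens a connection lazily on the next query.
        reconnect()
        print(Issue.objects.filter(title__icontains=keyword).count())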
| django.db.utils.InterfaceError: (0, '') when using django model | I am getting a django.db.utils.InterfaceError: (0, '') error in Django.
I googled around and found that this error is related to the Django MySQL connection.
What I have done is just this:
from django.core.management.base import BaseCommand

from ...models import Issue


class Command(BaseCommand):
    def handle(self, *args, **options):
        print("dbconnection test:")
        obj = Issue.objects.get(id=1)
        print(obj.id)
        exit()
Some articles show a solution with closing the connection:
cursor = connection.cursor()
cursor.execute(query)
cursor.close()
but I don't even get the chance to call connection.close().
The problem happens here, in /usr/local/lib/python3.6/site-packages/MySQLdb/connections.py:
def query(self, query):
    # Since _mysql releases GIL while querying, we need immutable buffer.
    if isinstance(query, bytearray):
        query = bytes(query)
    _mysql.connection.query(self, query)
I would really appreciate any help. Thank you very much.
I added CONN_MAX_AGE: None to the DB settings, but in vain.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        ....
        'HOST': env('DATABASE_HOST'),
        'PORT': env('DATABASE_PORT'),
        'OPTIONS': {
            'charset': 'utf8mb4',
            'init_command': "SET sql_mode='STRICT_TRANS_TABLES'"
        },
        'CONN_MAX_AGE': None  ## add here
    }
}
This is the stack trace:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/django/db/backends/utils.py", line 86, in _execute
return self.cursor.execute(sql, params)
File "/usr/local/lib/python3.6/site-packages/django/db/backends/mysql/base.py", line 74, in execute
return self.cursor.execute(query, args)
File "/usr/local/lib/python3.6/site-packages/MySQLdb/cursors.py", line 250, in execute
self.errorhandler(self, exc, value)
File "/usr/local/lib/python3.6/site-packages/MySQLdb/connections.py", line 50, in defaulterrorhandler
raise errorvalue
File "/usr/local/lib/python3.6/site-packages/MySQLdb/cursors.py", line 247, in execute
res = self._query(query)
File "/usr/local/lib/python3.6/site-packages/MySQLdb/cursors.py", line 412, in _query
rowcount = self._do_query(q)
File "/usr/local/lib/python3.6/site-packages/MySQLdb/cursors.py", line 375, in _do_query
db.query(q)
File "/usr/local/lib/python3.6/site-packages/MySQLdb/connections.py", line 276, in query
_mysql.connection.query(self, query)
_mysql_exceptions.InterfaceError: (0, '')
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "manage.py", line 22, in <module>
main()
File "manage.py", line 19, in main
execute_from_command_line(sys.argv)
File "/usr/local/lib/python3.6/site-packages/django/core/management/__init__.py", line 401, in execute_from_command_line
utility.execute()
File "/usr/local/lib/python3.6/site-packages/django/core/management/__init__.py", line 395, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/usr/local/lib/python3.6/site-packages/django/core/management/base.py", line 328, in run_from_argv
self.execute(*args, **cmd_options)
File "/usr/local/lib/python3.6/site-packages/django/core/management/base.py", line 369, in execute
output = self.handle(*args, **options)
File "/code/tweet/management/commands/handle_tweet.py", line 521, in handle
twitterApi.search_tweet(keyword)
File "/code/tweet/management/commands/handle_tweet.py", line 329, in search_tweet
cnt = self.tagByAi()
File "/code/tweet/management/commands/handle_tweet.py", line 103, in tagByAi
crowded = Issue.objects.get(id=382)
File "/usr/local/lib/python3.6/site-packages/django/db/models/manager.py", line 82, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/django/db/models/query.py", line 411, in get
num = len(clone)
File "/usr/local/lib/python3.6/site-packages/django/db/models/query.py", line 258, in __len__
self._fetch_all()
File "/usr/local/lib/python3.6/site-packages/django/db/models/query.py", line 1261, in _fetch_all
self._result_cache = list(self._iterable_class(self))
File "/usr/local/lib/python3.6/site-packages/django/db/models/query.py", line 57, in __iter__
results = compiler.execute_sql(chunked_fetch=self.chunked_fetch, chunk_size=self.chunk_size)
File "/usr/local/lib/python3.6/site-packages/django/db/models/sql/compiler.py", line 1137, in execute_sql
cursor.execute(sql, params)
File "/usr/local/lib/python3.6/site-packages/django/db/backends/utils.py", line 100, in execute
return super().execute(sql, params)
File "/usr/local/lib/python3.6/site-packages/django/db/backends/utils.py", line 68, in execute
return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)
File "/usr/local/lib/python3.6/site-packages/django/db/backends/utils.py", line 77, in _execute_with_wrappers
return executor(sql, params, many, context)
File "/usr/local/lib/python3.6/site-packages/django/db/backends/utils.py", line 86, in _execute
return self.cursor.execute(sql, params)
File "/usr/local/lib/python3.6/site-packages/django/db/utils.py", line 90, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/usr/local/lib/python3.6/site-packages/django/db/backends/utils.py", line 86, in _execute
return self.cursor.execute(sql, params)
File "/usr/local/lib/python3.6/site-packages/django/db/backends/mysql/base.py", line 74, in execute
return self.cursor.execute(query, args)
File "/usr/local/lib/python3.6/site-packages/MySQLdb/cursors.py", line 250, in execute
self.errorhandler(self, exc, value)
File "/usr/local/lib/python3.6/site-packages/MySQLdb/connections.py", line 50, in defaulterrorhandler
raise errorvalue
File "/usr/local/lib/python3.6/site-packages/MySQLdb/cursors.py", line 247, in execute
res = self._query(query)
File "/usr/local/lib/python3.6/site-packages/MySQLdb/cursors.py", line 412, in _query
rowcount = self._do_query(q)
File "/usr/local/lib/python3.6/site-packages/MySQLdb/cursors.py", line 375, in _do_query
db.query(q)
File "/usr/local/lib/python3.6/site-packages/MySQLdb/connections.py", line 276, in query
_mysql.connection.query(self, query)
django.db.utils.InterfaceError: (0, '')
My environment is here:
absl-py 0.9.0
asgiref 3.2.7
astor 0.8.1
boto3 1.12.28
botocore 1.15.28
cachetools 4.0.0
certifi 2019.11.28
chardet 3.0.4
cycler 0.10.0
Django 3.0.1
django-environ 0.4.5
django-extensions 2.2.6
django-filter 2.2.0
django-mysql 3.3.0
djangorestframework 3.11.0
docutils 0.15.2
gast 0.2.2
gensim 3.8.1
google-api-core 1.16.0
google-auth 1.11.3
google-cloud-core 1.3.0
google-cloud-storage 1.26.0
google-pasta 0.2.0
google-resumable-media 0.5.0
googleapis-common-protos 1.51.0
grpcio 1.27.2
h5py 2.10.0
idna 2.9
jmespath 0.9.5
Keras 2.3.1
Keras-Applications 1.0.8
Keras-Preprocessing 1.1.0
kiwisolver 1.1.0
Markdown 3.2.1
matplotlib 3.0.3
mecab-python3 0.996.3
mysqlclient 1.3.13
neologdn 0.4
numpy 1.16.2
oauthlib 3.1.0
opt-einsum 3.2.0
pandas 0.24.2
pandas-schema 0.3.5
pandocfilters 1.4.2
pip 20.0.2
protobuf 3.11.3
pyasn1 0.4.8
pyasn1-modules 0.2.8
pyparsing 2.4.6
python-dateutil 2.8.1
pytz 2019.3
PyYAML 5.3.1
requests 2.23.0
requests-oauthlib 1.3.0
rsa 4.0
s3transfer 0.3.3
scikit-learn 0.20.3
scipy 1.4.1
setuptools 45.2.0
six 1.14.0
smart-open 1.10.0
sqlparse 0.3.1
tensorboard 1.15.0
tensorflow 1.15.2
tensorflow-estimator 1.15.1
tensorflow-hub 0.7.0
termcolor 1.1.0
urllib3 1.25.8
uWSGI 2.0.17
Werkzeug 1.0.0
wheel 0.34.2
wrapt 1.12.1
| [
"My code was very similar to what is in the question:\nclass Command(BaseCommand):\n help = \"Check order\"\n\n def add_arguments(self, parser):\n parser.add_argument(\"--order-no\", nargs=\"?\", type=str)\n\n def handle(self, *args, **options):\n order = Orders.objects.get(order_no=options[\"order_no\"])\n print(order)\n\nI figured out by accident that the connection is randomly closed every now and then and it causes InterfaceError. In my case probably some other section of the code uses models on global level or maybe django.setup() is loading module that creates a connection to database, or perhaps connection is kept in memory and re-used between consecutive django calls. (No idea so far, need to dig more).\nBut solution to it was pretty simple, add transaction context manager.\nIt will look like this:\nfrom django.db import transaction \n\n\nclass Command(BaseCommand):\n help = \"Check order\"\n\n def add_arguments(self, parser):\n parser.add_argument(\"--order-no\", nargs=\"?\", type=str)\n\n def handle(self, *args, **options):\n with transaction.atomic():\n order = Orders.objects.get(order_no=options[\"order_no\"])\n print(order)\n\n",
"MySQL is lazily connected to Django. If connection.connection is None means you have not connected to MySQL before.\nIf the connection is closed and you try to access the cursor, you will get an InterfaceError. You can close the database connection so that when you use ORM methods Django will reconnect to open a new connection\nI have created a helper reconnect() which we are using in production.\ndef reconnect():\n from django.db import connections\n from logging import getLogger\n \n for alias in list(connections):\n conn = connections[alias]\n if conn.connection and not conn.is_usable():\n conn.close()\n del connections[alias]\n closed.append(alias)\n getLogger(__name__).warn('Closing unusable connections: %s', closed)\n\nI am further attaching a few helpful examples on maintaining database connections healthy\nhttps://www.programcreek.com/python/example/100987/django.db.connection.is_usable\n"
] | [
1,
0
] | [] | [] | [
"django",
"mysql",
"python"
] | stackoverflow_0060852406_django_mysql_python.txt |
Q:
Does the mean() function in python create a list?
I saw some code that computes the mean of a column, using another column as the group for the groupby() function.
I want to know what total_acc_avg[6] means.
Is total_acc_avg a list? Is 6 the index of the list?
import pandas as pd

data = pd.DataFrame({'mort_acc': [6, None, 3, None, 2, None, 9, 8],  # Create pandas DataFrame
                     'x2': range(11, 19),
                     'total_acc': [1, 6, 2, 3, 3, 3, 6, 1],
                     'group2': ['a', 'a', 'a', 'a', 'b', 'b', 'b', 'b']})
print(data)
total_acc_avg = data.groupby(by='total_acc').mean().mort_acc
print(total_acc_avg[6])
A:
total_acc_avg is a pandas Series object that contains the average of the mort_acc column, grouped by the total_acc column. In this case, total_acc_avg[6] is a label-based lookup: it returns the average value of the mort_acc column for the group whose total_acc value is 6 (not the value at the 6th position).
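A runnable sketch of the question's example (non-numeric columns dropped for brevity; numeric_only=True mirrors the old silent behaviour on recent pandas versions):
import pandas as pd

data = pd.DataFrame({'mort_acc': [6, None, 3, None, 2, None, 9, 8],
                     'total_acc': [1, 6, 2, 3, 3, 3, 6, 1]})

total_acc_avg = data.groupby(by='total_acc').mean(numeric_only=True).mort_acc
print(total_acc_avg)
# total_acc
# 1    7.0
# 2    3.0
# 3    2.0
# 6    9.0
# Name: mort_acc, dtype: float64

print(total_acc_avg[6])       # label lookup -> 9.0 (the total_acc == 6 group)
print(total_acc_avg.iloc[3])  # positional lookup -> also 9.0 here, by coincidence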
| Does the mean() function in python create a list? | I saw some code that computes the mean of a column, using another column as the group for the groupby() function.
I want to know what total_acc_avg[6] means.
Is total_acc_avg a list? Is 6 the index of the list?
import pandas as pd

data = pd.DataFrame({'mort_acc': [6, None, 3, None, 2, None, 9, 8],  # Create pandas DataFrame
                     'x2': range(11, 19),
                     'total_acc': [1, 6, 2, 3, 3, 3, 6, 1],
                     'group2': ['a', 'a', 'a', 'a', 'b', 'b', 'b', 'b']})
print(data)
total_acc_avg = data.groupby(by='total_acc').mean().mort_acc
print(total_acc_avg[6])
| [
"total_acc_avg is a pandas Series object that contains the average of the mort_acc column, grouped by the total_acc column. In this case, the 6th index of the total_acc_avg Series contains the average value of the mort_acc column for the group with total_acc value of 6.\n"
] | [
1
] | [] | [] | [
"list",
"mean",
"pandas",
"python"
] | stackoverflow_0074679258_list_mean_pandas_python.txt |
Q:
Tkinter make a frame fill whole column
I'm trying to create a simple GUI using Tkinter module. I have a layout consisting of two columns (with weights 1 and 2). Now, I'd like my two widgets that I add (cfg and cfgx) to fill up the whole column in which they are placed. How could I achieve such a thing with my current setup? Thanks in advance
import tkinter as tk

from components.ConfigCreator import ConfigCreator

WIDTH = 800
HEIGHT = 600
POS_X = 300
POS_Y = 200


class MainApplication(tk.Frame):
    def __init__(self, parent, *args, **kwargs):
        tk.Frame.__init__(self, parent, *args, **kwargs)

        # root
        self.parent = parent

        self._setup_size_and_positioning()
        self._setup_layout()
        self._setup_widgets()

    def _setup_size_and_positioning(self) -> None:
        self.winfo_toplevel().title('Test app')
        self.winfo_toplevel().geometry(f"{WIDTH}x{HEIGHT}+{POS_X}+{POS_Y}")
        self.config(width=WIDTH, height=HEIGHT)

    def _setup_layout(self) -> None:
        self.grid(row=0, column=0)
        self.columnconfigure(0, weight=1)
        self.columnconfigure(1, weight=2)

    def _setup_widgets(self) -> None:
        cfg = ConfigCreator(self)
        cfg.grid(row=0, column=0)
        cfg.config(bg="limegreen")

        cfgx = ConfigCreator(self)
        cfgx.grid(row=0, column=1)
        cfgx.config(bg="skyblue")


if __name__ == "__main__":
    root = tk.Tk()
    MainApplication(root).pack(side="top", fill="both", expand=True)
    root.mainloop()
@edit
#ConfigCreator.py
import tkinter as tk
class ConfigCreator(tk.Frame):
    def __init__(self, parent, *args, **kwargs):
        tk.Frame.__init__(self, parent, *args, **kwargs)
A:
You've configured the weight on the columns, but you haven't given any weight to any rows. Because of that, and because the frames by default are only 1 pixel tall, the columns will be virtually invisible.
To fix this you can give non-zero weight to one or more rows. For example, self.rowconfigure(0, weight=1)
You also haven't configured the contents in each column to fill the space allocated. You should add sticky="nsew" to get each of the inner frames to expand to fill. For example, cfg.grid(row=0, column=0, sticky="nsew")
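Applied to the code in the question, the relevant changes might look like this (a sketch; only the two methods shown need to change):
def _setup_layout(self) -> None:
    self.grid(row=0, column=0)
    self.columnconfigure(0, weight=1)
    self.columnconfigure(1, weight=2)
    # Give the single row all of the vertical space; otherwise the
    # 1-pixel-tall frames stay virtually invisible.
    self.rowconfigure(0, weight=1)

def _setup_widgets(self) -> None:
    cfg = ConfigCreator(self)
    # sticky="nsew" makes the frame stretch to fill its grid cell
    cfg.grid(row=0, column=0, sticky="nsew")
    cfg.config(bg="limegreen")

    cfgx = ConfigCreator(self)
    cfgx.grid(row=0, column=1, sticky="nsew")
    cfgx.config(bg="skyblue")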
| Tkinter make a frame fill whole column | I'm trying to create a simple GUI using Tkinter module. I have a layout consisting of two columns (with weights 1 and 2). Now, I'd like my two widgets that I add (cfg and cfgx) to fill up the whole column in which they are placed. How could I achieve such a thing with my current setup? Thanks in advance
import tkinter as tk

from components.ConfigCreator import ConfigCreator

WIDTH = 800
HEIGHT = 600
POS_X = 300
POS_Y = 200


class MainApplication(tk.Frame):
    def __init__(self, parent, *args, **kwargs):
        tk.Frame.__init__(self, parent, *args, **kwargs)

        # root
        self.parent = parent

        self._setup_size_and_positioning()
        self._setup_layout()
        self._setup_widgets()

    def _setup_size_and_positioning(self) -> None:
        self.winfo_toplevel().title('Test app')
        self.winfo_toplevel().geometry(f"{WIDTH}x{HEIGHT}+{POS_X}+{POS_Y}")
        self.config(width=WIDTH, height=HEIGHT)

    def _setup_layout(self) -> None:
        self.grid(row=0, column=0)
        self.columnconfigure(0, weight=1)
        self.columnconfigure(1, weight=2)

    def _setup_widgets(self) -> None:
        cfg = ConfigCreator(self)
        cfg.grid(row=0, column=0)
        cfg.config(bg="limegreen")

        cfgx = ConfigCreator(self)
        cfgx.grid(row=0, column=1)
        cfgx.config(bg="skyblue")


if __name__ == "__main__":
    root = tk.Tk()
    MainApplication(root).pack(side="top", fill="both", expand=True)
    root.mainloop()
@edit
#ConfigCreator.py
import tkinter as tk
class ConfigCreator(tk.Frame):
    def __init__(self, parent, *args, **kwargs):
        tk.Frame.__init__(self, parent, *args, **kwargs)
| [
"You've configured the weight on the columns, but you haven't given any weight to any rows. Because of that, and because the frames by default are only 1 pixel tall, the columns will be virtually invisible.\nTo fix this you can give non-zero weight to one or more rows. For example, self.rowconfigure(0, weight=1)\nYou also haven't configured the contents in each column to fill the space allocated. You should add sticky=\"nsew\" to get each of the inner frames to expand to fill. For example, cfg.grid(row=0, column=0, sticky=\"nsew\")\n\n"
] | [
2
] | [] | [] | [
"python",
"tkinter",
"user_interface"
] | stackoverflow_0074677698_python_tkinter_user_interface.txt |
Q:
import matplotlib.pyplot as plt
I have Python 3.2.3 on Windows.
I installed matplotlib.
I'm trying to do this:
import matplotlib.pyplot as plt
i get this:
Traceback (most recent call last):
File "<pyshell#10>", line 1, in <module>
import matplotlib.pyplot as plt
File "E:\programs\python 3.2.3\lib\site-packages\matplotlib\pyplot.py", line 24, in <module>
import matplotlib.colorbar
File "E:\programs\python 3.2.3\lib\site-packages\matplotlib\colorbar.py", line 29, in <module>
import matplotlib.collections as collections
File "E:\programs\python 3.2.3\lib\site-packages\matplotlib\collections.py", line 23, in <module>
import matplotlib.backend_bases as backend_bases
File "E:\programs\python 3.2.3\lib\site-packages\matplotlib\backend_bases.py", line 50, in <module>
import matplotlib.textpath as textpath
File "E:\programs\python 3.2.3\lib\site-packages\matplotlib\textpath.py", line 5, in <module>
import urllib.request, urllib.parse, urllib.error
ImportError: No module named urllib.request
any idea?
A:
You need to install urllib.
It is required by matplotlib.
A:
Install Python 3.10, update pip, and then try to install Matplotlib; also install urllib using pip install urllib.
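Since urllib.request is part of the Python 3 standard library (including 3.2), a quick sanity check of the interpreter itself can tell whether the problem lies with the Python installation rather than with matplotlib (a sketch):
import urllib.request  # ships with every Python 3 interpreter
print(urllib.request.__file__)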
| import matplotlib.pyplot as plt | I have Python 3.2.3 on Windows.
I installed matplotlib.
I'm trying to do this:
import matplotlib.pyplot as plt
i get this:
Traceback (most recent call last):
File "<pyshell#10>", line 1, in <module>
import matplotlib.pyplot as plt
File "E:\programs\python 3.2.3\lib\site-packages\matplotlib\pyplot.py", line 24, in <module>
import matplotlib.colorbar
File "E:\programs\python 3.2.3\lib\site-packages\matplotlib\colorbar.py", line 29, in <module>
import matplotlib.collections as collections
File "E:\programs\python 3.2.3\lib\site-packages\matplotlib\collections.py", line 23, in <module>
import matplotlib.backend_bases as backend_bases
File "E:\programs\python 3.2.3\lib\site-packages\matplotlib\backend_bases.py", line 50, in <module>
import matplotlib.textpath as textpath
File "E:\programs\python 3.2.3\lib\site-packages\matplotlib\textpath.py", line 5, in <module>
import urllib.request, urllib.parse, urllib.error
ImportError: No module named urllib.request
any idea?
| [
"You need to install urllib.\nThis is required by matplotlib\n",
"Install Python 3.10 and update pip and then try to install Matplotlib and also install urllib using pip install urllib\n"
] | [
0,
0
] | [
"Look Install urllib from : https://pypi.python.org/pypi/urllib2_file/0.2.1\nOr if you was instaled pip you can use sudo pip install urllib\n"
] | [
-1
] | [
"matplotlib",
"python",
"python_3.x"
] | stackoverflow_0023370715_matplotlib_python_python_3.x.txt |
Q:
Check if two columns are having matching values, but values are not in the same index places(Python, Pandas)
So, I have this data frame about Super Store Sales. I have 2 sheets:
First is named "Orders"
Second one is named "Returns"
In both sheets we have a matching column called "Order ID", but the Returns sheet has fewer rows (only the returned purchases). What I basically want to do is make a new column and check whether the order IDs in the Orders sheet match those in the Returns sheet; if they match, I want the value "Returned" to be written, and if they do not match, "Not returned".
This is df_order data frame
This is df_return
This is how I thought it should be checked, but it is definitely not correct because it says "Not returned" everywhere, even though I've checked manually and seen that some orders do match. Please help me out.
excel_path = r'C:\Users\Korisnik\Desktop\PythonFiles\Omega\SuperStoreUS.xlsx'
df = pd.read_excel(excel_path, sheet_name=None)
# 1.
df_order = df.get('Orders')
df_returns = df.get('Returns')
df_users = df.get('Users')
df_n.reset_index(drop=True)
df_returns.reset_index(drop=True)
df_n['Status'] = np.where( df_n['Order ID'].equals(df_returns['Order ID']) and df_returns["Status"] == "Returned", "Returned", "Not returned")
df_order = {'City': ['Prior Lake', 'Chicago', 'NY', 'Prior Lake', 'Round Rock'],
            'Order ID': [86838, 90154, 15000, 10000, 12447]}
df_return = {'Order ID': [90154, 86838],
             'Returned': ['Returned', 'Returned']}

# Create DataFrame from dict
df_orders = pd.DataFrame.from_dict(df_order)
df_returns = pd.DataFrame.from_dict(df_return)
A:
You can use pandas.DataFrame.merge with pandas.Series.fillna:
df_order = pd.read_excel("SuperStoreUS.xlsx", sheet_name="Orders")
df_return = pd.read_excel("SuperStoreUS.xlsx", sheet_name="Returns")
Use either:
# --- To create a new dataframe
out = df_order.merge(df_return, on="Order ID", how="left")
out["Status"] = out["Status"].fillna("Not Returned")
Or:
# --- To update df_order
df_order = df_order.merge(df_return, on="Order ID", how="left")
df_order["Status"] = df_order["Status"].fillna("Not Returned")
A:
Another way is to create a new column in df_order, with values conditioned on the row's Order ID:
df_orders['Status'] = df_orders['Order ID'].map(lambda x: 'Returned' if x in df_returns['Order ID'].tolist() else 'Not returned')
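A small efficiency note on this approach: df_returns['Order ID'].tolist() is rebuilt on every call of the lambda. Building the lookup once, and using isin, keeps it to a single vectorized pass (a sketch using the toy frames from the question):
import numpy as np

returned_ids = set(df_returns['Order ID'])
df_orders['Status'] = np.where(df_orders['Order ID'].isin(returned_ids),
                               'Returned', 'Not returned')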
| Check if two columns are having matching values, but values are not in the same index places(Python, Pandas) | So, I have this data frame about Super Store Sales. I have 2 sheets:
First is named "Orders"
Second one is named "Returns"
In both sheets we have a matching column called "Order ID", but the Returns sheet has fewer rows (only the returned purchases). What I basically want to do is make a new column and check whether the order IDs in the Orders sheet match those in the Returns sheet; if they match, I want the value "Returned" to be written, and if they do not match, "Not returned".
This is df_order data frame
This is df_return
This is how I thought it should be checked, but it is definitely not correct because it says "Not returned" everywhere, even though I've checked manually and seen that some orders do match. Please help me out.
excel_path = r'C:\Users\Korisnik\Desktop\PythonFiles\Omega\SuperStoreUS.xlsx'
df = pd.read_excel(excel_path, sheet_name=None)
# 1.
df_order = df.get('Orders')
df_returns = df.get('Returns')
df_users = df.get('Users')
df_n.reset_index(drop=True)
df_returns.reset_index(drop=True)
df_n['Status'] = np.where( df_n['Order ID'].equals(df_returns['Order ID']) and df_returns["Status"] == "Returned", "Returned", "Not returned")
df_order = {'City': ['Prior Lake', 'Chicago', 'NY', 'Prior Lake', 'Round Rock'],
            'Order ID': [86838, 90154, 15000, 10000, 12447]}
df_return = {'Order ID': [90154, 86838],
             'Returned': ['Returned', 'Returned']}

# Create DataFrame from dict
df_orders = pd.DataFrame.from_dict(df_order)
df_returns = pd.DataFrame.from_dict(df_return)
| [
"You can use pandas.DataFrame.merge with pandas.Series.fillna :\ndf_order = pd.read_excel(\"SuperStoreUS.xlsx\", sheet_name=\"Orders\")\ndf_return = pd.read_excel(\"SuperStoreUS.xlsx\", sheet_name=\"Returns\")\n\nUse either :\n# --- To create a new dataframe\nout = df_order.merge(df_return, on=\"Order ID\", how=\"left\")\nout[\"Status\"] = out[\"Status\"].fillna(\"Not Returned\")\n\nOr:\n# --- To update df_order\ndf_order = df_order.merge(df_return, on=\"Order ID\", how=\"left\")\ndf_order[\"Status\"] = df_order[\"Status\"].fillna(\"Not Returned\")\n\n",
"Another way is to create a new column in df_order, with values conditioned on the row's Order ID:\ndf_orders['Status'] = df_orders['Order ID'].map(lambda x: 'Returned' if x in df_returns['Order ID'].tolist() else 'Not returned')\n\n"
] | [
1,
1
] | [] | [] | [
"matching",
"multiple_columns",
"pandas",
"python"
] | stackoverflow_0074679052_matching_multiple_columns_pandas_python.txt |
Q:
read entire folder and extract multiple lines and append to new file
I'm very new to python and this is far beyond what I'm capable of.
I have multiple text files
test01.txt
test02.txt
test03.txt
test*.txt
Each file has same # of lines and same structure
I want to extract lines 20-25 and put them into a text file that I can manipulate in Excel.
Since there are 100s of files, it would be great if we could put the text file name on top of, or next to, the data too.
This is basically what I was able to do, but as you can see it's not exactly "fast".
thanks!
file1 = open("test01.txt", "r")
content = file1.readlines()
file1 = open("values.txt","w")
file1.write("test01.txt" + "\n")
file1.writelines(content[33:36])
file1.close()
file1 = open("test02.txt", "r")
content = file1.readlines()
#Append-adds at last
file1 = open("values.txt","a")#append mode
file1.write("test02.txt" + "\n")
file1.writelines(content[33:36])
file1.close()
file1 = open("test03.txt", "r")
content = file1.readlines()
#Append-adds at last
file1 = open("values.txt","a")#append mode
file1.write("test03.txt" + "\n")
file1.writelines(content[33:36])
file1.close()
A:
Here is a script where you can read all files in a directory and write the name of each file and its content into another file, like you did.
import os

ValuesTextFile = open("values.txt", "a")
Path = './files/'
for Filename in os.listdir(Path):
    print(Filename)
    # newline after the file name so it sits on its own line
    ValuesTextFile.write(Filename + "\n")
    File = open(Path + Filename, "r")
    Content = File.readlines()
    ValuesTextFile.writelines(Content[33:36])
    File.close()
ValuesTextFile.close()
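A variant of the same idea as a sketch, using pathlib and a context manager (assumes the text files sit in the current directory and match test*.txt; adjust the slice to the lines you need):
from pathlib import Path

with open("values.txt", "w") as out:
    for path in sorted(Path(".").glob("test*.txt")):
        lines = path.read_text().splitlines(keepends=True)
        out.write(path.name + "\n")
        out.writelines(lines[19:25])  # lines 20-25, as a zero-based slice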
| read entire folder and extract multiple lines and append to new file | I'm very new to python and this is far beyond what I'm capable of.
I have multiple text files
test01.txt
test02.txt
test03.txt
test*.txt
Each file has same # of lines and same structure
I want to extract lines 20-25 and put them into a text file that I can manipulate in Excel.
Since there are 100s of files, it would be great if we could put the text file name on top of, or next to, the data too.
This is basically what I was able to do, but as you can see it's not exactly "fast".
thanks!
file1 = open("test01.txt", "r")
content = file1.readlines()
file1 = open("values.txt","w")
file1.write("test01.txt" + "\n")
file1.writelines(content[33:36])
file1.close()
file1 = open("test02.txt", "r")
content = file1.readlines()
#Append-adds at last
file1 = open("values.txt","a")#append mode
file1.write("test02.txt" + "\n")
file1.writelines(content[33:36])
file1.close()
file1 = open("test03.txt", "r")
content = file1.readlines()
#Append-adds at last
file1 = open("values.txt","a")#append mode
file1.write("test03.txt" + "\n")
file1.writelines(content[33:36])
file1.close()
| [
"Here is a script where you can read all files in a directory and write the name of the file and the content into a another file like you did.\nimport os\n\nValuesTextFile = open(\"values.txt\",\"a\")\nPath = './files/'\nfor Filename in os.listdir(Path):\n print (Filename)\n ValuesTextFile.writelines(Filename)\n File = open(Path + Filename, \"r\")\n Content = File.readlines()\n ValuesTextFile.writelines(Content[33:36])\n File.close()\nValuesTextFile.close()\n\n"
] | [
0
] | [] | [] | [
"new_operator",
"python"
] | stackoverflow_0074679114_new_operator_python.txt |
Q:
pandas how to avoid iterations on rows?
Using Python 3.7+.
I want to split paragraphs into new rows. I need to use spaCy on each row to get the relevant result (not just split('.')). Is it possible with pandas vectorization? Any help would be much appreciated.
I have this df:
>>> df = pd.DataFrame({'num_legs': [2, 4, 8, 0],
... 'num_wings': [2, 0, 0, 0],
... 'some_description': ['falcons have wings. falcons fly', 'dog have 4 legs. they are the best', 'spiders create webs. spiders have 8 legs', 'fish swims. fish lives in water']},
... index=['falcon', 'dog', 'spider', 'fish'])
>>> df
num_legs num_wings some_description
falcon 2 2 'falcons have wings. falcons fly'
dog 4 0 'dog have 4 legs. they are the best'
spider 8 0 'spiders create webs. spiders have 8 legs'
fish 0 0 'fish swims. fish lives in water'
I want to iterate over the rows and split each description into its sentences, so the result would be:
num_legs num_wings some_description
falcon 2 2 'falcons have wings.'
falcon 2 2 'falcons fly.'
dog 4 0 'dog have 4 legs'
dog 4 0 'they are the best'
spider 8 0 'spiders create webs'
spider 8 0 'spiders have 8 legs'
fish 0 0 'fish swims.'
fish 0 0 'fish lives in water'
maybe the only way is with iterrows/itertuples (which I understand are bad practice)?
Thank you
A:
a = 'some_description'
df.assign(some_description=df[a].str.split(r'\. ')).explode(a)
result:
num_legs num_wings some_description
falcon 2 2 falcons have wings
falcon 2 2 falcons fly
dog 4 0 dog have 4 legs
dog 4 0 they are the best
spider 8 0 spiders create webs
spider 8 0 spiders have 8 legs
fish 0 0 fish swims
fish 0 0 fish lives in water
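Since the question asks for spaCy rather than a plain split('.'), the same explode pattern works with any function that returns a list of sentences. A sketch (assumes spaCy is installed and a pipeline such as en_core_web_sm is available):
import spacy

nlp = spacy.load("en_core_web_sm")

def split_sentences(text):
    # spaCy's sentence segmentation instead of a naive split on '.'
    return [sent.text for sent in nlp(text).sents]

a = 'some_description'
df = df.assign(some_description=df[a].map(split_sentences)).explode(a)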
| pandas how to avoid iterations on rows? | Using Python 3.7+.
I want to split paragraphs into new rows. I need to use spaCy on each row to get the relevant result (not just split('.')). Is it possible with pandas vectorization? Any help would be much appreciated.
I have this df:
>>> df = pd.DataFrame({'num_legs': [2, 4, 8, 0],
... 'num_wings': [2, 0, 0, 0],
... 'some_description': ['falcons have wings. falcons fly', 'dog have 4 legs. they are the best', 'spiders create webs. spiders have 8 legs', 'fish swims. fish lives in water']},
... index=['falcon', 'dog', 'spider', 'fish'])
>>> df
num_legs num_wings some_description
falcon 2 2 'falcons have wings. falcons fly'
dog 4 0 'dog have 4 legs. they are the best'
spider 8 0 'spiders create webs. spiders have 8 legs'
fish 0 0 'fish swims. fish lives in water'
I want to iterate over the rows and split each description into its sentences, so the result would be:
num_legs num_wings some_description
falcon 2 2 'falcons have wings.'
falcon 2 2 'falcons fly.'
dog 4 0 'dog have 4 legs'
dog 4 0 'they are the best'
spider 8 0 'spiders create webs'
spider 8 0 'spiders have 8 legs'
fish 0 0 'fish swims.'
fish 0 0 'fish lives in water'
maybe the only way is with iterrows/itertuples (which I understand are bad practice)?
Thank you
| [
"a = 'some_description'\ndf.assign(some_description=df[a].str.split(r'\\. ')).explode(a)\n\nresult:\n num_legs num_wings some_description\nfalcon 2 2 falcons have wings\nfalcon 2 2 falcons fly\ndog 4 0 dog have 4 legs\ndog 4 0 they are the best\nspider 8 0 spiders create webs\nspider 8 0 spiders have 8 legs\nfish 0 0 fish swims\nfish 0 0 fish lives in water\n\n"
] | [
0
] | [] | [] | [
"dataframe",
"pandas",
"python",
"vectorization"
] | stackoverflow_0074679289_dataframe_pandas_python_vectorization.txt |
Q:
Get specific values out of dictionary with multiple keys in Python
I want to extract multiple ISINs out of an output.json file in Python.
The output.json file looks like the following:
{'A1J780': {'ter': '0.20%', 'wkn': 'A1J780', 'isin': 'IE00B88DZ566'}, 'A1J7W9': {'ter': '0.20%', 'isin': 'IE00B8KMSQ34'}, 'LYX0VQ': {'isin': 'LU1302703878'}, 'A2AMYP': {'ter': '0.22%', 'savingsPlan': None, 'inceptionDate': '02.11.16', 'fundSize': '48', 'isin': 'IE00BD34DB16'}}
...
My current approach is the following:
with open('output.json') as f:
    data = json.load(f)

value_list = list()
for i in data:
    value_list.append(i['isin'])
print(value_list)
However, I receive the error message:
Traceback (most recent call last):
File "/Users/placeholder.py", line 73, in <module>
value_list.append(i['isin'])
~^^^^^^^^
TypeError: string indices must be integers, not 'str'
I would highly appreciate your input!
Thank you in advance!
A:
Use data.values() as the target in the for loop to iterate over the JSON objects. Looping over data itself iterates over the keys, which are string values (e.g. "A1J780").
data = {
    'A1J780': {'ter': '0.20%', 'wkn': 'A1J780', 'isin': 'IE00B88DZ566'},
    'A1J7W9': {'ter': '0.20%', 'isin': 'IE00B8KMSQ34'}
}
value_list = []
for i in data.values():
    value_list.append(i['isin'])
print(value_list)
Output:
['IE00B88DZ566', 'IE00B8KMSQ34']
If the isin key is not present in any of the dictionary objects then you would need to use i.get('isin') and check if the value is not None otherwise i['isin'] would raise an exception.
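The same extraction as a one-line comprehension, skipping any record without an isin key (a sketch):
value_list = [v['isin'] for v in data.values() if 'isin' in v]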
A:
The error message TypeError: string indices must be integers, not 'str' indicates that you are trying to index a string using a string key; strings only accept integer indices.
In your code, the i variable in the for loop is a string, because iterating over data yields its keys. However, you are trying to access the 'isin' value using the i['isin'] syntax, which is not valid on a string.
To fix this issue, you can use the i variable as the key to access the dictionary value, like this:
with open('output.json') as f:
    data = json.load(f)

value_list = list()
for i in data:
    value_list.append(data[i]['isin'])
print(value_list)
In this updated code, the data[i] syntax is used to access the dictionary value associated with the key i, and then the ['isin'] syntax is used to access the 'isin' value in the nested dictionary.
This code should produce the expected output of a list of ISINs from the output.json file.
| Get specific values out of dictionary with multiple keys in Python | I want to extract multiple ISINs out of a output.json file in python.
The output.json file looks like the following:
{'A1J780': {'ter': '0.20%', 'wkn': 'A1J780', 'isin': 'IE00B88DZ566'}, 'A1J7W9': {' 'ter': '0.20%', 'isin': 'IE00B8KMSQ34'}, 'LYX0VQ': {'isin': 'LU1302703878'}, 'A2AMYP': {'ter': '0.22%', 'savingsPlan': None, 'inceptionDate': '02.11.16', 'fundSize': '48', 'isin': 'IE00BD34DB16'}}
...
My current approach is the following:
with open('output.json') as f:
    data = json.load(f)

value_list = list()
for i in data:
    value_list.append(i['isin'])
print(value_list)
However, I receive the error message:
Traceback (most recent call last):
File "/Users/placeholder.py", line 73, in <module>
value_list.append(i['isin'])
~^^^^^^^^
TypeError: string indices must be integers, not 'str'
I would highly appreciate your input!
Thank you in advance!
| [
"Use data.values() as target in for loop to iterate over the JSON objects. Doing a loop over data iterates over the keys which is a string value (e.g. \"A1J780\").\ndata = {\n 'A1J780': {'ter': '0.20%', 'wkn': 'A1J780', 'isin': 'IE00B88DZ566'},\n 'A1J7W9': {'ter': '0.20%', 'isin': 'IE00B8KMSQ34'}\n}\nvalue_list = []\nfor i in data.values():\n value_list.append(i['isin'])\nprint(value_list)\n\nOutput:\n['IE00B88DZ566', 'IE00B8KMSQ34']\n\nIf the isin key is not present in any of the dictionary objects then you would need to use i.get('isin') and check if the value is not None otherwise i['isin'] would raise an exception.\n",
"The error message TypeError: string indices must be integers, not 'str' indicates that you are trying to access a dictionary value using a string as the key, but the type of the key should be an integer instead.\nIn your code, the i variable in the for loop is a string, because it represents the keys in the data dictionary. However, you are trying to access the 'isin' value in the dictionary using the i['isin'] syntax, which is not valid for a string key.\nTo fix this issue, you can use the i variable as the key to access the dictionary value, like this:\nwith open('output.json') as f:\n data = json.load(f)\n\nvalue_list = list()\nfor i in data:\n value_list.append(data[i]['isin'])\nprint(value_list)\n\nIn this updated code, the data[i] syntax is used to access the dictionary value associated with the key i, and then the ['isin'] syntax is used to access the 'isin' value in the nested dictionary.\nThis code should produce the expected output of a list of ISINs from the output.json file.\n"
] | [
0,
0
] | [] | [] | [
"dictionary",
"json",
"python"
] | stackoverflow_0074679216_dictionary_json_python.txt |
Q:
Disable `pip install` Timeout For Slow Connections
I recently moved to a place with terrible internet connection. Ever since then I have been having huge issues getting my programming environments set up with all the tools I need - you don't realize how many things you need to download until each one of those things takes over a day.
For this post I would like to try to figure out how to deal with this in pip.
The Problem
Almost every time I pip install something, it ends up timing out somewhere in the middle. It takes many tries until I get lucky enough to have it complete without a timeout. This happens with many different things I have tried, big or small. Every time an install fails, the next attempt starts all over again from 0%, no matter how far I got before.
I get something along the lines of
pip._vendor.urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='files.pythonhosted.org', port=443): Read timed out.
What I want to happen
Ideally I would like to either extend the amount of time pip waits before it declares a timeout, or be able to disable the timeout altogether.
I am not sure either of these are possible, so if anyone has any other solution for me that would be greatly appreciated as well.
Other Information
Not sure this helps any but what I found is that the only reliable way for me to download anything here is using torrents, as they do not restart a download once they lose connection, rather they always continue where they left off. If there is a way to use this fact in any way that would be nice too.
A:
Use the option --timeout <sec> to set the socket timeout.
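For example (package name and values illustrative):
pip install --timeout 120 --retries 10 numpy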
Also, as @Iain Shelvington mentioned, timeout = <sec> in pip configuration will also work.
TIP: Whenever you want to know something (maybe an option) about a command (tool), before googling, check the command's manual page with man <command>, use <command> --help, or look up that command's docs online. That can be very useful too (maybe better than Google).
A:
To set the timeout to 30 seconds, for example, the easiest way is executing pip config set global.timeout 30, or editing the pip configuration file pip.ini, located in the directory ~\AppData\Roaming\pip in the case of the Windows operating system. If the file does not exist there, create it and write:
[global]
timeout = 30
| Disable `pip install` Timeout For Slow Connections | I recently moved to a place with terrible internet connection. Ever since then I have been having huge issues getting my programming environments set up with all the tools I need - you don't realize how many things you need to download until each one of those things takes over a day.
For this post I would like to try to figure out how to deal with this in pip.
The Problem
Almost every time I pip install something, it ends up timing out somewhere in the middle. It takes many tries until I get lucky enough to have it complete without a timeout. This happens with many different things I have tried, big or small. Every time an install fails, the next attempt starts all over again from 0%, no matter how far I got before.
I get something along the lines of
pip._vendor.urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='files.pythonhosted.org', port=443): Read timed out.
What I want to happen
Ideally I would like to either extend the amount of time pip waits before it declares a timeout, or be able to disable the timeout altogether.
I am not sure either of these are possible, so if anyone has any other solution for me that would be greatly appreciated as well.
Other Information
Not sure this helps any but what I found is that the only reliable way for me to download anything here is using torrents, as they do not restart a download once they lose connection, rather they always continue where they left off. If there is a way to use this fact in any way that would be nice too.
| [
"Use option --timeout <sec> to set socket time out.\nAlso, as @Iain Shelvington mentioned, timeout = <sec> in pip configuration will also work.\nTIP: Every time you want to know something (maybe an option) about a command (tool), before googling, check the manual page of the command by using man <command> or use <command> --help or check that command's docs online will be very useful too (Maybe better than Google).\n",
"To set the timeout time to 30sec for example. The easiest way is executing: pip config global.timeout 30 or going to the pip configuration file pip.ini located in the directory ~\\AppData\\Roaming\\pip in the case of Windows operating system. If the file does not exist there, create it and write:\n[global]\ntimeout = 30.\n"
] | [
10,
0
] | [] | [] | [
"pip",
"python",
"request_timed_out"
] | stackoverflow_0059796680_pip_python_request_timed_out.txt |
Q:
How do I make an event listener with decorators in Python?
I want to make an event listener
like this:
@some.event
async def on_ready(some_info):
    print(some_info)

@some.event
async def on_error(err):
    print(err)
This is for when something is ready, or when a message is received (like with WebSockets). I'm using this for Discord, since some info is only available once the bot is identified or ready.
I've seen something like:
def add_listener(func, name):
    # ...

def remove_listener(func, name):
    # ...
But I don't really know how to use it or create one
A:
Quick example:
################################################################################
# the code for the "framework"
event1_listeners = []
event2_listeners = []

def listen_event1(func):
    event1_listeners.append(func)
    return func

def listen_event2(func):
    event2_listeners.append(func)
    return func

def process_event(event):
    if event["type"] == 1:
        for func in event1_listeners:
            func(event)
    elif event["type"] == 2:
        for func in event2_listeners:
            func(event)
    else:
        raise NotImplementedError(f"{event['type']=!r}")

################################################################################
# your code
@listen_event1
def handle_event1_v1(event):
    print(f"handle_event1_v1 : {event!r}")

@listen_event1
def handle_event1_v2(event):
    print(f"handle_event1_v2 : {event!r}")

@listen_event2
def handle_event2(event):
    print(f"handle_event2 : {event!r}")

################################################################################
# the events processed by the framework
process_event({"type": 1, "msg": "hello"})
process_event({"type": 2, "msg": "world"})
handle_event1_v1 : {'type': 1, 'msg': 'hello'}
handle_event1_v2 : {'type': 1, 'msg': 'hello'}
handle_event2 : {'type': 2, 'msg': 'world'}
Essentially, the decorators will store the function someplace, and when an event is received, the framework iterates over the functions registered for it.
Removing a listener dynamically is basically just removing the func reference from the list.
The decorator in this case is simply "sugar" to not having to do event1_listeners.append(func) yourself.
A:
Here's a simple class-based solution I quickly coded:
class Event:
    def __init__(self):
        # Initialise a list of listeners
        self.__listeners = []

    # Define a getter for the 'on' property which returns the decorator.
    @property
    def on(self):
        # A decorator to run addListener on the input function.
        def wrapper(func):
            self.addListener(func)
            return func
        return wrapper

    # Add and remove functions from the list of listeners.
    def addListener(self, func):
        if func in self.__listeners: return
        self.__listeners.append(func)

    def removeListener(self, func):
        if func not in self.__listeners: return
        self.__listeners.remove(func)

    # Trigger events.
    def trigger(self, args=[]):
        # Run all the functions that are saved.
        for func in self.__listeners: func(*args)
It allows you to create an Event that functions can 'subscribe' to:
evn = Event()
# Some code...
evn.trigger(['arg x','arg y'])
The functions can both subscribe to the event with decorators:
@evn.on
def some_function(x,y): pass
Or with the addListener method:
def some_function(x,y): pass
evn.addListener(some_function)
You can also remove listeners:
evn.removeListener(some_function)
To create something similar to what you asked for you can do something like this:
# some.py

from event import Event

class SomeClass:
    def __init__(self):
        # Private event variable
        self.__event = Event()
        # Public event variable (decorator)
        self.event = self.__event.on

some = SomeClass()
And then use it like so:
# main.py

from some import some

@some.event
async def on_ready(some_info):
    print(some_info)

@some.event
async def on_error(err):
    print(err)
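One caveat, since the question's handlers are async def: calling a coroutine function the way trigger does only creates a coroutine object and never runs its body. A sketch of an async-aware variant (assuming the listeners are coroutine functions and trigger itself is awaited inside a running event loop):
import asyncio

class AsyncEvent:
    def __init__(self):
        self._listeners = []

    def on(self, func):
        self._listeners.append(func)
        return func

    async def trigger(self, *args):
        # Run all coroutine listeners concurrently.
        await asyncio.gather(*(func(*args) for func in self._listeners))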
| How do I make an event listener with decorators in Python? | I want to make an event listener
like this:
@some.event
async def on_ready(some_info):
print(some_info)
@some.event
async def on_error(err):
print(err)
So for when something is ready, or if a message is received in like WebSockets, using this for Discord since some info is only available for when the Bot is Identified or Ready
I've seen something like:
def add_listener(func, name):
    # ...

def remove_listener(func, name):
    # ...
But I don't really know how to use it or create one
| [
"Quick example :\n################################################################################\n# the code for the \"framework\"\nevent1_listeners = []\nevent2_listeners = []\n\ndef listen_event1(func):\n event1_listeners.append(func)\n return func\n\ndef listen_event2(func):\n event2_listeners.append(func)\n return func\n\ndef process_event(event):\n if event[\"type\"] == 1:\n for func in event1_listeners:\n func(event)\n elif event[\"type\"] == 2:\n for func in event2_listeners:\n func(event)\n else:\n raise NotImplementedError(f\"{event['type']=!r}\")\n\n################################################################################\n# your code\n@listen_event1\ndef handle_event1_v1(event):\n print(f\"handle_event1_v1 : {event!r}\")\n\n@listen_event1\ndef handle_event1_v2(event):\n print(f\"handle_event1_v2 : {event!r}\")\n\n@listen_event2\ndef handle_event2(event):\n print(f\"handle_event2 : {event!r}\")\n\n################################################################################\n# the events processed by the framework\nprocess_event({\"type\": 1, \"msg\": \"hello\"})\nprocess_event({\"type\": 2, \"msg\": \"world\"})\n\nhandle_event1_v1 : {'type': 1, 'msg': 'hello'}\nhandle_event1_v2 : {'type': 1, 'msg': 'hello'}\nhandle_event2 : {'type': 2, 'msg': 'world'}\n\nEssentially, the decorators will store the function someplace, and when an event is received, the framework iterates over the functions registered for it.\nRemoving a listener dynamically is basically just removing the func reference from the list.\nThe decorator in this case is simply \"sugar\" to not having to do event1_listeners.append(func) yourself.\n",
"Here's a simple class-based solution I quickly coded:\nclass Event:\n def __init__(self):\n # Initialise a list of listeners\n self.__listeners = []\n \n # Define a getter for the 'on' property which returns the decorator.\n @property\n def on(self):\n # A declorator to run addListener on the input function.\n def wrapper(func):\n self.addListener(func)\n return func\n return wrapper\n \n # Add and remove functions from the list of listeners.\n def addListener(self,func):\n if func in self.__listeners: return\n self.__listeners.append(func)\n \n def removeListener(self,func):\n if func not in self.__listeners: return\n self.__listeners.remove(func)\n \n # Trigger events.\n def trigger(self,args = []):\n # Run all the functions that are saved.\n for func in self.__listeners: func(*args)\n\nIt allows you to create an Event that functions can 'subscribe' to:\nevn = Event()\n\n# Some code...\n\nevn.trigger(['arg x','arg y'])\n\nThe functions can both subscribe to the event with decorators:\[email protected]\ndef some_function(x,y): pass\n\nOr with the addListener method:\ndef some_function(x,y): pass\nevn.addListener(some_function)\n\nYou can also remove listeners:\nevn.removeListener(some_function)\n\nTo create something similar to what you asked for you can do something like this:\n# some.py\n\nfrom event import Event\n\nclass SomeClass:\n def __init__(self):\n # Private event variable\n self.__event = Event()\n # Public event variable (decorator)\n self.event = self.__event.on\n\nsome = SomeClass()\n\nAnd then use it like so:\n# main.py\n\nfrom some import some\n\[email protected]\nasync def on_ready(some_info):\n print(some_info)\n\[email protected]\nasync def on_error(err):\n print(err)\n\n"
] | [
1,
0
] | [] | [] | [
"decorator",
"discord",
"python"
] | stackoverflow_0070982565_decorator_discord_python.txt |
Q:
SQL injection vulnerability?
I have this simple website put together for learning purposes implementing a minor sanitization function
`
def sqlescape(txt):
    return (str(txt).replace(";", "&#59;").replace("'", "&#39;"))

def get_cipher(key):
    cipher = crypt.new(str.encode(salt+key))
    return cipher

def encode_body(body, key):
    if key == '':
        body = str.encode(body)
    else:
        cipher = get_cipher(key)
        body = cipher.encrypt(body)
    return base64.b64encode(body)

def decode_body(body, key):
    body = (base64.b64decode(body)).decode()
    if not key == '':
        cipher = get_cipher(key)
        body = cipher.decrypt(body)
        body = body.decode()
    return body

@app.route('/')
def home():
    return render_template('home.html')

@app.route('/new')
def new_task():
    return render_template('new_task.html')

@app.route('/addrec', methods=['POST', 'GET'])
def addrec():
    if request.method == 'POST':
        con = sql.connect(db)
        msg = ""
        try:
            title = request.form['title']
            body = request.form['body']
            key = request.form['password']
            body = encode_body(body, key)
            cur = con.cursor()
            cur.executescript("INSERT INTO tasks (title,body) VALUES (" +
                              html.unescape(sqlescape(title)) + "," + html.unescape(sqlescape(body.decode())) +
                              ");")
            con.commit()
            msg = "Record successfully added"
        except:
            con.rollback()
            msg = "error in insert operation"
        finally:
            return render_template("result.html", msg=msg)
            con.close()

@app.route('/task')
def my_route():
    taskid = request.args.get('id', default=1)
    con = sql.connect(db)
    con.row_factory = sql.Row
    cur = con.cursor()
    cur.execute("select title,body from tasks where id=?", taskid)
    (title, body) = cur.fetchone()
    return render_template(
        "task.html",
        title=title,
        body=body,  # body.decode(),
        taskid=taskid)

@app.route('/decrypt', methods=['POST', 'GET'])
def decrypt():
    if request.method == 'POST':
        taskid = request.form['id']
        key = request.form['password']
        con = sql.connect(db)
        con.row_factory = sql.Row
        cur = con.cursor()
        cur.execute("select title,body from tasks where id=?", taskid)
        (title, body) = cur.fetchone()
        body = decode_body(body, key)
        return render_template("decrypt.html", title=title, body=body)

@app.route('/list')
def list():
    con = sql.connect(db)
    con.row_factory = sql.Row
    cur = con.cursor()
    cur.execute("select title,id from tasks")
    rows = cur.fetchall()
    return render_template("list.html", rows=rows)

if not os.path.isfile(db):
    with sql.connect(db) as conn:
        conn.execute(
            'CREATE TABLE tasks (id integer primary key, title TEXT, body TEXT)'
        )
`
Using bandit, it still states the code might be susceptible to SQL injection. I am looking for any potential ways a user might use SQL injection on it, ideally with some examples, as I have been giving it a go myself to no avail and I am still new to this. Any pointers would be great.
A:
It looks like your code is vulnerable to SQL injection attacks in the addrec function. In particular, the following line of code concatenates user input directly into the SQL query string, which can allow an attacker to inject arbitrary SQL commands:
cur.executescript("INSERT INTO tasks (title,body) VALUES (" +
html.unescape(sqlescape(title)) + "," + html.unescape(sqlescape(body.decode())) +
");")
For example, if an attacker were to provide the following input for the title field:
'; DROP TABLE tasks; --
The resulting SQL query would be:
INSERT INTO tasks (title,body) VALUES (''; DROP TABLE tasks; --','[base64-encoded body value]');
This query would first insert a new record with a title value of '; DROP TABLE tasks; --', and then execute a DROP statement to delete the entire tasks table.
To protect against SQL injection attacks, you should use parameterized queries instead of concatenating user input into the query string. In Python, you can use the ? placeholder in your SQL query and pass the user input as a separate argument to the execute or executemany method. For example, the addrec function could be rewritten as follows:
@app.route('/addrec', methods=['POST', 'GET'])
def addrec():
    if request.method == 'POST':
        con = sqlite3.connect(db)
        msg = ""
        try:
            # Get the user input
            title = request.form['title']
            body = request.form['body']
            key = request.form['password']
            body = encode_body(body, key)

            # Use a parameterized query to insert the user input into the database
            cur = con.cursor()
            cur.execute("INSERT INTO tasks (title,body) VALUES (?, ?)", (title, body.decode()))

            # Commit the changes to the database
            con.commit()
            msg = "Record successfully added"
        except:
            con.rollback()
            msg = "error in insert operation"
        finally:
            return render_template("result.html", msg=msg)
            con.close()
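As a side note, sqlite3's cursor.execute() only accepts a single SQL statement at a time, while executescript() happily runs several statements separated by semicolons, which is exactly what makes the stacked ; DROP TABLE tasks; -- payload above possible. Switching to execute() with ? placeholders therefore closes both holes at once.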
| SQL injection vulnerability? | I have this simple website put together for learning purposes implementing a minor sanitization function
`
def sqlescape(txt):
    return (str(txt).replace(";", "&#59;").replace("'", "&#39;"))

def get_cipher(key):
    cipher = crypt.new(str.encode(salt+key))
    return cipher

def encode_body(body, key):
    if key == '':
        body = str.encode(body)
    else:
        cipher = get_cipher(key)
        body = cipher.encrypt(body)
    return base64.b64encode(body)

def decode_body(body, key):
    body = (base64.b64decode(body)).decode()
    if not key == '':
        cipher = get_cipher(key)
        body = cipher.decrypt(body)
        body = body.decode()
    return body

@app.route('/')
def home():
    return render_template('home.html')

@app.route('/new')
def new_task():
    return render_template('new_task.html')

@app.route('/addrec', methods=['POST', 'GET'])
def addrec():
    if request.method == 'POST':
        con = sql.connect(db)
        msg = ""
        try:
            title = request.form['title']
            body = request.form['body']
            key = request.form['password']
            body = encode_body(body, key)
            cur = con.cursor()
            cur.executescript("INSERT INTO tasks (title,body) VALUES (" +
                              html.unescape(sqlescape(title)) + "," + html.unescape(sqlescape(body.decode())) +
                              ");")
            con.commit()
            msg = "Record successfully added"
        except:
            con.rollback()
            msg = "error in insert operation"
        finally:
            return render_template("result.html", msg=msg)
            con.close()

@app.route('/task')
def my_route():
    taskid = request.args.get('id', default=1)
    con = sql.connect(db)
    con.row_factory = sql.Row
    cur = con.cursor()
    cur.execute("select title,body from tasks where id=?", taskid)
    (title, body) = cur.fetchone()
    return render_template(
        "task.html",
        title=title,
        body=body,  # body.decode(),
        taskid=taskid)

@app.route('/decrypt', methods=['POST', 'GET'])
def decrypt():
    if request.method == 'POST':
        taskid = request.form['id']
        key = request.form['password']
        con = sql.connect(db)
        con.row_factory = sql.Row
        cur = con.cursor()
        cur.execute("select title,body from tasks where id=?", taskid)
        (title, body) = cur.fetchone()
        body = decode_body(body, key)
        return render_template("decrypt.html", title=title, body=body)

@app.route('/list')
def list():
    con = sql.connect(db)
    con.row_factory = sql.Row
    cur = con.cursor()
    cur.execute("select title,id from tasks")
    rows = cur.fetchall()
    return render_template("list.html", rows=rows)

if not os.path.isfile(db):
    with sql.connect(db) as conn:
        conn.execute(
            'CREATE TABLE tasks (id integer primary key, title TEXT, body TEXT)'
        )
`
Using bandit, it still states the code might be susceptible to SQL injection. I am looking for any potential ways a user might use SQL injection on it, ideally with some examples, as I have been giving it a go myself to no avail and I am still new to this. Any pointers would be great.
| [
"It looks like your code is vulnerable to SQL injection attacks in the addrec function. In particular, the following line of code concatenates user input directly into the SQL query string, which can allow an attacker to inject arbitrary SQL commands:\ncur.executescript(\"INSERT INTO tasks (title,body) VALUES (\" +\n html.unescape(sqlescape(title)) + \",\" + html.unescape(sqlescape(body.decode())) +\n \");\")\n\nFor example, if an attacker were to provide the following input for the title field:\n'; DROP TABLE tasks; --\n\nThe resulting SQL query would be:\nINSERT INTO tasks (title,body) VALUES (''; DROP TABLE tasks; --','[base64-encoded body value]');\n\nThis query would first insert a new record with a title value of '; DROP TABLE tasks; --', and then execute a DROP statement to delete the entire tasks table.\nTo protect against SQL injection attacks, you should use parameterized queries instead of concatenating user input into the query string. In Python, you can use the ? placeholder in your SQL query and pass the user input as a separate argument to the execute or executemany method. For example, the addrec function could be rewritten as follows:\n @app.route('/addrec', methods=['POST', 'GET'])\ndef addrec():\n if request.method == 'POST':\n con = sqlite3.connect(db)\n msg = \"\"\n try:\n # Get the user input\n title = request.form['title']\n body = request.form['body']\n key = request.form['password']\n body = encode_body(body, key)\n\n # Use a parameterized query to insert the user input into the database\n cur = con.cursor()\n cur.execute(\"INSERT INTO tasks (title,body) VALUES (?, ?)\", (title, body.decode()))\n\n # Commit the changes to the database\n con.commit()\n msg = \"Record successfully added\"\n except:\n con.rollback()\n msg = \"error in insert operation\"\n\n finally:\n return render_template(\"result.html\", msg=msg)\n con.close()\n\n"
] | [
1
] | [] | [] | [
"python",
"sql",
"sql_injection"
] | stackoverflow_0074679347_python_sql_sql_injection.txt |
Q:
`TypeError: 'str' object is not callable` when a decorator function is called
I get a TypeError: 'str' object is not callable error when a decorator function is called. E.g. I call the function msgReturnAsList, which is actually meant to return a list, and therefore I do not understand why it is throwing an error that a str object is not callable.
I read at FreeCodeCamp that this TypeError occurs mainly in two situations, neither of which has anything to do with my case:
1. "If You Use str as a Variable Name in Python"
2. "If You Call a String Like a Function in Python"
Can somebody clarify the logic behind this, and how do I get msgReturnAsList to return the string converted to upper by wrapThis and then converted to a list by the problematic decorator function msgReturnAsList?
def wrapThis(a):
    a = str(a).upper()
    return a

@wrapThis
def msgReturnAsList(msg):
    msg = list(msg)
    return msg
b = "Convert to upper and output it as a list of letters."
print(msgReturnAsList(b))
I tried changing the list to a string; interestingly, the error remains the same.
A:
A decorator is called with the decorated function and must return a callable. Your wrapThis receives the function msgReturnAsList, converts it to its upper-cased string representation, and returns that string, so the name msgReturnAsList ends up bound to a string, and calling it raises TypeError: 'str' object is not callable. A decorator should therefore return a function (a wrapper):
def wrapThis(func):
    def wrapper_func(msg):
        msg = str(msg).upper()
        return func(msg)
    return wrapper_func

@wrapThis
def msgReturnAsList(msg):
    msg = list(msg)
    return msg

b = "Convert to upper and output it as a list of letters."
print(msgReturnAsList(b))
How to Create and Use Decorators in Python With Examples
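As a small optional refinement (standard library, not required for the fix): wrapping the inner function with functools.wraps preserves the decorated function's __name__ and docstring, which makes decorators like this easier to debug:
import functools

def wrapThis(func):
    @functools.wraps(func)  # keep msgReturnAsList's metadata on the wrapper
    def wrapper_func(msg):
        return func(str(msg).upper())
    return wrapper_func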
| `TypeError: 'str' object is not callable` when a decorator function is called | I get a TypeError: 'str' object is not callable error when a decorator function is called. E.g. I call the function msgReturnAsList, which is actually meant to return a list, and therefore I do not understand why it is throwing an error that a str object is not callable.
I read at FreeCodeCamp that this TypeError occurs mainly in two situations, neither of which has anything to do with my case:
1. "If You Use str as a Variable Name in Python"
2. "If You Call a String Like a Function in Python"
Can somebody clarify the logic behind this, and how do I get msgReturnAsList to return the string converted to upper by wrapThis and then converted to a list by the problematic decorator function msgReturnAsList?
def wrapThis(a):
    a = str(a).upper()
    return a

@wrapThis
def msgReturnAsList(msg):
    msg = list(msg)
    return msg
b = "Convert to upper and output it as a list of letters."
print(msgReturnAsList(b))
I tried changing the list to a string; interestingly, the error remains the same.
| [
"A decorator method should return a method:\ndef wrapThis(func):\n def wrapper_func(msg):\n msg = str(msg).upper()\n return func(msg)\n return wrapper_func\n\n@wrapThis\ndef msgReturnAsList(msg):\n msg = list(msg)\n return msg\n\nb = \"Convert to upper and output it as a list of letters.\"\nprint(msgReturnAsList(b))\n\nHow to Create and Use Decorators in Python With Examples\n"
] | [
1
] | [] | [] | [
"python",
"python_decorators"
] | stackoverflow_0074679394_python_python_decorators.txt |
Q:
Plot one series for one column with Polars dataframe and Plotly
I can't find how to plot these two series A and B with time on X.
from numpy import linspace
import polars as pl
import plotly.express as px
import plotly.io as pio
pio.renderers.default = 'browser'
times = linspace(1, 6, 10)
df = pl.DataFrame({
'time': times,
'A': times**2,
'B': times**3,
})
fig = px.line(df)
fig.show()
The data keeps showing as 10 series with 3 points, instead of 2 series with 10 points and the first column as X values.
Edit:
This line:
fig = px.line(df, x='time', y=['A', 'B'])
produces this error:
ValueError: Value of 'x' is not the name of a column in 'data_frame'. Expected one of [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] but received: time
Using polars 0.15.0 and plotly 5.11.0
A:
You use a Polars DataFrame instead of a Pandas DataFrame, and indexing works a little differently there, which is why you get this error. In order to plot it, one way is to convert the DataFrame from Polars to Pandas on the fly by using to_pandas():
fig = px.line(df.to_pandas(), x='time', y=['A', 'B'])
Output
You can also use this way:
fig = px.line(x=df['time'], y=[df["A"],df["B"]])
A:
In your code, you can use the line() function to plot the A and B columns from your DataFrame, which contain the time-series data for the two series. You can then set the x and y parameters of the line() function to specify which column should be used for the X-axis and which columns should be used for the Y-axis.
Here's an example of how you can use the line() function to plot the A and B columns from your DataFrame with time on the X-axis:
# Import the necessary modules
from numpy import linspace
import polars as pl
import plotly.express as px
# Create a DataFrame with time-series data for two series
times = linspace(1, 6, 10)
df = pl.DataFrame({
'time': times,
'A': times**2,
'B': times**3,
})
# Use the line() function to plot the A and B columns from the DataFrame
# (converted to pandas first, since px.line at these versions only resolves
# column names on pandas DataFrames)
fig = px.line(df.to_pandas(), x='time', y=['A', 'B'])
# Show the plot
fig.show()
This will create a line chart with the A and B columns from your DataFrame as the data series, and the time column as the X-axis. The resulting plot should show the two series with time on the X-axis.
| Plot one series for one column with Polars dataframe and Plotly | I can't find how to plot these two series A and B with time on X.
from numpy import linspace
import polars as pl
import plotly.express as px
import plotly.io as pio
pio.renderers.default = 'browser'
times = linspace(1, 6, 10)
df = pl.DataFrame({
'time': times,
'A': times**2,
'B': times**3,
})
fig = px.line(df)
fig.show()
The data keeps showing as 10 series with 3 points, instead of 2 series with 10 points and the first column as X values.
Edit:
This line:
fig = px.line(df, x='time', y=['A', 'B'])
produces this error:
ValueError: Value of 'x' is not the name of a column in 'data_frame'. Expected one of [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] but received: time
Using polars 0.15.0 and plotly 5.11.0
| [
"You use Polars Dataframe instead of Pandas dataframe and indexing is a little different here and what is why you have this error. In order to plot it, one way to do it is to convert the dataframe from Polars to Pandas on the fly by using to_pandas():\nfig = px.line(df.to_pandas(),x='time', y=['A', 'B'])\n\nOutput\n\nYou can also use this way:\nfig = px.line(x=df['time'], y=[df[\"A\"],df[\"B\"]])\n\n",
"In your code, you can use the line() function to plot the A and B columns from your DataFrame, which contain the time-series data for the two series. You can then set the x and y parameters of the line() function to specify which column should be used for the X-axis and which columns should be used for the Y-axis.\nHere's an example of how you can use the line() function to plot the A and B columns from your DataFrame with time on the X-axis:\n# Import the necessary modules\nfrom numpy import linspace\nimport plotly.express as px\n\n# Create a DataFrame with time-series data for two series\ntimes = linspace(1, 6, 10)\ndf = pl.DataFrame({\n 'time': times,\n 'A': times**2,\n 'B': times**3,\n})\n\n# Use the line() function to plot the A and B columns from the DataFrame\nfig = px.line(df, x='time', y=['A', 'B'])\n\n# Show the plot\nfig.show()\n\nThis will create a line chart with the A and B columns from your DataFrame as the data series, and the time column as the X-axis. The resulting plot should show the two series with time on the X-axis.\n"
] | [
1,
0
] | [] | [] | [
"plotly",
"python"
] | stackoverflow_0074678281_plotly_python.txt |
Q:
Can I interrogate a PySpark DataFrame to get the list of referenced columns?
Given a PySpark DataFrame is it possible to obtain a list of source columns that are being referenced by the DataFrame?
Perhaps a more concrete example might help explain what I'm after. Say I have a DataFrame defined as:
import pyspark.sql.functions as func
from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()
source_df = spark.createDataFrame(
[("pru", 23, "finance"), ("paul", 26, "HR"), ("noel", 20, "HR")],
["name", "age", "department"],
)
source_df.createOrReplaceTempView("people")
sqlDF = spark.sql("SELECT name, age, department FROM people")
df = sqlDF.groupBy("department").agg(func.max("age").alias("max_age"))
df.show()
which returns:
+----------+--------+
|department|max_age |
+----------+--------+
| finance| 23|
| HR| 26|
+----------+--------+
The columns that are referenced by df are [department, age]. Is it possible to get that list of referenced columns programmatically?
Thanks to Capturing the result of explain() in pyspark I know I can extract the plan as a string:
df._sc._jvm.PythonSQLUtils.explainString(df._jdf.queryExecution(), "formatted")
which returns:
== Physical Plan ==
AdaptiveSparkPlan (6)
+- HashAggregate (5)
+- Exchange (4)
+- HashAggregate (3)
+- Project (2)
+- Scan ExistingRDD (1)
(1) Scan ExistingRDD
Output [3]: [name#0, age#1L, department#2]
Arguments: [name#0, age#1L, department#2], MapPartitionsRDD[4] at applySchemaToPythonRDD at NativeMethodAccessorImpl.java:0, ExistingRDD, UnknownPartitioning(0)
(2) Project
Output [2]: [age#1L, department#2]
Input [3]: [name#0, age#1L, department#2]
(3) HashAggregate
Input [2]: [age#1L, department#2]
Keys [1]: [department#2]
Functions [1]: [partial_max(age#1L)]
Aggregate Attributes [1]: [max#22L]
Results [2]: [department#2, max#23L]
(4) Exchange
Input [2]: [department#2, max#23L]
Arguments: hashpartitioning(department#2, 200), ENSURE_REQUIREMENTS, [plan_id=60]
(5) HashAggregate
Input [2]: [department#2, max#23L]
Keys [1]: [department#2]
Functions [1]: [max(age#1L)]
Aggregate Attributes [1]: [max(age#1L)#12L]
Results [2]: [department#2, max(age#1L)#12L AS max_age#13L]
(6) AdaptiveSparkPlan
Output [2]: [department#2, max_age#13L]
Arguments: isFinalPlan=false
which is useful; however, it's not what I need. I need a list of the referenced columns. Is this possible?
Perhaps another way of asking the question is... is there a way to obtain the explain plan as an object that I can iterate over/explore?
A:
There is an object for that; unfortunately it's a Java object, and it is not translated to PySpark.
You can still access it with Spark constructs:
>>> df._jdf.queryExecution().executedPlan().apply(0).output().apply(0).toString()
u'department#1621'
>>> df._jdf.queryExecution().executedPlan().apply(0).output().apply(1).toString()
u'max_age#1632L'
You could loop through both of the above apply calls to get the information you are looking for, with something like:
plan = df._jdf.queryExecution().executedPlan()
steps = [ plan.apply(i).toString() for i in range(1,100) if not isinstance(plan.apply(i), type(None)) ]
Bit of a hack but apparently size doesn't work.
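As a rough alternative, since the question already shows how to get the formatted plan as a string, you can regex out the name#id attribute references. Note this is a heuristic sketch: it collects every attribute mentioned anywhere in the plan (inputs, outputs and aggregate buffers like max alike), so you would still need to filter it down:
import re

plan_str = df._sc._jvm.PythonSQLUtils.explainString(df._jdf.queryExecution(), "formatted")

# Attributes are printed as name#<exprId>, e.g. age#1L or department#2
referenced = sorted({m.group(1) for m in re.finditer(r"([A-Za-z_]\w*)#\d+", plan_str)})
print(referenced)  # e.g. ['age', 'department', 'max', 'max_age', 'name']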
A:
You can list a DataFrame's columns with the df.columns attribute, which returns the column names. Note, however, that these are the output columns of the final DataFrame (in your example, df.columns returns ['department', 'max_age']), not the source columns referenced by the plan, so this only partially answers the question.
Alternatively, you can also use the df.dtypes attribute, which returns a list of tuples containing the column names and their data types. For example, df.dtypes would return [('department', 'string'), ('max_age', 'bigint')].
| Can I interrogate a PySpark DataFrame to get the list of referenced columns? | Given a PySpark DataFrame is it possible to obtain a list of source columns that are being referenced by the DataFrame?
Perhaps a more concrete example might help explain what I'm after. Say I have a DataFrame defined as:
import pyspark.sql.functions as func
from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()
source_df = spark.createDataFrame(
[("pru", 23, "finance"), ("paul", 26, "HR"), ("noel", 20, "HR")],
["name", "age", "department"],
)
source_df.createOrReplaceTempView("people")
sqlDF = spark.sql("SELECT name, age, department FROM people")
df = sqlDF.groupBy("department").agg(func.max("age").alias("max_age"))
df.show()
which returns:
+----------+--------+
|department|max_age |
+----------+--------+
| finance| 23|
| HR| 26|
+----------+--------+
The columns that are referenced by df are [department, age]. Is it possible to get that list of referenced columns programmatically?
Thanks to Capturing the result of explain() in pyspark I know I can extract the plan as a string:
df._sc._jvm.PythonSQLUtils.explainString(df._jdf.queryExecution(), "formatted")
which returns:
== Physical Plan ==
AdaptiveSparkPlan (6)
+- HashAggregate (5)
+- Exchange (4)
+- HashAggregate (3)
+- Project (2)
+- Scan ExistingRDD (1)
(1) Scan ExistingRDD
Output [3]: [name#0, age#1L, department#2]
Arguments: [name#0, age#1L, department#2], MapPartitionsRDD[4] at applySchemaToPythonRDD at NativeMethodAccessorImpl.java:0, ExistingRDD, UnknownPartitioning(0)
(2) Project
Output [2]: [age#1L, department#2]
Input [3]: [name#0, age#1L, department#2]
(3) HashAggregate
Input [2]: [age#1L, department#2]
Keys [1]: [department#2]
Functions [1]: [partial_max(age#1L)]
Aggregate Attributes [1]: [max#22L]
Results [2]: [department#2, max#23L]
(4) Exchange
Input [2]: [department#2, max#23L]
Arguments: hashpartitioning(department#2, 200), ENSURE_REQUIREMENTS, [plan_id=60]
(5) HashAggregate
Input [2]: [department#2, max#23L]
Keys [1]: [department#2]
Functions [1]: [max(age#1L)]
Aggregate Attributes [1]: [max(age#1L)#12L]
Results [2]: [department#2, max(age#1L)#12L AS max_age#13L]
(6) AdaptiveSparkPlan
Output [2]: [department#2, max_age#13L]
Arguments: isFinalPlan=false
which is useful; however, it's not what I need. I need a list of the referenced columns. Is this possible?
Perhaps another way of asking the question is... is there a way to obtain the explain plan as an object that I can iterate over/explore?
| [
"There is an object for that unfortunately its a java object, and not translated to pyspark.\nYou can still access it with Spark constucts:\n>>> df._jdf.queryExecution().executedPlan().apply(0).output().apply(0).toString()\nu'department#1621'\n>>> df._jdf.queryExecution().executedPlan().apply(0).output().apply(1).toString()\nu'max_age#1632L'\n\nYou could loop through both the above apply to get the information you are looking for with something like:\nplan = df._jdf.queryExecution().executedPlan()\nsteps = [ plan.apply(i).toString() for i in range(1,100) if not isinstance(plan.apply(i), type(None)) ]\n\nBit of a hack but apparently size doesn't work.\n",
"Yes, it is possible to obtain a list of source columns that are being referenced by a PySpark DataFrame. To do this, you can use the DataFrame's df.columns attribute, which returns a list of the column names in the DataFrame. In your example, this would be df.columns which would return ['department', 'max_age'].\nAlternatively, you can also use the df.dtypes attribute, which returns a list of tuples containing the column names and their data types. For example, df.dtypes would return [('department', 'string'), ('max_age', 'bigint')].\n"
] | [
4,
0
] | [
"You can try the below codes, this will give you a column list and its data type in the data frame.\nfor field in df.schema.fields:\n print(field.name +\" , \"+str(field.dataType))\n\n"
] | [
-1
] | [
"apache_spark",
"pyspark",
"python"
] | stackoverflow_0074598689_apache_spark_pyspark_python.txt |
Q:
How do I copy all folder contents from one location to another location in Python?
I have been trying to make a python file that will copy contents from one folder to another. I would like it to work on any Windows system that I run it on. It must copy ALL things ...
I need a solution to this problem.
A:
Here is one way to do this using the shutil module in Python:
import shutil
# Replace the source and destination paths with the appropriate paths on your system
src_dir = "C:\\source\\folder"
dst_dir = "C:\\destination\\folder"
# Use the shutil.copytree function to copy everything from the source directory to the destination directory
shutil.copytree(src_dir, dst_dir)
This will copy all files and subdirectories from src_dir to dst_dir. Note that dst_dir must not already exist, otherwise the copytree function will raise an error (on Python 3.8+ you can pass dirs_exist_ok=True to allow it).
You can also use the copy function from the shutil module to copy individual files. For example:
import shutil
# Replace the source and destination paths with the appropriate paths on your system
src_file = "C:\\source\\folder\\file.txt"
dst_file = "C:\\destination\\folder\\file.txt"
# Use the shutil.copy function to copy the file
shutil.copy(src_file, dst_file)
This will copy the file at src_file to the destination at dst_file. If dst_file already exists, it will be overwritten.
You can use these functions in a loop to iterate over all files in a directory and copy them to the destination directory. Here is an example:
import os
import shutil
# Replace the source and destination paths with the appropriate paths on your system
src_dir = "C:\\source\\folder"
dst_dir = "C:\\destination\\folder"
# Create the destination directory if it does not already exist
if not os.path.exists(dst_dir):
    os.makedirs(dst_dir)

# Iterate over all files in the source directory
for filename in os.listdir(src_dir):
    # Construct the full paths for the source and destination files
    src_file = os.path.join(src_dir, filename)
    dst_file = os.path.join(dst_dir, filename)

    # Use the shutil.copy function to copy the file
    shutil.copy(src_file, dst_file)
A:
import shutil
import os

# Path to the source directory
source = 'C:/path/to/source/folder'

# Path to the destination directory
destination = 'C:/path/to/destination/folder'

# List all the files in the source directory
source_files = os.listdir(source)

# Iterate over all the files in the source directory
for file_name in source_files:
    # Create the full path of the file in the source directory
    full_file_name = os.path.join(source, file_name)
    # If the entry is a directory, copy it as a corresponding
    # directory tree in the destination
    if os.path.isdir(full_file_name):
        shutil.copytree(full_file_name, os.path.join(destination, file_name))
    else:
        # Copy the file from source to destination
        shutil.copy2(full_file_name, destination)
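On Python 3.8 or newer the whole job also collapses to a single call, since dirs_exist_ok=True lets copytree write into a destination that already exists:
import shutil

shutil.copytree('C:/path/to/source/folder',
                'C:/path/to/destination/folder',
                dirs_exist_ok=True)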
| How do I copy all folder contents from one location to another location in Python? | I have been trying to make a python file that will copy contents from one folder to another. I would like it to work on any Windows system that I run it on. It must copy ALL things ...
I need a solution to this problem.
| [
"Here is one way to do this using the shutil module in Python:\nimport shutil\n\n# Replace the source and destination paths with the appropriate paths on your system\nsrc_dir = \"C:\\\\source\\\\folder\"\ndst_dir = \"C:\\\\destination\\\\folder\"\n\n# Use the shutil.copytree function to copy everything from the source directory to the destination directory\nshutil.copytree(src_dir, dst_dir)\n\nThis will copy all files and subdirectories from src_dir to dst_dir. Note that dst_dir must not already exist, otherwise the copytree function will raise an error.\nYou can also use the copy function from the shutil module to copy individual files. For example:\nimport shutil\n\n# Replace the source and destination paths with the appropriate paths on your system\nsrc_file = \"C:\\\\source\\\\folder\\\\file.txt\"\ndst_file = \"C:\\\\destination\\\\folder\\\\file.txt\"\n\n# Use the shutil.copy function to copy the file\nshutil.copy(src_file, dst_file)\n\nThis will copy the file at src_file to the destination at dst_file. If dst_file already exists, it will be overwritten.\nYou can use these functions in a loop to iterate over all files in a directory and copy them to the destination directory. Here is an example:\nimport os\nimport shutil\n\n# Replace the source and destination paths with the appropriate paths on your system\nsrc_dir = \"C:\\\\source\\\\folder\"\ndst_dir = \"C:\\\\destination\\\\folder\"\n\n# Create the destination directory if it does not already exist\nif not os.path.exists(dst_dir):\n os.makedirs(dst_dir)\n\n# Iterate over all files in the source directory\nfor filename in os.listdir(src_dir):\n # Construct the full paths for the source and destination files\n src_file = os.path.join(src_dir, filename)\n dst_file = os.path.join(dst_dir, filename)\n \n # Use the shutil.copy function to copy the file\n shutil.copy(src_file, dst_file)\n\n",
"import shutil\nimport os\n\n# Path to the directory\nsource = 'C:/path/to/source/folder'\n\n# Path to the destination directory\ndestination = 'C:/path/to/destination/folder'\n\n# Copy contents from source to destination\nshutil.copytree(source, destination)\n\n# List all the files in source directory\nsource_files = os.listdir(source)\n\n# Iterate over all the files in source directory\nfor file_name in source_files:\n # Create full path of file in source directory\n full_file_name = os.path.join(source, file_name)\n # If file is a directory, create corresponding directory in destination\n # directory\n if os.path.isdir(full_file_name):\n shutil.copytree(full_file_name, os.path.join(destination, file_name))\n else:\n # Copy the file from source to destination\n shutil.copy2(full_file_name, destination)\n\n"
] | [
0,
0
] | [] | [] | [
"coding_style",
"copy",
"paste",
"python",
"windows"
] | stackoverflow_0074679356_coding_style_copy_paste_python_windows.txt |
Q:
The problem of not being able to change the background color, thickness and position of the texts in the Streamlit Multiple Page App's Side Bar
Streamlit Version: 1.13.0
in Homepage.py:
import streamlit as st
st.set_page_config(
page_title= "Multipage App",
page_icon="")
st.title("Main Page")
st.markdown('<style>div[class="css-6qob1r e1fqkh3o3"] {color:black; font-weight: 900; background: url("https://media2.giphy.com/media/46hpy8xB3MiHfruixn/giphy.gif");background-repeat: no-repeat;background-size:350%;} </style>', unsafe_allow_html=True)
st.markdown('<style>div[class="css-y3drt2 e1fqkh3o5"] {color:red; } </style>', unsafe_allow_html=True)#NOT WORKING-------------------------------------------
I want to change the color, font, and position of the Homepage, Application, and Contact texts. How do I do this?
A:
You can do this via inline CSS and st.markdown, although it's admittedly hacky.
def _set_block_container_style(
    max_width: int = 1200,
    max_width_100_percent: bool = False,
    padding_top: int = 1,
    padding_right: int = 1,
    padding_left: int = 1,
    padding_bottom: int = 1,
):
    if max_width_100_percent:
        max_width_str = "max-width: 100%;"
    else:
        max_width_str = f"max-width: {max_width}px;"

    styl = f"""
        <style>
        .reportview-container .main .block-container{{
            {max_width_str}
            padding-top: {padding_top}rem;
            padding-right: {padding_right}rem;
            padding-left: {padding_left}rem;
            padding-bottom: {padding_bottom}rem;
        }}
        </style>
    """
    st.markdown(styl, unsafe_allow_html=True)
A:
You have to pay close attention to the classes to pick the right one, and also make sure you wrap the CSS in a valid format in order to get it working.
This is a workaround which I think should solve your problem, but I am not really sure whether the class names change between different versions of Streamlit.
st.markdown("""
<style>
ul[class="css-wjbhl0 e1fqkh3o9"]{
position: relative;
padding-top: 2rem;
display: flex;
justify-content: center;
flex-direction: column;
align-items: center;
}
.css-17lntkn {
font-weight: bold;
font-size: 20px;
color: yellow;
}
.css-pkbazv {
font-weight: bold;
font-size: 20px;
}
</style>""", unsafe_allow_html=True)
| The problem of not being able to change the background color, thickness and position of the texts in the Streamlit Multiple Page App's Side Bar | Streamlit Version: 1.13.0
in Homepage.py:
import streamlit as st
st.set_page_config(
page_title= "Multipage App",
page_icon="")
st.title("Main Page")
st.markdown('<style>div[class="css-6qob1r e1fqkh3o3"] {color:black; font-weight: 900; background: url("https://media2.giphy.com/media/46hpy8xB3MiHfruixn/giphy.gif");background-repeat: no-repeat;background-size:350%;} </style>', unsafe_allow_html=True)
st.markdown('<style>div[class="css-y3drt2 e1fqkh3o5"] {color:red; } </style>', unsafe_allow_html=True)#NOT WORKING-------------------------------------------
I want to change the color, font, and position of the Homepage, Application, and Contact texts. How do I do this?
| [
"You can do this via inline CSS and st.markdown, although it's admittedly hacky.\ndef _set_block_container_style(\n max_width: int = 1200,\n max_width_100_percent: bool = False,\n padding_top: int = 1,\n padding_right: int = 1,\n padding_left: int = 1,\n padding_bottom: int = 1,\n):\n\nif max_width_100_percent:\n max_width_str = f\"max-width: 100%;\"\nelse:\n max_width_str = f\"max-width: {max_width}px;\"\n \nstyl = f\"\"\"\n <style>\n .reportview-container .main .block-container{{\n {max_width_str}\n padding-top: {padding_top}rem;\n padding-right: {padding_right}rem;\n padding-left: {padding_left}rem;\n padding-bottom: {padding_bottom}rem;\n }}\n }}\n </style>\n\n",
"You have to pay close attention to the classes to get the right class and also making sure you wrap the css in a valid format in order to get it working.\nThis is a work around which I think should solve your problem. But I am not really sure if there are change in class names in deferent versions of streamlit.\nst.markdown(\"\"\"\n <style>\n ul[class=\"css-wjbhl0 e1fqkh3o9\"]{\n position: relative;\n padding-top: 2rem;\n display: flex;\n justify-content: center;\n flex-direction: column;\n align-items: center;\n }\n \n .css-17lntkn {\n font-weight: bold;\n font-size: 20px;\n color: yellow;\n }\n \n .css-pkbazv {\n font-weight: bold;\n font-size: 20px;\n }\n </style>\"\"\", unsafe_allow_html=True)\n\n"
] | [
0,
0
] | [] | [] | [
"css",
"html",
"python",
"python_3.x",
"streamlit"
] | stackoverflow_0074551058_css_html_python_python_3.x_streamlit.txt |
Q:
Plotly imshow reversing y labels reverses the image
I'd like to visualize a 20x20 matrix, where top left point is (-10, 9) and lower right point is (9, -10). So the x is increasing from left to right and y is decreasing from top to bottom. So my idea was to pass x labels as a list: [-10, -9 ... 9, 9] and y labels as [9, 8 ... -9, -10]. This worked as intended in seaborn (matplotlib), however doing so in plotly just reverses the image vertically. Here's the code:
import numpy as np
import plotly.express as px
img = np.arange(20**2).reshape((20, 20))
fig = px.imshow(img,
x=list(range(-10, 10)),
y=list(range(-10, 10)),
)
fig.show()
import numpy as np
import plotly.express as px
img = np.arange(20**2).reshape((20, 20))
fig = px.imshow(img,
x=list(range(-10, 10)),
y=list(reversed(range(-10, 10))),
)
fig.show()
Why is this happening and how can I fix it?
EDIT: Adding seaborn code to see the difference. As you can see, reversing the range for labels only changes the labels and has no effect on the image whatsoever, this is the effect I want in plotly.
import seaborn as sns
import numpy as np
img = np.arange(20**2).reshape((20, 20))
sns.heatmap(img,
xticklabels=list(range(-10, 10)),
yticklabels=list(range(-10, 10))
)
import seaborn as sns
import numpy as np
img = np.arange(20**2).reshape((20, 20))
sns.heatmap(img,
xticklabels=list(range(-10, 10)),
yticklabels=list(reversed(range(-10, 10)))
)
A:
In plotly, px.imshow() treats the x and y lists as actual data coordinates rather than mere tick labels, so passing a decreasing y list reorders the rows instead of just relabeling the axis. To get the desired visualization you can flip the image rows to compensate and tell Plotly to place the origin at the bottom:
img_flipped = img[::-1]
fig = px.imshow(img_flipped,
                x=list(range(-10, 10)),
                y=list(range(-10, 10)),
                origin='lower')
This should produce the desired visualization: the image looks the same as before, with 9 at the top of the y-axis and -10 at the bottom.
A:
As I mentioned in my comment, the internal representation of px.imshow() is a heatmap, so I coded your objective directly with a Heatmap graph object instead. This is not for any deep reason, just a different approach, since I couldn't achieve it with px.imshow().
import plotly.graph_objects as go
img = np.arange(20**2).reshape((20, 20))
fig = go.Figure(data=go.Heatmap(z=img.tolist()[::-1]))
fig.update_yaxes(tickvals=np.arange(0, 20, 1), ticktext=[str(x) for x in np.arange(-10, 10, 1)])
fig.update_xaxes(tickvals=np.arange(0, 20, 1), ticktext=[str(x) for x in np.arange(-10, 10, 1)])
fig.update_layout(autosize=False, height=500, width=500)
fig.show()
A:
It looks like the issue is that the y values in the px.imshow() function are being interpreted as the actual y-values for the data points in your matrix, rather than labels for the y-axis. Therefore, when you provide y=list(reversed(range(-10, 10))), the y-values of your data points are being reversed, resulting in the vertical reversal of the image.
One way to fix this is to keep the image data untouched and only relabel the axis: leave y out of px.imshow() entirely, then call fig.update_yaxes() with the tickvals and ticktext arguments to specify the positions and labels for the y-axis ticks, respectively.
For example:
import numpy as np
import plotly.express as px

img = np.arange(20**2).reshape((20, 20))

y_values = list(range(-10, 10))

fig = px.imshow(img, x=list(range(-10, 10)))
fig.update_yaxes(title="y-axis",
                 tickvals=list(range(20)),
                 ticktext=[str(v) for v in reversed(y_values)])
fig.show()
This should produce an image where the y-axis labels are in the correct order, without reversing the image.
| Plotly imshow reversing y labels reverses the image | I'd like to visualize a 20x20 matrix, where top left point is (-10, 9) and lower right point is (9, -10). So the x is increasing from left to right and y is decreasing from top to bottom. So my idea was to pass x labels as a list: [-10, -9 ... 9, 9] and y labels as [9, 8 ... -9, -10]. This worked as intended in seaborn (matplotlib), however doing so in plotly just reverses the image vertically. Here's the code:
import numpy as np
import plotly.express as px
img = np.arange(20**2).reshape((20, 20))
fig = px.imshow(img,
x=list(range(-10, 10)),
y=list(range(-10, 10)),
)
fig.show()
import numpy as np
import plotly.express as px
img = np.arange(20**2).reshape((20, 20))
fig = px.imshow(img,
x=list(range(-10, 10)),
y=list(reversed(range(-10, 10))),
)
fig.show()
Why is this happening and how can I fix it?
EDIT: Adding seaborn code to see the difference. As you can see, reversing the range for labels only changes the labels and has no effect on the image whatsoever, this is the effect I want in plotly.
import seaborn as sns
import numpy as np
img = np.arange(20**2).reshape((20, 20))
sns.heatmap(img,
xticklabels=list(range(-10, 10)),
yticklabels=list(range(-10, 10))
)
import seaborn as sns
import numpy as np
img = np.arange(20**2).reshape((20, 20))
sns.heatmap(img,
xticklabels=list(range(-10, 10)),
yticklabels=list(reversed(range(-10, 10)))
)
| [
"To get the desired visualization in plotly, you will need to specify the x and y values manually. Instead of passing in the labels as a list, you should specify the x and y values as a list of tuples, like so:\nx = [(-10, 9), (-9, 8), ... (9, -10)]\ny = [(-10, 9), (-9, 8), ... (9, -10)]\n\nThis should produce the desired visualization.\n",
"As I mentioned in my comment, the internal representation of px.imshow() is a heatmap. I coded your objective with a heatmap in a graph object. I didn't change this for any clear reason, I just took a different approach because I couldn't achieve it with px.imshow().\nimport plotly.graph_objects as go\n\nimg = np.arange(20**2).reshape((20, 20))\n\nfig = go.Figure(data=go.Heatmap(z=img.tolist()[::-1]))\n\nfig.update_yaxes(tickvals=np.arange(0, 20, 1), ticktext=[str(x) for x in np.arange(-10, 10, 1)])\nfig.update_xaxes(tickvals=np.arange(0, 20, 1), ticktext=[str(x) for x in np.arange(-10, 10, 1)])\nfig.update_layout(autosize=False, height=500, width=500)\nfig.show()\n\n\n",
"It looks like the issue is that the y values in the px.imshow() function are being interpreted as the actual y-values for the data points in your matrix, rather than labels for the y-axis. Therefore, when you provide y=list(reversed(range(-10, 10))), the y-values of your data points are being reversed, resulting in the vertical reversal of the image.\nOne way to fix this is to use the yaxis_title argument in the px.imshow() function to specify the title for the y-axis, and then use the tickvals and ticktext arguments to specify the values and labels for the y-axis ticks, respectively.\nFor example:\nimport numpy as np\nimport plotly.express as px\n\nimg = np.arange(20**2).reshape((20, 20))\n\ny_values = list(range(-10, 10))\n\nfig = px.imshow(img,\n x=list(range(-10, 10)),\n yaxis_title=\"y-axis\",\n tickvals=y_values,\n ticktext=reversed(y_values),\n )\nfig.show()\n\nThis should produce an image where the y-axis labels are in the correct order, without reversing the image.\n"
] | [
2,
2,
0
] | [
"import plotly.graph_objects as go\nimport plotly.express as px\n\nimg = np.arange(20**2).reshape((20, 20))\nfig = px.imshow(img,\n x=list(range(-10, 10)),\n y=list(range(10, 30, 1)),\n)\n\nfig.show()\n\nOutput:\n\n"
] | [
-1
] | [
"plotly",
"plotly_python",
"python"
] | stackoverflow_0074601283_plotly_plotly_python_python.txt |
Q:
How do I assign values to letters that come from a CSV file?
I am calculating the gpa of a given curriculum. I want to multiply the number of credits by the grade that the student achieved. I want to make the program know that A in the csv file is equal to 4, B is equal to 3 and so on.
I tried telling the program that the letters on the slice of data equal a number, but it didn't work
I have written this so far:
def over_gpa(data):
    total_cred = np.sum(data[:,3])
    messi = np.unique(data[:,3:])
    for k in messi:
        value = []
        y = data[:,3]
        f = data[:,4]
        x = y * f
        value.append(x)
    m = np.sum(value)
    total_gpa = m/total_cred
    print(total_gpa)

over_gpa(data)
A:
I'd initialize a dict that maps letters to grades and then apply it to a new column.
vals = { "A" : 4, "B" : 3, "C" : 2, "D" : 1, "F" : 0 }
df['Grade number'] = df['Grade'].apply(lambda x: vals.get(x))
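A minimal sketch of the full GPA computation on top of that mapping, assuming a pandas DataFrame with hypothetical Credits and Grade columns standing in for the CSV contents:
import pandas as pd

df = pd.DataFrame({"Credits": [3, 4, 3], "Grade": ["A", "B", "A"]})

vals = {"A": 4, "B": 3, "C": 2, "D": 1, "F": 0}
df["Grade number"] = df["Grade"].map(vals)

# Weighted GPA: sum(credits * grade points) / total credits
gpa = (df["Credits"] * df["Grade number"]).sum() / df["Credits"].sum()
print(round(gpa, 2))  # 3.6 for this sample data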
| How do I assign values to letters that come from a CSV file? | I am calculating the gpa of a given curriculum. I want to multiply the number of credits by the grade that the student achieved. I want to make the program know that A in the csv file is equal to 4, B is equal to 3 and so on.
I tried telling the program that the letters on the slice of data equal a number, but it didn't work
I have written this so far:
def over_gpa(data):
    total_cred = np.sum(data[:,3])
    messi = np.unique(data[:,3:])
    for k in messi:
        value = []
        y = data[:,3]
        f = data[:,4]
        x = y * f
        value.append(x)
    m = np.sum(value)
    total_gpa = m/total_cred
    print(total_gpa)

over_gpa(data)
| [
"I'd initialize a dict that maps letters to grades and then apply it to a new column.\nvals = { \"A\" : 4, \"B\" : 3, \"C\" : 2, \"D\" : 1, \"F\" : 0 }\ndf['Grade number'] = df['Grade'].apply(lambda x: vals.get(x))\n\n"
] | [
0
] | [] | [] | [
"assign",
"python"
] | stackoverflow_0074679411_assign_python.txt |
Q:
How can I make the movement faster if I press the key?
I'm making a Tetris game now. I want to implement it so that if I press the down key, the block falls quickly, and if I press the left and right keys, the block moves quickly. Key presses are read with the pygame.key.get_pressed() function, and pygame.time.set_timer() is used to change the speed. The game speed was set with an interval of 600 for pygame.time.set_timer(), and when I press the down key the interval is set to 150 so the block drops quickly. The problem is implementing the same behavior for the left and right arrow keys. Changing the interval does make the left and right movement faster, but since pygame.time.set_timer() changes the speed of the entire game, the block also falls quickly instead of only moving sideways. Is there a way to speed up left and right movement without touching the speed of everything else? I'd appreciate it if you let me know, thanks!
code
elif start:
    for event in pygame.event.get():
        attack_stack = 0
        pos = pygame.mouse.get_pos()

        if event.type == QUIT:
            done = True
        elif event.type == USEREVENT:
            # Set speed
            if not game_over:
                keys_pressed = pygame.key.get_pressed()
                # Soft drop
                if keys_pressed[K_DOWN]:
                    pygame.time.set_timer(pygame.USEREVENT, 100)
                elif keys_pressed[K_RIGHT]:
                    pygame.time.set_timer(pygame.USEREVENT, 100)
                    if not is_rightedge1(dx, dy, mino_en, rotation, matrix):
                        ui_variables.move_sound.play()
                        dx += 1
                elif keys_pressed[K_LEFT]:
                    pygame.time.set_timer(pygame.USEREVENT, 100)
                    if not is_leftedge1(dx, dy, mino_en, rotation, matrix):
                        ui_variables.move_sound.play()
                        dx -= 1
                else:
                    pygame.time.set_timer(pygame.USEREVENT, 600)
A:
It looks to me that the reason your piece is dropping faster when you press a button is because if the user doesn't press a button the timer is set to 600ms. If they do press a button then the timer is set to 100ms.
As for how to fix this, you'll need to decouple the downward piece movement and user-input movement. I'd suggest making another event that only handles downward movement.
DROPEVENT = pygame.USEREVENT + 1

elif event.type == DROPEVENT:
    pygame.time.set_timer(DROPEVENT, 600)
    # Your piece drop logic.
This snippet is not complete nor tested at all, but I hope it gets the idea across. You don't seem to have posted the logic that handles dropping the piece, but you would need to move this into the new event.
A:
Waiting a long time (100 ms is already long) before reacting to user input feels bad, so you don’t want to wait a full piece-movement time before checking for input again just because there is none at the moment you check. Instead, poll for input at a steady pace of (say) 30 Hz; for simplicity, the usual approach is to just run the whole game at that frequency even if nothing needs to change on the screen for some (or even most) frames. (This technique also naturally allows smooth animations of or between piece movements.)
The usual implementation of “nothing” on a frame is to adjust the piece’s position every frame but integer-divide that position by some constant before using it for anything other than keeping track of its progress toward the next movement (like drawing it or checking for collisions). You might use pixels as the “invisible unit” when movement must be by whole tiles; games where sprites can be drawn at any pixel divide each into some convenient number of subpixels in the same fashion.
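Combining both ideas (a dedicated gravity counter plus steady per-frame input polling), a minimal sketch might look like this; the dx/dy updates and collision checks from the question are left as placeholder comments, and all frame counts are assumptions to tune:
import pygame

pygame.init()
screen = pygame.display.set_mode((300, 600))
clock = pygame.time.Clock()

FALL_DELAY = 18   # ~600 ms of gravity at 30 FPS
MOVE_DELAY = 3    # ~100 ms of key repeat at 30 FPS
fall_counter = move_counter = 0

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False

    keys = pygame.key.get_pressed()

    # Horizontal movement repeats every MOVE_DELAY frames while a key is held
    move_counter += 1
    if move_counter >= MOVE_DELAY:
        move_counter = 0
        if keys[pygame.K_LEFT]:
            pass   # dx -= 1 after the is_leftedge1 check
        elif keys[pygame.K_RIGHT]:
            pass   # dx += 1 after the is_rightedge1 check

    # Gravity runs on its own counter; holding down shortens the delay
    fall_counter += 1
    fall_delay = MOVE_DELAY if keys[pygame.K_DOWN] else FALL_DELAY
    if fall_counter >= fall_delay:
        fall_counter = 0
        pass       # dy += 1 (normal drop logic)

    clock.tick(30)  # poll input at a steady 30 Hz

pygame.quit()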
| How can I make the movement faster if I press the key? | I'm making a Tetris game now. I want to implement it so that if I press the down key, the block falls quickly, and if I press the left and right keys, the block moves quickly. Key presses are read with the pygame.key.get_pressed() function, and pygame.time.set_timer() is used to change the speed. The game speed was set with an interval of 600 for pygame.time.set_timer(), and when I press the down key the interval is set to 150 so the block drops quickly. The problem is implementing the same behavior for the left and right arrow keys. Changing the interval does make the left and right movement faster, but since pygame.time.set_timer() changes the speed of the entire game, the block also falls quickly instead of only moving sideways. Is there a way to speed up left and right movement without touching the speed of everything else? I'd appreciate it if you let me know, thanks!
code
elif start:
    for event in pygame.event.get():
        attack_stack = 0
        pos = pygame.mouse.get_pos()

        if event.type == QUIT:
            done = True
        elif event.type == USEREVENT:
            # Set speed
            if not game_over:
                keys_pressed = pygame.key.get_pressed()
                # Soft drop
                if keys_pressed[K_DOWN]:
                    pygame.time.set_timer(pygame.USEREVENT, 100)
                elif keys_pressed[K_RIGHT]:
                    pygame.time.set_timer(pygame.USEREVENT, 100)
                    if not is_rightedge1(dx, dy, mino_en, rotation, matrix):
                        ui_variables.move_sound.play()
                        dx += 1
                elif keys_pressed[K_LEFT]:
                    pygame.time.set_timer(pygame.USEREVENT, 100)
                    if not is_leftedge1(dx, dy, mino_en, rotation, matrix):
                        ui_variables.move_sound.play()
                        dx -= 1
                else:
                    pygame.time.set_timer(pygame.USEREVENT, 600)
| [
"It looks to me that the reason your piece is dropping faster when you press a button is because if the user doesn't press a button the timer is set to 600ms. If they do press a button then the timer is set to 100ms.\nAs for how to fix this, you'll need to decouple the downward piece movement and user-input movement. I'd suggest making another event that only handles downward movement.\nDROPEVENT = pygame.USEREVENT + 1\n\nelif event.Type == DROPEVENT:\n pygame.time.set_timer(DROPEVENT, 600)\n # Your piece drop logic.\n\nThis snippet is not complete nor tested at all, but I hope it get the idea across. You don't seem to have posted the logic that handles dropping the piece, but you would need to move this into the new event.\n",
"Waiting a long time (100 ms is already long) before reacting to user input feels bad, so you don’t want to wait a full piece-movement time before checking for input again just because there is none at the moment you check. Instead, poll for input at a steady pace of (say) 30 Hz; for simplicity, the usual approach is to just run the whole game at that frequency even if nothing needs to change on the screen for some (or even most) frames. (This technique also naturally allows smooth animations of or between piece movements.)\nThe usual implementation of “nothing” on a frame is to adjust the piece’s position every frame but integer-divide that position by some constant before using it for anything other than keeping track of its progress toward the next movement (like drawing it or checking for collisions). You might use pixels as the “invisible unit” when movement must be by whole tiles; games where sprites can be drawn at any pixel divide each into some convenient number of subpixels in the same fashion.\n"
] | [
0,
0
] | [] | [] | [
"pygame",
"python"
] | stackoverflow_0074673028_pygame_python.txt |
Q:
Why do I get "Pickle - EOFError: Ran out of input" reading an empty file?
I am getting an interesting error while trying to use Unpickler.load(), here is the source code:
open(target, 'a').close()
scores = {};
with open(target, "rb") as file:
    unpickler = pickle.Unpickler(file);
    scores = unpickler.load();
    if not isinstance(scores, dict):
        scores = {};
Here is the traceback:
Traceback (most recent call last):
  File "G:\python\pendu\user_test.py", line 3, in <module>
    save_user_points("Magix", 30);
  File "G:\python\pendu\user.py", line 22, in save_user_points
    scores = unpickler.load();
EOFError: Ran out of input
The file I am trying to read is empty.
How can I avoid getting this error, and get an empty variable instead?
A:
Most of the answers here have dealt with how to manage EOFError exceptions, which is really handy if you're unsure about whether the pickled object is empty or not.
However, if you're surprised that the pickle file is empty, it could be because you opened the filename through 'wb' or some other mode that could have over-written the file.
for example:
filename = 'cd.pkl'
with open(filename, 'wb') as f:
    classification_dict = pickle.load(f)
This will over-write the pickled file. You might have done this by mistake before using:
...
open(filename, 'rb') as f:
And then got the EOFError because the previous block of code over-wrote the cd.pkl file.
When working in Jupyter, or in the console (Spyder) I usually write a wrapper over the reading/writing code, and call the wrapper subsequently. This avoids common read-write mistakes, and saves a bit of time if you're going to be reading the same file multiple times through your travails
A:
I would check that the file is not empty first:
import os

scores = {}  # scores is an empty dict already

if os.path.getsize(target) > 0:
    with open(target, "rb") as f:
        unpickler = pickle.Unpickler(f)
        # if file is not empty scores will be equal
        # to the value unpickled
        scores = unpickler.load()
Also open(target, 'a').close() is doing nothing in your code and you don't need to use ;.
A:
It is very likely that the pickled file is empty.
It is surprisingly easy to overwrite a pickle file if you're copying and pasting code.
For example the following writes a pickle file:
pickle.dump(df,open('df.p','wb'))
And if you copied this code to reopen it, but forgot to change 'wb' to 'rb' then you would overwrite the file:
df=pickle.load(open('df.p','wb'))
The correct syntax is
df=pickle.load(open('df.p','rb'))
A:
As you can see, that's actually a natural error.
A typical construct for reading from an Unpickler object would be like this:
try:
    data = unpickler.load()
except EOFError:
    data = list()  # or whatever you want
EOFError is simply raised because it was reading an empty file; it just means end of file.
A:
You can catch that exception and return whatever you want from there.
open(target, 'a').close()
scores = {};
try:
    with open(target, "rb") as file:
        unpickler = pickle.Unpickler(file);
        scores = unpickler.load();
        if not isinstance(scores, dict):
            scores = {};
except EOFError:
    return {}
A:
if path.exists(Score_file):
    try:
        with open(Score_file, "rb") as prev_Scr:
            return Unpickler(prev_Scr).load()
    except EOFError:
        return dict()
A:
I have encountered this error many times, and it always occurs because after writing into the file, I didn't close it. If we don't close the file, the content stays in the buffer and the file stays empty.
To save the content into the file, either the file should be closed or the file object should go out of scope.
That's why loading gives the "ran out of input" error: the file is empty. So you have two options:
file_object.close()
file_object.flush(): if you don't want to close your file in the middle of the program, you can use the flush() function, as it forcefully moves the content from the buffer to the file.
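For example, a minimal sketch with a made-up scores.pkl file:
import pickle

data = {"score": 42}

f = open("scores.pkl", "wb")
pickle.dump(data, f)
f.flush()   # force the buffered bytes out to disk without closing the file

# ... f can keep being used here ...

f.close()   # closing also flushes

with open("scores.pkl", "rb") as f:
    print(pickle.load(f))  # {'score': 42}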
A:
Had the same issue. It turns out that when I was writing to my pickle file, I had not used file.close(). Inserted that line and the error was no more.
A:
Note that opening the file in an append mode (any mode containing 'a', such as 'ab+') can also cause this error, because the stream position typically starts at the end of the file, so a pickle.load() right after opening runs out of input immediately:
pointer = open('makeaafile.txt', 'ab+')
tes = pickle.load(pointer, encoding='utf-8')
A:
temp_model = os.path.join(models_dir, train_type + '_' + part + '_' + str(pc))
# print(type(temp_model)) # <class 'str'>
filehandler = open(temp_model, "rb")
# print(type(filehandler)) # <class '_io.BufferedReader'>
try:
    pdm_temp = pickle.load(filehandler)
except UnicodeDecodeError:
    pdm_temp = pickle.load(filehandler, fix_imports=True, encoding="latin1")
A:
This error comes when your pickle file is empty (0 Bytes). You need to check the size of your pickle file first. This was the scenario in my case. Hope this helps!
| Why do I get "Pickle - EOFError: Ran out of input" reading an empty file? | I am getting an interesting error while trying to use Unpickler.load(), here is the source code:
open(target, 'a').close()
scores = {};
with open(target, "rb") as file:
    unpickler = pickle.Unpickler(file);
    scores = unpickler.load();
    if not isinstance(scores, dict):
        scores = {};
Here is the traceback:
Traceback (most recent call last):
  File "G:\python\pendu\user_test.py", line 3, in <module>
    save_user_points("Magix", 30);
  File "G:\python\pendu\user.py", line 22, in save_user_points
    scores = unpickler.load();
EOFError: Ran out of input
The file I am trying to read is empty.
How can I avoid getting this error, and get an empty variable instead?
| [
"Most of the answers here have dealt with how to mange EOFError exceptions, which is really handy if you're unsure about whether the pickled object is empty or not.\nHowever, if you're surprised that the pickle file is empty, it could be because you opened the filename through 'wb' or some other mode that could have over-written the file.\nfor example:\nfilename = 'cd.pkl'\nwith open(filename, 'wb') as f:\n classification_dict = pickle.load(f)\n\nThis will over-write the pickled file. You might have done this by mistake before using:\n...\nopen(filename, 'rb') as f:\n\nAnd then got the EOFError because the previous block of code over-wrote the cd.pkl file. \nWhen working in Jupyter, or in the console (Spyder) I usually write a wrapper over the reading/writing code, and call the wrapper subsequently. This avoids common read-write mistakes, and saves a bit of time if you're going to be reading the same file multiple times through your travails \n",
"I would check that the file is not empty first:\nimport os\n\nscores = {} # scores is an empty dict already\n\nif os.path.getsize(target) > 0: \n with open(target, \"rb\") as f:\n unpickler = pickle.Unpickler(f)\n # if file is not empty scores will be equal\n # to the value unpickled\n scores = unpickler.load()\n\nAlso open(target, 'a').close() is doing nothing in your code and you don't need to use ;.\n",
"It is very likely that the pickled file is empty.\nIt is surprisingly easy to overwrite a pickle file if you're copying and pasting code.\nFor example the following writes a pickle file:\npickle.dump(df,open('df.p','wb'))\n\nAnd if you copied this code to reopen it, but forgot to change 'wb' to 'rb' then you would overwrite the file:\ndf=pickle.load(open('df.p','wb'))\n\nThe correct syntax is\ndf=pickle.load(open('df.p','rb'))\n\n",
"As you see, that's actually a natural error ..\nA typical construct for reading from an Unpickler object would be like this ..\ntry:\n data = unpickler.load()\nexcept EOFError:\n data = list() # or whatever you want\n\nEOFError is simply raised, because it was reading an empty file, it just meant End of File ..\n",
"You can catch that exception and return whatever you want from there. \nopen(target, 'a').close()\nscores = {};\ntry:\n with open(target, \"rb\") as file:\n unpickler = pickle.Unpickler(file);\n scores = unpickler.load();\n if not isinstance(scores, dict):\n scores = {};\nexcept EOFError:\n return {}\n\n",
"if path.exists(Score_file):\n try : \n with open(Score_file , \"rb\") as prev_Scr:\n\n return Unpickler(prev_Scr).load()\n\n except EOFError : \n\n return dict() \n\n",
"I have encountered this error many times and it always occurs because after writing into the file, I didn't close it. If we don't close the file the content stays in the buffer and the file stays empty.\nTo save the content into the file, either file should be closed or file_object should go out of scope.\nThat's why at the time of loading it's giving the ran out of input error because the file is empty. So you have two options :\n\nfile_object.close()\nfile_object.flush(): if you don't wanna close your file in between the program, you can use the flush() function as it will forcefully move the content from the buffer to the file.\n\n",
"Had the same issue. It turns out when I was writing to my pickle file I had not used the file.close(). Inserted that line in and the error was no more.\n",
"Note that the mode of opening files is 'a' or some other have alphabet 'a' will also make error because of the overwritting.\npointer = open('makeaafile.txt', 'ab+')\ntes = pickle.load(pointer, encoding='utf-8')\n\n",
"temp_model = os.path.join(models_dir, train_type + '_' + part + '_' + str(pc))\n# print(type(temp_model)) # <class 'str'>\nfilehandler = open(temp_model, \"rb\")\n# print(type(filehandler)) # <class '_io.BufferedReader'>\ntry:\n pdm_temp = pickle.load(filehandler)\nexcept UnicodeDecodeError:\n pdm_temp = pickle.load(filehandler, fix_imports=True, encoding=\"latin1\")\n\n",
"This error comes when your pickle file is empty (0 Bytes). You need to check the size of your pickle file first. This was the scenario in my case. Hope this helps!\n"
] | [
297,
176,
27,
11,
3,
2,
1,
1,
0,
0,
0
] | [
"from os.path import getsize as size\nfrom pickle import *\nif size(target)>0:\n with open(target,'rb') as f:\n scores={i:j for i,j in enumerate(load(f))}\nelse: scores={}\n\n#line 1.\nwe importing Function 'getsize' from Library 'OS' sublibrary 'path' and we rename it with command 'as' for shorter style of writing. Important is hier that we loading only one single Func that we need and not whole Library!\nline 2.\nSame Idea, but when we dont know wich modul we will use in code at the begining, we can import all library using a command '*'.\nline 3.\nConditional Statement... if size of your file >0 ( means obj is not an empty). 'target' is variable that schould be a bit earlier predefined.\njust an Example : target=(r'd:\\dir1\\dir.2..\\YourDataFile.bin')\nLine 4.\n'With open(target) as file:' an open construction for any file, u dont need then to use file.close(). it helps to avoid some typical Errors such as \"Run out of input\" or Permissions rights.\n'rb' mod means 'rea binary' that u can only read(load) the data from your binary file but u cant modify/rewrite it.\nLine5.\nList comprehension method in applying to a Dictionary..\nline 6. Case your datafile is empty, it will not raise an any Error msg, but return just an empty dictionary.\n"
] | [
-1
] | [
"file",
"pickle",
"python"
] | stackoverflow_0024791987_file_pickle_python.txt |
Q:
How to Exclude a column in a row of data in sqlalchemy and fastapi
I am trying to load only AuthUser.id and AuthUser.username from the below result
statement = select(func.count(UserCountry.id).label("uid"),
AuthUser.id,AuthUser.username).\
join(CountryTool, CountryTool.id == UserCountry.country_tool_id).\
join(Country, Country.id == CountryTool.country_id).\
join(Tool, Tool.id == CountryTool.tool_id).\
join(AuthUser, AuthUser.id == UserCountry.user_id).\
where(CountryTool.id == country_tool_id).\
group_by(AuthUser.id,AuthUser.username)
current json response is
{
"uid": 1,
"id": 1,
"username": "ross"
},
{
"uid": 1,
"id": 2,
"username": "harvey"
}
response that I need is
{
"id": 1,
"username": "ross"
},
{
"id": 2,
"username": "harvey"
}
A:
You can do it like this:
result = session.execute(statement).all()  # with SQLModel: session.exec(statement).all()
response = []

for r in result:
    response.append({
        "id": r.id,
        "username": r.username
    })
This code will iterate over the query result, and create a dictionary object with only the id and username fields, and append it to the response list. You can then return the response list as the JSON response.
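In a FastAPI route this collapses to a comprehension. A minimal sketch, assuming the session and the statement built above are available in the endpoint (the path is illustrative):
@app.get("/country-tools/{country_tool_id}/users")
def users_for_country_tool(country_tool_id: int):
    result = session.execute(statement).all()
    # keep only the fields the response should expose
    return [{"id": r.id, "username": r.username} for r in result]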
| How to Exclude a column in a row of data in sqlalchemy and fastapi | I am trying to load only AuthUser.id and AuthUser.username from the below result
statement = select(func.count(UserCountry.id).label("uid"),
AuthUser.id,AuthUser.username).\
join(CountryTool, CountryTool.id == UserCountry.country_tool_id).\
join(Country, Country.id == CountryTool.country_id).\
join(Tool, Tool.id == CountryTool.tool_id).\
join(AuthUser, AuthUser.id == UserCountry.user_id).\
where(CountryTool.id == country_tool_id).\
group_by(AuthUser.id,AuthUser.username)
current json response is
{
"uid": 1,
"id": 1,
"username": "ross"
},
{
"uid": 1,
"id": 2,
"username": "harvey"
}
response that I need is
{
"id": 1,
"username": "ross"
},
{
"id": 2,
"username": "harvey"
}
| [
"You can do it like this:\nresult = statement.all()\nresponse = []\n\nfor r in result:\n\n response.append({\n \"id\": r[1],\n \"username\": r[2]\n })\n\nThis code will iterate over the query result, and create a dictionary object with only the id and username fields, and append it to the response list. You can then return the response list as the JSON response.\n"
] | [
0
] | [] | [] | [
"fastapi",
"python",
"python_3.x",
"sqlalchemy"
] | stackoverflow_0074679473_fastapi_python_python_3.x_sqlalchemy.txt |
Q:
Create New rows from a multiple columns of lists
I am scraping a website:- https://spfpharmacy.com/
I have successfully scraped this using selenium using the below code.
test_list = []
test_list = list(string.ascii_uppercase)
med_url = []
for i in tqdm(test_list):
driver.get(f'https://spfpharmacy.com/search/?drugName={i}')
for i in driver.find_elements(By.XPATH,"//a[@class='rxrequired default']"):
med_url.append(i.get_attribute("href"))
data = []
for i in tqdm(med_url):
driver.get(i)
time.sleep(1)
try:
med_name = []
for i in driver.find_elements(By.XPATH,"//div[@id='brand_dose']//div[@class='product-name']"):
med_name.append(i.text)
except:
med_name.append(None)
try:
manuf_name = []
for i in driver.find_elements(By.XPATH,"//div[@id='brand_dose']//div//span[@class='manufactured-name']"):
manuf_name.append(i.text)
except:
manuf_name.append(i.text)
try:
country = []
for i in driver.find_elements(By.XPATH,"//div[@id='brand_dose']//div//span[@class='product-country']"):
country.append(i.text)
except:
country.append(None)
try:
pres_req = []
for i in driver.find_elements(By.XPATH,"//div[@id='brand_dose']//div//span[@class='product-prescription']"):
pres_req.append(i.text)
except:
pres_req.append(None)
str_price = []
try:
for i in driver.find_elements(By.XPATH,"//div[@id='brand_dose']//div//span[@class='product-dose-text']"):
for j in driver.find_elements(By.XPATH,f"//div[@id='brand_dose']//div//select//option[@data-str='{i.text}']"):
str_price.append({i.text, j.text})
except:
str_price.append(None)
data.append({
'Medicine_name':med_name,
'Manufacture_name':manuf_name,
'Product_Counry':country,
'Prescription_Required':pres_req,
'Product_Details':str_price})
where test_list is a list of alphabets in uppercase which completes the URL like:-
https://spfpharmacy.com/search/?drugName=A which gives the details of all the medicines with A.
After scraping the data I am getting results as shown below:-
But I want to get the name of each medicine in a single row and all details associated with that medicine under different columns.
Something like this.
I tried using explode, and transform and also searched over the internet and stack overflow but was unable to convert this into the expected format.
A:
It looks like you are appending the values to the list within the try block, which means that if an exception occurs, the list will not be updated. Instead, you should append the values to the list outside of the try block, and use a default value within the except block. For example, instead of this:
try:
med_name = []
for i in driver.find_elements(By.XPATH,"//div[@id='brand_dose']//div[@class='product-name']"):
med_name.append(i.text)
except:
med_name.append(None)
You can do this:
med_name = []
try:
for i in driver.find_elements(By.XPATH,"//div[@id='brand_dose']//div[@class='product-name']"):
med_name.append(i.text)
except:
med_name.append(None)
This way, the med_name list will always be updated, whether or not an exception occurs.
To achieve the expected output, you can simply use the extend method to add the elements of the lists to the data list, instead of creating a dictionary and appending that to the data list. For example, instead of this:
data.append({
'Medicine_name':med_name,
'Manufacture_name':manuf_name,
'Product_Counry':country,
'Prescription_Required':pres_req,
'Product_Details':str_price})
You can do this:
data.extend(zip(med_name, manuf_name, country, pres_req, str_price))
This will create a list of tuples, where each tuple contains the values of the corresponding elements from the med_name, manuf_name, country, pres_req, and str_price lists. This will give you the desired output, where each medicine's details are on a separate row.
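If the goal is the tabular layout shown in the question, the zipped tuples drop straight into a pandas DataFrame (the column names are illustrative):
import pandas as pd

df = pd.DataFrame(
    data,
    columns=["Medicine_name", "Manufacture_name", "Product_Country",
             "Prescription_Required", "Product_Details"],
)
print(df.head())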
| Create New rows from a multiple columns of lists | I am scraping a website:- https://spfpharmacy.com/
I have successfully scraped this using selenium using the below code.
test_list = []
test_list = list(string.ascii_uppercase)
med_url = []
for i in tqdm(test_list):
driver.get(f'https://spfpharmacy.com/search/?drugName={i}')
for i in driver.find_elements(By.XPATH,"//a[@class='rxrequired default']"):
med_url.append(i.get_attribute("href"))
data = []
for i in tqdm(med_url):
driver.get(i)
time.sleep(1)
try:
med_name = []
for i in driver.find_elements(By.XPATH,"//div[@id='brand_dose']//div[@class='product-name']"):
med_name.append(i.text)
except:
med_name.append(None)
try:
manuf_name = []
for i in driver.find_elements(By.XPATH,"//div[@id='brand_dose']//div//span[@class='manufactured-name']"):
manuf_name.append(i.text)
except:
manuf_name.append(i.text)
try:
country = []
for i in driver.find_elements(By.XPATH,"//div[@id='brand_dose']//div//span[@class='product-country']"):
country.append(i.text)
except:
country.append(None)
try:
pres_req = []
for i in driver.find_elements(By.XPATH,"//div[@id='brand_dose']//div//span[@class='product-prescription']"):
pres_req.append(i.text)
except:
pres_req.append(None)
str_price = []
try:
for i in driver.find_elements(By.XPATH,"//div[@id='brand_dose']//div//span[@class='product-dose-text']"):
for j in driver.find_elements(By.XPATH,f"//div[@id='brand_dose']//div//select//option[@data-str='{i.text}']"):
str_price.append({i.text, j.text})
except:
str_price.append(None)
data.append({
'Medicine_name':med_name,
'Manufacture_name':manuf_name,
'Product_Counry':country,
'Prescription_Required':pres_req,
'Product_Details':str_price})
where test_list is a list of alphabets in uppercase which completes the URL like:-
https://spfpharmacy.com/search/?drugName=A which gives the details of all the medicines with A.
After scraping the data I am getting results as shown below:-
But I want to get the name of each medicine in a single row and all details associated with that medicine under different columns.
Something like this.
I tried using explode, and transform and also searched over the internet and stack overflow but was unable to convert this into the expected format.
| [
"It looks like you are appending the values to the list within the try block, which means that if an exception occurs, the list will not be updated. Instead, you should append the values to the list outside of the try block, and use a default value within the except block. For example, instead of this:\ntry:\n med_name = []\n for i in driver.find_elements(By.XPATH,\"//div[@id='brand_dose']//div[@class='product-name']\"):\n med_name.append(i.text)\nexcept:\n med_name.append(None)\n\nYou can do this:\nmed_name = []\ntry:\n for i in driver.find_elements(By.XPATH,\"//div[@id='brand_dose']//div[@class='product-name']\"):\n med_name.append(i.text)\nexcept:\n med_name.append(None)\n\nThis way, the med_name list will always be updated, whether or not an exception occurs.\nTo achieve the expected output, you can simply use the extend method to add the elements of the lists to the data list, instead of creating a dictionary and appending that to the data list. For example, instead of this:\ndata.append({\n 'Medicine_name':med_name,\n 'Manufacture_name':manuf_name,\n 'Product_Counry':country,\n 'Prescription_Required':pres_req,\n 'Product_Details':str_price})\n\nYou can do this:\ndata.extend(zip(med_name, manuf_name, country, pres_req, str_price))\n\nThis will create a list of tuples, where each tuple contains the values of the corresponding elements from the med_name, manuf_name, country, pres_req, and str_price lists. This will give you the desired output, where each medicine's details are on a separate row.\n"
] | [
0
] | [] | [] | [
"data_preprocessing",
"dataframe",
"pandas",
"python",
"selenium"
] | stackoverflow_0074679389_data_preprocessing_dataframe_pandas_python_selenium.txt |
Q:
Timeseries - group data by specific time period slices
I am working with a csv file that has a dataset of 13 years worth of 5m time intervals.
I am trying to slice sections of this dataset into specific time periods.
example
time_period = (df['time'] >= '01:00:00') & (df['time']<='5:00:00')
time_period_df = df.loc[time_period]
I would expect an output of only the time between 1-5 to be included in this time period, however, I am getting all 24hrs in the output
I would like the output to print only time in between and including 1:00:00 and 5:00:00.
A:
It looks like you are using the comparison operators >= and <= to specify the time range you want in your time period dataframe. However, comparing strings is lexicographic, and '5:00:00' is not zero-padded, so a value like '13:00:00' still satisfies <= '5:00:00' (because '1' sorts before '5') and the mask matches almost the whole day. One fix is to use the str.slice() method to extract the zero-padded hour portion of the time strings and compare those values to specify your time range.
Here is an example of how you could do this:
# First, extract the hour portion of the time strings
df['hour'] = df['time'].str.slice(0, 2)
# Next, create a boolean mask using the comparison operators on the 'hour' column
time_period = (df['hour'] >= '01') & (df['hour'] <= '05')
# Finally, use this boolean mask to create your time period dataframe
time_period_df = df.loc[time_period]
This should give you a dataframe that includes only the rows with time values between and including 1:00:00 and 5:00:00.
Note that this solution assumes that the time strings in your time column are in the format 'HH:MM:SS'. If the time strings are in a different format, you will need to adjust the str.slice() call accordingly.
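Alternatively, converting the column to real time objects avoids string comparison entirely. A sketch, assuming df['time'] holds 'HH:MM:SS' strings:
import pandas as pd
from datetime import time

t = pd.to_datetime(df["time"], format="%H:%M:%S").dt.time
mask = (t >= time(1, 0)) & (t <= time(5, 0))
time_period_df = df.loc[mask]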
| Timeseries - group data by specific time period slices | I am working with a csv file that has a dataset of 13 years worth of 5m time intervals.
I am trying to slice sections of this dataset into specific time periods.
example
time_period = (df['time'] >= '01:00:00') & (df['time']<='5:00:00')
time_period_df = df.loc[time_period]
I would expect an output of only the time between 1-5 to be included in this time period, however, I am getting all 24hrs in the output
I would like the output to print only time in between and including 1:00:00 and 5:00:00.
| [
"It looks like you are using the comparison operators >= and <= to try and specify the time range you want to include in your time period dataframe. However, these comparison operators will not work as expected on string values like the ones you have in your time column. Instead of using these operators, you can use the str.slice() method to extract the hour portion of the time strings and then use the comparison operators on those numeric values to specify your time range.\nHere is an example of how you could do this:\n# First, extract the hour portion of the time strings\ndf['hour'] = df['time'].str.slice(0, 2)\n\n# Next, create a boolean mask using the comparison operators on the 'hour' column\ntime_period = (df['hour'] >= '01') & (df['hour'] <= '05')\n\n# Finally, use this boolean mask to create your time period dataframe\ntime_period_df = df.loc[time_period]\n\nThis should give you a dataframe that includes only the rows with time values between and including 1:00:00 and 5:00:00.\nNote that this solution assumes that the time strings in your time column are in the format 'HH:MM:SS'. If the time strings are in a different format, you will need to adjust the str.slice() call accordingly.\n"
] | [
0
] | [] | [] | [
"group_by",
"pandas",
"python",
"python_datetime",
"time_series"
] | stackoverflow_0074679543_group_by_pandas_python_python_datetime_time_series.txt |
Q:
How to make my function take less time in python?
I am trying to do this problem:
https://www.chegg.com/homework-help/questions-and-answers/blocks-pyramid-def-pyramidblocks-n-m-h-pyramid-structure-although-ancient-mesoamerican-fam-q38542637
This is my code:
def pyramid_blocks(n, m, h):
return sum((n+i)*(m+i) for i in range(h))
But the problem is that whenever I try to test it, it tells me that it is too slow, so is there a way to make it take less time? I also tried to do it with lists, but with that too it takes too much time.
def pyramid_blocks(n, m, h):
return sum((n+i)*(m+i) for i in range(h))
def pyramid_blocks(n, m, h):
r=[]
t=[]
mlp=[]
for i in range(h):
r.append(n+i)
t.append(m+i)
for i, j in zip(r,t):
mlp.append(i*j)
return sum(mlp)
A:
You should use math to break the answer into components. You need the sum of all integers below h (denote it s) and the sum of squares 1^2 + 2^2 + 3^2 + ... (denote it sum_power).
It comes down to: h*m*n + s*n + s*m + sum_power
this is a working solution:
import time
def pyramid_blocks(n, m, h):
return sum((n+i)*(m+i) for i in range(h))
def efficient_pyramid_blocks(n, m, h):
s = (h-1)*h/2
sum_power = (h-1)*h*(2*(h-1)+1)/6
return int(h*n*m + s*(n+m) + sum_power)
if __name__ == '__main__':
h = 100000
m = 32
n = 47
start = time.time()
print(pyramid_blocks(n, m, h))
print(time.time()-start)
start = time.time()
print(efficient_pyramid_blocks(n, m, h))
print(time.time() - start)
and the output is:
333723479800000
0.016993045806884766
333723479800000
1.2159347534179688e-05
A:
A bit of math can help:
here we use a symbolic calculator to get the equation for the blocks in the pyramid using a sigma sum of (n+count)*(m+count) from count = 0 to count = h-1
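Expanding that sum by hand gives the same closed form:
$$\sum_{i=0}^{h-1}(n+i)(m+i)=hnm+(n+m)\sum_{i=0}^{h-1}i+\sum_{i=0}^{h-1}i^2=hnm+(n+m)\frac{(h-1)h}{2}+\frac{(h-1)h(2h-1)}{6}$$
Multiplying through by 6 and collecting powers of h reproduces the numerator used in the code below.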
Then we just paste the equation into code (the double slash // is just to use integer division instead of float division):
def count_blocks(n,m,h):
return (2*(h**3)+3*(h**2)*m+3*(h**2)*n+6*h*m*n-3*(h**2)-3*h*m-3*h*n+h)//6
Edit: This also outputs the correct result of 10497605327499753 for n,m,h = (2123377, 2026271, 2437)
| How to make my function take less time in python? | I am trying to do this problem:
https://www.chegg.com/homework-help/questions-and-answers/blocks-pyramid-def-pyramidblocks-n-m-h-pyramid-structure-although-ancient-mesoamerican-fam-q38542637
This is my code:
def pyramid_blocks(n, m, h):
return sum((n+i)*(m+i) for i in range(h))
But the problem is that whenever I try to test it, it tells me that it is too slow, so is there a way to make it take less time? I also tried to do it with lists, but with that too it takes too much time.
def pyramid_blocks(n, m, h):
return sum((n+i)*(m+i) for i in range(h))
def pyramid_blocks(n, m, h):
r=[]
t=[]
mlp=[]
for i in range(h):
r.append(n+i)
t.append(m+i)
for i, j in zip(r,t):
mlp.append(i*j)
return sum(mlp)
| [
"You should use math to understand the components that are part of the answer. you need to calculate sum of all numbers until h, denote s, and the sum of all squares 1^2+2^2+3+2+... denotes sum_power\nit comes down to: h*m*n + s*n + s*m + sum_power\nthis is a working solution:\nimport time\n\ndef pyramid_blocks(n, m, h):\n return sum((n+i)*(m+i) for i in range(h))\n\n\ndef efficient_pyramid_blocks(n, m, h):\n s = (h-1)*h/2\n sum_power = (h-1)*h*(2*(h-1)+1)/6\n return int(h*n*m + s*(n+m) + sum_power)\n\n\nif __name__ == '__main__':\n h = 100000\n m = 32\n n = 47\n start = time.time()\n print(pyramid_blocks(n, m, h))\n print(time.time()-start)\n start = time.time()\n print(efficient_pyramid_blocks(n, m, h))\n print(time.time() - start)\n\nand the output is:\n333723479800000\n0.016993045806884766\n333723479800000\n1.2159347534179688e-05\n\n",
"A bit of math can help:\nhere we use a symbolic calculator to get the equation for the blocks in the pyramid using a sigma sum of (n+count)*(m+count) from count = 0 to count = h-1\nThen we just paste the equation into code (the double slash // is just to use integer division instead of float division):\ndef count_blocks(n,m,h):\n return (2*(h**3)+3*(h**2)*m+3*(h**2)*n+6*h*m*n-3*(h**2)-3*h*m-3*h*n+h)//6\n\nEdit: This also outputs the correct result of 10497605327499753 for n,m,h = (2123377, 2026271, 2437)\n"
] | [
0,
0
] | [] | [] | [
"list",
"python"
] | stackoverflow_0074664728_list_python.txt |
Q:
Filtering on multiple fields in an Elasticsearch ObjectField()
I am having trouble figuring out the filtering syntax for ObjectFields() in django-elasticsearch-dsl. In particular, when I try to filter on multiple subfields of the same ObjectField(), I'm getting incorrect results.
For example, consider the following document
class ItemDocument(Document):
product = fields.ObjectField(properties={
'id': fields.IntegerField(),
'name': fields.TextField(),
'description': fields.TextField()
})
details = fields.ObjectField(properties={
'category_id': fields.IntegerField(),
'id': fields.IntegerField(),
'value': fields.FloatField()
})
description = fields.TextField()
I want to find an Item with a detail object that has both category_id == 3 and value < 1.5, so I created the following query
x = ItemDocument.search().filter(Q("match",details__category_id=3) & Q("range",details__value={'lt':1.5})).execute()
Unfortunately, this returns all items which have a detail object with category_id==3 and a separate detail object with value < 1.5 e.g.
{
"product": ...
"details": [
{
"category_id": 3,
"id": 7,
"value": 20.0
},
{
"category_id": 4,
"id": 7,
"value": 1.0
},
...
]
}
instead of my desired result of all items that have a detail object with both category_id==3 AND value < 1.5 e.g.
{
"product": ...
"details": [
{
"category_id": 3,
"id": 7,
"value": 1.0
},
...
]
}
How do I properly format this query using django-elasticsearch-dsl?
A:
You can use the nested query in Elasticsearch to filter on multiple subfields of the same embedded object. Here is an example of how you can do this in django-elasticsearch-dsl:
x = ItemDocument.search().query(
"nested",
path="details",
query=Q("match", details__category_id=3) & Q("range", details__value={'lt':1.5})
).execute()
The nested query allows you to filter on multiple subfields of the details object, and only return documents that have a single detail object with both category_id==3 and value < 1.5. Note that this requires details to be mapped as a nested type, i.e. fields.NestedField(...) instead of fields.ObjectField(...): a plain object field flattens the array of objects, which is exactly why your original query matched across separate detail objects.
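For reference, a sketch of the mapping change this assumes (only the details field differs from the original document):
class ItemDocument(Document):
    # ... product and description unchanged ...
    details = fields.NestedField(properties={
        'category_id': fields.IntegerField(),
        'id': fields.IntegerField(),
        'value': fields.FloatField()
    })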
You can also use the inner_hits option with the nested query to get the details of the matching detail objects:
x = ItemDocument.search().query(
"nested",
path="details",
query=Q("match", details__category_id=3) & Q("range", details__value={'lt':1.5}),
inner_hits={}
).execute()
This will add a nested field to each search result, which contains the details of the matching detail objects. You can access this field in your code like this:
results = x.to_dict()
for result in results['hits']['hits']:
nested_results = result['inner_hits']['details']['hits']['hits']
# do something with nested_results
| Filtering on multiple fields in an Elasticsearch ObjectField() | I am having trouble figuring out the filtering syntax for ObjectFields() in django-elasticsearch-dsl. In particular, when I try to filter on multiple subfields of the same ObjectField(), I'm getting incorrect results.
For example, consider the following document
class ItemDocument(Document):
product = fields.ObjectField(properties={
'id': fields.IntegerField(),
'name': fields.TextField(),
'description': fields.TextField()
})
details = fields.ObjectField(properties={
'category_id': fields.IntegerField(),
'id': fields.IntegerField(),
'value': fields.FloatField()
})
description = fields.TextField()
I want to find an Item with a detail object that has both category_id == 3 and value < 1.5, so I created the following query
x = ItemDocument.search().filter(Q("match",details__category_id=3) & Q("range",details__value={'lt':1.5})).execute()
Unfortunately, this returns all items which have a detail object with category_id==3 and a separate detail object with value < 1.5 e.g.
{
"product": ...
"details": [
{
"category_id": 3,
"id": 7,
"value": 20.0
},
{
"category_id": 4,
"id": 7,
"value": 1.0
},
...
]
}
instead of my desired result of all items that have a detail object with both category_id==3 AND value < 1.5 e.g.
{
"product": ...
"details": [
{
"category_id": 3,
"id": 7,
"value": 1.0
},
...
]
}
How do I properly format this query using django-elasticsearch-dsl?
| [
"You can use the nested query in Elasticsearch to filter on multiple subfields of the same ObjectField. Here is an example of how you can do this in django-elasticsearch-dsl:\nx = ItemDocument.search().query(\n \"nested\",\n path=\"details\",\n query=Q(\"match\", details__category_id=3) & Q(\"range\", details__value={'lt':1.5})\n).execute()\n\nThe nested query allows you to filter on multiple subfields of the details object, and only return documents that have a detail object with both category_id==3 and value < 1.5.\nYou can also use the inner_hits option with the nested query to get the details of the matching detail objects:\nx = ItemDocument.search().query(\n \"nested\",\n path=\"details\",\n query=Q(\"match\", details__category_id=3) & Q(\"range\", details__value={'lt':1.5}),\n inner_hits={}\n).execute()\n\nThis will add a nested field to each search result, which contains the details of the matching detail objects. You can access this field in your code like this:\nresults = x.to_dict()\nfor result in results['hits']['hits']:\n nested_results = result['inner_hits']['details']['hits']['hits']\n # do something with nested_results\n\n"
] | [
1
] | [] | [] | [
"django",
"elasticsearch",
"elasticsearch_dsl_py",
"python"
] | stackoverflow_0074679018_django_elasticsearch_elasticsearch_dsl_py_python.txt |
Q:
Python web client to access API of an online supermarket
Suppose you are writing a python web client to access an API of an online supermarket. Given below are the API details.
Base URL= http://host1.open.uom.lk:8080
Write a python program to retrieve all the products from the API Server and print the total number of products currently stored in the server.
Hint: the json response will be of the following example format:
{
"message": "success",
"data": [
{
"id": 85,
"productName": "Araliya Basmathi Rice",
"description": "White Basmathi Rice imported from Pakistan. High-quality rice with extra fragrance. Organically grown.",
"category": "Rice",
"brand": "CIC",
"expiredDate": "2023.05.04",
"manufacturedDate": "2022.02.20",
"batchNumber": 324567,
"unitPrice": 1020,
"quantity": 200,
"createdDate": "2022.02.24"
},
{
"id": 86,
"productName": "Araliya Basmathi Rice",
"description": "White Basmathi Rice imported from Pakistan. High-quality rice with extra fragrance. Organically grown.",
"category": "Rice",
"brand": "CIC",
"expiredDate": "2023.05.04",
"manufacturedDate": "2022.02.20",
"batchNumber": 324567,
"unitPrice": 1020,
"quantity": 200,
"createdDate": "2022.02.24"
},
...
...
}
Please help me to answer for this question.
I tried below code but it's showing an error for me:
import requests
BASE_URL = "http://host1.open.uom.lk:8080"
response = requests.post(f"{BASE_URL}/api/products/", json=data)
print(response.json())
parsed = json.load(response.text())
len(parsed[“data”])
A:
Here is an example of how you could write a Python program to retrieve all the products from the API server and print the total number of products currently stored in the server:
import requests
# Define the base URL of the API
BASE_URL = "http://host1.open.uom.lk:8080"
# Use the requests library to make a GET request to the API's /api/products/ endpoint
response = requests.get(f"{BASE_URL}/api/products/")
# Check the status code of the response to make sure the request was successful
if response.status_code == 200:
# If the request was successful, parse the JSON response
products = response.json()
# Retrieve the list of products from the "data" key in the JSON response
products_list = products["data"]
# Print the total number of products stored in the server
print(f"Total number of products in the server: {len(products_list)}")
else:
# If the request was not successful, print an error message
print(f"An error occurred: {response.text}")
This code should print the total number of products stored in the server.
| Python web client to access API of an online supermarket | Suppose you are writing a python web client to access an API of an online supermarket. Given below are the API details.
Base URL= http://host1.open.uom.lk:8080
Write a python program to retrieve all the products from the API Server and print the total number of products currently stored in the server.
Hint: the json response will be of the following example format:
{
"message": "success",
"data": [
{
"id": 85,
"productName": "Araliya Basmathi Rice",
"description": "White Basmathi Rice imported from Pakistan. High-quality rice with extra fragrance. Organically grown.",
"category": "Rice",
"brand": "CIC",
"expiredDate": "2023.05.04",
"manufacturedDate": "2022.02.20",
"batchNumber": 324567,
"unitPrice": 1020,
"quantity": 200,
"createdDate": "2022.02.24"
},
{
"id": 86,
"productName": "Araliya Basmathi Rice",
"description": "White Basmathi Rice imported from Pakistan. High-quality rice with extra fragrance. Organically grown.",
"category": "Rice",
"brand": "CIC",
"expiredDate": "2023.05.04",
"manufacturedDate": "2022.02.20",
"batchNumber": 324567,
"unitPrice": 1020,
"quantity": 200,
"createdDate": "2022.02.24"
},
...
...
}
Please help me to answer for this question.
I tried below code but it's showing an error for me:
import requests
BASE_URL = "http://host1.open.uom.lk:8080"
response = requests.post(f"{BASE_URL}/api/products/", json=data)
print(response.json())
parsed = json.load(response.text())
len(parsed[“data”])
| [
"Here is an example of how you could write a Python program to retrieve all the products from the API server and print the total number of products currently stored in the server:\nimport requests\n\n# Define the base URL of the API\nBASE_URL = \"http://host1.open.uom.lk:8080\"\n\n# Use the requests library to make a GET request to the API's /api/products/ endpoint\nresponse = requests.get(f\"{BASE_URL}/api/products/\")\n\n# Check the status code of the response to make sure the request was successful\nif response.status_code == 200:\n # If the request was successful, parse the JSON response\n products = response.json()\n \n # Retrieve the list of products from the \"data\" key in the JSON response\n products_list = products[\"data\"]\n \n # Print the total number of products stored in the server\n print(f\"Total number of products in the server: {len(products_list)}\")\nelse:\n # If the request was not successful, print an error message\n print(f\"An error occurred: {response.text}\")\n\nThis code should print the total number of products stored in the server.\n"
] | [
0
] | [] | [] | [
"api",
"python"
] | stackoverflow_0074679483_api_python.txt |
Q:
Use trained ML to classify new dataset
I have saved my trained ML model. The model I chose is SVM, and I'm using pickle to save it. How can I use this model to classify a new dataset, stored in a CSV file, that has no sentiment label in it?
I'm using Jupyter Notebook for this project.
A:
It would be useful to know what library (sklearn, pytorch, tensorflow, etc.) you used to train the model, but for this example I'm going to assume you used the pickle library to pickle and the sklearn library to train the model.
At some point you used something like pickle.dump(model, open(filename, 'wb')) to save your model.
You can also use pickle to load the model through pickle.load(open(filename, 'rb')) and then call the model through model.predict(new_X) where new_X represents the dataframe you want to make the predictions on.
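Put together, a minimal sketch (the file and column names are illustrative, and whatever preprocessing you used at training time must be reapplied here):
import pickle
import pandas as pd

with open("svm_model.pkl", "rb") as f:
    model = pickle.load(f)

new_data = pd.read_csv("unlabeled.csv")  # rows to classify, no sentiment column

# Reapply the training-time feature pipeline, e.g. for text features:
# new_X = vectorizer.transform(new_data["text"])
new_X = new_data

new_data["sentiment"] = model.predict(new_X)
new_data.to_csv("labeled.csv", index=False)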
| Use trained ML to classify new dataset | I have saved my trained ML model. The model I chose is SVM, and I'm using pickle to save it. How can I use this model to classify a new dataset, stored in a CSV file, that has no sentiment label in it?
I'm using Jupyter Notebook for this project.
| [
"It would be useful to know what library (sklearn, pytorch, tensorflow, etc.) you used to train the model, but for this example I'm going to assume you used the pickle library to pickle and the sklearn library to train the model.\nAt some point you used something like pickle.dump(model, open(filename, 'wb')) to save your model.\nYou can also use pickle to load the model through pickle.load(open(filename, 'rb')) and then call the model through model.predict(new_X) where new_X represents the dataframe you want to make the predictions on.\n"
] | [
0
] | [] | [] | [
"jupyter_notebook",
"machine_learning",
"python",
"sentiment_analysis"
] | stackoverflow_0074679490_jupyter_notebook_machine_learning_python_sentiment_analysis.txt |
Q:
How to find the position of a Rect in python?
I was trying to make a code that moves a rect object (gotten with the get_rect function) but I need its coordinates to make it move 1 pixel away (if there are any other ways to do this, let me know.)
Here is the code:
import sys, pygame
pygame.init()
size = width, height = 1920, 1080
black = 0, 0, 0
screen = pygame.display.set_mode(size)
ball = pygame.image.load("ball.png")
rectball = ball.get_rect()
while True:
for event in pygame.event.get():
if event.type == pygame.QUIT: sys.exit()
rectball.move()
screen.fill(black)
screen.blit(ball, rectball)
pygame.display.flip()
You will notice that on line 11 the parameters are unfilled. This is because I was going to make it coords + 1.
A:
The coordinates are attributes of the rect itself: rect.x and rect.y (equivalently rect.left and rect.top) give the top-left corner, and rect.centerx / rect.centery give the center. Another method is to create another rectangle and check whether your rect lies inside it.
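For example, with the rect from the question:
x, y = rectball.x, rectball.y         # top-left corner
print(rectball.left, rectball.top)    # same values
rectball = rectball.move(1, 0)        # move() returns a copy shifted 1 px right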
A:
To move a rect object in Pygame, you can use the move_ip() method. This method takes two arguments, the x and y coordinates to move the rect by. For example, if you want to move the rect one pixel to the right and one pixel down, you can use the following code:
rectball.move_ip(1, 1)

You can also use the x and y attributes of the rect to move it by a certain amount. For example, if you want to move the rect one pixel to the right, you can use the following code:
rectball.x += 1

Note that this will only move the rect object and not the image that it represents. To move the image on the screen, you will also need to redraw the ball surface at the rect's new position using the blit() method.
Here is an updated version of your code that moves the rect and the image on the screen:
import sys
import pygame
pygame.init()
size = width, height = 1920, 1080
black = 0, 0, 0
screen = pygame.display.set_mode(size)
ball = pygame.image.load("ball.png")
rectball = ball.get_rect()
while True:
for event in pygame.event.get():
if event.type == pygame.QUIT:
sys.exit()
# Move the rect one pixel to the right and one pixel down
rectball.move_ip(1, 1)
    # Clear the previous frame, then redraw the ball at its new position
    screen.fill(black)
    screen.blit(ball, rectball)
pygame.display.flip()
| How to find the position of a Rect in python? | I was trying to make a code that moves a rect object (gotten with the get_rect function) but I need its coordinates to make it move 1 pixel away (if there are any other ways to do this, let me know.)
Here is the code:
import sys, pygame
pygame.init()
size = width, height = 1920, 1080
black = 0, 0, 0
screen = pygame.display.set_mode(size)
ball = pygame.image.load("ball.png")
rectball = ball.get_rect()
while True:
for event in pygame.event.get():
if event.type == pygame.QUIT: sys.exit()
rectball.move()
screen.fill(black)
screen.blit(ball, rectball)
pygame.display.flip()
You will notice that on line 11 the parameters are unfilled. This is because I was going to make it coords + 1.
| [
"If I am correct I think it is rect.left or rect.top that gives it. Another method is to create another rectangle and check if the rectangle is in that.\n",
"#To move a rect object in Pygame, you can use the move_ip() method. This method takes two arguments, the x and y coordinates to move the rect by. For example, if you want to move the rect one pixel to the right and one pixel down, you can use the following code:\n\nrectball.move_ip(1, 1)\n\n#You can also use the x and y attributes of the rect to move it by a certain amount. For example, if you want to move the rect one pixel to the right, you can use the following code:\n\nrectball.x += 1\n\n#Note that this will only move the rect object and not the image that it represents. To move the image on the screen, you will also need to update the position of the ball surface using the blit() method.\n\n#Here is an updated version of your code that moves the rect and the image on the screen:\n\nimport sys\nimport pygame\n\npygame.init()\n\nsize = width, height = 1920, 1080\nblack = 0, 0, 0\n\nscreen = pygame.display.set_mode(size)\n\nball = pygame.image.load(\"ball.png\")\nrectball = ball.get_rect()\n\nwhile True:\n for event in pygame.event.get():\n if event.type == pygame.QUIT:\n sys.exit()\n\n # Move the rect one pixel to the right and one pixel down\n rectball.move_ip(1, 1)\n\n # Update the position of the ball on the screen\n screen.blit(ball, rectball)\n\n pygame.display.flip()\n\n"
] | [
0,
0
] | [] | [] | [
"python"
] | stackoverflow_0074679565_python.txt |
Q:
Making a sliding puzzle game using turtle (not tkinter or pygame)
So I'm currently trying to program a slide-puzzle game without importing tkinter or pygame. So far I've generated a board and populated it with working buttons (quit, load, reset), but I'm really lost on how to program the actual slide puzzle game with the images I've been provided.
This code generates the screen and buttons that make up my board. Clicking the load button (which I already have set up) allows the user to type in the puzzle they want to load and unscramble. The issue is that I don't know how to get all the images onto the board, and I'm not sure what direction I should go in to actually program the game elements (it's just a screen and buttons right now). I'm a newbie programmer, so any help is really appreciated.
screen = turtle.Screen()
def generate_screen():
os.chdir('Resources') # Changes directory to allow access to .gifs in Resources
screen.setup(700, 700)
screen.title("Sliding Puzzle Game")
screen.tracer(0)
generate_scoreboard()
generate_leaderboard()
iconturtle = turtle.Turtle()
iconturtle.penup()
for file in os.listdir():
screen.register_shape(file)
iconturtle.goto(280, -270)
iconturtle.shape('quitbutton.gif')
iconturtle.stamp()
iconturtle.goto(180, -270)
iconturtle.shape('loadbutton.gif')
iconturtle.stamp()
iconturtle.goto(80, -270)
iconturtle.shape('resetbutton.gif')
iconturtle.stamp()
def load_yoshi():
os.chdir('Images\\yoshi')
screen.tracer(1)
screen.register_shape('yoshi_thumbnail.gif')
t = turtle.Turtle()
t.penup()
t.shape('yoshi_thumbnail.gif')
t.goto(250,290)
t.stamp()
screen.update()
files = glob.glob('*.gif') # pulling out only .gif
images = files
print(images)
for file in images:
screen.register_shape(file)
A:
I've only seen turtle used to draw lines, not shapes, much less
movable game pieces. I think pygame would definitely be better for
this – OneCricketeer
Below is an example slide game simplified from an earlier answer I wrote about creating numbered tiles using turtle:
from turtle import Screen, Turtle
from functools import partial
from random import random
SIZE = 4
TILE_SIZE = 100
OFFSETS = [(-1, 0), (0, -1), (1, 0), (0, 1)]
CURSOR_SIZE = 20
def slide(tile, row, col, x, y):
tile.onclick(None) # disable handler inside handler
for dy, dx in OFFSETS:
try:
if row + dy >= 0 <= col + dx and matrix[row + dy][col + dx] is None:
matrix[row][col] = None
row, col = row + dy, col + dx
matrix[row][col] = tile
x, y = tile.position()
tile.setposition(x + dx * TILE_SIZE, y - dy * TILE_SIZE)
break
except IndexError:
pass
tile.onclick(partial(slide, tile, row, col))
screen = Screen()
matrix = [[None for _ in range(SIZE)] for _ in range(SIZE)]
offset = TILE_SIZE * 1.5
for row in range(SIZE):
for col in range(SIZE):
if row == SIZE - 1 == col:
break
tile = Turtle('square', visible=False)
tile.shapesize(TILE_SIZE / CURSOR_SIZE)
tile.fillcolor(random(), random(), random())
tile.penup()
tile.goto(col * TILE_SIZE - offset, offset - row * TILE_SIZE)
tile.onclick(partial(slide, tile, row, col))
tile.showturtle()
matrix[row][col] = tile
screen.mainloop()
Click on a tile next to the blank space to have it move into that space:
| Making a sliding puzzle game using turtle (not tkinter or pygame) | So I'm currently trying to program a slide-puzzle game without importing tkinter or pygame. So far I've generated a board and populated it with working buttons (quit, load, reset), but I'm really lost on how to program the actual slide puzzle game with the images I've been provided.
This code generates the screen and buttons that make up my board. Clicking the load button (which I already have set up) allows the user to type in the puzzle they want to load and unscramble. The issue is that I don't know how to get all the images onto the board, and I'm not sure what direction I should go in to actually program the game elements (it's just a screen and buttons right now). I'm a newbie programmer, so any help is really appreciated.
screen = turtle.Screen()
def generate_screen():
os.chdir('Resources') # Changes directory to allow access to .gifs in Resources
screen.setup(700, 700)
screen.title("Sliding Puzzle Game")
screen.tracer(0)
generate_scoreboard()
generate_leaderboard()
iconturtle = turtle.Turtle()
iconturtle.penup()
for file in os.listdir():
screen.register_shape(file)
iconturtle.goto(280, -270)
iconturtle.shape('quitbutton.gif')
iconturtle.stamp()
iconturtle.goto(180, -270)
iconturtle.shape('loadbutton.gif')
iconturtle.stamp()
iconturtle.goto(80, -270)
iconturtle.shape('resetbutton.gif')
iconturtle.stamp()
def load_yoshi():
os.chdir('Images\\yoshi')
screen.tracer(1)
screen.register_shape('yoshi_thumbnail.gif')
t = turtle.Turtle()
t.penup()
t.shape('yoshi_thumbnail.gif')
t.goto(250,290)
t.stamp()
screen.update()
files = glob.glob('*.gif') # pulling out only .gif
images = files
print(images)
for file in images:
screen.register_shape(file)
| [
"\nI've only seen turtle used to draw lines, not shapes, much less\nmovable game pieces. I think pygame would definitely be better for\nthis – OneCricketeer\n\nBelow is an example slide game simplified from an earlier answer I wrote about creating numbered tiles using turtle:\nfrom turtle import Screen, Turtle\nfrom functools import partial\nfrom random import random\n\nSIZE = 4\nTILE_SIZE = 100\nOFFSETS = [(-1, 0), (0, -1), (1, 0), (0, 1)]\n\nCURSOR_SIZE = 20\n\ndef slide(tile, row, col, x, y):\n tile.onclick(None) # disable handler inside handler\n\n for dy, dx in OFFSETS:\n try:\n if row + dy >= 0 <= col + dx and matrix[row + dy][col + dx] is None:\n matrix[row][col] = None\n row, col = row + dy, col + dx\n matrix[row][col] = tile\n x, y = tile.position()\n tile.setposition(x + dx * TILE_SIZE, y - dy * TILE_SIZE)\n break\n except IndexError:\n pass\n\n tile.onclick(partial(slide, tile, row, col))\n\nscreen = Screen()\n\nmatrix = [[None for _ in range(SIZE)] for _ in range(SIZE)]\n\noffset = TILE_SIZE * 1.5\n\nfor row in range(SIZE):\n for col in range(SIZE):\n if row == SIZE - 1 == col:\n break\n\n tile = Turtle('square', visible=False)\n tile.shapesize(TILE_SIZE / CURSOR_SIZE)\n tile.fillcolor(random(), random(), random())\n tile.penup()\n tile.goto(col * TILE_SIZE - offset, offset - row * TILE_SIZE)\n tile.onclick(partial(slide, tile, row, col))\n tile.showturtle()\n\n matrix[row][col] = tile\n\nscreen.mainloop()\n\nClick on a tile next to the blank space to have it move into that space:\n\n"
] | [
0
] | [] | [] | [
"game_development",
"python",
"python_turtle",
"turtle_graphics"
] | stackoverflow_0074672476_game_development_python_python_turtle_turtle_graphics.txt |
Q:
How can I encrypt ASCII code using a Caesar method? I want to use the same program
# Caesar Cipher
# http://inventwithpython.com/hacking (BSD Licensed)
import pyperclip
# the string to be encrypted/decrypted
message = 'This is my secret message.'
# the encryption/decryption key
key = 13
# tells the program to encrypt or decrypt
mode = 'encrypt' # set to 'encrypt' or 'decrypt'
# every possible symbol that can be encrypted
LETTERS = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'
# stores the encrypted/decrypted form of the message
translated = ''
# capitalize the string in message
message = message.upper()
# run the encryption/decryption code on each symbol in the message string
for symbol in message:
if symbol in LETTERS:
# get the encrypted (or decrypted) number for this symbol
num = LETTERS.find(symbol) # get the number of the symbol
if mode == 'encrypt':
num = num + key
elif mode == 'decrypt':
num = num - key
# handle the wrap-around if num is larger than the length of
# LETTERS or less than 0
if num >= len(LETTERS):
num = num - len(LETTERS)
elif num < 0:
num = num + len(LETTERS)
# add encrypted/decrypted number's symbol at the end of translated
translated = translated + LETTERS[num]
else:
# just add the symbol without encrypting/decrypting
translated = translated + symbol
# print the encrypted/decrypted string to the screen
print(translated)
# copy the encrypted/decrypted string to the clipboard
pyperclip.copy(translated)
Input:
(This is my secret message)
Output:
(GUVF VF ZL FRPERG ZRFFNTR)
This program encrypts with a Caesar method over the uppercase alphabet. I want it to encrypt the ASCII character set instead, still using a Caesar method.
How can I convert it to do that?
A:
To encrypt the ASCII code of each symbol in the message using a Caesar cipher, you will need to modify the code as follows:
Shift each symbol's ASCII code directly instead of indexing into the LETTERS string. Restricting the cipher to the printable ASCII range (codes 32 through 126) keeps the output printable and covers digits, letters, punctuation marks, and the space character.
In the for loop that iterates over each symbol in the message, use the ord() function to convert the symbol to its ASCII code, and the chr() function to convert the encrypted or decrypted ASCII code back to a character.
Here is an example of how you could modify the code to encrypt the ASCII code of each symbol in the message using a Caesar cipher:
import pyperclip

# the string to be encrypted/decrypted
message = 'This is my secret message.'
# the encryption/decryption key
key = 13

# tells the program to encrypt or decrypt
mode = 'encrypt' # set to 'encrypt' or 'decrypt'
# the printable ASCII range: codes 32 (space) through 126 ('~'), 95 symbols
FIRST, LAST = 32, 126
RANGE = LAST - FIRST + 1

# stores the encrypted/decrypted form of the message
translated = ''

# run the encryption/decryption code on each symbol in the message string
for symbol in message:
    # get the ASCII code of the symbol
    ascii_code = ord(symbol)
    if FIRST <= ascii_code <= LAST:
        # get the encrypted (or decrypted) ASCII code for this symbol
        if mode == 'encrypt':
            ascii_code = ascii_code + key
        elif mode == 'decrypt':
            ascii_code = ascii_code - key

        # handle the wrap-around if the ASCII code leaves the printable range
        if ascii_code > LAST:
            ascii_code = ascii_code - RANGE
        elif ascii_code < FIRST:
            ascii_code = ascii_code + RANGE

        # add the encrypted/decrypted character to the translated string
        translated = translated + chr(ascii_code)
    else:
        # just add the character without encrypting/decrypting
        translated = translated + symbol
# print the encrypted/decrypted string to the screen
print(translated)
# copy the encrypted/decrypted string to the clipboard
pyperclip.copy(translated)
This code should encrypt the ASCII code of each symbol in the message using a Caesar cipher and print the encrypted message to the screen. The encrypted message will also be copied to the clipboard.
| How can I encrypt ASCII code using a Caesar method? I want to use the same program | # Caesar Cipher
# http://inventwithpython.com/hacking (BSD Licensed)
import pyperclip
# the string to be encrypted/decrypted
message = 'This is my secret message.'
# the encryption/decryption key
key = 13
# tells the program to encrypt or decrypt
mode = 'encrypt' # set to 'encrypt' or 'decrypt'
# every possible symbol that can be encrypted
LETTERS = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'
# stores the encrypted/decrypted form of the message
translated = ''
# capitalize the string in message
message = message.upper()
# run the encryption/decryption code on each symbol in the message string
for symbol in message:
if symbol in LETTERS:
# get the encrypted (or decrypted) number for this symbol
num = LETTERS.find(symbol) # get the number of the symbol
if mode == 'encrypt':
num = num + key
elif mode == 'decrypt':
num = num - key
# handle the wrap-around if num is larger than the length of
# LETTERS or less than 0
if num >= len(LETTERS):
num = num - len(LETTERS)
elif num < 0:
num = num + len(LETTERS)
# add encrypted/decrypted number's symbol at the end of translated
translated = translated + LETTERS[num]
else:
# just add the symbol without encrypting/decrypting
translated = translated + symbol
# print the encrypted/decrypted string to the screen
print(translated)
# copy the encrypted/decrypted string to the clipboard
pyperclip.copy(translated)
Input:
(This is my secret message)
Output:
(GUVF VF ZL FRPERG ZRFFNTR)
This program encrypts with a Caesar method over the uppercase alphabet. I want it to encrypt the ASCII character set instead, still using a Caesar method.
How can I convert it to do that?
| [
"To encrypt the ASCII code of each symbol in the message using a Caesar cipher, you will need to modify the code as follows:\nReplace the LETTERS string with a string containing all the ASCII characters that you want to encrypt. For example, you could use the string string.printable from the string module, which includes all the ASCII characters that are considered printable, including digits, letters, and punctuation marks.\nIn the for loop that iterates over each symbol in the message, use the ord() function to convert the symbol to its ASCII code, and the chr() function to convert the encrypted or decrypted ASCII code back to a character.\nHere is an example of how you could modify the code to encrypt the ASCII code of each symbol in the message using a Caesar cipher:\nimport string\nimport pyperclip\n\n# the string to be encrypted/decrypted\nmessage = 'This is my secret message.'\n# the encryption/decryption key\nkey = 13\n\n# tells the program to encrypt or decrypt\nmode = 'encrypt' # set to 'encrypt' or 'decrypt'\n# every possible ASCII character that can be encrypted\nLETTERS = string.printable\n\n# stores the encrypted/decrypted form of the message\ntranslated = ''\n\n# run the encryption/decryption code on each symbol in the message string\nfor symbol in message:\n # get the ASCII code of the symbol\n ascii_code = ord(symbol)\n if ascii_code in range(len(LETTERS)):\n # get the encrypted (or decrypted) ASCII code for this symbol\n if mode == 'encrypt':\n ascii_code = ascii_code + key\n elif mode == 'decrypt':\n ascii_code = ascii_code - key\n\n # handle the wrap-around if the ASCII code is out of range\n if ascii_code >= len(LETTERS):\n ascii_code = ascii_code - len(LETTERS)\n elif ascii_code < 0:\n ascii_code = ascii_code + len(LETTERS)\n\n # add the encrypted/decrypted character to the translated string\n translated = translated + chr(ascii_code)\n\n else:\n # just add the character without encrypting/decrypting\n translated = translated + symbol\n\n# print the encrypted/decrypted string to the screen\nprint(translated)\n\n# copy the encrypted/decrypted string to the clipboard\npyperclip.copy(translated)\n\nThis code should encrypt the ASCII code of each symbol in the message using a Caesar cipher and print the encrypted message to the screen. The encrypted message will also be copied to the clipboard.\n"
] | [
1
] | [] | [] | [
"python"
] | stackoverflow_0074679603_python.txt |
Q:
Outer minimum vectorization in numpy follow up
This is a follow-up to my previous question.
Given an NxM matrix A, I want to efficiently obtain the NxN matrix whose ith row is the sum along the 2nd axis of the result of applying np.minimum between A and the ith row of A.
Using a for loop,
> A = np.array([[1, 2], [3, 4], [5,6]])
> output = np.zeros(shape=(A.shape[0], A.shape[0]))
> for i in range(A.shape[0]):
output[i] = np.sum(np.minimum(A, A[i]), axis=1)
> output
array([[ 3., 3., 3.],
[ 3., 7., 7.],
[ 3., 7., 11.]])
Is it possible to optimize this further without the for loop?
Edit: I would also like to do it without allocating an MxMxN tensor because of memory constraints.
A:
You can replace the for loop with broadcasting: using NumPy's minimum and sum functions, the desired matrix is a single vectorized expression:
output = np.sum(np.minimum(A[:, None], A), axis=2)
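Note that A[:, None] materializes an intermediate (N, N, M) array, so this trades memory for speed and conflicts with the memory constraint in the edit. A middle ground is to broadcast over blocks of rows, so the intermediate is only (block, N, M); a sketch:
import numpy as np

def outer_min_sums(A, block=256):
    N = A.shape[0]
    out = np.empty((N, N), dtype=A.dtype)
    for start in range(0, N, block):
        stop = min(start + block, N)
        # intermediate here is (stop - start, N, M) instead of (N, N, M)
        out[start:stop] = np.minimum(A[start:stop, None], A).sum(axis=2)
    return out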
| Outer minimum vectorization in numpy follow up | This is a follow-up to my previous question.
Given an NxM matrix A, I want to efficiently obtain the NxN matrix whose ith row is the sum along the 2nd axis of the result of applying np.minimum between A and the ith row of A.
Using a for loop,
> A = np.array([[1, 2], [3, 4], [5,6]])
> output = np.zeros(shape=(A.shape[0], A.shape[0]))
> for i in range(A.shape[0]):
output[i] = np.sum(np.minimum(A, A[i]), axis=1)
> output
array([[ 3., 3., 3.],
[ 3., 7., 7.],
[ 3., 7., 11.]])
Is it possible to optimize this further without the for loop?
Edit: I would also like to do it without allocating an MxMxN tensor because of memory constraints.
| [
"instead of a for loop. Using the NumPy minimum and sum functions, you can compute the desired matrix output as follows:\noutput = np.sum(np.minimum(A[:, None], A), axis=2)\n\n"
] | [
0
] | [] | [] | [
"numpy",
"python",
"vectorization"
] | stackoverflow_0074679407_numpy_python_vectorization.txt |
Q:
Regular Expression with Python (pattern with character exception)
Please I tried to create a pattern that can index some references, but I have a case that is hard for me to separate.
line = "thiêu (30:33). chương 36-37 ghi lại tất cả những 2:6 việc này đã thật sự xảyra như thế nào (37:36-38)."
line = re.sub(r'([^–])(\d+):(\d+)([^\\|–|\}|\d])(\d+)', r'\1\2:\3\4\5\\index[KT]{?@?!0\2|0\3 @\2:\3\4\5}', line)
print("12 => ", line)
line = re.sub(r'([^–])(\d+):(\d+)(?!\-)', r'\1\2:\3\\index[KT]{?@?!0\2|0\3 @\2:\3}', line)
print("13 => ", line)
Return
12 => thiêu (30:33). chương 36-37 ghi lại tất cả những 2:6 việc này đã thật sự xảyra như thế nào (37:36-38\index[KT]{?@?!037|036 @37:36-38}).
13 => thiêu (30:33\index[KT]{?@?!030|033 @30:33}). chương 36-37 ghi lại tất cả những 2:6\index[KT]{?@?!02|06 @2:6} việc này đã thật sự xảyra như thế nào (37:3\index[KT]{?@?!037|03 @37:3}6-38\index[KT]{?@?!037|036 @37:3\index[KT]{?@?!037|03 @37:3}6-38}).
I want it to do the indexing like that:
12 => thiêu (30:33). chương 36-37 ghi lại tất cả những 2:6 việc này đã thật sự xảyra như thế nào (37:36-38\index[KT]{?@?!037|036 @37:36-38}).
13 => thiêu (30:33\index[KT]{?@?!030|033 @30:33}). chương 36-37 ghi lại tất cả những 2:6\index[KT]{?@?!02|06 @2:6} việc này đã thật sự xảyra như thế nào (37:36-38\index[KT]{?@?!037|036 @37:36-38}).
A:
Does this do what you want? It's not clear what the conversion requirements are, but this matches your target strings:
import re
line = "thiêu (30:33). chương 36-37 ghi lại tất cả những 2:6 việc này đã thật sự xảyra như thế nào (37:36-38)."
want1 = 'thiêu (30:33). chương 36-37 ghi lại tất cả những 2:6 việc này đã thật sự xảyra như thế nào (37:36-38\index[KT]{?@?!037|036 @37:36-38}).'
want2 = 'thiêu (30:33\index[KT]{?@?!030|033 @30:33}). chương 36-37 ghi lại tất cả những 2:6\index[KT]{?@?!02|06 @2:6} việc này đã thật sự xảyra như thế nào (37:36-38\index[KT]{?@?!037|036 @37:36-38}).'
line1 = re.sub(r'(\d+):(\d+)-(\d+)', r'\1:\2-\3\\index[KT]{?@?!0\1|0\2 @\1:\2-\3}', line)
print(line1)
assert line1 == want1
line2 = re.sub(r'(\d+):((\d+)(?:-\d+)?)', r'\1:\2\\index[KT]{?@?!0\1|0\3 @\1:\2}', line)
print(line2)
assert line2 == want2
Output:
thiêu (30:33). chương 36-37 ghi lại tất cả những 2:6 việc này đã thật sự xảyra như thế nào (37:36-38\index[KT]{?@?!037|036 @37:36-38}).
thiêu (30:33\index[KT]{?@?!030|033 @30:33}). chương 36-37 ghi lại tất cả những 2:6\index[KT]{?@?!02|06 @2:6} việc này đã thật sự xảyra như thế nào (37:36-38\index[KT]{?@?!037|036 @37:36-38}).
| Regular Expression with Python (pattern with character exception) | I tried to create a pattern that can index some references, but there is one case that is hard for me to separate.
line = "thiêu (30:33). chương 36-37 ghi lại tất cả những 2:6 việc này đã thật sự xảyra như thế nào (37:36-38)."
line = re.sub(r'([^–])(\d+):(\d+)([^\\|–|\}|\d])(\d+)', r'\1\2:\3\4\5\\index[KT]{?@?!0\2|0\3 @\2:\3\4\5}', line)
print("12 => ", line)
line = re.sub(r'([^–])(\d+):(\d+)(?!\-)', r'\1\2:\3\\index[KT]{?@?!0\2|0\3 @\2:\3}', line)
print("13 => ", line)
Return
12 => thiêu (30:33). chương 36-37 ghi lại tất cả những 2:6 việc này đã thật sự xảyra như thế nào (37:36-38\index[KT]{?@?!037|036 @37:36-38}).
13 => thiêu (30:33\index[KT]{?@?!030|033 @30:33}). chương 36-37 ghi lại tất cả những 2:6\index[KT]{?@?!02|06 @2:6} việc này đã thật sự xảyra như thế nào (37:3\index[KT]{?@?!037|03 @37:3}6-38\index[KT]{?@?!037|036 @37:3\index[KT]{?@?!037|03 @37:3}6-38}).
I want it to do the indexing like that:
12 => thiêu (30:33). chương 36-37 ghi lại tất cả những 2:6 việc này đã thật sự xảyra như thế nào (37:36-38\index[KT]{?@?!037|036 @37:36-38}).
13 => thiêu (30:33\index[KT]{?@?!030|033 @30:33}). chương 36-37 ghi lại tất cả những 2:6\index[KT]{?@?!02|06 @2:6} việc này đã thật sự xảyra như thế nào (37:36-38\index[KT]{?@?!037|036 @37:36-38}).
| [
"Does this do what you want? It's not clear what the conversion requirements are, but this matches your target strings:\nimport re\n\nline = \"thiêu (30:33). chương 36-37 ghi lại tất cả những 2:6 việc này đã thật sự xảyra như thế nào (37:36-38).\"\nwant1 = 'thiêu (30:33). chương 36-37 ghi lại tất cả những 2:6 việc này đã thật sự xảyra như thế nào (37:36-38\\index[KT]{?@?!037|036 @37:36-38}).'\nwant2 = 'thiêu (30:33\\index[KT]{?@?!030|033 @30:33}). chương 36-37 ghi lại tất cả những 2:6\\index[KT]{?@?!02|06 @2:6} việc này đã thật sự xảyra như thế nào (37:36-38\\index[KT]{?@?!037|036 @37:36-38}).'\n\nline1 = re.sub(r'(\\d+):(\\d+)-(\\d+)', r'\\1:\\2-\\3\\\\index[KT]{?@?!0\\1|0\\2 @\\1:\\2-\\3}', line)\nprint(line1)\nassert line1 == want1\nline2 = re.sub(r'(\\d+):((\\d+)(?:-\\d+)?)', r'\\1:\\2\\\\index[KT]{?@?!0\\1|0\\3 @\\1:\\2}', line)\nprint(line2)\nassert line2 == want2\n\nOutput:\nthiêu (30:33). chương 36-37 ghi lại tất cả những 2:6 việc này đã thật sự xảyra như thế nào (37:36-38\\index[KT]{?@?!037|036 @37:36-38}).\nthiêu (30:33\\index[KT]{?@?!030|033 @30:33}). chương 36-37 ghi lại tất cả những 2:6\\index[KT]{?@?!02|06 @2:6} việc này đã thật sự xảyra như thế nào (37:36-38\\index[KT]{?@?!037|036 @37:36-38}).\n\n"
] | [
0
] | [] | [] | [
"python",
"regex"
] | stackoverflow_0074679413_python_regex.txt |
Q:
colors are wrong numpy array for pillow image when I use txt
First, I transform an image into a numpy array and write it to a text file; this part is working.
The problem is when I copy the text content, dynamically paste it in as an array, and display the image: the colors come out wrong.
import cv2
import sys
import numpy
from PIL import Image
numpy.set_printoptions(threshold=sys.maxsize)
def img_to_txt_array(img):
image = cv2.imread(img)
# print(image)
f = open("img_array.txt", "w")
f.write(str(image))
f.close()
meuArquivo = open('img_array.txt', 'r')
with open('img_array.txt', 'r') as fd:
txt = fd.read()
txt = txt.replace(" ", ",")
txt = txt.replace('\n',',\n')
txt = txt.replace("[,", "[")
txt = txt.replace(',[', '[')
txt = txt.replace(",,", ",")
txt = txt.replace(',[', '[')
txt = txt.replace("[,", "[")
txt = txt.replace(",,", ",")
with open('img_array.txt', 'w') as fd:
fd.write(txt)
with open('img_array.txt', 'r') as fr:
lines = fr.readlines()
with open('img_array.txt', 'w') as fw:
for line in lines:
if line.strip('\n') != ',':
fw.write(line)
def show_imagem(array):
# Create a NumPy array
arry = numpy.array(array)
# Create a PIL image from the NumPy array
image = Image.fromarray(arry.astype('uint8'), 'RGB')
# Save the image
#image.save('image.jpg')
# Show the image
image.show(image)
array = [] #paste here the txt
img_to_txt_array('mickey.png')
show_imagem(array)
I need to get the colors right
| colors are wrong numpy array for pillow image when I use txt | First, I transform an image into a numpy array and write it to a text file; this part is working.
The problem is when I copy the text content, dynamically paste it in as an array, and display the image: the colors come out wrong.
import cv2
import sys
import numpy
from PIL import Image
numpy.set_printoptions(threshold=sys.maxsize)
def img_to_txt_array(img):
image = cv2.imread(img)
# print(image)
f = open("img_array.txt", "w")
f.write(str(image))
f.close()
meuArquivo = open('img_array.txt', 'r')
with open('img_array.txt', 'r') as fd:
txt = fd.read()
txt = txt.replace(" ", ",")
txt = txt.replace('\n',',\n')
txt = txt.replace("[,", "[")
txt = txt.replace(',[', '[')
txt = txt.replace(",,", ",")
txt = txt.replace(',[', '[')
txt = txt.replace("[,", "[")
txt = txt.replace(",,", ",")
with open('img_array.txt', 'w') as fd:
fd.write(txt)
with open('img_array.txt', 'r') as fr:
lines = fr.readlines()
with open('img_array.txt', 'w') as fw:
for line in lines:
if line.strip('\n') != ',':
fw.write(line)
def show_imagem(array):
# Create a NumPy array
arry = numpy.array(array)
# Create a PIL image from the NumPy array
image = Image.fromarray(arry.astype('uint8'), 'RGB')
# Save the image
#image.save('image.jpg')
# Show the image
image.show(image)
array = [] #paste here the txt
img_to_txt_array('mickey.png')
show_imagem(array)
I need to get the colors right
| [] | [] | [
"It looks like the problem is that the image array is being saved in a text file as an array of strings rather than an array of integers. When you read the text file and convert it back into an array, the values are being interpreted as strings, resulting in the wrong colors when the image is displayed.\nOne solution to this problem is to save the array in the text file as an array of integers rather than an array of strings. You can do this by using the numpy.savetxt() function, which saves an array to a text file in a specified format. Here is an example of how you could modify your code to save the array as integers in the text file:\nimport cv2\nimport sys\nimport numpy\nfrom PIL import Image\n\nnumpy.set_printoptions(threshold=sys.maxsize)\n\ndef img_to_txt_array(img):\n image = cv2.imread(img)\n # Save the array as integers in the text file\n numpy.savetxt('img_array.txt', image, fmt='%d')\n\ndef show_imagem(array):\n # Create a NumPy array from the text file\n arry = numpy.loadtxt('img_array.txt', dtype=int)\n \n # Create a PIL image from the NumPy array\n image = Image.fromarray(arry.astype('uint8'), 'RGB')\n \n # Save the image\n #image.save('image.jpg')\n \n # Show the image\n image.show(image)\n\narray = [] #paste here the txt\n\nimg_to_txt_array('mickey.png')\nshow_imagem(array)\n\nIn this code, the img_to_txt_array() function saves the image array as integers in the text file using the numpy.savetxt() function. The show_imagem() function reads the array from the text file using the numpy.loadtxt() function, and then creates and displays the image from the array. This should fix the problem of the wrong colors in the image.\n"
] | [
-1
] | [
"numpy",
"python",
"python_3.x",
"python_imaging_library"
] | stackoverflow_0074679658_numpy_python_python_3.x_python_imaging_library.txt |
Q:
subprocess.call can't find file/shutil.which failed in pycharm
I am trying to transform a mp3 to a wav file in pycharm using subprocess
import subprocess
subprocess.call(['ffmpeg', '-i','test.mp3','test.wav'])
It returns a file-not-found error, so I changed 'ffmpeg' to its full path on my PC and it worked.
The problem is that I am making an app and others might install ffmpeg in a different location (since it is downloaded as a zip and can be unzipped anywhere), but I don't know how to get its full path.
I tried using os module
import os
print(os.path('ffmpeg.exe'))
but it seems like it is not able to get the path of exe
Traceback (most recent call last):
File "C:\Users\Percy\PycharmProjects\APP\test3.py", line 8, in <module>
print(os.path('ffmpeg.exe'))
TypeError: 'module' object is not callable
I also tried shutil module
import shutil
print(shutil.which('ffmpeg'))
print(shutil.which('ffmpeg.exe'))
but it returns two Nones (which is probably wrong, because I am 100% sure I have installed ffmpeg)
None
None
I want to ask if there is any way to get the full path of ffmpeg in PyCharm, or any method to make ffmpeg install to a designated path when users download the app
A:
If you can get "everyone" to install FFmpeg using my ffmpeg-downloader, then all of you can install FFmpeg by:
pip install ffmpeg-downloader
ffdl install
Then in Python your package could use
import ffmpeg_downloader as ffdl
sp.run([ffdl.ffmpeg_path, '-i', 'input.mp4', 'output.mkv'])
Alternately, you can use static-ffmpeg to (dynamically) install FFmpeg to Lib/site-package. (See the linked GitHub page for howto.)
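For what it's worth, shutil.which only searches the directories listed on PATH, which is why it returned None even though FFmpeg is installed somewhere else. A minimal sketch of a fallback search (the candidate locations below are illustrative assumptions, not real defaults):
import os
import shutil

def find_ffmpeg():
    # First try PATH, the only place shutil.which looks.
    found = shutil.which('ffmpeg')
    if found:
        return found
    # Fall back to locations your app documents or ships with (assumed paths).
    candidates = [
        os.path.join(os.path.dirname(os.path.abspath(__file__)), 'ffmpeg', 'bin', 'ffmpeg.exe'),
        r'C:\ffmpeg\bin\ffmpeg.exe',
    ]
    for path in candidates:
        if os.path.isfile(path):
            return path
    raise FileNotFoundError('ffmpeg not found; ask the user for its location')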
| subprocess.call can't find file/shutil.which failed in pycharm | I am trying to transform a mp3 to a wav file in pycharm using subprocess
import subprocess
subprocess.call(['ffmpeg', '-i','test.mp3','test.wav'])
It returns a file-not-found error, so I changed 'ffmpeg' to its full path on my PC and it worked.
The problem is that I am making an app and others might install ffmpeg in a different location (since it is downloaded as a zip and can be unzipped anywhere), but I don't know how to get its full path.
I tried using os module
import os
print(os.path('ffmpeg.exe'))
but it seems like it is not able to get the path of exe
Traceback (most recent call last):
File "C:\Users\Percy\PycharmProjects\APP\test3.py", line 8, in <module>
print(os.path('ffmpeg.exe'))
TypeError: 'module' object is not callable
I also tried shutil module
import shutil
print(shutil.which('ffmpeg'))
print(shutil.which('ffmpeg.exe'))
but it returns two Nones (which is probably wrong, because I am 100% sure I have installed ffmpeg)
None
None
I want to ask if there is any way to get the full path of ffmpeg in PyCharm, or any method to make ffmpeg install to a designated path when users download the app
| [
"If you can make \"everyone\" to install using my ffmpeg-downloader then all of you can install FFmpeg by:\npip install ffmpeg-downloader\nffdl install\n\nThen in Python your package could use\nimport ffmpeg_downloader as ffdl\n\nsp.run([ffdl.ffmpeg_path, '-i', 'input.mp4', 'output.mkv'])\n\nAlternately, you can use static-ffmpeg to (dynamically) install FFmpeg to Lib/site-package. (See the linked GitHub page for howto.)\n"
] | [
0
] | [] | [] | [
"ffmpeg",
"python",
"shutil",
"subprocess"
] | stackoverflow_0074678072_ffmpeg_python_shutil_subprocess.txt |
Q:
plt.legend() when plotting multiple dataframes in a for loop
Suppose I have three dataframes
df1 = pd.DataFrame({"A" : [1,2,3], "B" : [4,5,6]})
df2 = pd.DataFrame({"A" : [2,5,3], "B" : [7,3,1]})
df3 = pd.DataFrame({"A" : [1,2,1], "B" : [5,3,6]})
I put all three dataframes in a list to perform an identical operation on all three dataframes
dframes = [df1, df2, df3]
for frame in dframes:
frame["C"] = frame["A"] + frame["B"]
plt.plot(frame["C"])
works like a charm. My problem comes when I want to add a legend.
plt.legend() throws
No artists with labels found to put in legend.
plt.legend(frame) uses the names of the columns in the dataframes, i.e.,
"A", "B" & "C"
and not as desired
"df1", "df2", "df3"
How can I grab the correct line handles?
A:
As mentioned in the message you've received after trying plt.legend(), the function is looking for labels. So, let's supply them inside plt.plot by setting the label parameter.
We can use enumerate to get index values for the dfs in your list dframes as well, to be used inside the f-strings.
dframes = [df1, df2, df3]
for i, frame in enumerate(dframes):
frame["C"] = frame["A"] + frame["B"]
plt.plot(frame["C"], label=f'df{i+1}') # setting the label: `df1`, `df2`, `df3`
plt.legend() # or `plt.legend(bbox_to_anchor=[1,1])` to move it outside of the plot
plt.show()
Result
Update. Example using a dict and using the keys for the labels:
dframes = {'ll_800': df1, 'll_600wo_200': df2, 'wo_800': df3}
for k, v in dframes.items():
v["C"] = v["A"] + v["B"]
plt.plot(v["C"], label=k)
plt.legend(bbox_to_anchor=[1,1])
plt.show()
Result
| plt.legend() when plotting multiple dataframes in a for loop | Suppose I have three dataframes
df1 = pd.DataFrame({"A" : [1,2,3], "B" : [4,5,6]})
df2 = pd.DataFrame({"A" : [2,5,3], "B" : [7,3,1]})
df3 = pd.DataFrame({"A" : [1,2,1], "B" : [5,3,6]})
I put all three dataframes in a list to perform an identical operation on all three dataframes
dframes = [df1, df2, df3]
for frame in dframes:
frame["C"] = frame["A"] + frame["B"]
plt.plot(frame["C"])
works like a charm. My problem comes when I want to add a legend.
plt.legend() throws
No artists with labels found to put in legend.
plt.legend(frame) uses the names of the columns in the dataframes, i.e.,
"A", "B" & "C"
and not as desired
"df1", "df2", "df3"
How can I grab the correct line handles?
| [
"As mentioned in the message you've received after trying plt.legend(), the function is looking for labels. So, let's supply them inside plt.plot by setting the label parameter.\nWe can use enumerate to get index values for the dfs in your list dframes as well, to be used inside the f-strings.\ndframes = [df1, df2, df3]\nfor i, frame in enumerate(dframes):\n frame[\"C\"] = frame[\"A\"] + frame[\"B\"]\n plt.plot(frame[\"C\"], label=f'df{i+1}') # setting the label: `df1`, `df2`, `df3`\n \nplt.legend() # or `plt.legend(bbox_to_anchor=[1,1])` to move it outside of the plot\nplt.show()\n\nResult\n\n\nUpdate. Example using a dict and using the keys for the labels:\ndframes = {'ll_800': df1, 'll_600wo_200': df2, 'wo_800': df3}\nfor k, v in dframes.items():\n v[\"C\"] = v[\"A\"] + v[\"B\"]\n plt.plot(v[\"C\"], label=k)\nplt.legend(bbox_to_anchor=[1,1])\nplt.show()\n\nResult\n\n"
] | [
1
] | [] | [] | [
"dataframe",
"matplotlib",
"pandas",
"python"
] | stackoverflow_0074679576_dataframe_matplotlib_pandas_python.txt |
Q:
selenium python element select from dropdown menu
Trying to select multiple elements from dropdown menu via selenium in python.
The website is from the URL, but a TimeoutException error is occurring.
I have tried the Inspect menu in Google Chrome. //label[@for="inputGenre"]/parent::div//select[@placeholder="Choose a Category"] gives exactly the select tag that I need. But unfortunately, with Selenium I cannot locate any element within this tag. Any ideas why the error occurs?
code is below;
slect_element = Select(WebDriverWait(driver, 10).until(EC.element_located_to_be_selected((By.XPATH, '//label[@for="inputGenre"]/parent::div//select[@placeholder="Choose a Category"]'))))
slect_element.select_by_index(1)
slect_element.select_by_value('23')
Strangely, it is possible to locate it and get its text values with the code below:
drp_menu=driver.find_elements(By.XPATH,'//label[@for="inputGenre"]/parent::div//div[@class="dropdown-main"]/ul/li')
print(len(drp_menu))
ls_categories=[]
for i in drp_menu:
ls_categories.append(i.get_attribute('innerText'))
print gives 15 elements, and get_attribute('innerText') gives the text of each option element.
Anyway, Thanks a lot @Prophet
A:
That Select element is hidden and can't be used by Selenium as we use normal Select elements to select drop-down menu items.
Here we need to open the drop-down as we open any other elements by clicking them, select the desired options and click the Search button.
So, these lines are opening that drop down and selecting 2 options in the drop-down menu:
wait.until(EC.element_to_be_clickable((By.XPATH, "//div[@class='form-group'][contains(.,'Category')]//div[@class='dropdown-display-label']"))).click()
wait.until(EC.element_to_be_clickable((By.XPATH, "//div[@class='form-group'][contains(.,'Category')]//li[@data-value='23']"))).click()
wait.until(EC.element_to_be_clickable((By.XPATH, "//div[@class='form-group'][contains(.,'Category')]//li[@data-value='1']"))).click()
The result so far is:
And by finally clicking the Search button the result is:
The entire code is:
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
options = Options()
options.add_argument("start-maximized")
webdriver_service = Service('C:\webdrivers\chromedriver.exe')
driver = webdriver.Chrome(service=webdriver_service, options=options)
wait = WebDriverWait(driver, 20)
url = "https://channelcrawler.com/"
driver.get(url)
wait.until(EC.element_to_be_clickable((By.XPATH, "//div[@class='form-group'][contains(.,'Category')]//div[@class='dropdown-display-label']"))).click()
wait.until(EC.element_to_be_clickable((By.XPATH, "//div[@class='form-group'][contains(.,'Category')]//li[@data-value='23']"))).click()
wait.until(EC.element_to_be_clickable((By.XPATH, "//div[@class='form-group'][contains(.,'Category')]//li[@data-value='1']"))).click()
wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, "button[type='submit']"))).click()
| selenium python element select from dropdown menu | Trying to select multiple elements from dropdown menu via selenium in python.
The website is from the URL, but a TimeoutException error is occurring.
I have tried the Inspect menu in Google Chrome. //label[@for="inputGenre"]/parent::div//select[@placeholder="Choose a Category"] gives exactly the select tag that I need. But unfortunately, with Selenium I cannot locate any element within this tag. Any ideas why the error occurs?
code is below;
slect_element = Select(WebDriverWait(driver, 10).until(EC.element_located_to_be_selected((By.XPATH, '//label[@for="inputGenre"]/parent::div//select[@placeholder="Choose a Category"]'))))
slect_element.select_by_index(1)
slect_element.select_by_value('23')
Strangely, it is possible to locate it and get its text values with the code below:
drp_menu=driver.find_elements(By.XPATH,'//label[@for="inputGenre"]/parent::div//div[@class="dropdown-main"]/ul/li')
print(len(drp_menu))
ls_categories=[]
for i in drp_menu:
ls_categories.append(i.get_attribute('innerText'))
print gives 15 elements, and get_attribute('innerText') gives the text of each option element.
Anyway, Thanks a lot @Prophet
| [
"That Select element is hidden and can't be used by Selenium as we use normal Select elements to select drop-down menu items.\nHere we need to open the drop-down as we open any other elements by clicking them, select the desired options and click the Search button.\nSo, these lines are opening that drop down and selecting 2 options in the drop-down menu:\nwait.until(EC.element_to_be_clickable((By.XPATH, \"//div[@class='form-group'][contains(.,'Category')]//div[@class='dropdown-display-label']\"))).click()\nwait.until(EC.element_to_be_clickable((By.XPATH, \"//div[@class='form-group'][contains(.,'Category')]//li[@data-value='23']\"))).click()\nwait.until(EC.element_to_be_clickable((By.XPATH, \"//div[@class='form-group'][contains(.,'Category')]//li[@data-value='1']\"))).click()\n\nThe result so far is:\n\nAnd by finally clicking the Search button the result is:\n\nThe entire code is:\nfrom selenium import webdriver\nfrom selenium.webdriver.chrome.service import Service\nfrom selenium.webdriver.chrome.options import Options\nfrom selenium.webdriver.support.ui import WebDriverWait\nfrom selenium.webdriver.common.by import By\nfrom selenium.webdriver.support import expected_conditions as EC\noptions = Options()\noptions.add_argument(\"start-maximized\")\n\nwebdriver_service = Service('C:\\webdrivers\\chromedriver.exe')\ndriver = webdriver.Chrome(service=webdriver_service, options=options)\nwait = WebDriverWait(driver, 20)\n\nurl = \"https://channelcrawler.com/\"\n\ndriver.get(url)\n\nwait.until(EC.element_to_be_clickable((By.XPATH, \"//div[@class='form-group'][contains(.,'Category')]//div[@class='dropdown-display-label']\"))).click()\nwait.until(EC.element_to_be_clickable((By.XPATH, \"//div[@class='form-group'][contains(.,'Category')]//li[@data-value='23']\"))).click()\nwait.until(EC.element_to_be_clickable((By.XPATH, \"//div[@class='form-group'][contains(.,'Category')]//li[@data-value='1']\"))).click()\nwait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, \"button[type='submit']\"))).click()\n\n"
] | [
2
] | [] | [] | [
"drop_down_menu",
"python",
"select",
"selenium",
"xpath"
] | stackoverflow_0074679519_drop_down_menu_python_select_selenium_xpath.txt |
Q:
How to set turtle tracer false using tkinter?
I have to generate two turtle windows and draw in each one, so I'm using tkinter to create and show the windows. My code currently opens the right screen and draws in it, but the turtle is really slow, so I want to set the turtle tracer to false and use the update function, but I can't figure out how.
This is my turtle_interpreter.py file, which has all the functions I use to draw the L-system:
import turtle
from tkinter import *
class Window(Tk):
def __init__(self, title, geometry):
super().__init__()
self.running = True
self.geometry(geometry)
self.title(title)
self.protocol("WM_DELETE_WINDOW", self.destroy_window)
self.canvas = Canvas(self)
self.canvas.pack(side=LEFT, expand=True, fill=BOTH)
self.turtle = turtle.RawTurtle(turtle.TurtleScreen(self.canvas))
def update_window(self):
'''
sets window to update
'''
if self.running:
self.update()
def destroy_window(self):
'''
sets window to close
'''
self.running = False
self.destroy()
def drawString(turt, dstring, distance, angle):
'''Interpret the characters in string dstring as a series
of turtle commands. Distance specifies the distance
to travel for each forward command. Angle specifies the
angle (in degrees) for each right or left command. The list
of turtle supported turtle commands is:
F : forward
- : turn right
+ : turn left
'''
for char in dstring:
if char == 'F':
turt.forward(distance)
elif char == '-':
turt.right(angle)
elif char == '+':
turt.left(angle)
def place(turt, xpos, ypos, angle=None):
'''
places turtle at given coordinates and angle
'''
turt.penup()
turt.goto(xpos, ypos)
if angle != None:
turt.setheading(angle)
turt.pendown()
def goto(turt, xpos, ypos):
'''
moves turtle to given coordinates
'''
turt.penup()
turt.goto(xpos, ypos)
turt.pendown()
def setColor(turt, color):
'''
sets turtle color
'''
turt.color(color)
And this is the file where the functions get called. Running it draws the L-system.
import turtle_interpreter as turt_int
import lsystem_scene_three as lsystem
def turtle_scene_two():
'''
generates scene two
'''
# create window
win_two = turt_int.Window('Turtle Scene 2', '640x480+650+0')
# assign turtle
turt2 = win_two.turtle
# lsystem setup
lsystemFile = lsystem.Lsystem('lsystem_scene_two.txt')
tstr = lsystemFile.buildString(4)
# draw stuff
turt_int.setColor(turt2, (0, 0, 0))
turt_int.place(turt2, 0, -200, 90)
turt_int.drawString(turt2, tstr, 4, 90)
# update window (loop)
while win_two.running:
win_two.update_window()
turtle_scene_two()
Hope this makes sense. Let me know if it doesn't.
Appreciate your help!
Tried a few things but nothing was promising. Calling turtle generates another screen (which I don't want).
A:
Since you didn't provide all your code, I can't test this, so I'm guessing a good start would be changing this:
self.turtle = turtle.RawTurtle(turtle.TurtleScreen(self.canvas))
to something like:
screen = turtle.TurtleScreen(self.canvas)
screen.tracer(False)
self.turtle = turtle.RawTurtle(screen)
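One caveat: with the tracer off, nothing is rendered until the screen is refreshed, so you would also call screen.update() after drawing — in your Window class that means keeping a self.screen reference and calling self.screen.update() inside update_window(). A minimal self-contained sketch of the pattern (assuming a plain tkinter Canvas, as in your code):
import turtle
from tkinter import Tk, Canvas, LEFT, BOTH

root = Tk()
canvas = Canvas(root)
canvas.pack(side=LEFT, expand=True, fill=BOTH)

screen = turtle.TurtleScreen(canvas)
screen.tracer(False)           # disable per-step animation
turt = turtle.RawTurtle(screen)

for _ in range(4):             # drawing happens off-screen...
    turt.forward(100)
    turt.left(90)

screen.update()                # ...and appears all at once here
root.mainloop()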
| How to set turtle tracer false using tkinter? | I have to generate two turtle windows and draw in each one, so I'm using tkinter to create and show the windows. My code currently opens the right screen and draws in it, but the turtle is really slow, so I want to set the turtle tracer to false and use the update function, but I can't figure out how.
This is my turtle_interpreter.py file, which has all the functions I use to draw the L-system:
import turtle
from tkinter import *
class Window(Tk):
def __init__(self, title, geometry):
super().__init__()
self.running = True
self.geometry(geometry)
self.title(title)
self.protocol("WM_DELETE_WINDOW", self.destroy_window)
self.canvas = Canvas(self)
self.canvas.pack(side=LEFT, expand=True, fill=BOTH)
self.turtle = turtle.RawTurtle(turtle.TurtleScreen(self.canvas))
def update_window(self):
'''
sets window to update
'''
if self.running:
self.update()
def destroy_window(self):
'''
sets window to close
'''
self.running = False
self.destroy()
def drawString(turt, dstring, distance, angle):
'''Interpret the characters in string dstring as a series
of turtle commands. Distance specifies the distance
to travel for each forward command. Angle specifies the
angle (in degrees) for each right or left command. The list
of turtle supported turtle commands is:
F : forward
- : turn right
+ : turn left
'''
for char in dstring:
if char == 'F':
turt.forward(distance)
elif char == '-':
turt.right(angle)
elif char == '+':
turt.left(angle)
def place(turt, xpos, ypos, angle=None):
'''
places turtle at given coordinates and angle
'''
turt.penup()
turt.goto(xpos, ypos)
if angle != None:
turt.setheading(angle)
turt.pendown()
def goto(turt, xpos, ypos):
'''
moves turtle to given coordinates
'''
turt.penup()
turt.goto(xpos, ypos)
turt.pendown()
def setColor(turt, color):
'''
sets turtle color
'''
turt.color(color)
And this is the file where the functions get called. Running it draws the L-system.
import turtle_interpreter as turt_int
import lsystem_scene_three as lsystem
def turtle_scene_two():
'''
generates scene two
'''
# create window
win_two = turt_int.Window('Turtle Scene 2', '640x480+650+0')
# assign turtle
turt2 = win_two.turtle
# lsystem setup
lsystemFile = lsystem.Lsystem('lsystem_scene_two.txt')
tstr = lsystemFile.buildString(4)
# draw stuff
turt_int.setColor(turt2, (0, 0, 0))
turt_int.place(turt2, 0, -200, 90)
turt_int.drawString(turt2, tstr, 4, 90)
# update window (loop)
while win_two.running:
win_two.update_window()
turtle_scene_two()
Hope this makes sense. Let me know if it doesn't.
Appreciate your help!
Tried a few things but nothing was promising. Calling turtle generates another screen (which I don't want).
| [
"Since you didn't provide all your code, I can't test this, so I'm guessing a good start would be changing this:\nself.turtle = turtle.RawTurtle(turtle.TurtleScreen(self.canvas))\n\nto something like:\nscreen = turtle.TurtleScreen(self.canvas)\nscreen.tracer(False)\nself.turtle = turtle.RawTurtle(screen)\n\n"
] | [
0
] | [] | [] | [
"python",
"python_turtle",
"tkinter",
"turtle_graphics"
] | stackoverflow_0074670230_python_python_turtle_tkinter_turtle_graphics.txt |
Q:
Why does my pyrogram bot keep turning off?
For some reason my bot always turns off without printing any output to the command line or showing any kind of error. The bot functions properly for a few hours after being turned on. Basic code looks like this:
app = Client("my_account", '123456', '123456789abcd')
TESTING = "321"
USER_ID = "123"
chat_mapping = {TESTING: "-10011111111111", USER_ID: "-10011111111111"}
@app.on_message()
def my_handler(client, message):
if str(message.chat.id) not in chat_mapping:
return
elif str(message.chat.id) == USER_ID:
storeMsg(message)
else:
print(message.text)
app.run()
Any advice would be greatly appreciated!
A:
if str(message.chat.id) not in chat_mapping
In this line, your statement checks whether message.chat.id equals one of the keys of the dictionary, not the values.
That means your message.chat.id can't be 123 or 321.
USER_ID = some id
chat_mapping = [some ids]
@app.on_message()
def my_handler(client, message):
if str(message.chat.id) not in chat_mapping:
return
elif str(message.chat.id) == USER_ID:
storeMsg(message)
else:
print(message.text)
app.run()
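Since the process exits without a traceback, a hedged debugging step (reusing the app Client from the question; this is an assumption about where the failure surfaces, not a confirmed fix) is to log inside the handler so the last processed message and any exception get recorded:
import logging

logging.basicConfig(level=logging.INFO, filename='bot.log')
log = logging.getLogger(__name__)

@app.on_message()
def my_handler(client, message):
    try:
        log.info('message from chat %s', message.chat.id)
        # ... your existing handler logic ...
    except Exception:
        # Record the error instead of letting it vanish silently.
        log.exception('handler failed')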
| Why does my pyrogram bot keep turning off? | For some reason my bot always turns off without printing any output to the command line or showing any kind of error. The bot functions properly for a few hours after being turned on. Basic code looks like this:
app = Client("my_account", '123456', '123456789abcd')
TESTING = "321"
USER_ID = "123"
chat_mapping = {TESTING: "-10011111111111", USER_ID: "-10011111111111"}
@app.on_message()
def my_handler(client, message):
if str(message.chat.id) not in chat_mapping:
return
elif str(message.chat.id) == USER_ID:
storeMsg(message)
else:
print(message.text)
app.run()
Any advice would be greatly appreciated!
| [
"if str(message.chat.id) not in chat_mapping\nin this lane, your statement will check if message.chat.id is equal one of the keys of dictionary, not values.\nMeans your message.chat.id can't be 123 or 321.\nUSER_ID = some id\nchat_mapping = [some ids] \[email protected]_message()\ndef my_handler(client, message):\n if str(message.chat.id) not in chat_mapping:\n return\n elif str(message.chat.id) == USER_ID:\n storeMsg(message)\n else:\n print(message.text)\n\napp.run()\n\n"
] | [
0
] | [] | [] | [
"pyrogram",
"python"
] | stackoverflow_0071444813_pyrogram_python.txt |
Q:
Python : compare data frame row value with previous row value
I am trying to create a new column by comparing each row's value with the previous row's value.
The error that I get is ValueError: The truth value of a DataFrame is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
I have checked the data type of columns.
All of them are float64
But I am getting an error
CODE:
cols=['High', 'Low', 'Open', 'Volume', "Adj Close"]
df = df.drop(columns = cols)
df['EMA60'] = df['Close'].ewm(span=60, adjust=False).mean()
df['EMA100'] = df['Close'].ewm(span=100, adjust=False).mean()
df['MACD_60_100'] = df['EMA60'] - df['EMA100']
df['SIGNAL_60_100'] = df['MACD_60_100'].ewm(span=9, adjust=False).mean()
df['HIST_60_100'] = df['MACD_60_100'] - df['SIGNAL_60_100'] # Histogram
df = df.iloc[1: , :] # Delete first row in DF as it contains NAN
print(df.dtypes)
print (df)
if df[df['HIST_60_100'] > df['HIST_60_100'].shift(+1)]: # check if the value is > previous row value
df['COLOR-60-100'] = "GREEN"
else:
df['COLOR-60-100'] = "RED"
print(df.to_string())
ERROR:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_10972/77395577.py in <module>
32
33
---> 34 get_data_from_yahoo(symbol+".NS")
35
36 # df.to_excel(sheetXls, index=False)
~\AppData\Local\Temp/ipykernel_10972/520355426.py in get_data_from_yahoo(symbol, interval, start, end)
26 print(df.dtypes)
27 print (df)
---> 28 if df[df['HIST_60_100'] > df['HIST_60_100'].shift(+1)]:
29 df['COLOR-60-100'] = "GREEN"
30 else:
~\AppData\Roaming\Python\Python39\site-packages\pandas\core\generic.py in __nonzero__(self)
1327
1328 def __nonzero__(self):
-> 1329 raise ValueError(
1330 f"The truth value of a {type(self).__name__} is ambiguous. "
1331 "Use a.empty, a.bool(), a.item(), a.any() or a.all()."
ValueError: The truth value of a DataFrame is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
A:
No need for an if.. else statement here. You can use numpy.where instead.
numpy.where(condition, [x, y, ]/)
Replace this :
if df[df['HIST_60_100'] > df['HIST_60_100'].shift(+1)]: # check if the value is > previous row value
df['COLOR-60-100'] = "GREEN"
else:
df['COLOR-60-100'] = "RED"
By this :
df['COLOR-60-100'] = np.where(df['HIST_60_100'].gt(df['HIST_60_100'].shift(1)), "GREEN", "RED")
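A quick runnable check of the pattern on toy data (the numbers are made up):
import numpy as np
import pandas as pd

df = pd.DataFrame({'HIST_60_100': [0.1, 0.3, 0.2, 0.5]})
cond = df['HIST_60_100'].gt(df['HIST_60_100'].shift(1))
df['COLOR-60-100'] = np.where(cond, 'GREEN', 'RED')
print(df)
# The first row compares against NaN, so gt() is False and it gets RED.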
| Python : compare data frame row value with previous row value | I am trying to create a new column by comparing each row's value with the previous row's value.
The error that I get is ValueError: The truth value of a DataFrame is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
I have checked the data type of columns.
All of them are float64
But I am getting an error
CODE:
cols=['High', 'Low', 'Open', 'Volume', "Adj Close"]
df = df.drop(columns = cols)
df['EMA60'] = df['Close'].ewm(span=60, adjust=False).mean()
df['EMA100'] = df['Close'].ewm(span=100, adjust=False).mean()
df['MACD_60_100'] = df['EMA60'] - df['EMA100']
df['SIGNAL_60_100'] = df['MACD_60_100'].ewm(span=9, adjust=False).mean()
df['HIST_60_100'] = df['MACD_60_100'] - df['SIGNAL_60_100'] # Histogram
df = df.iloc[1: , :] # Delete first row in DF as it contains NAN
print(df.dtypes)
print (df)
if df[df['HIST_60_100'] > df['HIST_60_100'].shift(+1)]: # check if the value is > previous row value
df['COLOR-60-100'] = "GREEN"
else:
df['COLOR-60-100'] = "RED"
print(df.to_string())
ERROR:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_10972/77395577.py in <module>
32
33
---> 34 get_data_from_yahoo(symbol+".NS")
35
36 # df.to_excel(sheetXls, index=False)
~\AppData\Local\Temp/ipykernel_10972/520355426.py in get_data_from_yahoo(symbol, interval, start, end)
26 print(df.dtypes)
27 print (df)
---> 28 if df[df['HIST_60_100'] > df['HIST_60_100'].shift(+1)]:
29 df['COLOR-60-100'] = "GREEN"
30 else:
~\AppData\Roaming\Python\Python39\site-packages\pandas\core\generic.py in __nonzero__(self)
1327
1328 def __nonzero__(self):
-> 1329 raise ValueError(
1330 f"The truth value of a {type(self).__name__} is ambiguous. "
1331 "Use a.empty, a.bool(), a.item(), a.any() or a.all()."
ValueError: The truth value of a DataFrame is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
| [
"No need for an if.. else statement here. You can use numpy.where instead.\n\nnumpy.where(condition, [x, y, ]/)\n\nReplace this :\nif df[df['HIST_60_100'] > df['HIST_60_100'].shift(+1)]: # check if the valus is > previous row value\n df['COLOR-60-100'] = \"GREEN\"\nelse:\n df['COLOR-60-100'] = \"RED\"\n\nBy this :\ndf['COLOR-60-100'] = np.where(df['HIST_60_100'].gt(df['HIST_60_100'].shift(1), \"GREEN\", \"RED\")\n\n"
] | [
0
] | [] | [] | [
"dataframe",
"numpy",
"pandas",
"python"
] | stackoverflow_0074679598_dataframe_numpy_pandas_python.txt |
Q:
How to append csv data as a ROW to another existing csv and move to 1st row. When I try to append, all the data is at the bottom of the first column
I have a csv with one row of data. It represents legacy headers that I am trying to append as 1 new row (or consider it as many columns) in a second csv. I need to compare the legacy header with the second csv's current headers, so after I append the data from the first csv I want to move it so that it's the first row too.
The issue right now is that when i append my data from the first csv it just all goes to the bottom of the first column.
See below for my code. How can i make it so that it takes the 1 row of data in my first csv and appends it to my second csv as ONE NEW ROW. After how would i move it so that it becomes the first row in my data (not as a header)
with open('filewith1row.csv', 'r', encoding='utf8') as reader:
with open('mainfile.csv', 'a', encoding='utf8') as writer:
for line in reader:
writer.write(line)
Please help!! Thank you in advance
A:
You could use pandas to import the csv files, combine the two, and then overwrite the original mainfile.csv.
I have created some dummy data to demonstrate. Here are the test files that I used:
mainfile.csv
Fruit,Animals,Numbers
Apple,Cat,5
Banana,Dog,8
Cherry,Goat,2
Durian,Horse,4
filewith1row.csv
Fruta,Animales,Números
This is the code that I used to combine the two CSVs.
Code:
import pandas as pd
mainfile = pd.read_csv('mainfile.csv', header=None)
one_liner = pd.read_csv('filewith1row.csv', header=None)
mainfile.loc[0.5]=one_liner.loc[0]
mainfile = mainfile.sort_index()
mainfile.to_csv('mainfile.csv', index=False, header=False)
Output:
mainfile.csv
Fruit,Animals,Numbers
Fruta,Animales,Números
Apple,Cat,5
Banana,Dog,8
Cherry,Goat,2
Durian,Horse,4
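If the fractional-index trick feels too clever, a sketch of an alternative using pd.concat (same assumed files) keeps the original header row first and slots the legacy row right after it:
import pandas as pd

mainfile = pd.read_csv('mainfile.csv', header=None)
one_liner = pd.read_csv('filewith1row.csv', header=None)

# Original header row, then the legacy row, then the remaining rows.
combined = pd.concat([mainfile.iloc[:1], one_liner, mainfile.iloc[1:]],
                     ignore_index=True)
combined.to_csv('mainfile.csv', index=False, header=False)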
A:
Combine the two files into a new one.
cat hdr.csv
first_col,second_col,third_col
cat data.csv
1,2,3
4,5,6
7,8,9
with open('new_file.csv', 'w') as new_file:
with open('hdr.csv') as hdr_file:
new_file.write(hdr_file.read())
with open('data.csv') as data_file:
new_file.write(data_file.read())
cat new_file.csv
first_col,second_col,third_col
1,2,3
4,5,6
7,8,9
| How to append csv data as a ROW to another existing csv and move to 1st row. When I try to append, all the data is at the bottom of the first column | I have a csv with one row of data. It represents legacy headers that I am trying to append as 1 new row (or consider it as many columns) in a second csv. I need to compare the legacy header with the second csv's current headers, so after I append the data from the first csv I want to move it so that it's the first row too.
The issue right now is that when I append my data from the first csv, it all goes to the bottom of the first column.
See below for my code. How can I make it so that it takes the one row of data in my first csv and appends it to my second csv as ONE NEW ROW? Afterwards, how would I move it so that it becomes the first row in my data (not as a header)?
with open('filewith1row.csv', 'r', encoding='utf8') as reader:
with open('mainfile.csv', 'a', encoding='utf8') as writer:
for line in reader:
writer.write(line)
Please help!! Thank you in advance
| [
"You could use pandas to import the csv files, combine the two, and then overwrite the original mainfile.csv.\nI have created some dummy data to demonstrate. Here are the test files that I used:\nmainfile.csv\nFruit,Animals,Numbers\nApple,Cat,5\nBanana,Dog,8\nCherry,Goat,2\nDurian,Horse,4\n\nfilewith1row.csv\nFruta,Animales,Números\n\n\nThis is the code that I used to combine the two CSVs.\nCode:\nimport pandas as pd\n\nmainfile = pd.read_csv('mainfile.csv', header=None)\none_liner = pd.read_csv('filewith1row.csv', header=None)\n\nmainfile.loc[0.5]=one_liner.loc[0]\nmainfile = mainfile.sort_index() \nmainfile.to_csv('mainfile.csv', index=False, header=False)\n\nOutput:\nmainfile.csv\nFruit,Animals,Numbers\nFruta,Animales,Números\nApple,Cat,5\nBanana,Dog,8\nCherry,Goat,2\nDurian,Horse,4\n\n",
"Combine the two files into a new one.\ncat hdr.csv \nfirst_col,second_col,third_col\n\ncat data.csv \n1,2,3\n4,5,6\n7,8,9\n\n\nwith open('new_file.csv', 'w') as new_file:\n with open('hdr.csv') as hdr_file:\n new_file.write(hdr_file.read())\n with open('data.csv') as data_file:\n new_file.write(data_file.read())\n\ncat new_file.csv \nfirst_col,second_col,third_col\n1,2,3\n4,5,6\n7,8,9\n\n\n\n"
] | [
1,
0
] | [] | [] | [
"append",
"csv",
"python",
"row"
] | stackoverflow_0074678018_append_csv_python_row.txt |
Q:
How to convert a number into words based on a dictionary?
First, the user inputs a number. Then I want to print a word for each digit in the number. Which word is determined by a dictionary. When a digit is not in the dictionary, then I want the program to print "!".
Here is an example of how the code should work:
Enter numbers : 12345
Result: one two three ! !
Because 4 and 5 do not exist in our dictionary, the program has to print "!".
DictN= {
'1':'one',
'2':'two',
'3':'three'
}
InpN = input('Enter Ur Number: ')
for i,j in DictN.items():
for numb in InpN:
if numb in i:
print(j)
else:
print('!')
MY WRONG OUTPUT
one
!
!
!
!
two
!
!
!
!
three
!
Process finished with exit code 0
A:
# Read the input as a string of digits
indata = input('Enter Ur Number: ')
for x in indata: print(DictN.get(x,'!'))
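Since the expected output in the question is a single line (one two three ! !), a variant that joins the mapped digits — dict.get's second argument supplies the '!' fallback for missing keys:
print(' '.join(DictN.get(x, '!') for x in indata))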
A:
a dict works like this:
# the alphabetical letters are the keys, and the numbers are the values which belong to the keys
a = {'a': 1, 'b': 2}
# if you want to have value from a it would be
print(a.get('a'))
# or
print(a['a'])
# which is in your example:
DictN= {
'1':'one',
'2':'two',
'3':'three'
}
InpN = input('Enter Ur Number: ')
print(DictN[InpN])
in your example you could do something like this
DictN= {
'1':'one',
'2':'two',
'3':'three'
}
InpN = input('Enter Ur Number: ')
for i in InpN:
    print(DictN.get(i, '!'))
by the way you would not use these variable names in Python. This source is good for best practices:
https://peps.python.org/pep-0008/
https://python-reference.readthedocs.io/en/latest/docs/dict/get.html
A:
The solution to your problem can be programmed pretty much exactly how it's written in english, like this:
DictN = {
'1':'one',
'2':'two',
'3':'three'
}
InpN = input('Enter Ur Number: ')
for i in InpN:
if i in DictN:
print(DictN[i])
else:
print("!")
gerrel93 explained how to retrieve data from a dictionary, and for many methods regarding iterables (objects such as strings and lists), dictionaries are treated as a list of their keys. The in keyword is one example of this.
A:
Since the other answers already show you how to use a dictionary, thought I'd demonstrate an alternative way of solving your problem. Would have written this as a comment, but I don't have enough reputation :)
The hope is that this helps you learn about iterators and some neat builtins
inp = input("Enter Ur Number: ")
words = ["one", "two", "three"]
dict_n = dict(zip(range(1, len(words) + 1), words))
print(" ".join([dict_n.get(int(n), "!") for n in inp]))
Breaking it down
>>> range(1, len(words)) # iterator
range(1, 3)
>>> dict(zip([1,2,3], [4,5,6])) # eg: zip elements of two iterators
{1: 4, 2: 5, 3: 6}
>>> dict(zip(range(1, len(words)), words))
{1: 'one', 2: 'two'}
>>> [n for n in range(3)] # list comprehension
[0, 1, 2]
>>> " ".join(["1", "2", "3"])
'1 2 3'
Hope the above block helps break it down :)
A:
The problem with your code is that you loop over the dictionary, and then check whether a certain value is in the input. You should do this in the reverse way: Loop over the input and then check whether aech digit is in the dictionary:
DictN= {
'1':'one',
'2':'two',
'3':'three'
}
InpN = input('Enter Ur Number: ')
for digit in InpN:
if digit in DictN:
print(DictN[digit], end=" ")
else:
print("!", end=" ")
print("")
The output is the following:
Enter Ur Number: 12345
one two three ! !
So, first we define the dictionary DicN and receive the input in InpN exactly as you did. Then we loop over the string we received as input. We do this because we want to print a single character for each digit. In the loop, we check for each digit whether it is in the dictionary. If it is, then we retreive the correct word from the dictionary. If it isn't, then we print !.
Do also note that I used end=" " in the prints. This is because otherwise every word or ! would be printed on a new line. The end argument of the print function determines what value is added after the printed string. By default this is "\n", a newline. That's why I changed it to a space. But this does also mean that we have to place print() after the code, because otherwise the following print call would print its text on the same line.
A:
Alternative using map:
DictN={'1':'one','2':'two','3':'three'}
print(*map(lambda i:DictN.get(i,'!'),input('Enter Ur Number: ')),sep=' ')
# Enter Ur Number: 12345
# one two three ! !
| How to convert a number into words based on a dictionary? | First, the user inputs a number. Then I want to print a word for each digit in the number. Which word is determined by a dictionary. When a digit is not in the dictionary, then I want the program to print "!".
Here is an example of how the code should work:
Enter numbers : 12345
Result: one two three ! !
Because 4 and 5 do not exist in our dictionary, the program has to print "!".
DictN= {
'1':'one',
'2':'two',
'3':'three'
}
InpN = input('Enter Ur Number: ')
for i,j in DictN.items():
for numb in InpN:
if numb in i:
print(j)
else:
print('!')
MY WRONG OUTPUT
one
!
!
!
!
two
!
!
!
!
three
!
Process finished with exit code 0
| [
"# Consider the input as an int\nindata = input('Enter Ur Number: ')\n\nfor x in indata: print(DictN.get(x,'!'))\n\n",
"a dict works like this:\n# the alphabetical letters are the keys, and the numbers are the values which belong to the keys\na = {'a': 1, 'b': 2}\n\n# if you want to have value from a it would be \nprint(a.get('a'))\n# or\nprint(a['a'])\n\n# which is in your example:\nDictN= {\n '1':'one',\n '2':'two',\n '3':'three'\n}\nInpN = input('Enter Ur Number: ')\nprint(DictN[InpN])\n\nin your example you could do something like this\nDictN= {\n '1':'one',\n '2':'two',\n '3':'three'\n}\nInpN = input('Enter Ur Number: ')\nfor i in InpN:\n print(InpN[i], '!')\n\nby the way you would not use these variable names in Python. This source is good for best practices:\nhttps://peps.python.org/pep-0008/\nhttps://python-reference.readthedocs.io/en/latest/docs/dict/get.html\n",
"The solution to your problem can be programmed pretty much exactly how it's written in english, like this:\nDictN = {\n '1':'one',\n '2':'two',\n '3':'three'\n}\nInpN = input('Enter Ur Number: ')\n\nfor i in InpN:\n if i in DictN:\n print(DictN[i])\n else:\n print(\"!\")\n\ngerrel93 explained how to retrieve data from a dictionary, and for many methods regarding iterables (objects such as strings and lists), dictionaries are treated as a list of their keys. The in keyword is one example of this.\n",
"Since the other answers already show you how to use a dictionary, thought I'd demonstrate an alternative way of solving your problem. Would have written this as a comment, but I don't have enough reputation :)\nThe hope is that this helps you learn about iterators and some neat builtins\ninp = input(\"Enter Ur Number: \")\n\nwords = [\"one\", \"two\", \"three\"]\ndict_n = dict(zip(range(1, len(words) + 1), words))\nprint(\" \".join([dict_n.get(int(n), \"!\") for n in inp]))\n\nBreaking it down\n>>> range(1, len(words)) # iterator\nrange(1, 3)\n\n>>> dict(zip([1,2,3], [4,5,6])) # eg: zip elements of two iterators\n{1: 4, 2: 5, 3: 6}\n\n>>> dict(zip(range(1, len(words)), words))\n{1: 'one', 2: 'two'}\n\n>>> [n for n in range(3)] # list comprehension\n[0, 1, 2]\n\n>>> \" \".join([\"1\", \"2\", \"3\"])\n'1 2 3'\n\nHope the above block helps break it down :)\n",
"The problem with your code is that you loop over the dictionary, and then check whether a certain value is in the input. You should do this in the reverse way: Loop over the input and then check whether aech digit is in the dictionary:\nDictN= {\n '1':'one',\n '2':'two',\n '3':'three'\n}\nInpN = input('Enter Ur Number: ')\n\nfor digit in InpN:\n if digit in DictN:\n print(DictN[digit], end=\" \")\n else:\n print(\"!\", end=\" \")\nprint(\"\")\n\nThe output is the following:\nEnter Ur Number: 12345\none two three ! ! \n\nSo, first we define the dictionary DicN and receive the input in InpN exactly as you did. Then we loop over the string we received as input. We do this because we want to print a single character for each digit. In the loop, we check for each digit whether it is in the dictionary. If it is, then we retreive the correct word from the dictionary. If it isn't, then we print !.\nDo also note that I used end=\" \" in the prints. This is because otherwise every word or ! would be printed on a new line. The end argument of the print function determines what value is added after the printed string. By default this is \"\\n\", a newline. That's why I changed it to a space. But this does also mean that we have to place print() after the code, because otherwise the following print call would print its text on the same line.\n",
"Alternative using map:\nDictN={'1':'one','2':'two','3':'three'}\nprint(*map(lambda i:DictN.get(i,'!'),input('Enter Ur Number: ')),sep=' ')\n\n# Enter Ur Number: 12345\n# one two three ! !\n\n"
] | [
2,
0,
0,
0,
0,
0
] | [] | [] | [
"python",
"python_3.x"
] | stackoverflow_0074679274_python_python_3.x.txt |
Q:
Selenium Webdriver not able to locate and click button
I am trying to create a scraper, and for accessing the page I need to click on the Accept Cookies button.
The HTML referring to the button is:
<div class="qc-cmp2-summary-buttons">
<button mode="secondary" size="large" class=" css-1hy2vtq">
<span>MORE OPTIONS</span>
</button><button mode="primary" size="large" class=" css-47sehv">
<span>AGREE</span></button></div>
The button I want to click is the second one, named "AGREE".
I tried:
driver.find_element(By.CLASS_NAME, " css-47sehv").click()
and I get error
selenium.common.exceptions.InvalidSelectorException: Message: invalid selector: An invalid or illegal selector was specified
A:
The error you received is caused by a space before the class name value.
Spaces between class names are used to separate between multiple class names.
So, to use this specific class name you could try this (without the space):
driver.find_element(By.CLASS_NAME, "css-47sehv").click()
But css-47sehv seems to be a dynamically generated class name, so I'd advise using a more stable attribute to locate that element.
Try this locator instead:
driver.find_element(By.XPATH, "//button[contains(.,'AGREE')]").click()
To give you a better answer we would need to see that web page and all your Selenium code.
A:
Try to use By.CSS_SELECTOR. Find element and copy its selector
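Whichever locator you choose, cookie banners often render after the initial page load, so (as an addition beyond the answers above, not something the question confirms is needed) an explicit wait for clickability makes the click more robust:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

wait = WebDriverWait(driver, 10)
# Matches the AGREE button from the HTML snippet in the question.
wait.until(EC.element_to_be_clickable(
    (By.XPATH, "//button[.//span[text()='AGREE']]")
)).click()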
| Selenium Webdriver not able to locate and click button | I am trying to create a scraper, and for accessing the page I need to click on the Accept Cookies button.
The HTML referring to the button is:
<div class="qc-cmp2-summary-buttons">
<button mode="secondary" size="large" class=" css-1hy2vtq">
<span>MORE OPTIONS</span>
</button><button mode="primary" size="large" class=" css-47sehv">
<span>AGREE</span></button></div>
The button I want to click is the second one, named "AGREE".
I tried:
driver.find_element(By.CLASS_NAME, " css-47sehv").click()
and I get error
selenium.common.exceptions.InvalidSelectorException: Message: invalid selector: An invalid or illegal selector was specified
| [
"The error you received caused by a space before the class name value.\nSpaces between class names are used to separate between multiple class names.\nSo, to use this specific class name you could try this (without the space):\ndriver.find_element(By.CLASS_NAME, \"css-47sehv\").click()\n\nBut css-47sehv seems to be a dynamically generated class name so I'd advice to use more fixed attribute for locating that element.\nTry this locator instead:\ndriver.find_element(By.XPATH, \"//button[contains(.,'AGREE')]\").click()\n\nTo give you better answer we need to see that web page and all your Selenium code\n",
"Try to use By.CSS_SELECTOR. Find element and copy its selector\n"
] | [
1,
0
] | [] | [] | [
"css_selectors",
"python",
"selenium_chromedriver",
"selenium_webdriver",
"xpath"
] | stackoverflow_0074679681_css_selectors_python_selenium_chromedriver_selenium_webdriver_xpath.txt |
Q:
why form.is_valid() is always false?
I tried to create a contact-us form in Django, but I always get False when I use the .is_valid() function.
this is my form:
from django import forms
from django.core import validators
class ContactForm(forms.Form):
first_name = forms.CharField(
widget=forms.TextInput(
attrs={'placeholder': 'نام خود را وارد کنید'}),
label="نام ",
validators=[
validators.MaxLengthValidator(100, "نام شما نمیتواند بیش از 100 کاراکتر باشد")])
last_name = forms.CharField(
widget=forms.TextInput(
attrs={'placeholder': 'نام خانوادگی خود را وارد کنید'}),
label="نام خانوادگی",
validators=[
validators.MaxLengthValidator(100, "نام خانوادگی شما نمیتواند بیش از 100 کاراکتر باشد")])
email = forms.EmailField(
widget=forms.EmailInput(
attrs={'placeholder': 'ایمیل خود را وارد کنید'}),
label="ایمیل",
validators=[
validators.MaxLengthValidator(200, "تعداد کاراکترهایایمیل شما نمیتواند بیش از ۲۰۰ کاراکتر باشد.")
])
title = forms.CharField(
widget=forms.TextInput(
attrs={'placeholder': 'عنوان پیام خود را وارد کنید'}),
label="عنوان",
validators=[
validators.MaxLengthValidator(250, "تعداد کاراکترهای شما نمیتواند بیش از 250 کاراکتر باشد.")
])
text = forms.CharField(
widget=forms.Textarea(
attrs={'placeholder': 'متن پیام خود را وارد کنید'}),
label="متن پیام",
)
def __init__(self, *args, **kwargs):
super(ContactForm, self).__init__()
for visible in self.visible_fields():
visible.field.widget.attrs['class'] = 'form_field require'
this is my view:
from django.shortcuts import render
from .forms import ContactForm
from .models import ContactUs
def contact_us(request):
contact_form = ContactForm(request.POST or None)
if contact_form.is_valid():
first_name = contact_form.cleaned_data.get('first_name')
last_name = contact_form.cleaned_data.get('last_name')
email = contact_form.cleaned_data.get('email')
title = contact_form.cleaned_data.get('title')
text = contact_form.cleaned_data.get('text')
ContactUs.objects.create(first_name=first_name, last_name=last_name, email=email, title=title, text=text)
# todo: show user success message
contact_form = ContactForm()
context = {
'form': contact_form
}
return render(request, 'contact_us/contact_us.html', context)
**this is the code in the template**
<form action="{% url 'contact' %}" id="contactform" method="post">
{% csrf_token %}
<div class="col-md-6 col-lg-6">
<div class="form_block">
{{ form.first_name }}
{% for error in form.first_name.errors %}
<p class="text-danger">{{ error }}</p>
{% endfor %}
</div>
</div>
<div class="col-md-6 col-lg-6">
<div class="form_block">
{{ form.last_name }}
{% for error in form.last_name.errors %}
<p class="text-danger">{{ error }}</p>
{% endfor %}
</div>
</div>
<div class="col-md-6 col-lg-6">
<div class="form_block">
{{ form.email }}
{% for error in form.email.errors %}
<p class="text-danger">{{ error }}</p>
{% endfor %}
</div>
</div>
<div class="col-md-6 col-lg-6">
<div class="form_block">
{{ form.title }}
{% for error in form.title.errors %}
<p class="text-danger">{{ error }}</p>
{% endfor %}
</div>
</div>
<div class="col-md-12 col-lg-12">
<div class="form_block">
{{ form.text }}
{% for error in form.text.errors %}
<p class="text-danger">{{ error }}</p>
{% endfor %}
<div class="response"></div>
</div>
</div>
<div class="col-md-12 col-lg-12">
<div class="form_block">
<button type="submit" class="clv_btn submitForm"
data-type="contact">ارسال
</button>
</div>
</div>
</form>
A:
You can use CBVs to easily save the data when the form is valid.
views.py
from django.views import generic
from django.urls import reverse_lazy

class ContactCreateView(generic.CreateView):
    model = Contact
    fields = "__all__"
    success_url = reverse_lazy(url_name)
Then in your templates:
templates/contact_form.html
<form action="{% url 'contact' %}" id="contactform" method="post">
    {% csrf_token %}
    {% for formfield in form %}
    <div class="col-md-6 col-lg-6">
        <div class="form_block">
            {{ formfield }}
            {% for error in formfield.errors %}
                <p class="text-danger">{{ error }}</p>
            {% endfor %}
        </div>
    </div>
    {% endfor %}
    <!-- add submit button here -->
</form>
urls.py
just remember to add as_view() when calling your view in url
path("", views.ContactCreateView.as_view(), name="contact"),
or using FBV you could try:
def contact_us(request):
contact_form = ContactForm(request.POST or None)
if contact_form.is_valid():
contact_form.save()
context = {
'form': contact_form
}
return render(request, 'contact_us/contact_us.html', context)
A:
Remove these lines from your forms.py:
def __init__(self, *args, **kwargs):
super(ContactForm, self).__init__()
for visible in self.visible_fields():
visible.field.widget.attrs['class'] = 'form_field require'
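For context on why removing them works: super(ContactForm, self).__init__() is called without *args and **kwargs, so request.POST never reaches the form, the form stays unbound, and is_valid() is always False. If you want to keep the CSS-class loop, a sketch that simply forwards the arguments:
def __init__(self, *args, **kwargs):
    # Forward request.POST (and everything else) to the base class;
    # otherwise the form is constructed unbound and never validates.
    super(ContactForm, self).__init__(*args, **kwargs)
    for visible in self.visible_fields():
        visible.field.widget.attrs['class'] = 'form_field require'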
Fix your view based on documentation:
def contact_us(request):
if request.method == 'POST':
contact_form = ContactForm(request.POST)
if contact_form.is_valid():
ContactUs.objects.create(**contact_form.cleaned_data)
messages.success(request, 'My success message')
# you want to send to your home
return redirect('/contact')
else:
contact_form = ContactForm()
context = {
'form': contact_form
}
return render(request, 'contact_us.html', context)
I took the liberty of cleaning up your view by using **kwargs since you have many fields. I used Django messages to display a success message on object creation. Here is how to display it.
| why form.is_valid() is always false? | I tried to create a contact-us form in Django, but I always get False when I use the .is_valid() function.
this is my form:
from django import forms
from django.core import validators
class ContactForm(forms.Form):
first_name = forms.CharField(
widget=forms.TextInput(
attrs={'placeholder': 'نام خود را وارد کنید'}),
label="نام ",
validators=[
validators.MaxLengthValidator(100, "نام شما نمیتواند بیش از 100 کاراکتر باشد")])
last_name = forms.CharField(
widget=forms.TextInput(
attrs={'placeholder': 'نام خانوادگی خود را وارد کنید'}),
label="نام خانوادگی",
validators=[
validators.MaxLengthValidator(100, "نام خانوادگی شما نمیتواند بیش از 100 کاراکتر باشد")])
email = forms.EmailField(
widget=forms.EmailInput(
attrs={'placeholder': 'ایمیل خود را وارد کنید'}),
label="ایمیل",
validators=[
validators.MaxLengthValidator(200, "تعداد کاراکترهایایمیل شما نمیتواند بیش از ۲۰۰ کاراکتر باشد.")
])
title = forms.CharField(
widget=forms.TextInput(
attrs={'placeholder': 'عنوان پیام خود را وارد کنید'}),
label="عنوان",
validators=[
validators.MaxLengthValidator(250, "تعداد کاراکترهای شما نمیتواند بیش از 250 کاراکتر باشد.")
])
text = forms.CharField(
widget=forms.Textarea(
attrs={'placeholder': 'متن پیام خود را وارد کنید'}),
label="متن پیام",
)
def __init__(self, *args, **kwargs):
super(ContactForm, self).__init__()
for visible in self.visible_fields():
visible.field.widget.attrs['class'] = 'form_field require'
this is my view:
from django.shortcuts import render
from .forms import ContactForm
from .models import ContactUs
def contact_us(request):
contact_form = ContactForm(request.POST or None)
if contact_form.is_valid():
first_name = contact_form.cleaned_data.get('first_name')
last_name = contact_form.cleaned_data.get('last_name')
email = contact_form.cleaned_data.get('email')
title = contact_form.cleaned_data.get('title')
text = contact_form.cleaned_data.get('text')
ContactUs.objects.create(first_name=first_name, last_name=last_name, email=email, title=title, text=text)
# todo: show user success message
contact_form = ContactForm()
context = {
'form': contact_form
}
return render(request, 'contact_us/contact_us.html', context)
**this is the code in the template**
<form action="{% url 'contact' %}" id="contactform" method="post">
{% csrf_token %}
<div class="col-md-6 col-lg-6">
<div class="form_block">
{{ form.first_name }}
{% for error in form.first_name.errors %}
<p class="text-danger">{{ error }}</p>
{% endfor %}
</div>
</div>
<div class="col-md-6 col-lg-6">
<div class="form_block">
{{ form.last_name }}
{% for error in form.last_name.errors %}
<p class="text-danger">{{ error }}</p>
{% endfor %}
</div>
</div>
<div class="col-md-6 col-lg-6">
<div class="form_block">
{{ form.email }}
{% for error in form.email.errors %}
<p class="text-danger">{{ error }}</p>
{% endfor %}
</div>
</div>
<div class="col-md-6 col-lg-6">
<div class="form_block">
{{ form.title }}
{% for error in form.title.errors %}
<p class="text-danger">{{ error }}</p>
{% endfor %}
</div>
</div>
<div class="col-md-12 col-lg-12">
<div class="form_block">
{{ form.text }}
{% for error in form.text.errors %}
<p class="text-danger">{{ error }}</p>
{% endfor %}
<div class="response"></div>
</div>
</div>
<div class="col-md-12 col-lg-12">
<div class="form_block">
<button type="submit" class="clv_btn submitForm"
data-type="contact">ارسال
</button>
</div>
</div>
</form>
| [
"you can use CBVs to easily save data on form valid\n\nviews.py\n\nfrom django.views import generic\n\nclass ContactCreateView(generic.CreateView,):\n model = Contact\n fields = \"__all__\" \n success_url = reverse_lazy(url_name)\n\nthen in in your templates\n\ntemplates/contact_form.html\n\n\n<form action=\"{% url 'contact' %}\" id=\"contactform\" method=\"post\">\n {% csrf_token %}\n {% for formfield in form %}\n <div class=\"col-md-6 col-lg-6\">\n <div class=\"form_block\">\n {{ formfield}}\n {% for error in formfield.errors %}\n <p class=\"text-danger\">{{ error }}</p>\n {% endfor %}\n </div>\n </div>\n#add submit button here\n\n\n\nurls.py\njust remember to add as_view() when calling your view in url\n\n path(\"\", views.ContactCreateView.as_view(), name=\"contact\"),\n\nor using FBV you could try:\ndef contact_us(request):\n contact_form = ContactForm(request.POST or None)\n if contact_form.is_valid():\n contact_form.save()\n \n \n context = {\n 'form': contact_form\n }\n\n return render(request, 'contact_us/contact_us.html', context)\n\n\n\n",
"Remove these lines from your forms.py:\n def __init__(self, *args, **kwargs):\n super(ContactForm, self).__init__()\n for visible in self.visible_fields():\n visible.field.widget.attrs['class'] = 'form_field require'\n\nFix your view based on documentation:\ndef contact_us(request):\n if request.method == 'POST':\n contact_form = ContactForm(request.POST)\n if contact_form.is_valid():\n ContactUs.objects.create(**contact_form.cleaned_data)\n messages.success(request, 'My success message')\n # you want to send to your home\n return redirect('/contact')\n\n else:\n contact_form = ContactForm()\n context = {\n 'form': contact_form\n }\n\n return render(request, 'contact_us.html', context)\n\nI took the liberty to clean up your view by using **kwargs since you have many fields. Used Django messages to display successful message on obj creation. Here is how to display it.\n"
] | [
0,
0
] | [] | [] | [
"django",
"python"
] | stackoverflow_0074677222_django_python.txt |
Q:
Issues creating basic password system in python
I need to create a basic password system that reads from a text file for a school project. However, I can't get new passwords and usernames to append to the text files, and with my current system any account can be accessed with any preexisting password. I've tried a couple of different ways of writing to the text file, but none have worked so far.
Here is the code I have written so far:
def login():
createusername = ''
createuserpass = ''
with open('password.txt') as f:
passfile = [(passfile.strip()) for passfile in f.readlines()]
with open('username.txt') as g:
userpass = [(userpass.strip()) for userpass in g.readlines()]
def createnewusername():
createusername = input("Enter a new username: ")
return(createusername)
def createuserpassword():
createuserpass = input("Enter a new password: ")
return(createuserpass)
haveusername = input("Do you have a login? Enter yes for yes, Enter no for no: ")
if haveusername == "yes":
username = input("Enter your username: ")
password = input("Enter your password: ")
if username in userpass:
if password in passfile:
print("Login in succesful. ""Logged into the account: " + username)
else:
print("incorrect password - restarting")
login()
else:
print("incorrect username - restarting")
login()
elif haveusername == "no":
wantlogin = input("Do you want to create a login? Enter yes for yes, Enter no for no: ")
if wantlogin == "yes":
createnewusername()
print(userpass)
if createusername in userpass:
print("This username already exists - restarting")
login()
else:
createuserpassword()
if createuserpass in passfile:
print("This password already exists - restarting")
login()
else:
#Start of part that doesnt work
with open("password.txt","a") as passcreation:
passcreation.write(createuserpass)
passcreation.write('\n')
with open("username.txt","a") as namecreation:
namecreation.write(createusername)
namecreation.write('\n')
#End of part that doesnt work
print("Restarting - Please enter your new login")
login()
elif wantlogin == "no":
print("Okay - restarting")
login()
else:
print("Login not created - restarting")
login()
else:
print("Invalid input - restarting")
test = 1
if test == 1:
login()
A:
This is not an answer to the question yet.
It is a suggestion on how to monitor state changes for the files, since that
seems to be the main issue.
Added code to create the initial files.
Added a function to be called before and after desired state changes.
# Initialize the files (the filenames must match the ones login() reads)
with open('password.txt', 'w') as f:
    header = '*' * 20 + 'password.txt' + '*' * 20
    f.write(header)
with open('username.txt', 'w') as f:
    header = '*' * 20 + 'username.txt' + '*' * 20
    f.write(header)
def report_files():
    """This function is used to look at the files before and after change"""
    with open('password.txt') as f:
        print(f.readlines())
    with open('username.txt') as f:
        print(f.readlines())
def login():
createusername = ''
createuserpass = ''
with open('password.txt') as f:
passfile = [(passfile.strip()) for passfile in f.readlines()]
with open('username.txt') as g:
userpass = [(userpass.strip()) for userpass in g.readlines()]
def createnewusername():
createusername = input("Enter a new username: ")
return createusername
def createuserpassword():
createuserpass = input("Enter a new password: ")
return createuserpass
haveusername = input(
"Do you have a login? Enter yes for yes, Enter no for no: ")
if haveusername == "yes":
username = input("Enter your username: ")
password = input("Enter your password: ")
report_files() # monitor state
if username in userpass:
if password in passfile:
print(
"Login in succesful. ""Logged into the account: " + username)
else:
print("incorrect password - restarting")
login()
else:
print("incorrect username - restarting")
login()
elif haveusername == "no":
wantlogin = input(
"Do you want to create a login? Enter yes for yes, Enter no for no: ")
if wantlogin == "yes":
    report_files()  # monitor state before the change
    createnewusername()
    print(userpass)
if createusername in userpass:
print("This username already exists - restarting")
login()
else:
createuserpassword()
if createuserpass in passfile:
print("This password already exists - restarting")
login()
else:
# Start of part that doesnt work
with open("password.txt", "a") as passcreation:
passcreation.write(createuserpass)
passcreation.write('\n')
with open("username.txt", "a") as namecreation:
namecreation.write(createusername)
namecreation.write('\n')
# End of part that doesnt work
print("Restarting - Please enter your new login")
report_files()  # monitor state after the change
login()
elif wantlogin == "no":
print("Okay - restarting")
login()
else:
print("Login not created - restarting")
login()
else:
print("Invalid input - restarting")
test = 1
if test == 1:
login()
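As a closing note on the pairing bug itself: the question's check tests username and password against two independent lists, so any stored password unlocks any stored username, and the return values of createnewusername()/createuserpassword() are never assigned. A minimal sketch of a fix (assuming line i of username.txt pairs with line i of password.txt):
credentials = dict(zip(userpass, passfile))
if credentials.get(username) == password:
    print("Login successful. Logged into the account: " + username)
else:
    print("incorrect username or password - restarting")
    login()
and in the creation branch, assign the results:
createusername = createnewusername()
createuserpass = createuserpassword()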
| Issues creating basic password system in python | I need to create a basic password system that reads from a text file for a school project. However, I can't get new passwords and usernames to append to the text files, and with my current system any account can be accessed with any preexisting password. I've tried a couple of different ways of writing to the text file, but none have worked so far.
Here is the code I have written so far:
def login():
createusername = ''
createuserpass = ''
with open('password.txt') as f:
passfile = [(passfile.strip()) for passfile in f.readlines()]
with open('username.txt') as g:
userpass = [(userpass.strip()) for userpass in g.readlines()]
def createnewusername():
createusername = input("Enter a new username: ")
return(createusername)
def createuserpassword():
createuserpass = input("Enter a new password: ")
return(createuserpass)
haveusername = input("Do you have a login? Enter yes for yes, Enter no for no: ")
if haveusername == "yes":
username = input("Enter your username: ")
password = input("Enter your password: ")
if username in userpass:
if password in passfile:
print("Login in succesful. ""Logged into the account: " + username)
else:
print("incorrect password - restarting")
login()
else:
print("incorrect username - restarting")
login()
elif haveusername == "no":
wantlogin = input("Do you want to create a login? Enter yes for yes, Enter no for no: ")
if wantlogin == "yes":
createnewusername()
print(userpass)
if createusername in userpass:
print("This username already exists - restarting")
login()
else:
createuserpassword()
if createuserpass in passfile:
print("This password already exists - restarting")
login()
else:
#Start of part that doesnt work
with open("password.txt","a") as passcreation:
passcreation.write(createuserpass)
passcreation.write('\n')
with open("username.txt","a") as namecreation:
namecreation.write(createusername)
namecreation.write('\n')
#End of part that doesnt work
print("Restarting - Please enter your new login")
login()
elif wantlogin == "no":
print("Okay - restarting")
login()
else:
print("Login not created - restarting")
login()
else:
print("Invalid input - restarting")
test = 1
if test == 1:
login()
| [
"This is not an answer to the question yet.\nIt is a suggestion on how to monitor state changes for the files, since that\nseems to be the main issue.\nAdded code to create the initial files.\nAdded a function to be called before and after desired state changes.\n# Initialize the files\nprint('*' * 20, 'passwords.txt', '*' * 20)\nwith open('passwords.txt','w') as f:\n header = '*' * 20 + 'passwords.txt' + '*' * 20\n f.write(header)\nwith open('usernames.txt','w') as f:\n header = '*' * 20 + 'usernames.txt' + '*' * 20\n f.write(header)\n\n\ndef report_files():\n \"\"\"This function is used to look at the files before and after change\"\"\"\n with open('passwords.txt') as f:\n print(f.readlines())\n with open('usernames.txt') as f:\n print(f.readlines())\n\n\ndef login():\n createusername = ''\n createuserpass = ''\n\n with open('password.txt') as f:\n passfile = [(passfile.strip()) for passfile in f.readlines()]\n\n with open('username.txt') as g:\n userpass = [(userpass.strip()) for userpass in g.readlines()]\n\n def createnewusername():\n createusername = input(\"Enter a new username: \")\n return createusername\n\n def createuserpassword():\n createuserpass = input(\"Enter a new password: \")\n return createuserpass\n\n haveusername = input(\n \"Do you have a login? Enter yes for yes, Enter no for no: \")\n if haveusername == \"yes\":\n username = input(\"Enter your username: \")\n password = input(\"Enter your password: \")\n report_files() # monitor state\n if username in userpass:\n if password in passfile:\n print(\n \"Login in succesful. \"\"Logged into the account: \" + username)\n else:\n print(\"incorrect password - restarting\")\n login()\n else:\n print(\"incorrect username - restarting\")\n login()\n elif haveusername == \"no\":\n wantlogin = input(\n \"Do you want to create a login? Enter yes for yes, Enter no for no: \")\n if wantlogin == \"yes\":\n report_files()\n print(userpass)\n if createusername in userpass:\n print(\"This username already exists - restarting\")\n login()\n else:\n createuserpassword()\n if createuserpass in passfile:\n print(\"This password already exists - restarting\")\n login()\n else:\n # Start of part that doesnt work\n with open(\"password.txt\", \"a\") as passcreation:\n passcreation.write(createuserpass)\n passcreation.write('\\n')\n with open(\"username.txt\", \"a\") as namecreation:\n namecreation.write(createusername)\n namecreation.write('\\n')\n # End of part that doesnt work\n print(\"Restarting - Please enter your new login\")\n login()\n report_files()\n createnewusername()\n elif wantlogin == \"no\":\n print(\"Okay - restarting\")\n login()\n else:\n print(\"Login not created - restarting\")\n login()\n else:\n print(\"Invalid input - restarting\")\n\n\ntest = 1\nif test == 1:\n login()\n\n"
] | [
0
] | [] | [] | [
"passwords",
"python",
"text_files"
] | stackoverflow_0074679154_passwords_python_text_files.txt |
Q:
Python Dataframe Split the list into 2 columns on a single space
I'm looking for help splitting this list into 2 columns. The split needs to happen between the two words inside the commas.
[JJL108995.161270128.23630-02.YF.JABI , NORMAL ROTATION MODE , +27.0 DARKNESS , 8 IPS PRINT SPEED
, 8 IPS SLEW SPEED , 2 IPS BACKFEED SPEED , +040 TEAR OFF , APPLICATOR PRINT MODE , MODE 2 APPLICATOR PORT
, PULSE MODE START PRINT SIG , NON-CONTINUOUS MEDIA TYPE , WEB SENSOR TYPE , DIRECT-THERMAL PRINT METHOD
, 1200 PRINT WIDTH , 1877 LABEL LENGTH , 8.0IN 202MM MAXIMUM LENGTH , MEDIA DISABLED EARLY WARNING , MAINT. OFF EARLY WARNING
, NOT CONNECTED USB COMM. , READY EXTERNAL 5V , BIDIRECTIONAL PARALLEL COMM. , RS232 SERIAL COMM. , 9600 BAUD , 8 BITS DATA BITS
, NONE PARITY , XON/XOFF HOST HANDSHAKE , NONE PROTOCOL , 000 NETWORK ID , NORMAL MODE COMMUNICATIONS , <~> 7EH CONTROL PREFIX
, <^> 5EH FORMAT PREFIX , <,> 2CH DELIMITER CHAR , ZPL II ZPL MODE , INACTIVE COMMAND OVERRIDE , HIGH RIBBON TENSION , NO MOTION MEDIA POWER UP
, LENGTH HEAD CLOSE , AFTER BACKFEED , +000 LABEL TOP , +0020 LEFT POSITION , 1486 HEAD RESISTOR , ENABLED ERROR ON PAUSE , DISABLED RIBBON LOW MODE
, ACTIVE HIGH RIB LOW OUTPUT , DISABLED REPRINT MODE , 076 WEB S. , 079 MEDIA S. , 065 RIBBON S. , 025 MARK S. , 025 MARK MED S. , 103 TRANS GAIN
, 025 TRANS BASE , 196 TRANS BRIGHT , 181 RIBBON GAIN , 000 MARK GAIN , DPCSWFX. MODES ENABLED , .......M MODES DISABLED , 1248 12/MM FULL RESOLUTION
, SP53-004402C <- FIRMWARE , 1.3 XML SCHEMA , V52 ---------- 19 HARDWARE ID , CUSTOMIZED CONFIGURATION , 9560k............R: RAM
, 59392k...........E: ONBOARD FLASH , NONE FORMAT CONVERT , 021 PAX170 RTS P32 INTERFACE , 015 POWER SUPPLY P33 INTERFACE
, *** APPLICATOR P34 INTERFACE , FW VERSION IDLE DISPLAY , 10/03/19 RTC DATE , 22:40 RTC TIME , DISABLED ZBI , 2.1 ZBI VERSION
, 110,222,932 IN NONRESET CNTR , 110,222,932 IN RESET CNTR1 , 110,222,932 IN RESET CNTR2 , 279,990,514 CM NONRESET CNTR , 279,990,514 CM RESET CNTR1
, 279,990,514 CM RESET CNTR2 , ALL ITEMS PASSWORD LEVEL]
Here is how I'm looking to split it:
A:
I went about it a different way. It turns out the pre element is predictable in length.
import pandas as pd
import requests
from bs4 import BeautifulSoup as bs
r = requests.get('LINK')
soup = bs(r.content, 'lxml')
pre = soup.select_one('pre').text
results = []
for line in pre.split('\n')[1:-1]:
    if '--' not in line:
        row = [line[i:i+20].strip() for i in range(0, len(line), 22)]
        results.append(row)

df = pd.DataFrame(results)
print(df)
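If you only need the two-column value/label split the question describes, a minimal sketch (assuming each entry's value is everything before the first space, which may not hold for every row; raw_list is a hypothetical name for the list in the question) could be:
s = pd.Series(raw_list)
df = s.str.strip().str.split(' ', n=1, expand=True)
df.columns = ['value', 'setting']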
| Python Dataframe Split the list into 2 columns on a single space | I'm looking for help splitting this list into 2 columns. The split needs to happen between the two words inside the commas.
[JJL108995.161270128.23630-02.YF.JABI , NORMAL ROTATION MODE , +27.0 DARKNESS , 8 IPS PRINT SPEED
, 8 IPS SLEW SPEED , 2 IPS BACKFEED SPEED , +040 TEAR OFF , APPLICATOR PRINT MODE , MODE 2 APPLICATOR PORT
, PULSE MODE START PRINT SIG , NON-CONTINUOUS MEDIA TYPE , WEB SENSOR TYPE , DIRECT-THERMAL PRINT METHOD
, 1200 PRINT WIDTH , 1877 LABEL LENGTH , 8.0IN 202MM MAXIMUM LENGTH , MEDIA DISABLED EARLY WARNING , MAINT. OFF EARLY WARNING
, NOT CONNECTED USB COMM. , READY EXTERNAL 5V , BIDIRECTIONAL PARALLEL COMM. , RS232 SERIAL COMM. , 9600 BAUD , 8 BITS DATA BITS
, NONE PARITY , XON/XOFF HOST HANDSHAKE , NONE PROTOCOL , 000 NETWORK ID , NORMAL MODE COMMUNICATIONS , <~> 7EH CONTROL PREFIX
, <^> 5EH FORMAT PREFIX , <,> 2CH DELIMITER CHAR , ZPL II ZPL MODE , INACTIVE COMMAND OVERRIDE , HIGH RIBBON TENSION , NO MOTION MEDIA POWER UP
, LENGTH HEAD CLOSE , AFTER BACKFEED , +000 LABEL TOP , +0020 LEFT POSITION , 1486 HEAD RESISTOR , ENABLED ERROR ON PAUSE , DISABLED RIBBON LOW MODE
, ACTIVE HIGH RIB LOW OUTPUT , DISABLED REPRINT MODE , 076 WEB S. , 079 MEDIA S. , 065 RIBBON S. , 025 MARK S. , 025 MARK MED S. , 103 TRANS GAIN
, 025 TRANS BASE , 196 TRANS BRIGHT , 181 RIBBON GAIN , 000 MARK GAIN , DPCSWFX. MODES ENABLED , .......M MODES DISABLED , 1248 12/MM FULL RESOLUTION
, SP53-004402C <- FIRMWARE , 1.3 XML SCHEMA , V52 ---------- 19 HARDWARE ID , CUSTOMIZED CONFIGURATION , 9560k............R: RAM
, 59392k...........E: ONBOARD FLASH , NONE FORMAT CONVERT , 021 PAX170 RTS P32 INTERFACE , 015 POWER SUPPLY P33 INTERFACE
, *** APPLICATOR P34 INTERFACE , FW VERSION IDLE DISPLAY , 10/03/19 RTC DATE , 22:40 RTC TIME , DISABLED ZBI , 2.1 ZBI VERSION
, 110,222,932 IN NONRESET CNTR , 110,222,932 IN RESET CNTR1 , 110,222,932 IN RESET CNTR2 , 279,990,514 CM NONRESET CNTR , 279,990,514 CM RESET CNTR1
, 279,990,514 CM RESET CNTR2 , ALL ITEMS PASSWORD LEVEL]
Here is how I'm looking to split it:
| [
"I went about it a different way . turns out the pre element can be predictable in length.\nimport pandas as pd\nimport requests\nfrom bs4 import BeautifulSoup as bs\n \nr = requests.get('LINK')\nsoup = bs(r.content, 'lxml')\npre = soup.select_one('pre').text\nresults = []\n \nfor line in pre.split('\\n')[1:-1]:\n if '--' not in line:\n row = [line[i:i+20].strip() for i in range(0, len(line), 22)]\n results.append(row)\n \n df = pd.DataFrame(results)\n print(df)\n\n"
] | [
0
] | [] | [] | [
"mysql",
"pandas",
"parsing",
"python",
"selenium"
] | stackoverflow_0074679276_mysql_pandas_parsing_python_selenium.txt |
Q:
Invalid Python SDK in PyCharm
Since this morning, I'm no longer able to run projects in PyCharm.
When generating a new virtual environment, I get an "Invalid Python SDK" error.
Cannot set up a python SDK at Python 3.11... The SDK seems invalid.
What I noticed:
No matter what base interpreter I select (3.8, 3.9, 3.10) Pycharm always generates a Python 3.11 interpreter.
I did completely uninstall PyCharm, as well as all my python installations and reinstalled everything.
I also went through the "Repair IDE" option in PyCharm.
I also removed and recreated all virtual environments.
When I run "cmd" and type 'python' then python 3.10.1 opens without a problem.
This morning, I installed a new antivirus software that did some checks and deleted some "unnecessary files" - maybe it is related (antivirus software is uninstalled again).
A:
I had the same problem on Linux. Solved it by invalidating caches as suggested here:
https://stackoverflow.com/a/45099651/3990607
In pycharm click on File menu, then choose Invalidate caches..., tick all 4 boxes and then restart PyCharm. Solved the problem for me.
A:
I dealt with the same issue despite using Python and PyCharm without problems for months. It recently kept giving me the error despite changing my system's PATH variable and even setting the path manually within PyCharm. After hours of reinstalling PyCharm and Python, and even jumping around versions with no success, it turned out to be because my Python directory had a space in it that it just randomly decided to break on.
For anyone who has tried what seems like everything to no avail ensure that NO part of the path to your python directory contains spaces
A:
Had the same issue just today. I was able to resolve it by uninstalling python 3.10.1 and then reinstalling it under directory "C:/Program Files" instead of the default directory where it goes.
There are many other fixes also suggested by people all over the internet such as:
Installing an older version of Pycharm i.e. 2021.2
Allowing the pycharmProjects folder in windows defender
But the change of installation directory is what worked for me.
A:
The Python 3.10 version installed through the Windows Store didn't have any spaces in its default directory names (as my username doesn't contain a space).
I did invalidate caches through the File menu; however, patapouf_ai had suggested doing that for Linux.
The problem was resolved for me after uninstalling and then reinstalling Python through Windows' Remove programs and the Store, and it seems it was caused by changing Windows' User Account Control level to "never notify." The other possibility is that Python 3.10 somehow stopped functioning for no good reason and lost recognition by Windows (it was not updated or modified by any means).
A:
Using IDLE:
import sys
print(sys.executable)
# the output is the path to the interpreter; copy it
Then in PyCharm: File / Settings / Project / Python Interpreter / Show All / paste the path
A:
I had the same issue. I got around it by installing an older version of PyCharm.
A:
Check if your python executable is named python.exe!
I had the same problem and I solved it by going into C:\Users\<user>\AppData\Local\Programs\Python\Python310. My python executable was named python3.exe, but Pycharm needs python.exe for some unknown reason. So I
copied python3.exe,
pasted it in the same directory and
renamed it to python.exe, and all started working magically. Maybe just renaming will also work.
A:
I had this same error on Windows; none of the answers in this thread worked.
But I found a solution, so I will share it.
Go to %appdata%/Jetbrains/ and search for jdk.table; back up the file and delete it (this will delete all your interpreter configs), close all your PyCharm instances, and start them again. After that just add your interpreter like you normally would.
This is what worked for me.
| Invalid Python SDK in PyCharm | Since this morning, I'm no longer able to run projects in PyCharm.
When generating a new virtual environment, I get an "Invalid Python SDK" error.
Cannot set up a python SDK at Python 3.11... The SDK seems invalid.
What I noticed:
No matter what base interpreter I select (3.8, 3.9, 3.10) Pycharm always generates a Python 3.11 interpreter.
I did completely uninstall PyCharm, as well as all my python installations and reinstalled everything.
I also went through the "Repair IDE" option in PyCharm.
I also removed and recreated all virtual environments.
When I run "cmd" and type 'python' then python 3.10.1 opens without a problem.
This morning, I installed a new antivirus software that did some checks and deleted some "unnecessary files" - maybe it is related (antivirus software is uninstalled again).
| [
"I had the same problem on Linux. Solved it by invalidating caches as suggested here:\nhttps://stackoverflow.com/a/45099651/3990607\nIn pycharm click on File menu, then choose Invalidate caches..., tick all 4 boxes and then restart PyCharm. Solved the problem for me.\n",
"Dealt with the same issue despite using python and pycharm without issue for months. Recently kept giving me the error despite changing the PATH variable of my system and even manually pathing within pycharm. After hours of reinstalling pycharm, python and even jumping around versions with no success it turned out it was because my python directory had a space in it that it just randomly decided to break.\nFor anyone who has tried what seems like everything to no avail ensure that NO part of the path to your python directory contains spaces\n",
"Had the same issue just today. I was able to resolve it by uninstalling python 3.10.1 and then reinstalling it under directory \"C:/Program Files\" instead of the default directory where it goes.\nThere are many other fixes also suggested by people all over the internet such as:\n\nInstalling an older version of Pycharm i.e. 2021.2\nAllowing the pycharmProjects folder in windows defender\n\nBut the change of installation directory is what worked for me.\n",
"Python 3.10 version installed through windows store didn't have any spaces in default directory names (as my username doesn't have a space within itself).\nI id invalidated caches through the file menu. However, patapouf_ai had suggested doing it for Linux.\nThe problem was resolved for me after installing and then reinstalling it through windows' remove and store, and it seems it's been caused by changing windows' user account control level to \"never notify.\" The other possibility is that somehow python 3.10 has stopped functioning without a good reason and lost recognition by windows (not updated or modified by any means).\n",
"using IDLE\nimport sys\nprint(sys.executable)\n#output is a path to where the interpreter is, copy path\nPyCharm / file / settings / Project / Python Interpreter / show all / paste path\n",
"I had the same issue. I got around it by installing an older version of PyCharm.\n",
"Check if your python executable is named python.exe!\nI had the same problem and I solved it by going into C:\\Users<user>\\AppData\\Local\\Programs\\Python\\Python310. My python executable was named python3.exe, but Pycharm needs python.exe for some unknown reason. So I\n\ncopied python3.exe,\npasted it in the same directory and\nrenamed it to python.exe, and all started working magically. Maybe just renaming will also work.\n\n",
"I had this same error on windows, none of the answers in this thread worked.\nBut i found a solution so i will share it.\nGo to %appdata%/Jetbrains/ and search for jdk.table, backup the file and delete it (this will delete all your interpreters configs) close all your pycharm instances, and start them again. After that just add your interpreter like you normally would.\nThis is what worked for me.\n"
] | [
7,
4,
0,
0,
0,
0,
0,
0
] | [
"on Pycharm, click on up right corner search button --> wrire swştch python interpreter --> add new interpreter --> add local interpreter --> hit OK\n"
] | [
-1
] | [
"pycharm",
"python"
] | stackoverflow_0070664467_pycharm_python.txt |
Q:
Python-CSV write column(s) at given indices to new file
I am trying to write the column(s) of a csv file where a word is present. The example shown here is for "car".
NOTE: CANNOT USE PANDAS
here is the sample in_file:
12,life,car,good,exellent
10,gift,truck,great,great
11,time,car,great,perfect
the desired output for out_file is:
car
truck
car
This is the current code:
def project_columns(in_file, out_file, indices):
with open(in_file) as f:
reader = csv.reader(f)
data = list(reader)
with open(out_file, 'w') as o:
writer = csv.writer(o)
for i in indices:
writer.writerow(data[i])
The variable indices contains the column indices; for "car", indices = [2, 2].
out_file currently contains:
11,time,car,great,perfect
11,time,car,great,perfect
How should I fix this to get the desired output?
A:
cat car.csv
12,life,car,good,exellent
10,gift,truck,great,great
11,time,car,great,perfect
import csv

with open('car.csv') as car_file:
    r = csv.reader(car_file)
    with open('out.csv', 'w') as out_file:
        w = csv.writer(out_file)
        for row in r:
            w.writerow([row[2]])
cat out.csv
car
truck
car
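To keep the indices parameter from the original function, a minimal sketch (deduplicating the indices, since [2, 2] would otherwise write the same column twice) might be:
def project_columns(in_file, out_file, indices):
    cols = sorted(set(indices))  # drop duplicate column positions
    with open(in_file) as f, open(out_file, 'w', newline='') as o:
        writer = csv.writer(o)
        for row in csv.reader(f):
            writer.writerow([row[i] for i in cols])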
| Python-CSV write column(s) at given indices to new file | I am trying to write the column(s) of a csv file where a word is present. The example shown here is for "car".
NOTE: CANNOT USE PANDAS
here is the sample in_file:
12,life,car,good,exellent
10,gift,truck,great,great
11,time,car,great,perfect
the desired output for out_file is:
car
truck
car
This is the current code:
def project_columns(in_file, out_file, indices):
with open(in_file) as f:
reader = csv.reader(f)
data = list(reader)
with open(out_file, 'w') as o:
writer = csv.writer(o)
for i in indices:
writer.writerow(data[i])
The variable indices contains the column indices; for "car", indices = [2, 2].
out_file currently contains:
11,time,car,great,perfect
11,time,car,great,perfect
How should I fix this to get the desired output?
| [
"cat car.csv \n12,life,car,good,exellent\n10,gift,truck,great,great\n11,time,car,great,perfect\n\nimport csv\n\nwith open('car.csv') as car_file:\n r = csv.reader(car_file)\n with open('out.csv', 'w') as out_file:\n w = csv.writer(out_file)\n for row in r:\n w.writerow([row[2]])\n\ncat out.csv \ncar\ntruck\ncar\n\n\n\n\n"
] | [
0
] | [] | [] | [
"csv",
"python"
] | stackoverflow_0074671720_csv_python.txt |
Q:
python: How can I check if user input is in dataframe
I am building a S&P500 app using streamlit, the functionality of the app prompts user to either choose number of plots to be shown from the slider or to type a specific symbol to show its plot, however, I am facing problems in trying to check if the symbol exists in the pandas series which contains all the symbols,(note both the user input and the symbol in the series is of type string) can anybody please tell me how can I fix it
import streamlit as st
import pandas as pd
import base64
import matplotlib.pyplot as plt
import yfinance as yf
@st.cache
def load_data():
url = "https://en.wikipedia.org/wiki/List_of_S%26P_500_companies"
html = pd.read_html(url, header=0)
df = html[0]
return df
df = load_data()
df = df[0]
sector_unique = sorted( df['GICS Sector'].unique() )
selected_sector = st.sidebar.multiselect('Sector', sector_unique, sector_unique)
df_selected_sector = df[(df['GICS Sector'].isin(selected_sector))]
data = yf.download(
tickers = list(df_selected_sector[:10].Symbol),
period = "ytd",
interval = "1d",
group_by = 'ticker',
auto_adjust = True,
prepost = True,
threads = True,
proxy = None
)
num_company = st.sidebar.slider('Number of companies for plots', 1, 10)
spec_symbol = st.sidebar.text_input("OR choose a specific symbol ")
if spec_symbol:
if(spec_symbol == (a for a in df['Symbol'].items())):
spec_data = yf.download(
tickers = spec_symbol,
period = "ytd",
interval = "1d",
group_by = 'ticker',
auto_adjust = True,
prepost = True,
threads = True,
proxy = None
)
else:
st.sidebar.write("Symbol not found")
Whether the user input symbol is valid or not, it always hits the else branch ("Symbol not found").
here is a sample of the output of df["Symbol"] to make it clearer:
Symbol
0 MMM
1 AOS
2 ABT
3 ABBV
4 ABMD
5 ACN
6 ATVI
7 ADM
8 ADBE
A:
You should use pandas.Series.tolist (which returns a list) instead of pandas.Series.items (which yields (index, value) pairs). Note also that spec_symbol == (a for a in ...) compares a string with a generator object, which is never equal, so the condition is always False.
Replace this:
if(spec_symbol == (a for a in df['Symbol'].items())):
With a membership test:
if spec_symbol in df['Symbol'].tolist():
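As a side note, pandas also supports membership tests against the underlying array:
if spec_symbol in df['Symbol'].values:
which avoids building an intermediate list.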
| python: How can I check if user input is in dataframe | I am building an S&P 500 app using streamlit. The app prompts the user either to choose the number of plots to show from a slider or to type a specific symbol to show its plot. However, I am facing problems trying to check whether the symbol exists in the pandas Series which contains all the symbols (note: both the user input and the symbols in the Series are of type string). Can anybody please tell me how I can fix it?
import streamlit as st
import pandas as pd
import base64
import matplotlib.pyplot as plt
import yfinance as yf
@st.cache
def load_data():
url = "https://en.wikipedia.org/wiki/List_of_S%26P_500_companies"
html = pd.read_html(url, header=0)
df = html[0]
return df
df = load_data()
df = df[0]
sector_unique = sorted( df['GICS Sector'].unique() )
selected_sector = st.sidebar.multiselect('Sector', sector_unique, sector_unique)
df_selected_sector = df[(df['GICS Sector'].isin(selected_sector))]
data = yf.download(
tickers = list(df_selected_sector[:10].Symbol),
period = "ytd",
interval = "1d",
group_by = 'ticker',
auto_adjust = True,
prepost = True,
threads = True,
proxy = None
)
num_company = st.sidebar.slider('Number of companies for plots', 1, 10)
spec_symbol = st.sidebar.text_input("OR choose a specific symbol ")
if spec_symbol:
if(spec_symbol == (a for a in df['Symbol'].items())):
spec_data = yf.download(
tickers = spec_symbol,
period = "ytd",
interval = "1d",
group_by = 'ticker',
auto_adjust = True,
prepost = True,
threads = True,
proxy = None
)
else:
st.sidebar.write("Symbol not found")
Whether the user input symbol is valid or not, it always hits the else branch ("Symbol not found").
here is a sample of the output of df["Symbol"] to make it clearer:
Symbol
0 MMM
1 AOS
2 ABT
3 ABBV
4 ABMD
5 ACN
6 ATVI
7 ADM
8 ADBE
| [
"You should use pandas.Series.tolist (that returns a list) instead of pandas.Series.items (that returns an iterable).\nReplace this :\nif(spec_symbol == (a for a in df['Symbol'].items())):\n\nBy this :\nif(spec_symbol == (a for a in df['Symbol'].tolist())):\n\nOr simply :\nif spec_symbol in df['Symbol'].tolist():\n\n"
] | [
0
] | [] | [] | [
"pandas",
"python",
"streamlit"
] | stackoverflow_0074679765_pandas_python_streamlit.txt |
Q:
How to read excel line by line in pandas
I want to ask how to read an Excel file line by line in pandas. I want a loop that gets line-by-line information for a Facebook login with selenium. I hope everyone is easygoing because I'm a newbie.
import pandas as pd
pd.options.display.max_rows = 28
data = pd.read_excel(r'file.xlsx')
#load data into a DataFrame object:
df = pd.DataFrame(data)
username = pd.DataFrame(f1,columns=['Name'])
password = pd.DataFrame(f1,columns=['Pass'])
for i in df:
print('Current row:', i)
A:
Is it important that you read your Excel file line-by-line? Or is it also okay for you to read the entirety of your Excel file into a Dataframe and just iterate through that?
A:
Since you're new to programming, good advice I can give is to read and search the documentation when doubts first appear.
Many tools even have tutorials to help you start coding, and search features to help you find basic code.
Check this link, I think it will help a lot!
https://pandas.pydata.org/docs/getting_started/intro_tutorials/02_read_write.html?highlight=read%20excel
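For the loop itself, a minimal sketch (assuming the sheet has 'Name' and 'Pass' columns, as in the question) could be:
import pandas as pd

df = pd.read_excel('file.xlsx')
for _, row in df.iterrows():
    username = row['Name']
    password = row['Pass']
    # hand username/password to the selenium login step here
    print(username, password)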
| How to read excel line by line in pandas | I want to ask how to read an Excel file line by line in pandas. I want a loop that gets line-by-line information for a Facebook login with selenium. I hope everyone is easygoing because I'm a newbie.
import pandas as pd
pd.options.display.max_rows = 28
data = pd.read_excel(r'file.xlsx')
#load data into a DataFrame object:
df = pd.DataFrame(data)
username = pd.DataFrame(f1,columns=['Name'])
password = pd.DataFrame(f1,columns=['Pass'])
for i in df:
print('Current row:', i)
| [
"Is it important that you read your Excel file line-by-line? Or is it also okay for you to read the entirety of your Excel file into a Dataframe and just iterate through that?\n",
"since you're new to programming, a good counsel I can give is to read and search the documentation when doubts first appear.\nMany tools even have tutorials to help you start coding and search to help you find basic code.\nCheck this link, I think it will help a lot!\nhttps://pandas.pydata.org/docs/getting_started/intro_tutorials/02_read_write.html?highlight=read%20excel\n"
] | [
0,
0
] | [] | [] | [
"excel",
"pandas",
"python",
"python_3.x",
"selenium"
] | stackoverflow_0074679803_excel_pandas_python_python_3.x_selenium.txt |
Q:
Is there a way to simplify the last three lines of code?
race = "The rabbit will run with the turtle in the race."
first_r = race.find("r")
last_r = race.rfind("r")
result1 = race[:first_r + 1] + race[first_r + 1:last_r].replace("r", "R") + race[last_r:]
print(result1)
A:
One way is regular expressions:
import re
race = "The rabbit will run with the turtle in the race."
m = re.match(r'([^r]*r)(.*)(r[^r]*)$', race)
result1 = m.group(1) + m.group(2).replace('r', 'R') + m.group(3)
print(result1)
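Another option without regular expressions is to split on 'r' and rejoin; this keeps the first and last 'r' lowercase and uppercases the rest (a sketch that assumes the string contains at least two 'r' characters):
parts = race.split('r')
result1 = parts[0] + 'r' + 'R'.join(parts[1:-1]) + 'r' + parts[-1]
print(result1)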
| Is there a way to simplify the last three lines of code? | race = "The rabbit will run with the turtle in the race."
first_r = race.find("r")
last_r = race.rfind("r")
result1 = race[:first_r + 1] + race[first_r + 1:last_r].replace("r", "R") + race[last_r:]
print(result1)
| [
"One way are regular expressions:\nimport re\n\nrace = \"The rabbit will run with the turtle in the race.\"\n\nm = re.match(r'([^r]*r)(.*)(r[^r]*)$', race)\n\nresult1 = m.group(1) + m.group(2).replace('r', 'R') + m.group(3)\n\nprint(result1)\n\n"
] | [
0
] | [] | [] | [
"python"
] | stackoverflow_0074679782_python.txt |
Q:
Python Panda problems with group by and regularexpression
A Table Sample like bellows-
Product Price
P1,Luxary product 2000
P2: Cosmetics product 1700
P1::Plastic product 600
P3/P1,Mobile phone 3300
P2:headphones 200
P3,Trimmer 150
P2,Camera 2200
P2/Airpods 250
P3;;phone case 100
P2/P1:Mirrors 800
Water Bottel P2 2011 60
From the Product column, how can I extract the hidden signs (P1, P2 and P3)? Sometimes there is more than one sign; it is okay to just extract the first one.
Then group by them (signs) with the Price column and print from high price to low price?
Output:
P2 - 5210
P3 - 3550
P1 - 2600
A:
Assuming that your "hidden signs" always contain two characters, simply create a new column containing these prefixes:
df['Prefix'] = df['Product'].str[:2]
You may then group by prefix:
df.groupby('Prefix').sum()
A:
Here is a proposition using pandas.Series.str.extract :
out = (
df
.assign(Signs= df["Product"].str.extract("(P\d)", expand=False))
.groupby("Signs", as_index=False)["Price"].sum()
.sort_values(by="Price", ascending=False, ignore_index=True)
)
# Output :
print(out)
Signs Price
0 P2 5210
1 P3 3550
2 P1 2600
| Python Pandas problems with group by and regular expression | A table sample like the one below:
Product Price
P1,Luxary product 2000
P2: Cosmetics product 1700
P1::Plastic product 600
P3/P1,Mobile phone 3300
P2:headphones 200
P3,Trimmer 150
P2,Camera 2200
P2/Airpods 250
P3;;phone case 100
P2/P1:Mirrors 800
Water Bottel P2 2011 60
From the Product column, how can I extract the hidden signs (P1, P2 and P3)? Sometimes there is more than one sign; it is okay to just extract the first one.
Then group by them (signs) with the Price column and print from high price to low price?
Output:
P2 - 5210
P3 - 3550
P1 - 2600
| [
"Assuming that your \"hidden signs\" always contain two characters, simply create new columns containing these prefixes:\ndf['Prefix'] = df['Product'].str[:2]\n\nYou may then group by prefix:\ndf.groupby('Prefix').sum()\n\n",
"Here is a proposition using pandas.Series.str.extract :\nout = (\n df\n .assign(Signs= df[\"Product\"].str.extract(\"(P\\d)\", expand=False))\n .groupby(\"Signs\", as_index=False)[\"Price\"].sum()\n .sort_values(by=\"Price\", ascending=False, ignore_index=True)\n )\n\n# Output :\nprint(out)\n\n Signs Price\n0 P2 5210\n1 P3 3550\n2 P1 2600\n\n"
] | [
0,
0
] | [] | [] | [
"group_by",
"pandas",
"python"
] | stackoverflow_0074679857_group_by_pandas_python.txt |
Q:
Delete all vowels from a string using a list-comprehension
I'm trying to solve a challenge on Codewars.
Trolls are attacking your comment section!
A common way to deal with this situation is to remove all of the
vowels from the trolls' comments, neutralizing the threat.
Your task is to write a function that takes a string and return a new
string with all vowels removed.
For example, the string "This website is for losers LOL!" would become
"Ths wbst s fr lsrs LL!".
Note: for this kata y isn't considered a vowel.
This is working:
def disemvowel(string):
output = []
for char in string:
if char not in "aeiouAEIOU":
output.extend(char)
return "".join(output)
But I want to do it in one line with a list comprehension. I tried this:
return "".join([[].extend(char) for char in string if char not in "aeiouAEIOU"])
... but I'm getting
TypeError: sequence item 0: expected str instance, NoneType found
A:
You're trying to make a list within your list-comprehension; you can just use the existing list:
return "".join([char for char in x if char not in "aeiouAEIOU"])
Note that we could even omit the list comprehension and just use a generator expression (by omitting the square brackets), but join() works internally by converting the sequence to a list anyway, so in this case using a list-comprehension is actually quicker.
A:
def disemvowel(string_):
    vowels = 'a', 'e', 'i', 'o', 'u', 'A', 'E', 'I', 'O', 'U'
    for a in vowels:
        string_ = string_.replace(a, '')
    return string_
A:
I did it in a one-liner, but I don't know if it's the correct way, although it works and has passed all the tests. Any guidance is appreciated. Thanks.
def disemvowel(string_):
    return string_.replace('a','').replace('e','').replace('i','').replace('o','').replace('u','').replace('A','').replace('E','').replace('I','').replace('O','').replace('U','')
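Another common approach is str.translate, which deletes all the mapped characters in a single pass:
def disemvowel(string_):
    return string_.translate(str.maketrans('', '', 'aeiouAEIOU'))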
| Delete all vowels from a string using a list-comprehension | I'm trying to solve a challenge on Codewars.
Trolls are attacking your comment section!
A common way to deal with this situation is to remove all of the
vowels from the trolls' comments, neutralizing the threat.
Your task is to write a function that takes a string and return a new
string with all vowels removed.
For example, the string "This website is for losers LOL!" would become
"Ths wbst s fr lsrs LL!".
Note: for this kata y isn't considered a vowel.
This is working:
def disemvowel(string):
output = []
for char in string:
if char not in "aeiouAEIOU":
output.extend(char)
return "".join(output)
But I want to do it in one line with a list comprehension. I tried this:
return "".join([[].extend(char) for char in string if char not in "aeiouAEIOU"])
... but I'm getting
TypeError: sequence item 0: expected str instance, NoneType found
| [
"You're trying to make a list within your list-comprehension; you can just use the existing list:\nreturn \"\".join([char for char in x if char not in \"aeiouAEIOU\"])\n\nNote that we could even omit the list comprehension and just use a generator expression (by omitting the square brackets), but join() works internally by converting the sequence to a list anyway, so in this case using a list-comprehension is actually quicker.\n",
"def disemvowel(string_):\n vowels = 'a', 'e', 'i', 'o', 'u', 'A', 'E', 'I', 'O', 'U'\n for a in vowels:\n string_ = string_.replace(a, '')\n return string_\n\n",
"I did in one liner but I don't know it's the correct way, although it is working and have passed all the tests. Any guidance is appreciated. Thanks.\ndef disemvowel(string_):\n return string_.replace('a','').replace('e','').replace('i','').replace('o','').replace('u','').replace('A','').replace('E','').replace('I','').replace('O','').replace('U','')\n\n"
] | [
4,
0,
0
] | [] | [] | [
"list",
"list_comprehension",
"python",
"python_3.x",
"string"
] | stackoverflow_0060169364_list_list_comprehension_python_python_3.x_string.txt |
Q:
Can we reduce the time complexity here?
I have an AoC problem where I have been given the data below:
data = """2-4,6-8
2-3,4-5
5-7,7-9
2-8,3-7
6-6,4-6
2-6,4-8"""
I need to find the number of pairs which fully contain another pair. For example, 2-8 fully contains 3-7, and 6-6 is fully contained by 4-6.
I have solved it using the below code:
def aoc_part1(self, data):
counter = 0
for lines_data in data.splitlines():
lines_data = lines_data.strip()
first_range, second_range = self.__get_first_second_list_of_elements(lines_data)
check_first_side_if_returns_true = all(item in first_range for item in second_range)
check_second_side_if_returns_true = all(item in second_range for item in first_range)
if check_first_side_if_returns_true or check_second_side_if_returns_true:
counter += 1
return counter
def __get_first_second_list_of_elements(self, data):
first_elf, second_elf = data.split(",")[0], data.split(",")[1]
first_range_start, first_range_end = map(int, first_elf.split("-"))
second_range_start, second_range_end = map(int, second_elf.split("-"))
first_range = list(range(first_range_start, first_range_end + 1))
second_range = list(range(second_range_start, second_range_end + 1))
return first_range, second_range
I was just wondering about the time complexity here. I think it is brute force, because for every line the all() checks run another loop over the ranges. How can I optimize this solution in order to get linear time complexity?
first_range and second_range are lists of ints. check_first_side_if_returns_true and check_second_side_if_returns_true are boolean variables that record whether one list is entirely contained in the other; based on that, they are True or False.
A:
Your solution looks pretty complicated. Why not do something like:
data = """2-4,6-8
2-3,4-5
5-7,7-9
2-8,3-7
6-6,4-6
2-6,4-8
"""
def included(line):
(a1, b1), (a2, b2) = (map(int, pair.split("-")) for pair in line.strip().split(","))
return (a1 <= a2 and b2 <= b1) or (a2 <= a1 and b1 <= b2)
print(sum(included(line) for line in data.splitlines()))
I did some timing with my AoC-input for day 4 (1,000 lines):
from timeit import timeit
# Extract the interval boundaries for the pairs
boundaries = [
[tuple(map(int, pair.split("-"))) for pair in line.strip().split(",")]
for line in data.splitlines()
]
# Version 1 with simple comparison of boundaries
def test1(boundaries):
def included(pairs):
(a1, b1), (a2, b2) = pairs
return (a1 <= a2 and b2 <= b1) or (a2 <= a1 and b1 <= b2)
return sum(included(pairs) for pairs in boundaries)
# Version 2 with range-subset test
def test2(boundaries):
def included(pairs):
(a1, b1), (a2, b2) = pairs
numbers1, numbers2 = set(range(a1, b1 + 1)), set(range(a2, b2 + 1))
return numbers1 <= numbers2 or numbers2 <= numbers1
return sum(included(pairs) for pairs in boundaries)
# Test for identical result
print(test1(boundaries) == test2(boundaries))
# Timing
for i in 1, 2:
t = timeit(f"test{i}(boundaries)", globals=globals(), number=1_000)
print(f"Duration version {i}: {t:.1f} seconds")
Result here, on a mediocre machine (repl.it):
Duration version 1: 0.4 seconds
Duration version 2: 5.4 seconds
A:
You're probably overcomplicating this in your current approach. If you split each pair into two sets, e.g. a and b, you can use set operations to check the relationship between them. That should be faster than yours.
Something like this:
# some input reading, and split to a, b sets.
# count = 0
s1, s2 = set(range(a, b + 1)), set(range(x, y + 1))
if s1 <= s2 or s2 <= s1:
    count += 1  # that's the part-1 (full containment) answer

# part 2 asks for any overlap, so an intersection test is enough
for line in open('04.in'):
    a, b, x, y = map(int, line.replace(",", "-").split("-"))
    if set(range(a, b + 1)) & set(range(x, y + 1)):
        ans += 1
There were questions earlier about the memory efficiency of this approach, so I've run some profiling; the result below confirms there should be NO problem given this puzzle's input size.
Filename: day04.py
Line # Mem usage Increment Occurrences Line Contents
=============================================================
27 43.758 MiB 43.758 MiB 1 @profile
28 def part2(file):
29 43.762 MiB 0.004 MiB 1 ans = 0
30
31 43.770 MiB 0.000 MiB 1001 for line in open(file):
32 43.770 MiB 0.004 MiB 1000 a, b, x, y = map(int, line.replace(",", "-").split("-"))
33 43.770 MiB 0.000 MiB 1000 if set(range(a, b + 1)) & set(range(x, y + 1)):
34 43.770 MiB 0.004 MiB 847 ans += 1
35
36 43.770 MiB 0.000 MiB 1 return ans
| Can we reduce the time complexity here? | I have an AoC problem where I have been given the data below:
data = """2-4,6-8
2-3,4-5
5-7,7-9
2-8,3-7
6-6,4-6
2-6,4-8"""
I need to find the number of pairs which fully contain another pair. For example, 2-8 fully contains 3-7, and 6-6 is fully contained by 4-6.
I have solved it using the below code:
def aoc_part1(self, data):
counter = 0
for lines_data in data.splitlines():
lines_data = lines_data.strip()
first_range, second_range = self.__get_first_second_list_of_elements(lines_data)
check_first_side_if_returns_true = all(item in first_range for item in second_range)
check_second_side_if_returns_true = all(item in second_range for item in first_range)
if check_first_side_if_returns_true or check_second_side_if_returns_true:
counter += 1
return counter
def __get_first_second_list_of_elements(self, data):
first_elf, second_elf = data.split(",")[0], data.split(",")[1]
first_range_start, first_range_end = map(int, first_elf.split("-"))
second_range_start, second_range_end = map(int, second_elf.split("-"))
first_range = list(range(first_range_start, first_range_end + 1))
second_range = list(range(second_range_start, second_range_end + 1))
return first_range, second_range
I was just wondering about the time complexity here. I think it is brute force, because for every line the all() checks run another loop over the ranges. How can I optimize this solution in order to get linear time complexity?
first_range and second_range are lists of ints. check_first_side_if_returns_true and check_second_side_if_returns_true are boolean variables that record whether one list is entirely contained in the other; based on that, they are True or False.
| [
"Your solution looks pretty complicated. Why not do something like:\ndata = \"\"\"2-4,6-8\n2-3,4-5\n5-7,7-9\n2-8,3-7\n6-6,4-6\n2-6,4-8\n\"\"\"\n\ndef included(line):\n (a1, b1), (a2, b2) = (map(int, pair.split(\"-\")) for pair in line.strip().split(\",\"))\n return (a1 <= a2 and b2 <= b1) or (a2 <= a1 and b1 <= b2)\n\nprint(sum(included(line) for line in data.splitlines()))\n\nI did some timing with my AoC-input for day 4 (1,000 lines):\nfrom timeit import timeit\n\n# Extract the interval boundaries for the pairs\nboundaries = [\n [tuple(map(int, pair.split(\"-\"))) for pair in line.strip().split(\",\")]\n for line in data.splitlines()\n]\n\n# Version 1 with simple comparison of boundaries\ndef test1(boundaries):\n def included(pairs):\n (a1, b1), (a2, b2) = pairs\n return (a1 <= a2 and b2 <= b1) or (a2 <= a1 and b1 <= b2)\n \n return sum(included(pairs) for pairs in boundaries)\n\n# Version 2 with range-subset test\ndef test2(boundaries):\n def included(pairs):\n (a1, b1), (a2, b2) = pairs\n numbers1, numbers2 = set(range(a1, b1 + 1)), set(range(a2, b2 + 1))\n return numbers1 <= numbers2 or numbers2 <= numbers1\n \n return sum(included(pairs) for pairs in boundaries)\n\n# Test for identical result\nprint(test1(boundaries) == test2(boundaries))\n\n# Timing\nfor i in 1, 2:\n t = timeit(f\"test{i}(boundaries)\", globals=globals(), number=1_000)\n print(f\"Duration version {i}: {t:.1f} seconds\")\n\nResult here, on a mediocre machine (repl.it):\nDuration version 1: 0.4 seconds\nDuration version 2: 5.4 seconds\n\n",
"You're prob. making it overcomplicated in the current approach. If you split the pairs to two sets - eg. a, and b then you could easily do a set ops. to check if there is overlapping. That should be faster than yours.\nSomething like this one-line:\n # some input reading, and split to a, b sets.\n # count = 0\n\n if set(range(a, b + 1)) & set(range(x, y + 1)):\n count += 1 # that's part1 answer.\n\n\n# part 2\nfor line in open('04.in'):\n a, b, x, y = map(int, line.replace(\",\", \"-\").split(\"-\"))\n if set(range(a, b + 1)) & set(range(x, y + 1)):\n ans += 1\n\nThere are questions about the memory efficiency about this approach earlier, I've run some profiling and this is the result to share - it's confirmed there should be NO problem given this puzzle's input size.\nFilename: day04.py\n\nLine # Mem usage Increment Occurrences Line Contents\n=============================================================\n 27 43.758 MiB 43.758 MiB 1 @profile\n 28 def part2(file):\n 29 43.762 MiB 0.004 MiB 1 ans = 0\n 30\n 31 43.770 MiB 0.000 MiB 1001 for line in open(file):\n 32 43.770 MiB 0.004 MiB 1000 a, b, x, y = map(int, line.replace(\",\", \"-\").split(\"-\"))\n 33 43.770 MiB 0.000 MiB 1000 if set(range(a, b + 1)) & set(range(x, y + 1)):\n 34 43.770 MiB 0.004 MiB 847 ans += 1\n 35\n 36 43.770 MiB 0.000 MiB 1 return ans\n\n"
] | [
1,
0
] | [] | [] | [
"python",
"python_3.x"
] | stackoverflow_0074675785_python_python_3.x.txt |
Q:
How I can assign value to variable using a button?
I'm trying to write a graphic calculator using buttons.
How I can assign value to variable using a button?
I wrote the code:
from tkinter import *
a=0
def button_0a():
a=0
return 0
button0= Button(kalkulator, text="0", command=przycisk_0a)
button0.grid(row=1, column=0)
Of course, it is only a fragment of the code, but it enough to describe my problem. It changes only variable a, but next time i would like to change variable b using the same button.
A:
przycisk_0a is your callback function bind to the button0 button, so you must define your function, but you have defined the button0 button which is nonsense.
It must be like this:
def przycisk_0a():
A:
This changes the value of a and then changes the value of b the next time you press the button.
from tkinter import *
kalkulator = Tk()
a=0
b=0
def button_0a():
b=0
return 0
def button_0a():
a=0
button0.configure(command = button_0b)
return 0
button0= Button(kalkulator, text="0", command=button_0a)
button0.grid(row=1, column=0)
kalkulator.mainloop()
| How I can assign value to variable using a button? | I'm trying to write a graphic calculator using buttons.
How I can assign value to variable using a button?
I wrote the code:
from tkinter import *
a=0
def button_0a():
a=0
return 0
button0= Button(kalkulator, text="0", command=przycisk_0a)
button0.grid(row=1, column=0)
Of course, it is only a fragment of the code, but it enough to describe my problem. It changes only variable a, but next time i would like to change variable b using the same button.
| [
"przycisk_0a is your callback function bind to the button0 button, so you must define your function, but you have defined the button0 button which is nonsense. \nIt must be like this:\ndef przycisk_0a():\n\n",
"This changes the value of a and then changes the value of b the next time you press the button.\nfrom tkinter import *\n\nkalkulator = Tk()\n\na=0\nb=0\n\ndef button_0a():\n b=0\n return 0\n\ndef button_0a():\n a=0\n button0.configure(command = button_0b)\n return 0\n\nbutton0= Button(kalkulator, text=\"0\", command=button_0a)\n\nbutton0.grid(row=1, column=0)\n\nkalkulator.mainloop()\n\n"
] | [
0,
0
] | [] | [] | [
"python",
"python_3.x",
"tkinter"
] | stackoverflow_0038443208_python_python_3.x_tkinter.txt |
Q:
Unable to update the discount command to subclass value since base class value is set to 0
I've been trying to see if I can change the discount of an item, which is needed for the computation. My challenge is that I cannot update the discount since it was set to 0.
class Dog:
def __init__(self, food, amount, cost, discount=0):
self.food = food
self.amount = amount
self.cost = cost
self.discount = discount
if self.discount == 0:
self.cost = self.amount *100
else:
self.cost = self.amount * 100 * (1-self.discount)
class Malamute(Dog):
def __init__(self, food, amount, cost, behavior, discount=0):
super().__init__(food, amount, cost, discount=0)
self.behavior = behavior
if self.behavior == "very good":
self.discount = 0.20
if self.behavior == "good":
self.discount = 0.10
if self.behavior == "bad":
self.discount = 0
class Golden(Dog):
def __init__(self, food, amount, cost, damage, discount=0):
super().__init__(food, amount, cost, discount=0)
self.damage = damage
self.discount = -self.damage
class Golden_Malamute(Malamute,Golden):
def __init__(self, food, amount, cost, behavior, damage, discount=0):
Malamute().__init__(self,food, amount, cost, behavior, discount=0)
Golden().__init__(self,food, amount, cost, damage, discount=0)
self.discount=discount
Brownie = Dog("Pellet", 10, 0,)
print("Brownie", Brownie.cost)
Mala=Malamute("Pellet",10,0,"good")
print("Mala",Mala.cost)
Goldie=Golden("Pellet",10,0, 0.10)
print("Goldei",Goldie.cost)
#Blackie=Golden_Malamute("Pellet", 10, 5, "good", 0.05)
#print("Blackie", Blackie.cost)
When there should be a discount, it does not get applied, since the discount is set to zero at construction time. I am unable to shift the computation to the subclasses, as there are instances where Dog itself will be called, and if a subclass is called it will have to undergo two processes.
A:
You might need to try the technique of walking through your program and speaking it out loud.
For example, this is how I read your listing and how I can detect an issue.
I create a new Golden dog
Golden.__init__ calls super().__init__
super().__init__ calculates the cost based on the discount of zero
I then run the rest of Golden.__init__
I set the discount to -damage (i.e. -0.10 in the example)
The problem is you are setting the discount after the cost has been calculated.
To solve this, you ought to calculate the cost when a caller asks for the cost. That way, it will be able to use the current value of the discount (rather than only being able to use the discount that applied when created).
Alternatively, you need to pass the discount to the super().__init__ call.
class Golden(Dog):
def __init__(self, food, amount, cost, damage, discount=0):
super().__init__(food, amount, cost, discount=-damage)
self.damage = damage
Your next task will be to fix the discount logic as it currently increases the price, rather than reducing it.
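A minimal sketch of that compute-on-access idea, using a property (note it drops the cost constructor argument, which the original signature accepted but immediately overwrote):
class Dog:
    def __init__(self, food, amount, discount=0):
        self.food = food
        self.amount = amount
        self.discount = discount

    @property
    def cost(self):
        # recomputed on every access, so a subclass can change
        # self.discount after construction and still get the right price
        return self.amount * 100 * (1 - self.discount)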
| Unable to update the discount command to subclass value since bas class value is set to 0 | I've been trying to see if I can change the discount of an item that is necessary for computation. My challenge is that, I cannot update the discount since it was set to 0.
class Dog:
def __init__(self, food, amount, cost, discount=0):
self.food = food
self.amount = amount
self.cost = cost
self.discount = discount
if self.discount == 0:
self.cost = self.amount *100
else:
self.cost = self.amount * 100 * (1-self.discount)
class Malamute(Dog):
def __init__(self, food, amount, cost, behavior, discount=0):
super().__init__(food, amount, cost, discount=0)
self.behavior = behavior
if self.behavior == "very good":
self.discount = 0.20
if self.behavior == "good":
self.discount = 0.10
if self.behavior == "bad":
self.discount = 0
class Golden(Dog):
def __init__(self, food, amount, cost, damage, discount=0):
super().__init__(food, amount, cost, discount=0)
self.damage = damage
self.discount = -self.damage
class Golden_Malamute(Malamute,Golden):
def __init__(self, food, amount, cost, behavior, damage, discount=0):
Malamute().__init__(self,food, amount, cost, behavior, discount=0)
Golden().__init__(self,food, amount, cost, damage, discount=0)
self.discount=discount
Brownie = Dog("Pellet", 10, 0,)
print("Brownie", Brownie.cost)
Mala=Malamute("Pellet",10,0,"good")
print("Mala",Mala.cost)
Goldie=Golden("Pellet",10,0, 0.10)
print("Goldei",Goldie.cost)
#Blackie=Golden_Malamute("Pellet", 10, 5, "good", 0.05)
#print("Blackie", Blackie.cost)
When there should be a discount, it does not directly apply since the discoutn is set to zero. I am unbale to shift the commant to other sub classes as there are instances where dog itslef will be called and if a subclass is called, it will have to undergo two processes.
| [
"You might need to try the technique of walking through your program and speaking it out loud.\nFor example, this is how I read your listing and how I can detect an issue.\n\nI create a new GoldenDog\nThe GoldenDog calls the super.__init\nThe super init calculates the cost based on the discount of zero\nI then run the rest of the GoldenDog.__init\nI set the discount to 0.10\n\nThe problem is you are setting the discount after the cost has been calculated.\nTo solve this, you ought to calculate the cost when a caller asks for the cost. That way, it will be able to use the current value of the discount (rather than only being able to use the discount that applied when created).\nAlternatively, you need to pass the discount to the super.__init call.\nclass Golden(Dog):\n def __init__(self, food, amount, cost, damage, discount=0):\n super().__init__(food, amount, cost, discount=-damage)\n self.damage = damage\n\nYour next task will be to fix the discount logic as it currently increases the price, rather than reducing it.\n"
] | [
0
] | [] | [] | [
"attributes",
"class",
"object",
"oop",
"python"
] | stackoverflow_0074677411_attributes_class_object_oop_python.txt |
Q:
How to display two different return values from if/elif in Python?
I'm reading a csv file and appending the data to a list; later, in another function, I'm processing these numbers and trying to return two values using if/elif statements. To display the result I have created a procedure called displayData(numbers), and here I'm struggling to show the calculated values from the previous function, seyrogus(numbers), against the original csv values.
Currently, it's showing only "4" as the result for every row. Where am I going wrong?
My code so far
def getData():
numbers = []
file = open("numbers.csv","r")
for line in file:
data = line.strip()
numbers.append(int(data.strip()))
return numbers
def seyrogus(numbers):
for counter in range(len(numbers)):
if numbers[counter] % 2 != 0:
return (int(numbers[counter] * 3) + 1)
elif numbers[counter] % 2 == 0:
return (int(numbers[counter] / 2))
def displayData(numbers):
print("Original Numbers \t Converted Numbers")
for counter in range(len(numbers)):
print(f"{numbers[counter]} \t \t \t {seyrogus(numbers)}")
def main():
numbers = getData()
displayData(numbers)
main()
output
Original Numbers Converted Numbers
1 4
2 4
3 4
4 4
5 4
6 4
7 4
8 4
9 4
10 4
11 4
12 4
13 4
14 4
15 4
16 4
17 4
18 4
19 4
20 4
21 4
22 4
23 4
24 4
25 4
26 4
27 4
28 4
29 4
30 4
31 4
32 4
33 4
34 4
35 4
36 4
37 4
38 4
39 4
40 4
41 4
42 4
43 4
44 4
45 4
46 4
47 4
48 4
49 4
50 4
51 4
52 4
53 4
54 4
55 4
56 4
57 4
58 4
59 4
60 4
61 4
62 4
63 4
64 4
65 4
66 4
67 4
68 4
69 4
70 4
71 4
72 4
73 4
74 4
75 4
76 4
77 4
78 4
79 4
80 4
81 4
82 4
83 4
84 4
85 4
86 4
87 4
88 4
89 4
90 4
91 4
92 4
93 4
94 4
95 4
96 4
97 4
98 4
99 4
100 4
csv file
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
80
81
82
83
84
85
86
87
88
89
90
91
92
93
94
95
96
97
98
99
100
A:
Your seyrogus function needs to return a list rather than returning a single value. The reason you're only getting 4 as the result is that every time you call it, it iterates over numbers from the beginning and then returns the first converted value rather than iterating over the entire list.
Both getData and seyrogus can be implemented very simply as list comprehensions. You then need to iterate over both numbers and seyrogus(numbers) in parallel in displayData; an easy way of doing that is the zip function.
def getData():
with open("numbers.csv") as file:
return [int(line.strip()) for line in file]
def seyrogus(numbers):
return [n * 3 + 1 if n % 2 else n // 2 for n in numbers]
def displayData(numbers):
print("Original Numbers \t Converted Numbers")
for original, converted in zip(numbers, seyrogus(numbers)):
print(f"{original} \t \t \t {converted}")
def main():
displayData(getData())
main()
prints:
Original Numbers Converted Numbers
1 4
2 1
3 10
4 2
5 16
6 3
7 22
8 4
9 28
10 5
11 34
etc.
| How to display two different return values from if/elif in Python? | I'm reading a csv file and appending the data into a list and later using another function, I'm calculating these numbers and try to return two values using if/elif statements. To display the result I have created a procedure called displayData(numbers) and here I'm struggling to show the calculated values from previous function called seyrogus(numbers) against the original csv values.
Currently, it's showing all only "4" as result. Where I'm doing wrong?
My code so far
def getData():
numbers = []
file = open("numbers.csv","r")
for line in file:
data = line.strip()
numbers.append(int(data.strip()))
return numbers
def seyrogus(numbers):
for counter in range(len(numbers)):
if numbers[counter] % 2 != 0:
return (int(numbers[counter] * 3) + 1)
elif numbers[counter] % 2 == 0:
return (int(numbers[counter] / 2))
def displayData(numbers):
print("Original Numbers \t Converted Numbers")
for counter in range(len(numbers)):
print(f"{numbers[counter]} \t \t \t {seyrogus(numbers)}")
def main():
numbers = getData()
displayData(numbers)
main()
output
Original Numbers Converted Numbers
1 4
2 4
3 4
4 4
5 4
6 4
7 4
8 4
9 4
10 4
11 4
12 4
13 4
14 4
15 4
16 4
17 4
18 4
19 4
20 4
21 4
22 4
23 4
24 4
25 4
26 4
27 4
28 4
29 4
30 4
31 4
32 4
33 4
34 4
35 4
36 4
37 4
38 4
39 4
40 4
41 4
42 4
43 4
44 4
45 4
46 4
47 4
48 4
49 4
50 4
51 4
52 4
53 4
54 4
55 4
56 4
57 4
58 4
59 4
60 4
61 4
62 4
63 4
64 4
65 4
66 4
67 4
68 4
69 4
70 4
71 4
72 4
73 4
74 4
75 4
76 4
77 4
78 4
79 4
80 4
81 4
82 4
83 4
84 4
85 4
86 4
87 4
88 4
89 4
90 4
91 4
92 4
93 4
94 4
95 4
96 4
97 4
98 4
99 4
100 4
csv file
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
80
81
82
83
84
85
86
87
88
89
90
91
92
93
94
95
96
97
98
99
100
| [
"Your seyrogus function needs to return a list rather than returning a single value. The reason you're only getting 4 as the result is that every time you call it, it iterates over numbers from the beginning and then returns the first converted value rather than iterating over the entire list.\nBoth getData and seyrogus can be implemented very simply as list comprehensions. You then need to iterate over both numbers and seyrogus(numbers) in parallel in displayData; an easy way of doing that is the zip function.\ndef getData():\n with open(\"numbers.csv\") as file:\n return [int(line.strip()) for line in file]\n\ndef seyrogus(numbers):\n return [n * 3 + 1 if n % 2 else n // 2 for n in numbers]\n\ndef displayData(numbers):\n print(\"Original Numbers \\t Converted Numbers\")\n for original, converted in zip(numbers, seyrogus(numbers)):\n print(f\"{original} \\t \\t \\t {converted}\")\n\ndef main():\n displayData(getData())\n\nmain()\n\nprints:\nOriginal Numbers Converted Numbers\n1 4\n2 1\n3 10\n4 2\n5 16\n6 3\n7 22\n8 4\n9 28\n10 5\n11 34\n\netc.\n"
] | [
4
] | [] | [] | [
"python"
] | stackoverflow_0074679896_python.txt |
Q:
How can I convert Pine-script to Python? (QQE signal)
enter image description here
Pine-script code
//@version=4
// This source code is subject to the terms of the Mozilla Public License 2.0 at https://mozilla.org/MPL/2.0/
// © colinmck
study("QQE signals", overlay=true)
RSI_Period = input(14, title='RSI Length')
SF = input(5, title='RSI Smoothing')
QQE = input(4.238, title='Fast QQE Factor')
ThreshHold = input(10, title="Thresh-hold")
src = close
Wilders_Period = RSI_Period * 2 - 1
Rsi = rsi(src, RSI_Period)
RsiMa = ema(Rsi, SF)
AtrRsi = abs(RsiMa[1] - RsiMa)
MaAtrRsi = ema(AtrRsi, Wilders_Period)
dar = ema(MaAtrRsi, Wilders_Period) * QQE
longband = 0.0
shortband = 0.0
trend = 0
DeltaFastAtrRsi = dar
RSIndex = RsiMa
newshortband = RSIndex + DeltaFastAtrRsi
newlongband = RSIndex - DeltaFastAtrRsi
longband := RSIndex[1] > longband[1] and RSIndex > longband[1] ? max(longband[1], newlongband) : newlongband
shortband := RSIndex[1] < shortband[1] and RSIndex < shortband[1] ? min(shortband[1], newshortband) : newshortband
cross_1 = cross(longband[1], RSIndex)
trend := cross(RSIndex, shortband[1]) ? 1 : cross_1 ? -1 : nz(trend[1], 1)
FastAtrRsiTL = trend == 1 ? longband : shortband
// Find all the QQE Crosses
QQExlong = 0
QQExlong := nz(QQExlong[1])
QQExshort = 0
QQExshort := nz(QQExshort[1])
QQExlong := FastAtrRsiTL < RSIndex ? QQExlong + 1 : 0
QQExshort := FastAtrRsiTL > RSIndex ? QQExshort + 1 : 0
//Conditions
qqeLong = QQExlong == 1 ? FastAtrRsiTL[1] - 50 : na
qqeShort = QQExshort == 1 ? FastAtrRsiTL[1] - 50 : na
// Plotting
plotshape(qqeLong, title="QQE long", text="Long", textcolor=color.white, style=shape.labelup, location=location.belowbar, color=color.green, transp=0, size=size.tiny)
plotshape(qqeShort, title="QQE short", text="Short", textcolor=color.white, style=shape.labeldown, location=location.abovebar, color=color.red, transp=0, size=size.tiny)
// Alerts
alertcondition(qqeLong, title="Long", message="Long")
alertcondition(qqeShort, title="Short", message="Short")
python code
import pandas as pd
import numpy as np
import talib as ta
import math
import ccxt
RSI_Period = 6
Wilders_Period = RSI_Period * 2 - 1
SF = 5
QQE = 3
ThresHold = 3
data = client.klines(symbol='BTCUSDT', interval='3m', limit=1000) ## binance API data
df = pd.DataFrame(data)# DATA
Rsi = ta.RSI(df['close'], RSI_Period) ## RSI
RsiMa = ta.EMA(Rsi, SF) ## EMA
AtrRsi = abs(RsiMa[-1] - RsiMa)
MaAtrRsi = ta.EMA(AtrRsi, Wilders_Period) ## EMA
dar = ta.EMA(MaAtrRsi, Wilders_Period) * QQE
It is incomplete.
I'm not trying to plot a chart; I simply want to run longs and shorts in real time.
I want the Python console to print whether the signal is long or short.
Is there a way to convert it to Python?
I want to continuously fetch the data and determine when it is long and when it is short.
A:
#qqe signal
df["RSI_Period"]=ta.rsi (df['close'],14)
SF = 5
QQE = 4.238
ThreshHold = input(10, title="Thresh-hold")
src = df['close']
Wilders_Period = df["RSI2"] * 2 - 1
Rsi = rsi(src, RSI_Period)
RsiMa = ta.ema(Rsi, SF)
AtrRsi = abs(RsiMa[1] - RsiMa)
MaAtrRsi = ta.ema(AtrRsi, Wilders_Period)
dar = ema(MaAtrRsi, Wilders_Period) * QQE
longband = 0.0
shortband = 0.0
trend = 0
DeltaFastAtrRsi = dar
RSIndex = RsiMa
newshortband = RSIndex + DeltaFastAtrRsi
newlongband = RSIndex - DeltaFastAtrRsi
if longband == RSIndex[-1] > longband[-1] and RSIndex > longband[-1] ? max(longband[-1], newlongband) : newlongband
shortband == RSIndex[-1] < shortband[-1] and RSIndex < shortband[-1] ? min(shortband[-1], newshortband) : newshortband
cross_1 = cross(longband[1], RSIndex)
trend == cross(RSIndex, shortband[1]) ? 1 : cross_1 ? -1 : nz(trend[1], 1)
FastAtrRsiTL = trend == 1 ? longband : shortband
# Find all the QQE Crosses
QQExlong = 0
QQExlong == nz(QQExlong[-1])
QQExshort = 0
QQExshort == nz(QQExshort[-1])
QQExlong == FastAtrRsiTL < RSIndex ? QQExlong + 1 : 0
QQExshort == FastAtrRsiTL > RSIndex ? QQExshort + 1 : 0
#Conditions
qqeLong = QQExlong == 1 ? FastAtrRsiTL[-1] - 50 : na
qqeShort = QQExshort == 1 ? FastAtrRsiTL[-1] - 50 : na
? I couldn't find what to use instead.
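To get the console alert the question asks for, check the last bar after each data refresh (qqeLong/qqeShort are the boolean arrays computed above):
if qqeLong[-1]:
    print("Long")
elif qqeShort[-1]:
    print("Short")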
A:
// This source code is subject to the terms of the Mozilla Public License 2.0 at https://mozilla.org/MPL/2.0/
// © blackcat1402
//@version=4
study("[blackcat] L3 Banker Fund Flow Trend Oscillator", overlay=false)
//functions
xrf(values, length) =>
r_val = float(na)
if length >= 1
for i = 0 to length by 1
if na(r_val) or not na(values[i])
r_val := values[i]
r_val
r_val
xsa(src,len,wei) =>
sumf = 0.0
ma = 0.0
out = 0.0
sumf := nz(sumf[1]) - nz(src[len]) + src
ma := na(src[len]) ? na : sumf/len
out := na(out[1]) ? ma : (src*wei+out[1]*(len-wei))/len
out
//set up a simple model of banker fund flow trend
fundtrend = ((3*xsa((close- lowest(low,27))/(highest(high,27)-lowest(low,27))*100,5,1)-2*xsa(xsa((close-lowest(low,27))/(highest(high,27)-lowest(low,27))*100,5,1),3,1)-50)*1.032+50)
//define typical price for banker fund
typ = (2*close+high+low+open)/5
//lowest low with mid term fib # 34
lol = lowest(low,34)
//highest high with mid term fib # 34
hoh = highest(high,34)
//define banker fund flow bull bear line
bullbearline = ema((typ-lol)/(hoh-lol)*100,13)
//define banker entry signal
bankerentry = crossover(fundtrend,bullbearline) and bullbearline<25
//banker fund entry with yellow candle
plotcandle(0,50,0,50,color=bankerentry ? color.new(color.yellow,0):na)
//banker increase position with green candle
plotcandle(fundtrend,bullbearline,fundtrend,bullbearline,color=fundtrend>bullbearline ? color.new(color.green,0):na)
//banker decrease position with white candle
plotcandle(fundtrend,bullbearline,fundtrend,bullbearline,color=fundtrend<(xrf(fundtrend*0.95,1)) ? color.new(color.white,0):na)
//banker fund exit/quit with red candle
plotcandle(fundtrend,bullbearline,fundtrend,bullbearline,color=fundtrend<bullbearline ? color.new(color.red,0):na)
//banker fund Weak rebound with blue candle
plotcandle(fundtrend,bullbearline,fundtrend,bullbearline,color=fundtrend<bullbearline and fundtrend>(xrf(fundtrend*0.95,1)) ? color.new(color.blue,0):na)
//overbought and oversold threshold lines
h1 = hline(80,color=color.red, linestyle=hline.style_dotted)
h2 = hline(20, color=color.yellow, linestyle=hline.style_dotted)
h3 = hline(10,color=color.lime, linestyle=hline.style_dotted)
h4 = hline(90, color=color.fuchsia, linestyle=hline.style_dotted)
fill(h2,h3,color=color.yellow,transp=70)
fill(h1,h4,color=color.fuchsia,transp=70)
alertcondition(bankerentry, title='Alert on Yellow Candle', message='Yellow Candle!')
alertcondition(fundtrend>bullbearline, title='Alert on Green Candle', message='Green Candle!')
alertcondition(fundtrend<(xrf(fundtrend*0.95,1)), title='Alert on White Candle', message='White Candle!')
alertcondition(fundtrend<bullbearline, title='Alert on Red Candle', message='Red Candle!')
alertcondition(fundtrend<bullbearline and fundtrend>(xrf(fundtrend*0.95,1)), title='Alert on Blue Candle', message='Blue Candle!')
Can you help me convert this to python?
| How can I convert Pine-script to Python? (QQE signal) | enter image description here
Pine-script code
//@version=4
// This source code is subject to the terms of the Mozilla Public License 2.0 at https://mozilla.org/MPL/2.0/
// © colinmck
study("QQE signals", overlay=true)
RSI_Period = input(14, title='RSI Length')
SF = input(5, title='RSI Smoothing')
QQE = input(4.238, title='Fast QQE Factor')
ThreshHold = input(10, title="Thresh-hold")
src = close
Wilders_Period = RSI_Period * 2 - 1
Rsi = rsi(src, RSI_Period)
RsiMa = ema(Rsi, SF)
AtrRsi = abs(RsiMa[1] - RsiMa)
MaAtrRsi = ema(AtrRsi, Wilders_Period)
dar = ema(MaAtrRsi, Wilders_Period) * QQE
longband = 0.0
shortband = 0.0
trend = 0
DeltaFastAtrRsi = dar
RSIndex = RsiMa
newshortband = RSIndex + DeltaFastAtrRsi
newlongband = RSIndex - DeltaFastAtrRsi
longband := RSIndex[1] > longband[1] and RSIndex > longband[1] ? max(longband[1], newlongband) : newlongband
shortband := RSIndex[1] < shortband[1] and RSIndex < shortband[1] ? min(shortband[1], newshortband) : newshortband
cross_1 = cross(longband[1], RSIndex)
trend := cross(RSIndex, shortband[1]) ? 1 : cross_1 ? -1 : nz(trend[1], 1)
FastAtrRsiTL = trend == 1 ? longband : shortband
// Find all the QQE Crosses
QQExlong = 0
QQExlong := nz(QQExlong[1])
QQExshort = 0
QQExshort := nz(QQExshort[1])
QQExlong := FastAtrRsiTL < RSIndex ? QQExlong + 1 : 0
QQExshort := FastAtrRsiTL > RSIndex ? QQExshort + 1 : 0
//Conditions
qqeLong = QQExlong == 1 ? FastAtrRsiTL[1] - 50 : na
qqeShort = QQExshort == 1 ? FastAtrRsiTL[1] - 50 : na
// Plotting
plotshape(qqeLong, title="QQE long", text="Long", textcolor=color.white, style=shape.labelup, location=location.belowbar, color=color.green, transp=0, size=size.tiny)
plotshape(qqeShort, title="QQE short", text="Short", textcolor=color.white, style=shape.labeldown, location=location.abovebar, color=color.red, transp=0, size=size.tiny)
// Alerts
alertcondition(qqeLong, title="Long", message="Long")
alertcondition(qqeShort, title="Short", message="Short")
python code
import pandas as pd
import numpy as np
import talib as ta
import math
import ccxt
RSI_Period = 6
Wilders_Period = RSI_Period * 2 - 1
SF = 5
QQE = 3
ThresHold = 3
data = client.klines(symbol='BTCUSDT', interval='3m', limit=1000) ## binance API data
df = pd.DataFrame(data)# DATA
Rsi = ta.RSI(df['close'], RSI_Period) ## RSI
RsiMa = ta.EMA(Rsi, SF) ## EMA
AtrRsi = abs(RsiMa[-1] - RsiMa)
MaAtrRsi = ta.EMA(AtrRsi, Wilders_Period) ## EMA
dar = ta.EMA(MaAtrRsi, Wilders_Period) * QQE
It is incomplete.
I'm not trying to implement a graph, I simply want to run longs and shorts in real time.
I want to alert in python console whether it is long or short.
Is there a way to convert it to python?
I want to continuously fetch the data and determine when it is long and when it is short.
| [
"#qqe signal\n df[\"RSI_Period\"]=ta.rsi (df['close'],14)\n SF = 5\n QQE = 4.238\n ThreshHold = input(10, title=\"Thresh-hold\")\n\n src = df['close']\n Wilders_Period = df[\"RSI2\"] * 2 - 1\n\n Rsi = rsi(src, RSI_Period)\n RsiMa = ta.ema(Rsi, SF)\n AtrRsi = abs(RsiMa[1] - RsiMa)\n MaAtrRsi = ta.ema(AtrRsi, Wilders_Period)\n dar = ema(MaAtrRsi, Wilders_Period) * QQE\n\n longband = 0.0\n shortband = 0.0\n trend = 0\n\n DeltaFastAtrRsi = dar\n RSIndex = RsiMa\n newshortband = RSIndex + DeltaFastAtrRsi\n newlongband = RSIndex - DeltaFastAtrRsi\n if longband == RSIndex[-1] > longband[-1] and RSIndex > longband[-1] ? max(longband[-1], newlongband) : newlongband\n shortband == RSIndex[-1] < shortband[-1] and RSIndex < shortband[-1] ? min(shortband[-1], newshortband) : newshortband\n cross_1 = cross(longband[1], RSIndex)\n trend == cross(RSIndex, shortband[1]) ? 1 : cross_1 ? -1 : nz(trend[1], 1)\n FastAtrRsiTL = trend == 1 ? longband : shortband\n\n # Find all the QQE Crosses\n\n QQExlong = 0\n QQExlong == nz(QQExlong[-1])\n QQExshort = 0\n QQExshort == nz(QQExshort[-1])\n QQExlong == FastAtrRsiTL < RSIndex ? QQExlong + 1 : 0\n QQExshort == FastAtrRsiTL > RSIndex ? QQExshort + 1 : 0\n\n #Conditions\n\n qqeLong = QQExlong == 1 ? FastAtrRsiTL[-1] - 50 : na\n qqeShort = QQExshort == 1 ? FastAtrRsiTL[-1] - 50 : na\n \n\n? I couldn't find what to use instead.\n",
"// This source code is subject to the terms of the Mozilla Public License 2.0 at https://mozilla.org/MPL/2.0/\n// © blackcat1402\n//@version=4\n\nstudy(\"[blackcat] L3 Banker Fund Flow Trend Oscillator\", overlay=false)\n\n//functions\nxrf(values, length) =>\n r_val = float(na)\n if length >= 1\n for i = 0 to length by 1\n if na(r_val) or not na(values[i])\n r_val := values[i]\n r_val\n r_val\n\nxsa(src,len,wei) =>\n sumf = 0.0\n ma = 0.0\n out = 0.0\n sumf := nz(sumf[1]) - nz(src[len]) + src\n ma := na(src[len]) ? na : sumf/len\n out := na(out[1]) ? ma : (src*wei+out[1]*(len-wei))/len\n out\n \n//set up a simple model of banker fund flow trend \nfundtrend = ((3*xsa((close- lowest(low,27))/(highest(high,27)-lowest(low,27))*100,5,1)-2*xsa(xsa((close-lowest(low,27))/(highest(high,27)-lowest(low,27))*100,5,1),3,1)-50)*1.032+50)\n//define typical price for banker fund\ntyp = (2*close+high+low+open)/5\n//lowest low with mid term fib # 34\nlol = lowest(low,34)\n//highest high with mid term fib # 34\nhoh = highest(high,34)\n//define banker fund flow bull bear line\nbullbearline = ema((typ-lol)/(hoh-lol)*100,13)\n//define banker entry signal\nbankerentry = crossover(fundtrend,bullbearline) and bullbearline<25\n\n//banker fund entry with yellow candle\nplotcandle(0,50,0,50,color=bankerentry ? color.new(color.yellow,0):na)\n\n//banker increase position with green candle\nplotcandle(fundtrend,bullbearline,fundtrend,bullbearline,color=fundtrend>bullbearline ? color.new(color.green,0):na)\n\n//banker decrease position with white candle\nplotcandle(fundtrend,bullbearline,fundtrend,bullbearline,color=fundtrend<(xrf(fundtrend*0.95,1)) ? color.new(color.white,0):na)\n\n//banker fund exit/quit with red candle\nplotcandle(fundtrend,bullbearline,fundtrend,bullbearline,color=fundtrend<bullbearline ? color.new(color.red,0):na)\n\n//banker fund Weak rebound with blue candle\nplotcandle(fundtrend,bullbearline,fundtrend,bullbearline,color=fundtrend<bullbearline and fundtrend>(xrf(fundtrend*0.95,1)) ? color.new(color.blue,0):na)\n\n//overbought and oversold threshold lines\nh1 = hline(80,color=color.red, linestyle=hline.style_dotted)\nh2 = hline(20, color=color.yellow, linestyle=hline.style_dotted)\nh3 = hline(10,color=color.lime, linestyle=hline.style_dotted)\nh4 = hline(90, color=color.fuchsia, linestyle=hline.style_dotted)\nfill(h2,h3,color=color.yellow,transp=70)\nfill(h1,h4,color=color.fuchsia,transp=70)\n\nalertcondition(bankerentry, title='Alert on Yellow Candle', message='Yellow Candle!')\nalertcondition(fundtrend>bullbearline, title='Alert on Green Candle', message='Green Candle!')\nalertcondition(fundtrend<(xrf(fundtrend*0.95,1)), title='Alert on White Candle', message='White Candle!')\nalertcondition(fundtrend<bullbearline, title='Alert on Red Candle', message='Red Candle!')\nalertcondition(fundtrend<bullbearline and fundtrend>(xrf(fundtrend*0.95,1)), title='Alert on Blue Candle', message='Blue Candle!')\n\nCan you help me convert this to python?\n"
] | [
0,
0
] | [] | [] | [
"pine_script",
"python",
"tradingview_api"
] | stackoverflow_0074604279_pine_script_python_tradingview_api.txt |
Q:
I'm a Python beginner; is there any way I can repeat this simple calculator code infinitely?
x=int(input("please type in any number: "))
y=input("please type operation: +,-,*,/: ")
z=int(input("please type in your 2nd number: "))
if(y=="+"):
print("your answer is: ", x+z)
print("thanks for using this calculator!")
print("goodbye")
elif(y=="-"):
print("your answer is: ", x-z)
print("thanks for using this calculator!")
print("goodbye")
elif(y=="*"):
print("your answer is: ", x*z)
print("thanks for using this calculator!")
print("goodbye")
elif(y=="/"):
print("your answer is: ", x/z)
print("thanks for using this calculator!")
print("goodbye")
Nothing yet; I don't know what to do. People online keep saying something like:
while True:
    # restart
I don't know, it was something like that.
A:
Start the loop
while True:
x=int(input("please type in any number: "))
y=input("please type operation: +,-,*,/: ")
z=int(input("please type in your 2nd number: "))
if(y=="+"):
print("your answer is: ", x+z)
print("thanks for using this calculator!")
print("goodbye")
elif(y=="-"):
print("your answer is: ", x-z)
print("thanks for using this calculator!")
print("goodbye")
elif(y=="*"):
print("your answer is: ", x*z)
print("thanks for using this calculator!")
print("goodbye")
elif(y=="/"):
print("your answer is: ", x/z)
print("thanks for using this calculator!")
print("goodbye")
| Im a python beginner, is there any way i can repeat this simple calculator code infinity times? | x=int(input("please type in any number: "))
y=input("please type operation: +,-,*,/: ")
z=int(input("please type in your 2nd number: "))
if(y=="+"):
print("your answer is: ", x+z)
print("thanks for using this calculator!")
print("goodbye")
elif(y=="-"):
print("your answer is: ", x-z)
print("thanks for using this calculator!")
print("goodbye")
elif(y=="*"):
print("your answer is: ", x*z)
print("thanks for using this calculator!")
print("goodbye")
elif(y=="/"):
print("your answer is: ", x/z)
print("thanks for using this calculator!")
print("goodbye")
Nothing yet, i dont know what do do, people keep saying online that something like
While true()
Restart
Idk it was smth like that
| [
"Start the loop\nwhile True:\n x=int(input(\"please type in any number: \"))\n y=input(\"please type operation: +,-,*,/: \")\n z=int(input(\"please type in your 2nd number: \"))\n if(y==\"+\"):\n print(\"your answer is: \", x+z)\n print(\"thanks for using this calculator!\")\n print(\"goodbye\")\n elif(y==\"-\"):\n print(\"your answer is: \", x-z)\n print(\"thanks for using this calculator!\")\n print(\"goodbye\")\n elif(y==\"*\"):\n print(\"your answer is: \", x*z)\n print(\"thanks for using this calculator!\")\n print(\"goodbye\")\n elif(y==\"/\"):\n print(\"your answer is: \", x/z)\n print(\"thanks for using this calculator!\")\n print(\"goodbye\")\n\n"
] | [
0
] | [] | [] | [
"python"
] | stackoverflow_0074679948_python.txt |
Q:
Working with a python array full of array
OK, so I have an array of arrays. I'm currently wondering whether I'm better off exporting all of it to my MySQL database and doing the sorting there, or working with the array itself.
Here is part of the array:
datas = [['Anonymous User-b82a42', 'DYDXUSDT', 'Short', 20, 258.2, 2.332, 2.333, -0.26, -0.8573, '2022-11-16 14:02:28'], ['Anonymous User-b82a42', 'OCEANUSDT', 'Long', 20, 4732.0, 0.13113, 0.13145, 1.51, 4.8688, '2022-11-16 09:04:04'], ['Anonymous User-b82a42', 'CHZUSDT', 'Short', 20, 2684.0, 0.22187, 0.22637, -12.08, -39.7579, '2022-11-16 11:10:17'], ['Anonymous User-b82a42', 'DUSKUSDT', 'Long', 20, 6636.0, 0.09043, 0.09007, -2.38, -7.9724, '2022-11-16 12:40:17'], ['Anonymous User-b82a42', 'CTSIUSDT', 'Long', 20, 5614.0, 0.1062, 0.1058, -2.22, -7.4594, '2022-11-16 13:47:25']...]
Here is the things I need to do:
For the same symbol data[1], get the biggest leverage data[3] and remove/don't save the others
If 2 symbol data[1] have the same direction data[2], but not the same leverage data[3], keep only the biggest
If 2 symbol data[1] have the opposite direction data[2] but same leverage data[3], delete/skip/don't save both
The thing I face is that it seems like a lot to process from the array itself.
And in the case where multiple rows share the same symbol data[1], if I do a for-each-trade loop, I may delete trades that are valid compared to some other rows but not valid when compared with the current one.
What should I do? I understand sorted() can be used, but I can't find a way to achieve what I need, and I wonder if I'm better off saving everything to MySQL and using an SQL query to achieve it.
A:
I solved this by storing the data in 2 database tables and comparing them afterwards.
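For reference, the same filtering can be done in Python before anything touches MySQL. A minimal sketch, assuming the three rules reduce to "keep the highest-leverage trade per symbol, unless the highest-leverage trades disagree on direction, in which case drop them all":
from collections import defaultdict

def filter_trades(datas):
    by_symbol = defaultdict(list)
    for row in datas:
        by_symbol[row[1]].append(row)              # group by symbol (data[1])

    kept = []
    for symbol, rows in by_symbol.items():
        max_lev = max(r[3] for r in rows)          # biggest leverage (data[3])
        top = [r for r in rows if r[3] == max_lev]
        if len({r[2] for r in top}) > 1:           # opposite directions, same leverage
            continue                               # skip/don't save both
        kept.append(top[0])
    return kept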
| Working with a python array full of array | Ok so I have an array of arrays. I'm currently wondering if I'm better to export all of it in my mysql database and do the sorting once there, or work with the array itself.
Here is part of the array:
datas = [['Anonymous User-b82a42', 'DYDXUSDT', 'Short', 20, 258.2, 2.332, 2.333, -0.26, -0.8573, '2022-11-16 14:02:28'], ['Anonymous User-b82a42', 'OCEANUSDT', 'Long', 20, 4732.0, 0.13113, 0.13145, 1.51, 4.8688, '2022-11-16 09:04:04'], ['Anonymous User-b82a42', 'CHZUSDT', 'Short', 20, 2684.0, 0.22187, 0.22637, -12.08, -39.7579, '2022-11-16 11:10:17'], ['Anonymous User-b82a42', 'DUSKUSDT', 'Long', 20, 6636.0, 0.09043, 0.09007, -2.38, -7.9724, '2022-11-16 12:40:17'], ['Anonymous User-b82a42', 'CTSIUSDT', 'Long', 20, 5614.0, 0.1062, 0.1058, -2.22, -7.4594, '2022-11-16 13:47:25']...]
Here is the things I need to do:
For the same symbol data[1], get the biggest leverage data[3] and remove/don't save the others
If 2 symbol data[1] have the same direction data[2], but not the same leverage data[3], keep only the biggest
If 2 symbol data[1] have the opposite direction data[2] but same leverage data[3], delete/skip/don't save both
The thing I face is it seems a lot to processes from the array itself.
And the case I have multiple same symbol data[1], if I do a for each trade loop, I will maybe delete trades that are valid compare to others but in this loop with this trade it's not.
What should I do? I understand sorted() can be used but I can't find the way to achieve the things I need to do and I wonder if I better save all to mysql and use sql query to achieve it.
| [
"I solved this storing in 2 database tables and comparing them after.\n"
] | [
0
] | [] | [] | [
"arrays",
"mysql",
"python",
"sorting"
] | stackoverflow_0074465997_arrays_mysql_python_sorting.txt |
Q:
Quit button assistance needed
I'm trying to write code for a topic I'm doing, and I've managed to get some of it done, but when it comes to quitting my tkinter menu it doesn't close unless I close it manually. I've got the button for the option to close it, but it doesn't work. Can anyone help with my problem? Here's my code below.
import sys
import tkinter
from tkinter import*
import time
global v
global popJ
popJ = 0
def genInput(): #Allows the user to input the data
gen = Toplevel()
gen.wm_title("Data Input")
v = IntVar()
ent1 = Entry(gen, textvariable = v).pack()
ent1Txt = Label(gen, text = 'Please input Juvenile Populations')
ent1Txt.pack()
v2 = StringVar()
ent2 = Entry(gen, textvariable = v2)
ent2Txt = Label(gen, text = 'Please input Adult Populations')
ent2.pack()
ent2Txt.pack()
v3 = StringVar()
ent3 = Entry(gen, textvariable = v3)
ent3Txt = Label(gen, text = 'Please input Senile Populations')
ent3.pack()
ent3Txt.pack()
v4 = StringVar()
ent4 = Entry(gen, textvariable = v4)
ent4Txt = Label(gen, text = 'Please input Survival rates for Juveniles')
ent4.pack()
ent4Txt.pack()
v5 = StringVar()
ent5 = Entry(gen, textvariable = v5)
ent5Txt = Label(gen, text = 'Please input Survival rates for Adults')
ent5.pack()
ent5Txt.pack()
v6 = StringVar()
ent6 = Entry(gen, textvariable = v6)
ent6Txt = Label(gen, text = 'Please input Survival rates for Seniles')
ent6.pack()
ent6Txt.pack()
v7 = StringVar()
ent7 = Entry(gen, textvariable = v7)
ent7Txt = Label(gen, text = 'Please input the birth rate')
ent7.pack()
ent7Txt.pack()
v8 = StringVar()
ent8 = Entry(gen, textvariable = v8)
ent8Txt = Label(gen, text = 'Number of Generations')
ent8.pack()
ent8Txt.pack()
def quit1(): # Needs to be here or it breaks the program
gen.destroy()
return
def submit():
global popJ
popJ = v.get()
popJtxt = Label(gen, text= v.get()).pack()
return
submit1= Button(gen, text="Submit")
submit1.pack()
submit1.configure(command = submit)
return1 = Button(gen, text = 'Return to Menu')
return1.pack(pady=30)
return1.configure(command = quit1)
return
def genView(): # should display the data
disp = Toplevel()
disp.wm_title('Displaying data Values')
popJuvenilesTxt = Label (disp, text = popJ)
popJuvenilesTxt.grid(row =1, column = 1)
def menu(): # creates the gui menu
menu = Tk()
menu.wm_title("Greenfly model")
genInp = Button(menu,text = "Set Generation Values")
genVew = Button(menu,text = 'Display Generation Values')
modelCal = Button(menu,text = 'Run model')
exportData = Button(menu,text = 'Export Data')
quitProgram = Button(menu,text = 'Quit')
genTxt = Label(menu, text= 'Input the Generation values')
genvTxt = Label (menu, text = 'View the current generation values')
modelTxt = Label (menu, text = 'Run the model')
exportTxt = Label (menu, text = 'Export data')
quitTxt = Label (menu, text= 'Exit the program')
genInp.grid(row=1, column=1)
genVew.grid(row=2, column=1)
modelCal.grid(row=3, column=1)
exportData.grid(row=4 , column=1)
quitProgram.grid(row=5, column=1)
genTxt.grid(row=1, column = 2)
genvTxt.grid(row=2, column = 2)
modelTxt.grid(row=3, column = 2)
exportTxt.grid(row=4, column = 2)
quitTxt.grid(row=5, column = 2)
genInp.configure(command = genInput)
genVew.configure(command = genView)
menu.mainloop()
menu()
A:
For Tkinter you can just pass gen.quit to the command of a button widget, like so:
close = Button(gen, text = 'Close', command = gen.quit).pack()
A:
You can use sys.exit() to close the program.
Button(gen, text="Close", command=lambda: sys.exit()).pack()
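In the question's menu() function, the Quit button is also never given a command at all; wiring it to the root window's destroy is enough, using the names from the question:
quitProgram.configure(command = menu.destroy)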
| Quit button assistance needed | I'm trying to make a code for this topic i'm doing and I've manage to get some of it done but when it comes to quiting my tkinter menu it doesn't close unless I manually close it, I've got the the button for the option to close it but it doesn't work. Can anyone help with my problem. Here's my code below.
import sys
import tkinter
from tkinter import*
import time
global v
global popJ
popJ = 0
def genInput(): #Allows the user to input the data
gen = Toplevel()
gen.wm_title("Data Input")
v = IntVar()
ent1 = Entry(gen, textvariable = v).pack()
ent1Txt = Label(gen, text = 'Please input Juvenile Populations')
ent1Txt.pack()
v2 = StringVar()
ent2 = Entry(gen, textvariable = v2)
ent2Txt = Label(gen, text = 'Please input Adult Populations')
ent2.pack()
ent2Txt.pack()
v3 = StringVar()
ent3 = Entry(gen, textvariable = v3)
ent3Txt = Label(gen, text = 'Please input Senile Populations')
ent3.pack()
ent3Txt.pack()
v4 = StringVar()
ent4 = Entry(gen, textvariable = v4)
ent4Txt = Label(gen, text = 'Please input Survival rates for Juveniles')
ent4.pack()
ent4Txt.pack()
v5 = StringVar()
ent5 = Entry(gen, textvariable = v5)
ent5Txt = Label(gen, text = 'Please input Survival rates for Adults')
ent5.pack()
ent5Txt.pack()
v6 = StringVar()
ent6 = Entry(gen, textvariable = v6)
ent6Txt = Label(gen, text = 'Please input Survival rates for Seniles')
ent6.pack()
ent6Txt.pack()
v7 = StringVar()
ent7 = Entry(gen, textvariable = v7)
ent7Txt = Label(gen, text = 'Please input the birth rate')
ent7.pack()
ent7Txt.pack()
v8 = StringVar()
ent8 = Entry(gen, textvariable = v8)
ent8Txt = Label(gen, text = 'Number of Generations')
ent8.pack()
ent8Txt.pack()
def quit1(): # Needs to be here or it breaks the program
gen.destroy()
return
def submit():
global popJ
popJ = v.get()
popJtxt = Label(gen, text= v.get()).pack()
return
submit1= Button(gen, text="Submit")
submit1.pack()
submit1.configure(command = submit)
return1 = Button(gen, text = 'Return to Menu')
return1.pack(pady=30)
return1.configure(command = quit1)
return
def genView(): # should display the data
disp = Toplevel()
disp.wm_title('Displaying data Values')
popJuvenilesTxt = Label (disp, text = popJ)
popJuvenilesTxt.grid(row =1, column = 1)
def menu(): # creates the gui menu
menu = Tk()
menu.wm_title("Greenfly model")
genInp = Button(menu,text = "Set Generation Values")
genVew = Button(menu,text = 'Dysplay Generation Values')
modelCal = Button(menu,text = 'Run model')
exportData = Button(menu,text = 'Export Data')
quitProgram = Button(menu,text = 'Quit')
genTxt = Label(menu, text= 'Input the Generation values')
genvTxt = Label (menu, text = 'View the current generation values')
modelTxt = Label (menu, text = 'Run the model')
exportTxt = Label (menu, text = 'Export data')
quitTxt = Label (menu, text= 'Exit the program')
genInp.grid(row=1, column=1)
genVew.grid(row=2, column=1)
modelCal.grid(row=3, column=1)
exportData.grid(row=4 , column=1)
quitProgram.grid(row=5, column=1)
genTxt.grid(row=1, column = 2)
genvTxt.grid(row=2, column = 2)
modelTxt.grid(row=3, column = 2)
exportTxt.grid(row=4, column = 2)
quitTxt.grid(row=5, column = 2)
genInp.configure(command = genInput)
genVew.configure(command = genView)
menu.mainloop()
menu()
| [
"For Tkinter you can just pass gen.quit to the command of a button widget, like so:\nclose = Button(gen, text = 'Close', command = gen.quit).pack()\n\n",
"You can use sys.exit() to close the program.\nclose(gen, text=\"Close\", command = lambda: sys.exit()).pack()\n\n"
] | [
0,
0
] | [] | [] | [
"button",
"python",
"tkinter"
] | stackoverflow_0039767084_button_python_tkinter.txt |
Q:
Is there any feasible solution to read WOT battle results .dat files?
I am new here, trying to solve one of my interesting questions about World of Tanks. I heard that every battle's data is stored on the client's disk in the Wargaming.net folder; I want to do a batch data analysis of our clan's battle performances.
image
It is said that these .dat files are a kind of json files, so I tried to use a couple of lines of Python code to read but failed.
import json
f = open('ex.dat', 'r', encoding='unicode_escape')
content = f.read()
a = json.loads(content)
print(type(a))
print(a)
f.close()
The code is very simple and obviously fails. Could anyone tell me what is actually going on here?
Added on Feb. 9th, 2022
After I tried another set of codes via Jupyter Notebook, it seems like something can be shown from the .dat files
import struct
import numpy as np
import matplotlib.pyplot as plt
import io
with open('C:/Users/xukun/Desktop/br/ex.dat', 'rb') as f:
fbuff = io.BufferedReader(f)
N = len(fbuff.read())
print('byte length: ', N)
with open('C:/Users/xukun/Desktop/br/ex.dat', 'rb') as f:
data =struct.unpack('b'*N, f.read(1*N))
The result is a tuple, but I have no idea how to deal with it now.
A:
Here's how you can parse some parts of it.
import pickle
import zlib
file = '4402905758116487.dat'
cache_file = open(file, 'rb') # This can be improved to not keep the file opened.
# Converting pickle items from python2 to python3 you need to use the "bytes" encoding or "latin1".
legacyBattleResultVersion, brAllDataRaw = pickle.load(cache_file, encoding='bytes', errors='ignore')
arenaUniqueID, brAccount, brVehicleRaw, brOtherDataRaw = brAllDataRaw
# The data stored inside the pickled file will be a compressed pickle again.
vehicle_data = pickle.loads(zlib.decompress(brVehicleRaw), encoding='latin1')
account_data = pickle.loads(zlib.decompress(brAccount), encoding='latin1')
brCommon, brPlayersInfo, brPlayersVehicle, brPlayersResult = pickle.loads(zlib.decompress(brOtherDataRaw), encoding='latin1')
# Lastly you can print all of these and see a lot of data inside.
The response contains a mixture of more binary files as well as some data captured from the replays.
This is not a complete solution but it's a decent start to parsing these files.
A:
After loading the pickle files like gabzo mentioned, you will see that it is simply a list of values and without knowing what the value is referring to, its hard to make sense of it. The identifiers for the values can be extracted from your game installation:
import zipfile
WOT_PKG_PATH = "Your/Game/Path/res/packages/scripts.pkg"
BATTLE_RESULTS_PATH = "scripts/common/battle_results/"
archive = zipfile.ZipFile(WOT_PKG_PATH, 'r')
for file in archive.namelist():
if file.startswith(BATTLE_RESULTS_PATH):
archive.extract(file)
You can then decompile the python files (uncompyle6) and then go through the code to see the identifiers for the values.
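For example (paths are hypothetical; the files extracted from scripts.pkg are compiled .pyc modules):
uncompyle6 -o decompiled/ scripts/common/battle_results/*.pyc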
One thing to note is that the list of values for the main pickle objects (like brAccount from gabzo's code) always has a checksum as the first value. You can use this to check whether you have the right order and the correct identifiers for the values. The way these checksums are generated can be seen in the decompiled python files.
I have been tackling this problem for some time and I have a solution that is works here (albeit in Rust): https://github.com/dacite/wot-battle-results-parser.
Run ./wot_datfile_parser_cli --help after downloading the binary for options. Note that you can just run the binary without any options to get a folder of .json files for the .dat files currently in the cache folder
A:
First you can look at the replay file itself in a text editor. But it won't show the code at the beginning of the file that has to be cleaned out. Then there is a ton of info that you have to read in and figure out but it is the stats for each player in the game. THEN it comes to the part that has to do with the actual replay. You don't need that stuff.
You can grab the player IDs and tank IDs from WoT developer area API if you want.
| Is there any feasible solution to read WOT battle results .dat files? | I am new here to try to solve one of my interesting questions in World of Tanks. I heard that every battle data is reserved in the client's disk in the Wargaming.net folder because I want to make a batch of data analysis for our clan's battle performances.
image
It is said that these .dat files are a kind of json files, so I tried to use a couple of lines of Python code to read but failed.
import json
f = open('ex.dat', 'r', encoding='unicode_escape')
content = f.read()
a = json.loads(content)
print(type(a))
print(a)
f.close()
The code is very simple and obviously fails to make it. Well, could anyone tell me the truth about that?
Added on Feb. 9th, 2022
After I tried another set of codes via Jupyter Notebook, it seems like something can be shown from the .dat files
import struct
import numpy as np
import matplotlib.pyplot as plt
import io
with open('C:/Users/xukun/Desktop/br/ex.dat', 'rb') as f:
fbuff = io.BufferedReader(f)
N = len(fbuff.read())
print('byte length: ', N)
with open('C:/Users/xukun/Desktop/br/ex.dat', 'rb') as f:
data =struct.unpack('b'*N, f.read(1*N))
The result is a set of tuple but I have no idea how to deal with it now.
| [
"Here's how you can parse some parts of it.\nimport pickle\nimport zlib\n\nfile = '4402905758116487.dat'\ncache_file = open(file, 'rb') # This can be improved to not keep the file opened.\n\n# Converting pickle items from python2 to python3 you need to use the \"bytes\" encoding or \"latin1\". \nlegacyBattleResultVersion, brAllDataRaw = pickle.load(cache_file, encoding='bytes', errors='ignore')\n\narenaUniqueID, brAccount, brVehicleRaw, brOtherDataRaw = brAllDataRaw\n\n# The data stored inside the pickled file will be a compressed pickle again. \nvehicle_data = pickle.loads(zlib.decompress(brVehicleRaw), encoding='latin1')\naccount_data = pickle.loads(zlib.decompress(brAccount), encoding='latin1')\nbrCommon, brPlayersInfo, brPlayersVehicle, brPlayersResult = pickle.loads(zlib.decompress(brOtherDataRaw), encoding='latin1')\n\n\n# Lastly you can print all of these and see a lot of data inside. \n\nThe response contains a mixture of more binary files as well as some data captured from the replays.\nThis is not a complete solution but it's a decent start to parsing these files.\n",
"After loading the pickle files like gabzo mentioned, you will see that it is simply a list of values and without knowing what the value is referring to, its hard to make sense of it. The identifiers for the values can be extracted from your game installation:\nimport zipfile\n\nWOT_PKG_PATH = \"Your/Game/Path/res/packages/scripts.pkg\"\nBATTLE_RESULTS_PATH = \"scripts/common/battle_results/\"\n\narchive = zipfile.ZipFile(WOT_PKG_PATH, 'r')\n\nfor file in archive.namelist():\n if file.startswith(BATTLE_RESULTS_PATH):\n archive.extract(file)\n\nYou can then decompile the python files(uncompyle6) and then go through the code to see the identifiers for the values.\nOne thing to note is that the list of values for the main pickle objects (like brAccount from gabzo's code) always has a checksum as the first value. You can use this to check whether you have the right order and the correct identifiers for the values. The way these checksums are generated can be seen in the decompiled python files.\nI have been tackling this problem for some time and I have a solution that is works here (albeit in Rust): https://github.com/dacite/wot-battle-results-parser.\nRun ./wot_datfile_parser_cli --help after downloading the binary for options. Note that you can just run the binary without any options to get a folder of .json files for the .dat files currently in the cache folder\n",
"First you can look at the replay file itself in a text editor. But it won't show the code at the beginning of the file that has to be cleaned out. Then there is a ton of info that you have to read in and figure out but it is the stats for each player in the game. THEN it comes to the part that has to do with the actual replay. You don't need that stuff.\nYou can grab the player IDs and tank IDs from WoT developer area API if you want.\n"
] | [
0,
0,
0
] | [] | [] | [
"python"
] | stackoverflow_0071003839_python.txt |
Q:
How to concatenate dataframes considering column orders
I want to combine two dataframes:
df1=pd.DataFrame({'A':['a','a',],'B':['b','b']})
df2=pd.DataFrame({'B':['b','b'],'A':['a','a']})
pd.concat([df1,df2],ignore_index=True)
result:
But I want the output to be like this (I want the same behavior as SQL's UNION / UNION ALL):
A:
Another way is to use numpy to stack the two dataframes and then use pd.DataFrame constructor:
pd.DataFrame(np.vstack([df1.values,df2.values]), columns = df1.columns)
Output:
A B
0 a b
1 a b
2 b a
3 b a
A:
Here is a proposition to do an SQL UNION ALL with pandas by using pandas.concat :
list_dfs = [df1, df2]
out = (
pd.concat([pd.DataFrame(sub_df.to_numpy()) for sub_df in list_dfs],
ignore_index=True)
.set_axis(df1.columns, axis=1)
)
# Output :
print(out)
A B
0 a b
1 a b
2 b a
3 b a
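If you just want the positional, SQL-style UNION ALL in one step, renaming the second frame's columns positionally also works (a small variation on the answers above):
pd.concat([df1, df2.set_axis(df1.columns, axis=1)], ignore_index=True)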
| How to concatenate dataframes considering column orders | I want to combine two dataframes:
df1=pd.DataFrame({'A':['a','a',],'B':['b','b']})
df2=pd.DataFrame({'B':['b','b'],'A':['a','a']})
pd.concat([df1,df2],ignore_index=True)
result:
But I want the output to be like this (I want the same code as SQL's union/union all):
| [
"Another way is to use numpy to stack the two dataframes and then use pd.DataFrame constructor:\npd.DataFrame(np.vstack([df1.values,df2.values]), columns = df1.columns)\n\nOutput:\n A B\n0 a b\n1 a b\n2 b a\n3 b a\n\n",
"Here is a proposition to do an SQL UNION ALL with pandas by using pandas.concat :\nlist_dfs = [df1, df2]\n\nout = (\n pd.concat([pd.DataFrame(sub_df.to_numpy()) for sub_df in list_dfs], \n ignore_index=True)\n .set_axis(df1.columns, axis=1)\n )\n\n# Output :\nprint(out)\n\n A B\n0 a b\n1 a b\n2 b a\n3 b a\n\n"
] | [
1,
0
] | [] | [] | [
"pandas",
"python"
] | stackoverflow_0074677671_pandas_python.txt |
Q:
What does it really mean real time object detection?
So here is the context.
I created an script in python, YOLOv4, OpenCV, CUDA and CUDNN, for object detection and object tracking to count the objects in a video. I intend to use it in real time, but what real time really means? The video I'm using is 1min long and 60FPS originally, but the video after processing is 30FPS on average and takes 3mins to finish. So comparing both videos side by side, one is clearly faster. 30FPS is industry standard for movies and stuff. I'm trying to wrap my head around what real time truly means.
Imagine I need to use this information for traffic lights management or use this to lift a bridge for a passing boat, it should be done automatically. It's time sensitive or the chaos would be visible. In these cases, what it trully means to be real time?
A:
First, learn what "real-time" means. Wikipedia: https://en.wikipedia.org/wiki/Real-time_computing
Understand the terms "hard" and "soft" real-time. Understand which aspects of your environment are soft and which require hard real-time.
Understand the response times that your environment requires. Understand the time scales.
This does not involve fuzzy terms like "quick" or "significant" or "accurate". It involves actual quantifiable time spans that depend on your task and its environment, acceptable error rates, ...
You did not share any details about your environment. I find it unlikely that you even need 30 fps for any application involving a road intersection.
You only need enough frame rate so you don't miss objects of interest, and you have fine enough data to track multiple objects with identity without mistaking them for each other.
Example: assume a car moving at 200 km/h. If your camera takes a frame every 1/30 second, the car moves 1.85 meters between frames.
How's your motion blur? What's the camera's exposure time? I'd recommend something on the order of a millisecond or better, giving motion blur of 0.05m
How's your tracking? Can it deal with objects "jumping" that far between frames? Does it generate object identity information that is usable for matching (association)?
A:
Real-time refers to the fact that a system is able to process and respond to data as it is received, without any significant delay. In the context of your object detection and tracking script, real-time would mean that the system is able to process and respond to new frames of the video as they are received, without a significant delay. This would allow the system to accurately count the objects in the video in near-real-time as the video is being played.
In the case of traffic lights management or lifting a bridge for a passing boat, real-time would mean that the system is able to quickly and accurately process data from sensors and other sources, and use that information to make decisions and take actions in a timely manner. This is important in these scenarios because any significant delay in processing and responding to data could have serious consequences, such as traffic accidents or collisions.
Overall, real-time systems are designed to process and respond to data quickly and accurately, in order to support time-sensitive applications and scenarios.
| What does it really mean real time object detection? | So here is the context.
I created an script in python, YOLOv4, OpenCV, CUDA and CUDNN, for object detection and object tracking to count the objects in a video. I intend to use it in real time, but what real time really means? The video I'm using is 1min long and 60FPS originally, but the video after processing is 30FPS on average and takes 3mins to finish. So comparing both videos side by side, one is clearly faster. 30FPS is industry standard for movies and stuff. I'm trying to wrap my head around what real time truly means.
Imagine I need to use this information for traffic lights management or use this to lift a bridge for a passing boat, it should be done automatically. It's time sensitive or the chaos would be visible. In these cases, what it trully means to be real time?
| [
"First, learn what \"real-time\" means. Wikipedia: https://en.wikipedia.org/wiki/Real-time_computing\nUnderstand the terms \"hard\" and \"soft\" real-time. Understand which aspects of your environment are soft and which require hard real-time.\nUnderstand the response times that your environment requires. Understand the time scales.\nThis does not involve fuzzy terms like \"quick\" or \"significant\" or \"accurate\". It involves actual quantifiable time spans that depend on your task and its environment, acceptable error rates, ...\nYou did not share any details about your environment. I find it unlikely that you even need 30 fps for any application involving a road intersection.\nYou only need enough frame rate so you don't miss objects of interest, and you have fine enough data to track multiple objects with identity without mistaking them for each other.\nExample: assume a car moving at 200 km/h. If your camera takes a frame every 1/30 second, the car moves 1.85 meters between frames.\n\nHow's your motion blur? What's the camera's exposure time? I'd recommend something on the order of a millisecond or better, giving motion blur of 0.05m\nHow's your tracking? Can it deal with objects \"jumping\" that far between frames? Does it generate object identity information that is usable for matching (association)?\n\n",
"Real-time refers to the fact that a system is able to process and respond to data as it is received, without any significant delay. In the context of your object detection and tracking script, real-time would mean that the system is able to process and respond to new frames of the video as they are received, without a significant delay. This would allow the system to accurately count the objects in the video in near-real-time as the video is being played.\nIn the case of traffic lights management or lifting a bridge for a passing boat, real-time would mean that the system is able to quickly and accurately process data from sensors and other sources, and use that information to make decisions and take actions in a timely manner. This is important in these scenarios because any significant delay in processing and responding to data could have serious consequences, such as traffic accidents or collisions.\nOverall, real-time systems are designed to process and respond to data quickly and accurately, in order to support time-sensitive applications and scenarios.\n"
] | [
1,
0
] | [] | [] | [
"computer_vision",
"object_detection",
"object_tracking",
"python",
"real_time"
] | stackoverflow_0074677722_computer_vision_object_detection_object_tracking_python_real_time.txt |
Q:
UnicodeDecodeError: 'utf8' codec can't decode byte 0x9c
I have a socket server that is supposed to receive UTF-8 valid characters from clients.
The problem is some clients (mainly hackers) are sending all the wrong kind of data over it.
I can easily distinguish the genuine client, but I am logging to files all the data sent so I can analyze it later.
Sometimes I get characters like œ that cause a UnicodeDecodeError.
I need to be able to make the string UTF-8 with or without those characters.
Update:
For my particular case the socket service was an MTA and thus I only expect to receive ASCII commands such as:
EHLO example.com
MAIL FROM: <[email protected]>
...
I was logging all of this in JSON.
Then some folks out there without good intentions decided to send all kinds of junk.
That is why, for my specific case, it is perfectly OK to strip the non-ASCII characters.
A:
http://docs.python.org/howto/unicode.html#the-unicode-type
str = unicode(str, errors='replace')
or
str = unicode(str, errors='ignore')
Note: This will strip out (ignore) the characters in question, returning the string without them.
For me this is the ideal case, since I'm using it as protection against non-ASCII input, which is not allowed by my application.
Alternatively: Use the open method from the codecs module to read in the file:
import codecs
with codecs.open(file_name, 'r', encoding='utf-8',
errors='ignore') as fdata:
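In Python 3 the unicode type is gone; a minimal sketch of the equivalent, assuming you are starting from raw bytes:
raw = b'foo\x9cbar'
text = raw.decode('utf-8', errors='replace')  # '\x9c' becomes U+FFFD
text = raw.decode('utf-8', errors='ignore')   # '\x9c' is dropped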
A:
Changing the engine from C to Python did the trick for me.
Engine is C:
pd.read_csv(gdp_path, sep='\t', engine='c')
'utf-8' codec can't decode byte 0x92 in position 18: invalid start byte
Engine is Python:
pd.read_csv(gdp_path, sep='\t', engine='python')
No errors for me.
A:
This type of issue crops up for me now that I've moved to Python 3. I had no idea Python 2 was simply steamrolling over any file-encoding issues.
I found this nice explanation of the differences and how to find a solution after none of the above worked for me.
http://python-notes.curiousefficiency.org/en/latest/python3/text_file_processing.html
In short, to make Python 3 behave as similarly as possible to Python 2 use:
with open(filename, encoding="latin-1") as datafile:
# work on datafile here
However, read the article, there is no one size fits all solution.
A:
First, use get_encoding_type to detect the file's encoding:
import os
from chardet import detect
# get file encoding type
def get_encoding_type(file):
with open(file, 'rb') as f:
rawdata = f.read()
return detect(rawdata)['encoding']
Second, open the file with the detected encoding:
open(current_file, 'r', encoding=get_encoding_type(current_file), errors='ignore')
A:
>>> '\x9c'.decode('cp1252')
u'\u0153'
>>> print '\x9c'.decode('cp1252')
œ
A:
I had the same problem with UnicodeDecodeError, and I solved it with this line.
I don't know if it is the best way, but it worked for me.
str = str.decode('unicode_escape').encode('utf-8')
A:
This solution works nicely when using Latin American accents, such as 'ñ'.
I have solved this problem just by adding
df = pd.read_csv(fileName,encoding='latin1')
A:
Just in case someone has the same problem: I'm using vim with YouCompleteMe, and ycmd failed to start with this error message. What I did was export LC_CTYPE="en_US.UTF-8", and the problem was gone.
A:
What can you do if you need to make a change to a file, but don’t know the file’s encoding? If you know the encoding is ASCII-compatible and only want to examine or modify the ASCII parts, you can open the file with the surrogateescape error handler:
with open(fname, 'r', encoding="ascii", errors="surrogateescape") as f:
data = f.read()
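A natural follow-up (a short sketch): writing the data back with the same error handler round-trips the undecodable bytes unchanged:
with open(fname, 'w', encoding="ascii", errors="surrogateescape") as f:
    f.write(data)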
A:
If, as you say, you simply want to permit pure 7-bit ASCII, just discard any bytes that are not. There is no straightforward way to guess what the remote end intended them to represent anyway, without an explicitly specified encoding.
while bytes := socket.read_line_bytes():
try:
string = bytes.decode('us-ascii')
except UnicodeDecodeError as exc:
        logger.warning('[%s] - rejected non-ASCII input %s' % (client, bytes.decode('us-ascii', errors='backslashreplace')))
socket.write(b'421 communication error - non-ASCII content rejected\r\n')
continue
...
A:
I had the same error.
For me, Python complained about the byte "0x87". I looked it up on https://bytetool.web.app/en/ascii/code/0x87/, which told me that this byte belongs to the codec Windows-1252.
I then only added this line to the beginning of my Python file:
#-*- encoding: Windows-1252 -*-
And all errors were gone. Before adding this line, I had tried to import the file with Pandas like this:
Df = pd.read_csv(data, sep=",", engine='python', header=0, encoding='Windows-1252')
but this returned an error. So I changed it back to this:
Df = pd.read_csv(data, sep=",", engine='python', header=0)
| UnicodeDecodeError: 'utf8' codec can't decode byte 0x9c | I have a socket server that is supposed to receive UTF-8 valid characters from clients.
The problem is some clients (mainly hackers) are sending all the wrong kind of data over it.
I can easily distinguish the genuine client, but I am logging to files all the data sent so I can analyze it later.
Sometimes I get characters like this œ that cause the UnicodeDecodeError error.
I need to be able to make the string UTF-8 with or without those characters.
Update:
For my particular case the socket service was an MTA and thus I only expect to receive ASCII commands such as:
EHLO example.com
MAIL FROM: <[email protected]>
...
I was logging all of this in JSON.
Then some folks out there without good intentions decided to send all kind of junk.
That is why for my specific case it is perfectly OK to strip the non ASCII characters.
| [
"http://docs.python.org/howto/unicode.html#the-unicode-type\nstr = unicode(str, errors='replace')\n\nor\nstr = unicode(str, errors='ignore')\n\nNote: This will strip out (ignore) the characters in question returning the string without them.\nFor me this is ideal case since I'm using it as protection against non-ASCII input which is not allowed by my application.\nAlternatively: Use the open method from the codecs module to read in the file:\nimport codecs\nwith codecs.open(file_name, 'r', encoding='utf-8',\n errors='ignore') as fdata:\n\n",
"Changing the engine from C to Python did the trick for me.\nEngine is C:\npd.read_csv(gdp_path, sep='\\t', engine='c')\n\n\n'utf-8' codec can't decode byte 0x92 in position 18: invalid start byte\n\nEngine is Python:\npd.read_csv(gdp_path, sep='\\t', engine='python')\n\nNo errors for me.\n",
"This type of issue crops up for me now that I've moved to Python 3. I had no idea Python 2 was simply steam rolling any issues with file encoding. \nI found this nice explanation of the differences and how to find a solution after none of the above worked for me. \nhttp://python-notes.curiousefficiency.org/en/latest/python3/text_file_processing.html\nIn short, to make Python 3 behave as similarly as possible to Python 2 use:\nwith open(filename, encoding=\"latin-1\") as datafile:\n # work on datafile here\n\nHowever, read the article, there is no one size fits all solution. \n",
"the first,Using get_encoding_type to get the files type of encode:\nimport os \nfrom chardet import detect\n\n# get file encoding type\ndef get_encoding_type(file):\n with open(file, 'rb') as f:\n rawdata = f.read()\n return detect(rawdata)['encoding']\n\nthe second, opening the files with the type:\nopen(current_file, 'r', encoding = get_encoding_type, errors='ignore')\n\n",
">>> '\\x9c'.decode('cp1252')\nu'\\u0153'\n>>> print '\\x9c'.decode('cp1252')\nœ\n\n",
"I had same problem with UnicodeDecodeError and i solved it with this line.\nDon't know if is the best way but it worked for me.\nstr = str.decode('unicode_escape').encode('utf-8')\n\n",
"This solution works nice when using Latin American accents, such as 'ñ'.\nI have solved this problem just by adding\ndf = pd.read_csv(fileName,encoding='latin1')\n\n",
"Just in case of someone has the same problem. I'am using vim with YouCompleteMe, failed to start ycmd with this error message, what I did is: export LC_CTYPE=\"en_US.UTF-8\", the problem is gone.\n",
"What can you do if you need to make a change to a file, but don’t know the file’s encoding? If you know the encoding is ASCII-compatible and only want to examine or modify the ASCII parts, you can open the file with the surrogateescape error handler:\nwith open(fname, 'r', encoding=\"ascii\", errors=\"surrogateescape\") as f:\n data = f.read()\n\n",
"If as you say you simply want to permit pure 7-bit ASCII, just discard any bytes which are not. There is no straightforward way to guess what the remote end intended them to represent anyway, without an explicitly specified encoding.\nwhile bytes := socket.read_line_bytes():\n try:\n string = bytes.decode('us-ascii')\n except UnicodeDecodeError as exc:\n logger.warning('[%s] - rejected non-ASCII input %s' % (client, bytes.decode('us-ascii', errors='backslashreplace'))\n socket.write(b'421 communication error - non-ASCII content rejected\\r\\n')\n continue\n ...\n\n",
"I had the same error.\nFor me, Python complained about the byte \"0x87\". I looked it up on https://bytetool.web.app/en/ascii/code/0x87/ where it told me that this byte belong to the codec Windows-1252.\nI then only added this line to the beginning of my Python file:\n#-*- encoding: Windows-1252 -*-\"\n\nAnd all errors were gone. Before I had added this line, I had tried Pandas to import the file like this:\nDf = pd.read_csv(data, sep=\",\", engine='python', header=0, encoding='Windows-1252')\n\nbut this returned me an error. So I changed it back to this:\nDf = pd.read_csv(data, sep=\",\", engine='python', header=0)\n\n"
] | [
420,
132,
76,
38,
37,
30,
18,
3,
2,
1,
0
] | [
"\ndjango-storage is implicitly supported read byte file in text mode till django-storage == 1.8\nRemoved support in https://github.com/jschneier/django-storages/pull/657\nNeed to specify the binary mode for reading byte files.\n\n"
] | [
-1
] | [
"linux",
"python",
"python_unicode"
] | stackoverflow_0012468179_linux_python_python_unicode.txt |
Q:
python formulas returning 0s
So I have basic formulas set up to receive numbers and then convert them, but when running the program the converted values aren't being calculated.
dollars = 0
pounds = 0
tempF = 0
tempC = 0
globe = "\U0001F30D"
euros = dollars*.95
kilograms = pounds/2.2
tempF = tempC* 9/5+32
print ("How many U.S dollars can you afford to spend on your trip?: ")
dollars = float(input())
print("How many pounds of chocoloate will you be buying?:")
pounds = float(input())
print("What is the tempature in degrees Celsius on the European news?:")
tempC = float(input())
print ("ITINERARY NOTES")
print ("------------------------------------------------------")
print (globe + " you have {:.2f} euros to spend." .format(euros))
print (globe + " Plan to buy {:.2f} of chocolate for family and friends".format(kilograms))
print (globe + " The tempature in Europe is {} degrees F, So dress appropriately.".format(tempF))
How many U.S dollars can you afford to spend on your trip?:
100
How many pounds of chocoloate will you be buying?:
5
What is the tempature in degrees Celsius on the European news?:
15
ITINERARY NOTES
------------------------------------------------------
you have 0.00 euros to spend.
Plan to buy 0.00 of chocolate for family and friends
The temperature in Europe is 32.0 degrees F, So dress appropriately.
A:
Try calculating the results after you input the data, not before.
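The underlying reason, as a minimal illustration: Python evaluates the right-hand side of an assignment once, at the moment that line runs; a variable does not become a live formula that updates when its inputs change later.
dollars = 0
euros = dollars * .95   # evaluated now, so euros is 0.0
dollars = 100           # reassigning dollars does not recompute euros
print(euros)            # still 0.0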
A:
How I would do it
def get_input():
print ("How many U.S dollars can you afford to spend on your trip?: ")
dollars = float(input())
print("How many pounds of chocoloate will you be buying?:")
pounds = float(input())
print("What is the tempature in degrees Celsius on the European news?:")
tempC = float(input())
return [dollars, pounds, tempC]
def dollars_to_euros(dollars):
return 0.95 * dollars
def pounds_to_kilograms(pounds):
return pounds / 2.2
def tempC_to_tempF(tempC):
return tempC * 9 / 5 + 32
globe = "\U0001F30D"
[dollars, pounds, tempC] = get_input()
#move formula calculation after receiving the input
euros = dollars_to_euros(dollars)
kilograms = pounds_to_kilograms(pounds)
tempF = tempC_to_tempF(tempC)
print ("ITINERARY NOTES")
print ("------------------------------------------------------")
print (globe + " you have {:.2f} euros to spend." .format(euros))
print (globe + " Plan to buy {:.2f} of chocolate for family and friends".format(kilograms))
print (globe + " The tempature in Europe is {} degrees F, So dress appropriately.".format(tempF))
Minimal changes to your existing snippet to make it work:
dollars = 0
pounds = 0
tempF = 0
tempC = 0
globe = "\U0001F30D"
print ("How many U.S dollars can you afford to spend on your trip?: ")
dollars = float(input())
print("How many pounds of chocoloate will you be buying?:")
pounds = float(input())
print("What is the tempature in degrees Celsius on the European news?:")
tempC = float(input())
#move formula calculation after receiving the input
euros = dollars*.95
kilograms = pounds/2.2
tempF = tempC* 9/5+32
print ("ITINERARY NOTES")
print ("------------------------------------------------------")
print (globe + " you have {:.2f} euros to spend." .format(euros))
print (globe + " Plan to buy {:.2f} of chocolate for family and friends".format(kilograms))
print (globe + " The tempature in Europe is {} degrees F, So dress appropriately.".format(tempF))
| python formulas returning 0s | so I have basic formulas setup to recive numbers and then covert them but when running the program the converted formulas aren't calculating
dollars = 0
pounds = 0
tempF = 0
tempC = 0
globe = "\U0001F30D"
euros = dollars*.95
kilograms = pounds/2.2
tempF = tempC* 9/5+32
print ("How many U.S dollars can you afford to spend on your trip?: ")
dollars = float(input())
print("How many pounds of chocoloate will you be buying?:")
pounds = float(input())
print("What is the tempature in degrees Celsius on the European news?:")
tempC = float(input())
print ("ITINERARY NOTES")
print ("------------------------------------------------------")
print (globe + " you have {:.2f} euros to spend." .format(euros))
print (globe + " Plan to buy {:.2f} of chocolate for family and friends".format(kilograms))
print (globe + " The tempature in Europe is {} degrees F, So dress appropriately.".format(tempF))
How many U.S dollars can you afford to spend on your trip?:
100
How many pounds of chocoloate will you be buying?:
5
What is the tempature in degrees Celsius on the European news?:
15
ITINERARY NOTES
------------------------------------------------------
you have 0.00 euros to spend.
Plan to buy 0.00 of chocolate for family and friends
The temperature in Europe is 32.0 degrees F, So dress appropriately.
| [
"Try calculating the results after you input the data, not before that\n",
"How I would do it\ndef get_input():\n print (\"How many U.S dollars can you afford to spend on your trip?: \")\n dollars = float(input())\n\n print(\"How many pounds of chocoloate will you be buying?:\")\n pounds = float(input())\n\n print(\"What is the tempature in degrees Celsius on the European news?:\")\n tempC = float(input())\n return [dollars, pounds, tempC]\n\ndef dollars_to_euros(dollars):\n return 0.95 * dollars\n\ndef pounds_to_kilograms(pounds):\n return pounds / 2.2\n\ndef tempC_to_tempF(tempC):\n return tempC * 9 / 5 + 32\n\nglobe = \"\\U0001F30D\"\n\n[dollars, pounds, tempC] = get_input()\n\n#move formula calculation after receiving the input\neuros = dollars_to_euros(dollars)\nkilograms = pounds_to_kilograms(pounds)\ntempF = tempC_to_tempF(tempC)\n\n\nprint (\"ITINERARY NOTES\")\nprint (\"------------------------------------------------------\")\n\nprint (globe + \" you have {:.2f} euros to spend.\" .format(euros))\nprint (globe + \" Plan to buy {:.2f} of chocolate for family and friends\".format(kilograms))\nprint (globe + \" The tempature in Europe is {} degrees F, So dress appropriately.\".format(tempF))\n\nMinimal changes to your existing snippet to make it work:\ndollars = 0\npounds = 0\ntempF = 0\ntempC = 0\nglobe = \"\\U0001F30D\"\n\n\nprint (\"How many U.S dollars can you afford to spend on your trip?: \")\ndollars = float(input())\n\nprint(\"How many pounds of chocoloate will you be buying?:\")\npounds = float(input())\n\nprint(\"What is the tempature in degrees Celsius on the European news?:\")\ntempC = float(input())\n\n#move formula calculation after receiving the input\neuros = dollars*.95\nkilograms = pounds/2.2\ntempF = tempC* 9/5+32 \n\n\nprint (\"ITINERARY NOTES\")\nprint (\"------------------------------------------------------\")\n\nprint (globe + \" you have {:.2f} euros to spend.\" .format(euros))\nprint (globe + \" Plan to buy {:.2f} of chocolate for family and friends\".format(kilograms))\nprint (globe + \" The tempature in Europe is {} degrees F, So dress appropriately.\".format(tempF))\n\n"
] | [
1,
0
] | [] | [] | [
"python",
"python_3.x"
] | stackoverflow_0074679904_python_python_3.x.txt |
Q:
Getting "Failed to convert a NumPy array to a Tensor (Unsupported object type list)."
For the whole week I've been training my AI model, but it keeps failing with this "Failed to convert a NumPy array to a Tensor" issue. I'm using a dataset I created for this model, containing 100k+ movie plots, but it shows the same issue again and again when I call "model.fit(...)".
Error
This is the code I'm using
# Importing the dataset
filename = "MoviePlots.csv"
data = pd.read_csv(filename, encoding= 'unicode_escape')
# Keeping only the necessary columns
data = data[['Plot']]
# Keep only rows where 'Plot' is a string
data = data[data['Plot'].apply(lambda x: isinstance(x, str))]
# Clean the data
data['Plot'] = data['Plot'].apply(lambda x: x.lower())
data['Plot'] = data['Plot'].apply((lambda x: re.sub('[^a-zA-z0-9\s]', '', x)))
# Create the tokenizer
tokenizer = Tokenizer(num_words=5000, split=" ")
tokenizer.fit_on_texts(data['Plot'].values)
# Save the tokenizer
with open('tokenizer.pickle', 'wb') as handle:
pickle.dump(tokenizer, handle, protocol=pickle.HIGHEST_PROTOCOL)
# Create the sequences
X = tokenizer.texts_to_sequences(data['Plot'].values)
Y = pad_sequences(X)
# Create the model
model = Sequential()
model.add(Embedding(5000, 256, input_length=Y.shape[1]))
model.add(Bidirectional(LSTM(256, return_sequences=True, dropout=0.1, recurrent_dropout=0.1)))
model.add(LSTM(256, return_sequences=True, dropout=0.1, recurrent_dropout=0.1))
model.add(LSTM(256, dropout=0.1, recurrent_dropout=0.1))
model.add(Dense(256, activation='relu', kernel_regularizer=regularizers.l2(0.01)))
model.add(Dense(5000, activation='softmax'))
# Compile the model
model.compile(loss='categorical_crossentropy', optimizer=Adam(lr=0.01), metrics=['accuracy'])
# Train the model
model.fit(X, X, epochs=500, batch_size=256, verbose=1)
I have tried several other methods but the issue remains the same
epochs=500
model.fit(X, X, verbose=2)
Any help will be really appreciated! Thanks!!!
A:
There are many possible ways; one of them is to create a dataset, since your error message indicates a mismatched data type for model.fit().
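For reference, a minimal sketch of the most direct fix (my assumption, not part of the original answer): model.fit cannot build a tensor from the ragged list of lists that texts_to_sequences returns, but it can from a rectangular padded array like the Y the question already computes.
import numpy as np

# hypothetical fix: train on the padded, rectangular array instead of the ragged list X
X_padded = np.asarray(pad_sequences(X))
model.fit(X_padded, X_padded, epochs=500, batch_size=256, verbose=1)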
Sample: Transform input word by vocab and match their string bytes, or tokenize them.
import tensorflow as tf
import tensorflow_text as tft
import json
input_word = tf.constant(' \'Cause it\'s easy as an ice cream sundae Slipping outta your hand into the dirt Easy as an ice cream sundae Every dancer gets a little hurt Easy as an ice cream sundae Slipping outta your hand into the dirt Easy as an ice cream sundae Every dancer gets a little hurt Easy as an ice cream sundae Oh, easy as an ice cream sundae ')
vocab = [ "a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m", "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z", "_",
"A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M", "N", "O", "P", "Q", "R", "S", "T", "U", "V", "W", "X", "Y", "Z"]
layer = tf.keras.layers.StringLookup(vocabulary=vocab)
sequences_mapping_string = layer(tf.strings.bytes_split(input_word))
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
: Method 1 create label from map it with vocaburary
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
print( 'input_word: ' + str(input_word) )
print( " " )
print( tf.strings.bytes_split(input_word) )
print( sequences_mapping_string )
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
: Method 2 create label from it tokenizer
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
text = "Cause its easy as an ice cream sundae Slipping outta your hand"
tokenizer = tf.keras.preprocessing.text.Tokenizer(num_words=10000, oov_token='oov', filters='!"#$%&()*+,-./:;<=>?@[\\]^_`{|}~\t\n', lower=True,)
tokenizer.fit_on_texts([text])
i_count = tf.strings.split([text])[0].shape[0] + 1
aDict = json.loads(tokenizer.to_json())
text_input = tf.constant([''], shape=())
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
: Class / Functions
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
def auto_paddings( data, max_sequences=15 ):
data = tf.constant( data, shape=(data.shape[0], 1) )
paddings = tf.constant([[1, 15 - data.shape[0] - 1], [0, 0]])
padd_data = tf.pad( data, paddings, "CONSTANT" )
padd_data = tf.constant( padd_data, shape=(15, 1) ).numpy()
return padd_data
input_word = tf.zeros([1, 15, 1], dtype=tf.int64)
input_label = tf.ones([1, 1, 1], dtype=tf.int64)
for i in range(i_count):
word = json.loads(aDict['config']['index_word'])[str(i + 1)]
i_word = layer(tf.strings.bytes_split(word))
padd_data = tf.constant(auto_paddings( i_word, 15 ), shape=(1, 15, 1))
index = json.loads(aDict['config']['word_index'])[word]
if i > 0:
input_word = tf.experimental.numpy.vstack([input_word, padd_data])
input_label = tf.experimental.numpy.vstack([input_label, tf.constant(index, shape=(1, 1, 1))])
dataset = tf.data.Dataset.from_tensors(( input_word, input_label ))
for d in dataset:
print(d)
print( " ==================================================== " )
Output: Input word as a string
input_word: tf.Tensor(b" 'Cause it's easy as an ice cream sundae Slipping outta your hand into the dirt Easy as an ice cream sundae Every dancer gets a little hurt Easy as an ice cream sundae Slipping outta your hand into the dirt Easy as an ice cream sundae Every dancer gets a little hurt Easy as an ice cream sundae Oh, easy as an ice cream sundae ", shape=(), dtype=string)
Output: String to bytes as a splitters.
tf.Tensor(
[b' ' b"'" b'C' b'a' b'u' b's' b'e' b' ' b'i' b't' b"'" b's' b' ' b'e'
b'a' b's' b'y' b' ' b'a' b's' b' ' b'a' b'n' b' ' b'i' b'c' b'e' b' '
...
b'n' b'd' b'a' b'e' b' '], shape=(327,), dtype=string)
Output: Sequence mapping a string to phones.
tf.Tensor(
[ 0 0 30 1 21 19 5 0 9 20 0 19 0 5 1 19 25 0 1 19 0 1 14 0
9 3 5 0 3 18 5 1 13 0 19 21 14 4 1 5 0 46 12 9 16 16 9 14
...
5 0 3 18 5 1 13 0 19 21 14 4 1 5 0], shape=(327,), dtype=int64)
Output: A string input, required of list conversion or array-like none repeats.
Cause its easy as an ice cream sundae Slipping outta your hand
Output: A dataset creates from input_word and name label.
(<tf.Tensor: shape=(13, 15, 1), dtype=int64, numpy=
array([[[ 0],
[ 0],
...
[ 0]]], dtype=int64)>, <tf.Tensor: shape=(13, 1, 1), dtype=int64, numpy=
array([[[ 1]],
[[[ 2]]
...
[[13]]], dtype=int64)>)
====================================================
Application: Word input compares process from slide X windows channel.
dataset = tf.data.Dataset.from_tensors( tf.strings.bytes_split(input_word) )
window_size = 6
dataset = dataset.map(lambda x: tft.sliding_window(x, width=window_size, axis=0)).flat_map(tf.data.Dataset.from_tensor_slices)
Application: Wireless breaks.
mapping_vocab = [ "_", "I", "l", "o", "v", "e", "c", "a", "t", "s" ]
string_matching = [ 27, 9, 12, 15, 22, 5, 3, 1, 20, 19 ]
string_matching_reverse = [ 1/27, 1/9, 1/12, 1/15, 1/22, 1/5, 1/3, 1/1, 1/20, 1/19 ]
print( tf.math.multiply( tf.constant(string_matching, dtype=tf.float32), tf.constant(string_matching_reverse, dtype=tf.float32 ), name=None ) )
Output: encode and decodes, each number represents bytes you may replace with trained parameters.
encode: tf.Tensor([[27 27 27 9 12 15 22 5 3 1 20 19]], shape=(1, 12), dtype=int64)
decode: tf.Tensor([[b'_' b'_' b'_' b'I' b'l' b'o' b'v' b'e' b'c' b'a' b't' b's']], shape=(1, 12), dtype=string)
tf.Tensor([1. 1. 1. 1. 1. 1. 1. 1. 1. 1.], shape=(10,), dtype=float32)
| Getting "Failed to convert a NumPy array to a Tensor (Unsupported object type list)." | From the whole week I'm training my AI model but it is facing some this issue of Failed to convert Numpy array to a tensor my I'm using the dataset I created for this model containing 100k+ movie plots but again and again its showing the same issue when I call "model.fit(...)"
Error
This is the code I'm using
# Importing the dataset
filename = "MoviePlots.csv"
data = pd.read_csv(filename, encoding= 'unicode_escape')
# Keeping only the neccessary columns
data = data[['Plot']]
# Keep only rows where 'Plot' is a string
data = data[data['Plot'].apply(lambda x: isinstance(x, str))]
# Clean the data
data['Plot'] = data['Plot'].apply(lambda x: x.lower())
data['Plot'] = data['Plot'].apply((lambda x: re.sub('[^a-zA-z0-9\s]', '', x)))
# Create the tokenizer
tokenizer = Tokenizer(num_words=5000, split=" ")
tokenizer.fit_on_texts(data['Plot'].values)
# Save the tokenizer
with open('tokenizer.pickle', 'wb') as handle:
pickle.dump(tokenizer, handle, protocol=pickle.HIGHEST_PROTOCOL)
# Create the sequences
X = tokenizer.texts_to_sequences(data['Plot'].values)
Y = pad_sequences(X)
# Create the model
model = Sequential()
model.add(Embedding(5000, 256, input_length=Y.shape[1]))
model.add(Bidirectional(LSTM(256, return_sequences=True, dropout=0.1, recurrent_dropout=0.1)))
model.add(LSTM(256, return_sequences=True, dropout=0.1, recurrent_dropout=0.1))
model.add(LSTM(256, dropout=0.1, recurrent_dropout=0.1))
model.add(Dense(256, activation='relu', kernel_regularizer=regularizers.l2(0.01)))
model.add(Dense(5000, activation='softmax'))
# Compile the model
model.compile(loss='categorical_crossentropy', optimizer=Adam(lr=0.01), metrics=['accuracy'])
# Train the model
model.fit(X, X, epochs=500, batch_size=256, verbose=1)
I have tried several other methods but the issue remains the same
epochs=500
model.fit(X, X, verbose=2)
Any help will be really appreciated! Thanks!!!
| [
"there are many possible ways one of them is to create as a dataset as your error message indicated a mismatched datatype for model.fit()\nSample: Transform input word by vocab and match their string bytes, or tokenize them.\nimport tensorflow as tf\nimport tensorflow_text as tft\n\nimport json\n\ninput_word = tf.constant(' \\'Cause it\\'s easy as an ice cream sundae Slipping outta your hand into the dirt Easy as an ice cream sundae Every dancer gets a little hurt Easy as an ice cream sundae Slipping outta your hand into the dirt Easy as an ice cream sundae Every dancer gets a little hurt Easy as an ice cream sundae Oh, easy as an ice cream sundae ')\nvocab = [ \"a\", \"b\", \"c\", \"d\", \"e\", \"f\", \"g\", \"h\", \"i\", \"j\", \"k\", \"l\", \"m\", \"n\", \"o\", \"p\", \"q\", \"r\", \"s\", \"t\", \"u\", \"v\", \"w\", \"x\", \"y\", \"z\", \"_\", \n\"A\", \"B\", \"C\", \"D\", \"E\", \"F\", \"G\", \"H\", \"I\", \"J\", \"K\", \"L\", \"M\", \"N\", \"O\", \"P\", \"Q\", \"R\", \"S\", \"T\", \"U\", \"V\", \"W\", \"X\", \"Y\", \"Z\"]\nlayer = tf.keras.layers.StringLookup(vocabulary=vocab)\nsequences_mapping_string = layer(tf.strings.bytes_split(input_word))\n\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n: Method 1 create label from map it with vocaburary\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\nprint( 'input_word: ' + str(input_word) )\nprint( \" \" )\nprint( tf.strings.bytes_split(input_word) )\nprint( sequences_mapping_string )\n\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n: Method 2 create label from it tokenizer\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\ntext = \"Cause its easy as an ice cream sundae Slipping outta your hand\"\ntokenizer = tf.keras.preprocessing.text.Tokenizer(num_words=10000, oov_token='oov', filters='!\"#$%&()*+,-./:;<=>?@[\\\\]^_`{|}~\\t\\n', lower=True,)\ntokenizer.fit_on_texts([text])\n\ni_count = tf.strings.split([text])[0].shape[0] + 1\naDict = json.loads(tokenizer.to_json())\ntext_input = tf.constant([''], shape=())\n\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n: Class / Functions\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\ndef auto_paddings( data, max_sequences=15 ):\n data = tf.constant( data, shape=(data.shape[0], 1) )\n paddings = tf.constant([[1, 15 - data.shape[0] - 1], [0, 0]])\n padd_data = tf.pad( data, paddings, \"CONSTANT\" )\n padd_data = tf.constant( padd_data, shape=(15, 1) ).numpy()\n return padd_data\n\n\ninput_word = tf.zeros([1, 15, 1], dtype=tf.int64)\ninput_label = tf.ones([1, 1, 1], dtype=tf.int64)\n\nfor i in range(i_count):\n word = json.loads(aDict['config']['index_word'])[str(i + 1)]\n i_word = layer(tf.strings.bytes_split(word))\n padd_data = tf.constant(auto_paddings( i_word, 15 ), shape=(1, 15, 1))\n \n index = json.loads(aDict['config']['word_index'])[word]\n\n if i > 0:\n input_word = tf.experimental.numpy.vstack([input_word, padd_data])\n input_label = tf.experimental.numpy.vstack([input_label, tf.constant(index, shape=(1, 1, 1))])\n\n\ndataset = tf.data.Dataset.from_tensors(( input_word, input_label ))\nfor d in dataset:\n print(d)\n\nprint( \" ==================================================== \" 
)\n\nOutput: Input word as a string\ninput_word: tf.Tensor(b\" 'Cause it's easy as an ice cream sundae Slipping outta your hand into the dirt Easy as an ice cream sundae Every dancer gets a little hurt Easy as an ice cream sundae Slipping outta your hand into the dirt Easy as an ice cream sundae Every dancer gets a little hurt Easy as an ice cream sundae Oh, easy as an ice cream sundae \", shape=(), dtype=string)\n\nOutput: String to bytes as a splitters.\ntf.Tensor(\n[b' ' b\"'\" b'C' b'a' b'u' b's' b'e' b' ' b'i' b't' b\"'\" b's' b' ' b'e'\n b'a' b's' b'y' b' ' b'a' b's' b' ' b'a' b'n' b' ' b'i' b'c' b'e' b' '\n ...\n b'n' b'd' b'a' b'e' b' '], shape=(327,), dtype=string)\n\nOutput: Sequence mapping a string to phones.\ntf.Tensor(\n[ 0 0 30 1 21 19 5 0 9 20 0 19 0 5 1 19 25 0 1 19 0 1 14 0\n 9 3 5 0 3 18 5 1 13 0 19 21 14 4 1 5 0 46 12 9 16 16 9 14\n ...\n 5 0 3 18 5 1 13 0 19 21 14 4 1 5 0], shape=(327,), dtype=int64)\n\nOutput: A string input, required of list conversion or array-like none repeats.\nCause its easy as an ice cream sundae Slipping outta your hand\n\nOutput: A dataset creates from input_word and name label.\n(<tf.Tensor: shape=(13, 15, 1), dtype=int64, numpy=\n array([[[ 0],\n [ 0],\n ...\n [ 0]]], dtype=int64)>, <tf.Tensor: shape=(13, 1, 1), dtype=int64, numpy=\n array([[[ 1]],\n [[[ 2]]\n ...\n [[13]]], dtype=int64)>)\n ====================================================\n\nApplication: Word input compares process from slide X windows channel.\ndataset = tf.data.Dataset.from_tensors( tf.strings.bytes_split(input_word) )\nwindow_size = 6\ndataset = dataset.map(lambda x: tft.sliding_window(x, width=window_size, axis=0)).flat_map(tf.data.Dataset.from_tensor_slices)\n\nApplication: Wireless breaks.\nmapping_vocab = [ \"_\", \"I\", \"l\", \"o\", \"v\", \"e\", \"c\", \"a\", \"t\", \"s\" ]\nstring_matching = [ 27, 9, 12, 15, 22, 5, 3, 1, 20, 19 ]\nstring_matching_reverse = [ 1/27, 1/9, 1/12, 1/15, 1/22, 1/5, 1/3, 1/1, 1/20, 1/19 ]\n\nprint( tf.math.multiply( tf.constant(string_matching, dtype=tf.float32), tf.constant(string_matching_reverse, dtype=tf.float32 ), name=None ) )\n\nOutput: encode and decodes, each number represents bytes you may replace with trained parameters.\nencode: tf.Tensor([[27 27 27 9 12 15 22 5 3 1 20 19]], shape=(1, 12), dtype=int64)\ndecode: tf.Tensor([[b'_' b'_' b'_' b'I' b'l' b'o' b'v' b'e' b'c' b'a' b't' b's']], shape=(1, 12), dtype=string)\ntf.Tensor([1. 1. 1. 1. 1. 1. 1. 1. 1. 1.], shape=(10,), dtype=float32)\n\n"
] | [
0
] | [] | [] | [
"artificial_intelligence",
"deep_learning",
"neural_network",
"python",
"tensorflow"
] | stackoverflow_0074677664_artificial_intelligence_deep_learning_neural_network_python_tensorflow.txt |
Q:
Neural Networks Extending Learning Domain
I have a simple function f : R -> R, f(x) = x^2 + a, and would like to create a neural network to learn that function as fully as it can. Currently, I have a PyTorch implementation that, of course, takes inputs over a limited range, from x0 to xN, with a particular number of points. Each epoch, the training data is randomly perturbed, in an effort to avoid learning the relationship only on the same grid points each time.
Currently, it does a great job of learning the function on the range it is trained on, but is it at all feasible to train in such a way as to extend this learning beyond the training range? Currently, the behavior outside the training range seems dependent on the activation function. For example, with ReLU, the true function (orange) compared to the network's prediction (blue) is below:
I understand that if I transform the input vector to higher dimensions that contain higher powers of x, it may work out pretty well (see the sketch below), but for a generalized case, and for how I plan to implement this in the future, it won't work as well on non-polynomial functions.
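For concreteness, a minimal sketch of that feature-lifting idea (my own illustration, not from the question): feeding the network [x, x^2, x^3] instead of x lets even a linear output layer represent any cubic exactly.
import torch

x = torch.linspace(-2, 2, 101).unsqueeze(1)        # shape (101, 1)
features = torch.cat([x, x**2, x**3], dim=1)       # input lifted to powers of x, shape (101, 3)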
One thought that came to mind is from support vector machines and the choice of a kernel, and how the radial basis kernel gets around this generalization issue, but I'm not sure if this can be applied here without the inner product properties of svm.
A:
What you want is called extrapolation (as opposed to interpolation which is predicting a value that is inside the trained domain / range). There is never a good solution for extrapolation and using higher powers can give you a better fit for a specific problem, but if you change the fitted curve slightly (either change its x and y-intercept, one of the powers, etc) the extrapolation will be pretty bad again.
This is also why neural networks use a large data set (to maximize their input range and rely on interpolation) and why over-training / over fitting (which is what you're trying to do) is a bad idea; it never works well in the general case.
| Neural Networks Extending Learning Domain | I have a simple function f : R->R, f(x) = x2 + a, and would like to create a neural network to learn that function, as entirely as it can. Currently, I have a pytorch implementation that takes in inputs of a limited range of course, from x0 to xN with a particular number of points. Each epoch, the training data is randomly perturbed, in efforts to not only learn the relationship on the same grid points each time.
Currently, it does a great job of learning on the function on the range it is trained on, but is it at all feasible to train in such a way that can extend this learning beyond what it is trained on? Currently the behavior outside the training range seems dependent on the activation function. For example, with ReLU, the true function (orange) compared to the networks prediction (blue) are below:
I understand that if I transform the input vector to higher dimensions that contain higher powers of x, it may work out pretty well, but for a generalized case and how I plan to implement this in the future it won't work as well on non-polynomial functions.
One thought that came to mind is from support vector machines and the choice of a kernel, and how the radial basis kernel gets around this generalization issue, but I'm not sure if this can be applied here without the inner product properties of svm.
| [
"What you want is called extrapolation (as opposed to interpolation which is predicting a value that is inside the trained domain / range). There is never a good solution for extrapolation and using higher powers can give you a better fit for a specific problem, but if you change the fitted curve slightly (either change its x and y-intercept, one of the powers, etc) the extrapolation will be pretty bad again.\nThis is also why neural networks use a large data set (to maximize their input range and rely on interpolation) and why over-training / over fitting (which is what you're trying to do) is a bad idea; it never works well in the general case.\n"
] | [
0
] | [] | [] | [
"python",
"pytorch"
] | stackoverflow_0074679929_python_pytorch.txt |
Q:
TypeError: 'list' object is not callable, on a function
I am struggling to understand why Python is throwing this error on a function call:
Traceback (most recent call last):
File "/home/arksdf/Repos/alura/Iesb_DeepLearning/tentativas/teste.py", line 40, in <module>
model, train_loss, valid_loss = r.classificacao(optimizer, criterion)
File "/home/arksdf/Repos/alura/Iesb_DeepLearning/tentativas/Runner.py", line 21, in classificacao
model, train_loss, valid_loss = t.train(self.model, optimizer, criterion)
TypeError: 'list' object is not callable
I'm pretty sure the only thing I'm calling in this line is a function, and functions, as far as I know, are callable
class Classificador():
def __init__(self, dataset, model, epochs = 2000, batch_size = 25, early_stopping_epochs = 60):
self.dataset = dataset
self.model = model
self.epochs = epochs
self.early_stopping_epochs = early_stopping_epochs # quantas épocas sem melhoria serão toleradas antes de parar o treinamento
self.batch_size = batch_size
def classificacao(self, optimizer, criterion):
t = Treinamento(self.dataset, self.epochs, self.batch_size, self.early_stopping_epochs)
model, train_loss, valid_loss = t.train(self.model, optimizer, criterion)
return model, train_loss, valid_loss
So why is it throwing this specific error? What bugs me is that it doesn't point me to any specific line inside t.train; it just says that the call is wrong.
This is the whole Treinamento class, in case something in it helps find what I missed (it is quite big, though).
import torch
import numpy as np
from tqdm import tqdm
from datetime import datetime
from sklearn.model_selection import KFold
from Reader import *
class Treinamento():
def __init__(self, dataset,
n_epochs=10,
batch_size=1,
early_stopping_epochs=10):
read = Reader(dataset)
self.train, self.valid = read.read()
self.X_train, self.X_test, self.y_train, self.y_test = self.train
self.X_valid, self.X_test, self.y_valid, self.y_test = self.valid
self.n_epochs = n_epochs
self.batch_size = batch_size
self.early_stopping_epochs = early_stopping_epochs
# UTILS
def get_batches(self, data, batch_size=1):
batches = []
data_size = len(data)
for start_idx in range(0, data_size, batch_size):
end_idx = min(data_size, start_idx + batch_size)
batches.append(data[start_idx:end_idx])
return batches
def load_best_model(self, model, best_epoch, best_valid_loss, best_train_loss, epochs_without_improv):
# Load best model
model.load_state_dict(torch.load('best_model'))
model.eval()
# Print logs
if epochs_without_improv >= self.early_stopping_epochs:
print('Training interrupted by early stopping!')
else:
print('Training finished by epochs!')
print(f'Total epochs run: {epoch + 1}')
print(f'Best model found at epoch {best_epoch + 1} with valid loss {best_valid_loss} and training loss {best_train_loss}')
###############################################################################################################################
#################################### LOSSES ###################################################################################
###############################################################################################################################
def train_loss(self, X_train, y_train, optimizer, criterion, model):
model.train()
acc_train_loss = 0.0
for index, (original_data, original_target) in enumerate(zip(self.get_batches(X_train, self.batch_size),
self.get_batches(y_train, self.batch_size))):
# Format data to tensor
target = (original_target == 1).nonzero(as_tuple=True)[1]
data = original_data.float() # Esse '.float()' é necessário para arrumar o tipo do dado
# target = target.cuda()
# data = data.cuda()
optimizer.zero_grad()
# model.forward(data)
predicted = model(data)
loss = criterion(predicted, target)
# Backprop
loss.backward()
optimizer.step()
acc_train_loss += loss.item()
return acc_train_loss
def valid_loss(self, X_valid, y_valid, criterion, model):
model.eval()
acc_valid_loss = 0.0
for index, (original_data, original_target) in enumerate(zip(self.get_batches(X_valid, self.batch_size),
self.get_batches(y_valid, self.batch_size))):
# Format data to tensor
target = (original_target == 1).nonzero(as_tuple=True)[1]
data = original_data.float() # Esse '.float()' é necessário para arrumar o tipo do dado
# target = target.cuda()
# data = data.cuda()
# model.forward(data)
predicted = model(data)
loss = criterion(predicted, target)
acc_valid_loss += loss.item()
return acc_valid_loss
###############################################################################################################################
#################################### TREINAMENTOS #############################################################################
###############################################################################################################################
def train(self, model, optimizer, criterion):
init = datetime.now()
best_epoch = None
best_valid_loss = np.Inf
best_train_loss = None
epochs_without_improv = 0
train_loss = []
valid_loss = []
for epoch in tqdm(range(self.n_epochs)):
###################
# early stopping? #
###################
if epochs_without_improv >= self.early_stopping_epochs:
break
###################
# train the model #
###################
acc_train_loss = self.train_loss(torch.from_numpy(self.X_train),
torch.from_numpy(self.y_train.to_numpy()),
optimizer,
criterion,
model)
train_loss.append(acc_train_loss)
###################
# valid the model #
###################
acc_valid_loss = self.valid_loss(torch.from_numpy(self.X_valid),
torch.from_numpy(self.y_valid.to_numpy()),
criterion,
model)
valid_loss.append(acc_valid_loss)
#####################
# Update best model #
#####################
if acc_valid_loss < best_valid_loss:
torch.save(model.state_dict(), 'best_model') # save best model
best_epoch = epoch
best_valid_loss = acc_valid_loss
best_train_loss = acc_train_loss
epochs_without_improv = 0
else:
epochs_without_improv += 1
self.load_best_model(model, best_epoch, best_valid_loss, best_train_loss, epochs_without_improv)
end = datetime.now()
print(f'Total training time: {end - init}')
return model, train_loss, valid_loss
def train_cross_validation(self, model, optimizer, criterion):
init = datetime.now()
best_epoch = None
best_valid_loss = np.Inf
best_train_loss = None
epochs_without_improv = 0
train_loss = []
valid_loss = []
kf = KFold(n_splits=4, random_state=1, shuffle=True)
split = kf.split(self.train)
for idx, (train_idx, valid_idx) in enumerate(split):
print('Index {}'.format(idx + 1))
y_cros_train, y_cros_valid = y_train.iloc[train_idx], y_test.iloc[valid_idx]
X_cros_train, X_cros_valid = X_train[train_idx,:], X_test[valid_idx,:]
for epoch in tqdm(range(self.n_epochs)):
if epochs_without_improv >= self.early_stopping_epochs:
break
###################
# train the model #
###################
acc_train_loss = self.cross_train(torch.from_numpy(X_cros_train),
torch.from_numpy(y_cros_train.to_numpy()),
optimizer,
criterion,
model)
train_loss.append(acc_train_loss)
###################
# valid the model #
###################
acc_valid_loss = self.cross_valid(torch.from_numpy(X_cros_valid),
torch.from_numpy(y_cros_valid.to_numpy()),
criterion,
model)
valid_loss.append(acc_valid_loss)
if acc_valid_loss < best_valid_loss:
torch.save(model.state_dict(), 'best_model') # save best model
best_epoch = epoch
best_valid_loss = acc_valid_loss
best_train_loss = acc_train_loss
epochs_without_improv = 0
else:
epochs_without_improv += 1
self.load_best_model(model, best_epoch, best_valid_loss, best_train_loss, epochs_without_improv)
end = datetime.now()
print(f'Total training time: {end - init}')
return model, train_loss, valid_loss
A:
As @Michael Butcher answered, there was a variable with the same name as my function, train; renaming the function fixed the issue.
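A minimal reproduction of that shadowing (a hypothetical class, for illustration): in Treinamento.__init__, the line self.train, self.valid = read.read() binds a list to the attribute train, and an instance attribute takes precedence over the method of the same name.
class Demo:
    def __init__(self):
        self.train = [1, 2, 3]   # instance attribute shadows the method below

    def train(self):
        return "training"

d = Demo()
d.train()  # TypeError: 'list' object is not callable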
| TypeError: 'list' object is not callable, on a function | I am struggling to understand why is python throwing this error, to a function:
Traceback (most recent call last):
File "/home/arksdf/Repos/alura/Iesb_DeepLearning/tentativas/teste.py", line 40, in <module>
model, train_loss, valid_loss = r.classificacao(optimizer, criterion)
File "/home/arksdf/Repos/alura/Iesb_DeepLearning/tentativas/Runner.py", line 21, in classificacao
model, train_loss, valid_loss = t.train(self.model, optimizer, criterion)
TypeError: 'list' object is not callable
I'm pretty sure the only thing I'm calling in this line is a function, and functions, as far as I know, are callable
class Classificador():
def __init__(self, dataset, model, epochs = 2000, batch_size = 25, early_stopping_epochs = 60):
self.dataset = dataset
self.model = model
self.epochs = epochs
self.early_stopping_epochs = early_stopping_epochs # quantas épocas sem melhoria serão toleradas antes de parar o treinamento
self.batch_size = batch_size
def classificacao(self, optimizer, criterion):
t = Treinamento(self.dataset, self.epochs, self.batch_size, self.early_stopping_epochs)
model, train_loss, valid_loss = t.train(self.model, optimizer, criterion)
return model, train_loss, valid_loss
So why is it throwing this specific error? What bugs me is that it just don't send me to any specific line inside t.train it just says that the call is wrong
This is the whole Treinamento class, in case there is something that might help find what I missed (is quite big tho)
import torch
import numpy as np
from tqdm import tqdm
from datetime import datetime
from sklearn.model_selection import KFold
from Reader import *
class Treinamento():
def __init__(self, dataset,
n_epochs=10,
batch_size=1,
early_stopping_epochs=10):
read = Reader(dataset)
self.train, self.valid = read.read()
self.X_train, self.X_test, self.y_train, self.y_test = self.train
self.X_valid, self.X_test, self.y_valid, self.y_test = self.valid
self.n_epochs = n_epochs
self.batch_size = batch_size
self.early_stopping_epochs = early_stopping_epochs
# UTILS
def get_batches(self, data, batch_size=1):
batches = []
data_size = len(data)
for start_idx in range(0, data_size, batch_size):
end_idx = min(data_size, start_idx + batch_size)
batches.append(data[start_idx:end_idx])
return batches
def load_best_model(self, model, best_epoch, best_valid_loss, best_train_loss, epochs_without_improv):
# Load best model
model.load_state_dict(torch.load('best_model'))
model.eval()
# Print logs
if epochs_without_improv >= self.early_stopping_epochs:
print('Training interrupted by early stopping!')
else:
print('Training finished by epochs!')
print(f'Total epochs run: {epoch + 1}')
print(f'Best model found at epoch {best_epoch + 1} with valid loss {best_valid_loss} and training loss {best_train_loss}')
###############################################################################################################################
#################################### LOSSES ###################################################################################
###############################################################################################################################
def train_loss(self, X_train, y_train, optimizer, criterion, model):
model.train()
acc_train_loss = 0.0
for index, (original_data, original_target) in enumerate(zip(self.get_batches(X_train, self.batch_size),
self.get_batches(y_train, self.batch_size))):
# Format data to tensor
target = (original_target == 1).nonzero(as_tuple=True)[1]
data = original_data.float() # Esse '.float()' é necessário para arrumar o tipo do dado
# target = target.cuda()
# data = data.cuda()
optimizer.zero_grad()
# model.forward(data)
predicted = model(data)
loss = criterion(predicted, target)
# Backprop
loss.backward()
optimizer.step()
acc_train_loss += loss.item()
return acc_train_loss
def valid_loss(self, X_valid, y_valid, criterion, model):
model.eval()
acc_valid_loss = 0.0
for index, (original_data, original_target) in enumerate(zip(self.get_batches(X_valid, self.batch_size),
self.get_batches(y_valid, self.batch_size))):
# Format data to tensor
target = (original_target == 1).nonzero(as_tuple=True)[1]
data = original_data.float() # Esse '.float()' é necessário para arrumar o tipo do dado
# target = target.cuda()
# data = data.cuda()
# model.forward(data)
predicted = model(data)
loss = criterion(predicted, target)
acc_valid_loss += loss.item()
return acc_valid_loss
###############################################################################################################################
#################################### TREINAMENTOS #############################################################################
###############################################################################################################################
def train(self, model, optimizer, criterion):
init = datetime.now()
best_epoch = None
best_valid_loss = np.Inf
best_train_loss = None
epochs_without_improv = 0
train_loss = []
valid_loss = []
for epoch in tqdm(range(self.n_epochs)):
###################
# early stopping? #
###################
if epochs_without_improv >= self.early_stopping_epochs:
break
###################
# train the model #
###################
acc_train_loss = self.train_loss(torch.from_numpy(self.X_train),
torch.from_numpy(self.y_train.to_numpy()),
optimizer,
criterion,
model)
train_loss.append(acc_train_loss)
###################
# valid the model #
###################
acc_valid_loss = self.valid_loss(torch.from_numpy(self.X_valid),
torch.from_numpy(self.y_valid.to_numpy()),
criterion,
model)
valid_loss.append(acc_valid_loss)
#####################
# Update best model #
#####################
if acc_valid_loss < best_valid_loss:
torch.save(model.state_dict(), 'best_model') # save best model
best_epoch = epoch
best_valid_loss = acc_valid_loss
best_train_loss = acc_train_loss
epochs_without_improv = 0
else:
epochs_without_improv += 1
self.load_best_model(model, best_epoch, best_valid_loss, best_train_loss, epochs_without_improv)
end = datetime.now()
print(f'Total training time: {end - init}')
return model, train_loss, valid_loss
def train_cross_validation(self, model, optimizer, criterion):
init = datetime.now()
best_epoch = None
best_valid_loss = np.Inf
best_train_loss = None
epochs_without_improv = 0
train_loss = []
valid_loss = []
kf = KFold(n_splits=4, random_state=1, shuffle=True)
split = kf.split(self.train)
for idx, (train_idx, valid_idx) in enumerate(split):
print('Index {}'.format(idx + 1))
y_cros_train, y_cros_valid = y_train.iloc[train_idx], y_test.iloc[valid_idx]
X_cros_train, X_cros_valid = X_train[train_idx,:], X_test[valid_idx,:]
for epoch in tqdm(range(self.n_epochs)):
if epochs_without_improv >= self.early_stopping_epochs:
break
###################
# train the model #
###################
acc_train_loss = self.cross_train(torch.from_numpy(X_cros_train),
torch.from_numpy(y_cros_train.to_numpy()),
optimizer,
criterion,
model)
train_loss.append(acc_train_loss)
###################
# valid the model #
###################
acc_valid_loss = self.cross_valid(torch.from_numpy(X_cros_valid),
torch.from_numpy(y_cros_valid.to_numpy()),
criterion,
model)
valid_loss.append(acc_valid_loss)
if acc_valid_loss < best_valid_loss:
torch.save(model.state_dict(), 'best_model') # save best model
best_epoch = epoch
best_valid_loss = acc_valid_loss
best_train_loss = acc_train_loss
epochs_without_improv = 0
else:
epochs_without_improv += 1
self.load_best_model(model, best_epoch, best_valid_loss, best_train_loss, epochs_without_improv)
end = datetime.now()
print(f'Total training time: {end - init}')
return model, train_loss, valid_loss
| [
"As @Michael Butcher answered there was a variable with the same name as my function, train, renaming the function fixed the issue.\n"
] | [
1
] | [] | [] | [
"function",
"list",
"python"
] | stackoverflow_0074679583_function_list_python.txt |
Q:
Align a 3D line A to the line B
I want to align a line A (blue), defined by a 3D start point (S) and a 3D end point (E), to another 3D line B (red), so that line A (no matter how it is originally positioned) becomes parallel to line B, as shown in Fig. B.
I know that I have to calculate the angle between the two; to do that I use:
def calcAngleBtw2Lines(self, vec1S, vec1E, vec2S, vec2E):
    # Subtract the start point (S) from the end point (E) to get the line's direction vector
vec1 = np.subtract(vec1E, vec1S)
vec2 = np.subtract(vec2E, vec2S)
# Dot product to get the cosine of the rotation angle
dotProduct = np.dot(vec1, vec2)
    # Compute the lengths (norms) of the vectors
    vec1Norm = np.linalg.norm(vec1)
    vec2Norm = np.linalg.norm(vec2)
    # Find the angle between the vectors; clip guards against floating-point values just outside [-1, 1]
    angle = np.degrees(np.arccos(np.clip(dotProduct / (vec1Norm * vec2Norm), -1.0, 1.0)))
print("angle: ", angle)
return np.round(angle, 1)
But I am not sure whether the steps are correct. If the lines are parallel to each other, the angle between them should be 0.
Edit:
The lengths of both lines are equal. Line B is stationary. To make line A parallel to B, both S and E of A can be moved at the same time.
A:
Ok, here's a working answer. Note that if A and B have the same lengths, part of the code is unnecessary (but I'll leave it anyway to make it more portable):
import numpy as np
def makeAparalleltoB(pointSA, pointEA, pointSB, pointEB):
# pointSA... are np.arrays of the 3 coordinates
# Calculating the coordinates of the vectors
vecA = pointEA - pointSA
vecB = pointEB - pointSB
# Calculating the lengths of the vectors
# Unnecessary if we know that A and B have the same lengths
vecANorm = np.linalg.norm(vecA)
vecBNorm = np.linalg.norm(vecB)
# Calculating the coordinates of a vector collinear to B, of the same length as A
newvecA = vecB * vecANorm/vecBNorm
# Returning new coordinates for the endpoint of A
return pointSA + newvecA
Example:
a = np.array([1,1,1])
b = np.array([2,3,4])
c = np.array([0,0,0])
d = np.array([1,1,1])
print(makeAparalleltoB(a, b, c, d))
# [3.1602469 3.1602469 3.1602469]
If we know that A and B have the same length, then it's even simpler: we simply make it so SB, EB, EA, SA is a parallelogram:
newpointEA = pointSA + pointEB - pointSB
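A quick sanity check (a sketch reusing the example values above): two 3D vectors are parallel exactly when their cross product is the zero vector.
newEA = makeAparalleltoB(a, b, c, d)
print(np.allclose(np.cross(newEA - a, d - c), 0))  # True: A is now parallel to B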
| Align a 3D line A to the line B | I want to align a line A (blue), which is defined with a 3D start (S) and 3D end point (E) to the other 3D line B (red), so that the line A (does not matter, how it is originally positioned) is parallel to the line B, as shown in Fig.B
I know that I have to calculate the angle between two them for that I do:
def calcAngleBtw2Lines(self, vec1S, vec1E, vec2S, vec2E):
# Substract the end point (E) from the start point (S) of the line
vec1 = np.subtract(vec1E, vec1S)
vec2 = np.subtract(vec2E, vec2S)
# Dot product to get the cosine of the rotation angle
dotProduct = np.dot(vec1, vec2)
# Normalize the vectors to find the unit vectors
vec1Unit = np.linalg.norm(vec1)
vec2Unit = np.linalg.norm(vec2)
# Find the angle between vectors
angle = np.degrees(np.arccos(dotProduct / (vec1Unit * vec2Unit)))
print("angle: ", angle)
return np.round(angle, 1)
But I am not sure, whether the steps are correct. If they are parallel to each other, the angle between them should be 0
Edit:
The length of both lines are equal. The line B is stationary. To make line A parallel to the B, the S and E of A can be moved at the same time.
| [
"Ok, here's a working answer. Note that if A and B have the same lengths, part of the code is unnecessary (but I'll leave it anyway to make it more portable):\nimport numpy as np\n\ndef makeAparalleltoB(pointSA, pointEA, pointSB, pointEB):\n# pointSA... are np.arrays of the 3 coordinates\n\n # Calculating the coordinates of the vectors\n vecA = pointEA - pointSA\n vecB = pointEB - pointSB\n\n # Calculating the lengths of the vectors\n # Unnecessary if we know that A and B have the same lengths\n vecANorm = np.linalg.norm(vecA)\n vecBNorm = np.linalg.norm(vecB)\n\n # Calculating the coordinates of a vector collinear to B, of the same length as A\n newvecA = vecB * vecANorm/vecBNorm\n\n # Returning new coordinates for the endpoint of A\n return pointSA + newvecA\n\nExample:\na = np.array([1,1,1])\nb = np.array([2,3,4])\nc = np.array([0,0,0])\nd = np.array([1,1,1])\n\nprint(makeAparalleltoB(a, b, c, d))\n\n# [3.1602469 3.1602469 3.1602469]\n\nIf we know that A and B have the same length, then it's even simpler: we simply make it so SB, EB, EA, SA is a parallelogram:\nnewpointEA = pointSA + pointEB - pointSB\n\n"
] | [
1
] | [] | [] | [
"math",
"numpy",
"python"
] | stackoverflow_0074679790_math_numpy_python.txt |
Q:
BGR to RGB for CUB_200 images by Image.split()
I am creating a PyTorch dataset and dataloader from CUB_200. When reading the images as PIL, I need to change the BGR channels to RGB, and I use the following code:
def _read_images_from_list(imagefile_list):
imgs = []
mean=[0.485, 0.456, 0.406]
std= [0.229, 0.224, 0.225]
Transformations = transforms.Compose([transforms.Resize([224, 224]), transforms.ToTensor(), transforms.Normalize(mean, std)])
for imagefile in imagefile_list:
# read images as PIL instead of NUMPY
img = Image.open(imagefile)
b, g, r = img.split()
img = Image.merge("RGB", (r, g, b))
img = Transformations(img) # ToTensor and between [0,1], then normalized using image net mean and std, then transposed into shape (C,H,W)
imgs += [img]
return imgs
After going through a number of classes, I get the following error.
ValueError: not enough values to unpack (expected 3, got 1)
I wonder what I should do now? It means that one of the images has only one channel instead of three. Can this be the case, or is there a problem with my code? I had a different implementation before but it worked. The reason I changed this implementation was that I could not normalize my images.
This is the old implementation:
def _read_images_from_list(imagefile_list):
imgs = []
for imagefile in imagefile_list:
img = cv2.imread(imagefile).astype(np.float32)
img = cv2.resize(img, (224, 224))
# Convert RGB to BGR
img_r, img_g, img_b = np.split(img, 3, axis=2)
img = np.concatenate((img_b, img_g, img_r), axis=2)
# Extract mean
img -= np.array((103.94,116.78,123.68), dtype=np.float32) # BGR mean
# HWC -> CHW, compatible with pytorch
img = np.transpose(img, [2, 0, 1])
imgs += [img]
return imgs
A:
I would strongly recommend you use skimage.io to load your images, not opencv. It opens the images in RGB format by default, removing your shuffling overhead, but if you want to convert BGR to RGB you can use this:
import numpy as np
img = np.arange(27).reshape(3,3,3)
b = img[:,:,0]
g = img[:,:,1]
r = img[:,:,2]
rgb = np.dstack([r,g,b])
print(img)
print("#"*20)
print(rgb)
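The ValueError itself suggests some of the images are grayscale, so split() returns a single band. A minimal sketch, assuming the PIL-based loader from the question, that normalizes every image to RGB first:
from PIL import Image

def open_as_rgb(imagefile):
    img = Image.open(imagefile)
    # Grayscale ("L") or palette ("P") images have no separate B/G/R bands,
    # so convert everything to 3-channel RGB before splitting
    if img.mode != "RGB":
        img = img.convert("RGB")
    return img

Note that PIL already loads images in RGB order, so the split/merge swap in the question is only needed for arrays loaded with OpenCV.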
| BGR to RGB for CUB_200 images by Image.split() | I am creating a PyTorch dataset and dataloader from CUB_200. When reading the images as PIL, I need to change the BGR channels to RGB and I use the following code:
def _read_images_from_list(imagefile_list):
imgs = []
mean=[0.485, 0.456, 0.406]
std= [0.229, 0.224, 0.225]
Transformations = transforms.Compose([transforms.Resize([224, 224]), transforms.ToTensor(), transforms.Normalize(mean, std)])
for imagefile in imagefile_list:
# read images as PIL instead of NUMPY
img = Image.open(imagefile)
b, g, r = img.split()
img = Image.merge("RGB", (r, g, b))
img = Transformations(img) # ToTensor and between [0,1], then normalized using image net mean and std, then transposed into shape (C,H,W)
imgs += [img]
return imgs
After going through a number of classes, I get the following error.
ValueError: not enough values to unpack (expected 3, got 1)
I wonder what I should do now? It means that one of the images has only one channel instead of three. Can this be the case, or is there a problem with my code? I had a different implementation before but it worked. The reason I changed this implementation was that I could not normalize my images.
This is the old implementation:
def _read_images_from_list(imagefile_list):
imgs = []
for imagefile in imagefile_list:
img = cv2.imread(imagefile).astype(np.float32)
img = cv2.resize(img, (224, 224))
# Convert RGB to BGR
img_r, img_g, img_b = np.split(img, 3, axis=2)
img = np.concatenate((img_b, img_g, img_r), axis=2)
# Extract mean
img -= np.array((103.94,116.78,123.68), dtype=np.float32) # BGR mean
# HWC -> CHW, compatible with pytorch
img = np.transpose(img, [2, 0, 1])
imgs += [img]
return imgs
| [
"I would strongly recommend you use skimage.io to load your images, not opencv. It opens the images in RGB format by default, removing your shuffling overhead, but if you want to convert BGR to RGB you can use this:\nimport numpy as np\n\nimg = np.arange(27).reshape(3,3,3)\nb = img[:,:,0]\ng = img[:,:,1]\nr = img[:,:,2]\n\nrgb = np.dstack([r,g,b])\n\nprint(img)\nprint(\"#\"*20)\nprint(rgb)\n\n"
] | [
1
] | [] | [] | [
"image",
"python",
"pytorch",
"pytorch_dataloader"
] | stackoverflow_0074679922_image_python_pytorch_pytorch_dataloader.txt |
Q:
Convert Float to Time
I am trying to convert a DataFrame series with floats like "1200" into 12:00:00.
My initial DataFrame is this one:
import pandas as pd
df = pd.DataFrame([1200.0, 0.0, 1536.0, 1530.0, 0.0], columns=['Occurred Time'])
print(df)
Occurred Time
0 1200.0
1 0.0
2 1536.0
3 1530.0
4 0.0
I am trying to convert the "Occurred Time" float from 1200.0 to 12:00:00.
I used this code:
import pandas as pd
df = pd.DataFrame([1200.0, 0.0, 1536.0, 1530.0, 0.0], columns=['Occurred Time'])
df['Occurred Time'] = pd.to_datetime(df['Occurred Time'])
print(df)
but it does not work and the output is this:
Occurred Time
0 1970-01-01 00:00:00.000001200
1 1970-01-01 00:00:00.000000000
2 1970-01-01 00:00:00.000001536
3 1970-01-01 00:00:00.000001530
4 1970-01-01 00:00:00.000000000
I don't know what to do!
A:
This should work if you convert to strings, pad with zeros and provide a format to to_datetime:
df['time'] = pd.to_datetime(df['Occurred Time'].astype(int)
.astype(str).str.zfill(4),
format='%H%M')
Output:
Occurred Time time
0 1200 1900-01-01 12:00:00
1 0 1900-01-01 00:00:00
2 1536 1900-01-01 15:36:00
Add .dt.time if you want only the time:
df['time'] = pd.to_datetime(df['Occurred Time'].astype(int)
.astype(str).str.zfill(4),
format='%H%M').dt.time
Output:
Occurred Time time
0 1200 12:00:00
1 0 00:00:00
2 1536 15:36:00
| Convert Float to Time | I am trying to convert a DataFrame series with floats like "1200" into 12:00:00.
My initial DataFrame is this one:
import pandas as pd
df = pd.DataFrame([1200.0, 0.0, 1536.0, 1530.0, 0.0], columns=['Occurred Time'])
print(df)
Occurred Time
0 1200.0
1 0.0
2 1536.0
3 1530.0
4 0.0
I am trying to convert the "Occurred Time" float from 1200.0 to 12:00:00.
I used this code:
import pandas as pd
df = pd.DataFrame([1200.0, 0.0, 1536.0, 1530.0, 0.0], columns=['Occurred Time'])
df['Occurred Time'] = pd.to_datetime(df['Occurred Time'])
print(df)
but it does not work and the output is this:
Occurred Time
0 1970-01-01 00:00:00.000001200
1 1970-01-01 00:00:00.000000000
2 1970-01-01 00:00:00.000001536
3 1970-01-01 00:00:00.000001530
4 1970-01-01 00:00:00.000000000
I don't know what to do!
| [
"This should work if you convert to strings, pad with zeros and provide a format to to_datetime:\ndf['time'] = pd.to_datetime(df['Occurred Time'].astype(int)\n .astype(str).str.zfill(4),\n format='%H%M')\n\nOutput:\n Occurred Time time\n0 1200 1900-01-01 12:00:00\n1 0 1900-01-01 00:00:00\n2 1536 1900-01-01 15:36:00\n\nAdd .dt.time if you want only the time:\ndf['time'] = pd.to_datetime(df['Occurred Time'].astype(int)\n .astype(str).str.zfill(4),\n format='%H%M').dt.time\n\nOutput:\n Occurred Time time\n0 1200 12:00:00\n1 0 00:00:00\n2 1536 15:36:00\n\n"
] | [
2
] | [] | [] | [
"dataframe",
"datetime",
"pandas",
"python"
] | stackoverflow_0074680074_dataframe_datetime_pandas_python.txt |
Q:
PyInstaller problem making exe files that use the transformers and PyQt5 libraries
So I'm working on an AI project using the huggingface library, and I need to convert it into an exe file. I'm using PyQt5 for the interface, and the transformers and datasets libraries from huggingface. I tried using PyInstaller to convert it into an exe file; it does finish building the exe files of the project, but it gives me this error when I run the exe file:
Traceback (most recent call last):
File "transformers\utils\versions.py", line 105, in require_version
File "importlib\metadata.py", line 530, in version
File "importlib\metadata.py", line 503, in distribution
File "importlib\metadata.py", line 177, in from_name
importlib.metadata.PackageNotFoundError: tqdm
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "App.py", line 5, in <module>
File "PyInstaller\loader\pyimod03_importers.py", line 476, in exec_module
File "transformers\__init__.py", line 43, in <module>
File "PyInstaller\loader\pyimod03_importers.py", line 476, in exec_module
File "transformers\dependency_versions_check.py", line 41, in <module>
File "transformers\utils\versions.py", line 120, in require_version_core
File "transformers\utils\versions.py", line 107, in require_version
importlib.metadata.PackageNotFoundError: The 'tqdm>=4.27' distribution was not found and is required by this application.
Try: pip install transformers -U or pip install -e '.[dev]' if you're working with git master
[736] Failed to execute script 'App' due to unhandled exception!
[process exited with code 1]
Line 5 in my code imports the transformers library.
...
4| from PyQt5.QtCore import QThread, QObject, pyqtSignal
5| from transformers import AutoTokenizer, AutoModelForQuestionAnswering, pipeline
...
...
And this is my .spec file:
# -*- mode: python ; coding: utf-8 -*-
block_cipher = None
a = Analysis(['App.py'],
pathex=[],
binaries=[],
datas=[
('./resources/images/logo.png', '.'),
('./resources/model/config.json', '.'),
('./resources/model/pytorch_model.bin', '.'),
('./resources/model/special_tokens_map.json', '.'),
('./resources/model/tokenizer.json', '.'),
('./resources/model/tokenizer_config.json', '.'),
('./resources/model/vocab.txt', '.')
],
hiddenimports=[],
hookspath=[],
hooksconfig={},
runtime_hooks=[],
excludes=[],
win_no_prefer_redirects=False,
win_private_assemblies=False,
cipher=block_cipher,
noarchive=False)
pyz = PYZ(a.pure, a.zipped_data,
cipher=block_cipher)
exe = EXE(pyz,
a.scripts,
[],
exclude_binaries=True,
name='App',
debug=False,
bootloader_ignore_signals=False,
strip=False,
upx=True,
console=True,
disable_windowed_traceback=False,
target_arch=None,
codesign_identity=None,
entitlements_file=None , icon='logo.ico')
coll = COLLECT(exe,
a.binaries,
a.zipfiles,
a.datas,
strip=False,
upx=True,
upx_exclude=[],
name='App')
I would really appreciate any help given, thanks :D
A:
First, pip install tqdm if you haven't already. Second, specify the path to your Lib/site-packages. You can do this by either:
Adding an argument to pathex in your .spec file
(.venv for a virtual environment at some folder .venv in your local directory, or the absolute path to your global Python install Lib/site-packages if you're not using a virtual environment):
pathex=['.venv/Lib/site-packages']
Specifying the path to Lib/site-packages from the command-line:
pyinstaller --paths '.venv/Lib/site-packages' my_program.py
From the pyinstaller docs
pathex: a list of paths to search for imports (like using PYTHONPATH), including paths given by the --paths option.
Some Python scripts import modules in ways that PyInstaller cannot detect: for example, by using the __import__() function with variable data, using importlib.import_module(), or manipulating the sys.path value at run time.
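Since the traceback is about package metadata rather than the module itself, another sketch worth trying (an assumption, not part of the answer above) is bundling the missing dist-info via copy_metadata in the .spec file:
# in the .spec file
from PyInstaller.utils.hooks import copy_metadata

datas = [
    ('./resources/images/logo.png', '.'),
    # ... the other data files from the spec above ...
]
# transformers verifies versions through importlib.metadata, so the
# dist-info folders must be bundled too, not just the modules
datas += copy_metadata('tqdm')
datas += copy_metadata('transformers')

hiddenimports = ['tqdm']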
| PyInstaller problem making exe files that use the transformers and PyQt5 libraries | So I'm working on an AI project using the huggingface library, and I need to convert it into an exe file. I'm using PyQt5 for the interface, and the transformers and datasets libraries from huggingface. I tried using PyInstaller to convert it into an exe file; it does finish building the exe files of the project, but it gives me this error when I run the exe file:
Traceback (most recent call last):
File "transformers\utils\versions.py", line 105, in require_version
File "importlib\metadata.py", line 530, in version
File "importlib\metadata.py", line 503, in distribution
File "importlib\metadata.py", line 177, in from_name
importlib.metadata.PackageNotFoundError: tqdm
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "App.py", line 5, in <module>
File "PyInstaller\loader\pyimod03_importers.py", line 476, in exec_module
File "transformers\__init__.py", line 43, in <module>
File "PyInstaller\loader\pyimod03_importers.py", line 476, in exec_module
File "transformers\dependency_versions_check.py", line 41, in <module>
File "transformers\utils\versions.py", line 120, in require_version_core
File "transformers\utils\versions.py", line 107, in require_version
importlib.metadata.PackageNotFoundError: The 'tqdm>=4.27' distribution was not found and is required by this application.
Try: pip install transformers -U or pip install -e '.[dev]' if you're working with git master
[736] Failed to execute script 'App' due to unhandled exception!
[process exited with code 1]
Line 5 in my code imports the transformers library.
...
4| from PyQt5.QtCore import QThread, QObject, pyqtSignal
5| from transformers import AutoTokenizer, AutoModelForQuestionAnswering, pipeline
...
...
And this is my .spec file:
# -*- mode: python ; coding: utf-8 -*-
block_cipher = None
a = Analysis(['App.py'],
pathex=[],
binaries=[],
datas=[
('./resources/images/logo.png', '.'),
('./resources/model/config.json', '.'),
('./resources/model/pytorch_model.bin', '.'),
('./resources/model/special_tokens_map.json', '.'),
('./resources/model/tokenizer.json', '.'),
('./resources/model/tokenizer_config.json', '.'),
('./resources/model/vocab.txt', '.')
],
hiddenimports=[],
hookspath=[],
hooksconfig={},
runtime_hooks=[],
excludes=[],
win_no_prefer_redirects=False,
win_private_assemblies=False,
cipher=block_cipher,
noarchive=False)
pyz = PYZ(a.pure, a.zipped_data,
cipher=block_cipher)
exe = EXE(pyz,
a.scripts,
[],
exclude_binaries=True,
name='App',
debug=False,
bootloader_ignore_signals=False,
strip=False,
upx=True,
console=True,
disable_windowed_traceback=False,
target_arch=None,
codesign_identity=None,
entitlements_file=None , icon='logo.ico')
coll = COLLECT(exe,
a.binaries,
a.zipfiles,
a.datas,
strip=False,
upx=True,
upx_exclude=[],
name='App')
I would really appreciate any help given, thanks :D
| [
"First, pip install tqdm if you haven't already. Second, specify the path to your Lib/site-packages. You can do this by either:\n\nAdding an argument to pathex in your .spec file\n(.venv for a virtual environment at some folder .venv in your local directory, or the absolute path to your global Python install Lib/site-packages if you're not using a virtual environment):\n\npathex=['.venv/Lib/site-packages']\n\n\nSpecifying the path to Lib/site-packages from the command-line:\n\npyinstaller --paths '.venv/Lib/site-packages' my_program.py\n\nFrom the pyinstaller docs\n\npathex: a list of paths to search for imports (like using PYTHONPATH), including paths given by the --paths option.\n\nSome Python scripts import modules in ways that PyInstaller cannot detect: for example, by using the __import__() function with variable data, using importlib.import_module(), or manipulating the sys.path value at run time.\n"
] | [
1
] | [
"Hi is this the accepted answer still valid? I have the exact same problem but the solution doesn't work for me.\n"
] | [
-2
] | [
"huggingface_transformers",
"pyqt",
"pyside",
"python"
] | stackoverflow_0069874436_huggingface_transformers_pyqt_pyside_python.txt |
Q:
Toastr messages to show without refresh in flask-dash
I am using Plotly-Dash in my Flask application to display some graphs. And have Toastr setup in the app to handle notifications.
What I want to do is this: upon a dash button click, an event handler runs a function, and upon any error in the function, I flash the error and expect it to be shown via Toastr messages in real time. But that does not happen; I get the toastr message displayed only after I refresh my page.
In the docs, flash is called first and then the rendering process happens; does anyone know how to show flashes without a new render?
I followed the docs, and tried multiple things, but none of them seem to work.
UPDATE:
Just for more context, I am redirecting to a Dash app which has the following function:
@dashapp.callback(
[
ServersideOutput(
"data-store", "data", backend=cache_backend, arg_check=False
),
ServersideOutput(
"availability-data-store",
"data",
backend=cache_backend,
arg_check=False,
),
],
Input("refresh-placeholder", "n_clicks"),
prevent_initial_call=True,
)
def _fetch_data(_n):
temp_output = get_data(get_session_vinsight_token())
if not temp_output:
print("flash?")
flash(
"No data loaded. Please reconnect to vinsight from your account page if the error continues.",
"error",
)
return [None, None]
(df, availability_df) = temp_output
return [df, availability_df]
Here, I am just checking if I have an error, and if yes, I am trying to flash the error to the user.
But as the docs say, and as how the code normally works, something like:
@server_bp.route("/login", methods=["GET", "POST"])
def login():
form = LoginForm()
if form.validate_on_submit():
email = form.email.data
password = form.password.data
remember_me = form.remember_me.data
try:
auth.login(email, password, remember_me)
except Exception as e:
flash("Couldn't signin, reason: {}".format(e))
return redirect("/login")
Where the flash works due to return redirect("/login"), I want to flash my message from another file.
Basically, the _fetch_data is in another file which is why I cannot use the redirect method.
In my head, an answer to either of the two would be a good workaround:
Is there any way to force reload the page in Flask?
Is there any way to show flash messages without reloading in Flask?
A:
It sounds like you are trying to use the Flask flash function to show a message to the user in real time, but the message is only being displayed after the page is refreshed.
The reason this is happening is that the Flask flash function stores messages in the user's session, and those messages are only displayed to the user when the page is rendered. This means that if you call the flash function and then immediately return a response, the user will not see the flashed message until they refresh the page or navigate to a different page in your app.
To show the flashed message in real time, you will need to trigger a new page render after calling the flash function. One way to do this is to use the Flask redirect function to redirect the user to the same page (or a different page) after calling the flash function. This will cause the page to be rendered again, and the flashed message will be displayed to the user.
Here is an example of how you could use the redirect function to show a flashed message in real time:
@dashapp.callback(
[
ServersideOutput(
"data-store", "data", backend=cache_backend, arg_check=False
),
ServersideOutput(
"availability-data-store",
"data",
backend=cache_backend,
arg_check=False,
),
],
Input("refresh-placeholder", "n_clicks"),
prevent_initial_call=True,
)
def _fetch_data(_n):
temp_output = get_data(get_session_vinsight_token())
if not temp_output:
flash(
"No data loaded. Please reconnect to vinsight from your account page if the error continues.",
"error",
)
# Use the redirect function to redirect the user to the same page
return redirect(request.url)
# Alternatively, you can redirect the user to a different page
# return redirect("/some-other-page")
(df, availability_df) = temp_output
return [df, availability_df]
In this code, the _fetch_data function calls the flash function to store a message in the user's session if there is an error. Then, it uses the redirect function to redirect the user to the same page (using request.url). This will cause the page to be rendered again, and the flashed message will be displayed to the user in real time.
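One caveat, hedged, that is not covered by the answer above: a Dash callback must return values for its declared Outputs, not a Flask response object, so returning redirect(request.url) from _fetch_data will likely fail. The usual Dash pattern is to force a reload through dcc.Location; the sketch below uses assumed component ids, not the ones from the original app:
import dash
from dash import dcc, html, Input, Output, no_update

app = dash.Dash(__name__)
# refresh=True makes the browser perform a full page load when href changes
app.layout = html.Div([
    dcc.Location(id="url", refresh=True),
    html.Button("Refresh", id="refresh-placeholder"),
])

@app.callback(Output("url", "href"),
              Input("refresh-placeholder", "n_clicks"),
              prevent_initial_call=True)
def maybe_reload(_n):
    data_ok = False  # placeholder for the real `temp_output` check
    if not data_ok:
        # flash(...) would be called here; the reload then renders the toast
        return "/"  # assumed path of this page; setting href triggers the reload
    return no_update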
| Toastr messages to show without refresh in flask-dash | I am using Plotly-Dash in my Flask application to display some graphs. And have Toastr setup in the app to handle notifications.
What I want to do is this: upon a dash button click, an event handler runs a function, and upon any error in the function, I flash the error and expect it to be shown via Toastr messages in real time. But that does not happen; I get the toastr message displayed only after I refresh my page.
In the docs, flash is called first and then the rendering process happens; does anyone know how to show flashes without a new render?
I followed the docs, and tried multiple things, but none of them seem to work.
UPDATE:
Just for more context, I am redirecting to a Dash app which has the following function:
@dashapp.callback(
[
ServersideOutput(
"data-store", "data", backend=cache_backend, arg_check=False
),
ServersideOutput(
"availability-data-store",
"data",
backend=cache_backend,
arg_check=False,
),
],
Input("refresh-placeholder", "n_clicks"),
prevent_initial_call=True,
)
def _fetch_data(_n):
temp_output = get_data(get_session_vinsight_token())
if not temp_output:
print("flash?")
flash(
"No data loaded. Please reconnect to vinsight from your account page if the error continues.",
"error",
)
return [None, None]
(df, availability_df) = temp_output
return [df, availability_df]
Here, I am just checking if I have an error, and if yes, I am trying to flash the error to the user.
But as the docs say, and as how the code normally works, something like:
@server_bp.route("/login", methods=["GET", "POST"])
def login():
form = LoginForm()
if form.validate_on_submit():
email = form.email.data
password = form.password.data
remember_me = form.remember_me.data
try:
auth.login(email, password, remember_me)
except Exception as e:
flash("Couldn't signin, reason: {}".format(e))
return redirect("/login")
Where the flash works due to return redirect("/login"), I want to flash my message from another file.
Basically, the _fetch_data is in another file which is why I cannot use the redirect method.
In my head, an answer to either of the two would be a good workaround:
Is there any way to force reload the page in Flask?
Is there any way to show flash messages without reloading in Flask?
| [
"It sounds like you are trying to use the Flask flash function to show a message to the user in real time, but the message is only being displayed after the page is refreshed.\nThe reason this is happening is that the Flask flash function stores messages in the user's session, and those messages are only displayed to the user when the page is rendered. This means that if you call the flash function and then immediately return a response, the user will not see the flashed message until they refresh the page or navigate to a different page in your app.\nTo show the flashed message in real time, you will need to trigger a new page render after calling the flash function. One way to do this is to use the Flask redirect function to redirect the user to the same page (or a different page) after calling the flash function. This will cause the page to be rendered again, and the flashed message will be displayed to the user.\nHere is an example of how you could use the redirect function to show a flashed message in real time:\[email protected](\n[\n ServersideOutput(\n \"data-store\", \"data\", backend=cache_backend, arg_check=False\n ),\n\n\n ServersideOutput(\n \"availability-data-store\",\n \"data\",\n backend=cache_backend,\n arg_check=False,\n ),\n ],\n Input(\"refresh-placeholder\", \"n_clicks\"),\n prevent_initial_call=True,\n)\n\ndef _fetch_data(_n):\n temp_output = get_data(get_session_vinsight_token())\n if not temp_output:\n flash(\n \"No data loaded. Please reconnect to vinsight from your account page if the error continues.\",\n \"error\",\n )\n # Use the redirect function to redirect the user to the same page\n return redirect(request.url)\n # Alternatively, you can redirect the user to a different page\n # return redirect(\"/some-other-page\")\n\n (df, availability_df) = temp_output\n return [df, availability_df]\n\nIn this code, the _fetch_data function calls the flash function to store a message in the user's session if there is an error. Then, it uses the redirect function to redirect the user to the same page (using request.url). This will cause the page to be rendered again, and the flashed message will be displayed to the user in real time.\n"
] | [
0
] | [] | [] | [
"flask",
"notifications",
"plotly_dash",
"python",
"toastr"
] | stackoverflow_0074666045_flask_notifications_plotly_dash_python_toastr.txt |
Q:
How to improve Julia's performance using just in time compilation (JIT)
I have been playing with JAX (automatic differentiation library in Python) and Zygote (the automatic differentiation library in Julia) to implement Gauss-Newton minimisation method.
I came upon the @jit macro in Jax that runs my Python code in around 0.6 seconds compared to ~60 seconds for the version that does not use @jit.
Julia ran the code in around 40 seconds. Is there an equivalent of @jit in Julia or Zygote that results in better performance?
Here are the codes I used:
Python
from jax import grad, jit, jacfwd
import jax.numpy as jnp
import numpy as np
import time
def gaussian(x, params):
amp = params[0]
mu = params[1]
sigma = params[2]
amplitude = amp/(jnp.abs(sigma)*jnp.sqrt(2*np.pi))
arg = ((x-mu)/sigma)
return amplitude*jnp.exp(-0.5*(arg**2))
def myjacobian(x, params):
return jacfwd(gaussian, argnums = 1)(x, params)
def op(jac):
return jnp.matmul(
jnp.linalg.inv(jnp.matmul(jnp.transpose(jac),jac)),
jnp.transpose(jac))
def res(x, data, params):
return data - gaussian(x, params)
@jit
def step(x, data, params):
residuals = res(x, data, params)
jacobian_operation = op(myjacobian(x, params))
temp = jnp.matmul(jacobian_operation, residuals)
return params + temp
N = 2000
x = np.linspace(start = -100, stop = 100, num= N)
data = gaussian(x, [5.65, 25.5, 37.23])
ini = jnp.array([0.9, 5., 5.0])
t1 = time.time()
for i in range(5000):
ini = step(x, data, ini)
t2 = time.time()
print('t2-t1: ', t2-t1)
ini
Julia
using Zygote
function gaussian(x::Union{Vector{Float64}, Float64}, params::Vector{Float64})
amp = params[1]
mu = params[2]
sigma = params[3]
amplitude = amp/(abs(sigma)*sqrt(2*pi))
arg = ((x.-mu)./sigma)
return amplitude.*exp.(-0.5.*(arg.^2))
end
function myjacobian(x::Vector{Float64}, params::Vector{Float64})
output = zeros(length(x), length(params))
for (index, ele) in enumerate(x)
output[index,:] = collect(gradient((params)->gaussian(ele, params), params))[1]
end
return output
end
function op(jac::Matrix{Float64})
return inv(jac'*jac)*jac'
end
function res(x::Vector{Float64}, data::Vector{Float64}, params::Vector{Float64})
return data - gaussian(x, params)
end
function step(x::Vector{Float64}, data::Vector{Float64}, params::Vector{Float64})
residuals = res(x, data, params)
jacobian_operation = op(myjacobian(x, params))
temp = jacobian_operation*residuals
return params + temp
end
N = 2000
x = collect(range(start = -100, stop = 100, length= N))
params = vec([5.65, 25.5, 37.23])
data = gaussian(x, params)
ini = vec([0.9, 5., 5.0])
@time for i in range(start = 1, step = 1, length = 5000)
ini = step(x, data, ini)
end
ini
A:
Your Julia code is doing a number of things that aren't idiomatic and are worsening your performance. This won't be a full overview, but it should give you a good idea to start.
The first thing is passing params as a Vector is a bad idea. This means it will have to be heap allocated, and the compiler doesn't know how long it is. Instead, use a Tuple which will allow for a lot more optimization. Secondly, don't make gaussian act on a Vector of xs. Instead, write the scalar version and broadcast it. Specifically, with these changes, you will have
function gaussian(x::Number, params::NTuple{3, Float64})
amp, mu, sigma = params
# The next 2 lines should probably be done outside this function, but I'll leave them here for now.
amplitude = amp/(abs(sigma)*sqrt(2*pi))
arg = ((x-mu)/sigma)
return amplitude*exp(-0.5*(arg^2))
end
A:
One straightforward way to speed this up is to use ForwardDiff not Zygote, since you are taking a gradient of a vector of length 3, many times. Here this gets me from 16 to 3.5 seconds, with the last factor of 2 involving Chunk(3) to improve type-stability. Perhaps this can be improved further.
function myjacobian(x::Vector, params)
# return rand(eltype(x), length(x), length(params)) # with no gradient, takes 0.5s
output = zeros(eltype(x), length(x), length(params))
config = ForwardDiff.GradientConfig(nothing, params, ForwardDiff.Chunk(3))
for (i, xi) in enumerate(x)
# grad = gradient(p->gaussian(xi, p), params)[1] # original, takes 16s
# grad = ForwardDiff.gradient(p-> gaussian(xi, p)) # ForwardDiff, takes 7s
grad = ForwardDiff.gradient(p-> gaussian(xi, p), params, config) # takes 3.5s
copyto!(view(output,i,:), grad) # this allows params::Tuple, OK for Zygote, no help
end
return output
end
# This needs gaussian.(x, Ref(params)) elsewhere to use on many x, same params
function gaussian(x::Real, params)
# amp, mu, sigma = params # with params::Vector this is slower, 19 sec
amp = params[1]
mu = params[2]
sigma = params[3] # like this, 16 sec
T = typeof(x) # avoids having (2*pi)::Float64 promote everything
amplitude = amp/(abs(sigma)*sqrt(2*T(pi)))
arg = (x-mu)/sigma
return amplitude * exp(-(arg^2)/2)
end
However, this is still computing many small gradient arrays in a loop. It could easily compute one big gradient array instead.
While in general Julia is happy to compile loops to something fast, loops that make individual arrays tend to be a bad idea. And this is especially true for Zygote, which is fastest on matlab-ish whole-array code.
Here's how this looks, it gets me under 1s for the whole program:
function gaussian(x::Real, amp::Real, mu::Real, sigma::Real)
T = typeof(x)
amplitude = amp/(abs(sigma)*sqrt(2*T(pi)))
arg = (x-mu)/sigma
return amplitude * exp(-(arg^2)/2)
end
function myjacobian2(x::Vector, params) # with this, 0.9s
amp = fill(params[1], length(x))
mu = fill(params[2], length(x))
sigma = fill(params[3], length(x)) # use same sigma & different x value at each row:
grads = gradient((amp, mu, sigma) -> sum(gaussian.(x, amp, mu, sigma)), amp, mu, sigma)
hcat(grads...)
end
# Check that it agrees:
myjacobian2(x, params) ≈ myjacobian(x, params)
While this has little effect on the speed, I think you probably also want op(jac::Matrix) = Hermitian(jac'*jac) \ jac' rather than inv.
| How to improve Julia's performance using just in time compilation (JIT) | I have been playing with JAX (automatic differentiation library in Python) and Zygote (the automatic differentiation library in Julia) to implement Gauss-Newton minimisation method.
I came upon the @jit macro in Jax that runs my Python code in around 0.6 seconds compared to ~60 seconds for the version that does not use @jit.
Julia ran the code in around 40 seconds. Is there an equivalent of @jit in Julia or Zygote that results in better performance?
Here are the codes I used:
Python
from jax import grad, jit, jacfwd
import jax.numpy as jnp
import numpy as np
import time
def gaussian(x, params):
amp = params[0]
mu = params[1]
sigma = params[2]
amplitude = amp/(jnp.abs(sigma)*jnp.sqrt(2*np.pi))
arg = ((x-mu)/sigma)
return amplitude*jnp.exp(-0.5*(arg**2))
def myjacobian(x, params):
return jacfwd(gaussian, argnums = 1)(x, params)
def op(jac):
return jnp.matmul(
jnp.linalg.inv(jnp.matmul(jnp.transpose(jac),jac)),
jnp.transpose(jac))
def res(x, data, params):
return data - gaussian(x, params)
@jit
def step(x, data, params):
residuals = res(x, data, params)
jacobian_operation = op(myjacobian(x, params))
temp = jnp.matmul(jacobian_operation, residuals)
return params + temp
N = 2000
x = np.linspace(start = -100, stop = 100, num= N)
data = gaussian(x, [5.65, 25.5, 37.23])
ini = jnp.array([0.9, 5., 5.0])
t1 = time.time()
for i in range(5000):
ini = step(x, data, ini)
t2 = time.time()
print('t2-t1: ', t2-t1)
ini
Julia
using Zygote
function gaussian(x::Union{Vector{Float64}, Float64}, params::Vector{Float64})
amp = params[1]
mu = params[2]
sigma = params[3]
amplitude = amp/(abs(sigma)*sqrt(2*pi))
arg = ((x.-mu)./sigma)
return amplitude.*exp.(-0.5.*(arg.^2))
end
function myjacobian(x::Vector{Float64}, params::Vector{Float64})
output = zeros(length(x), length(params))
for (index, ele) in enumerate(x)
output[index,:] = collect(gradient((params)->gaussian(ele, params), params))[1]
end
return output
end
function op(jac::Matrix{Float64})
return inv(jac'*jac)*jac'
end
function res(x::Vector{Float64}, data::Vector{Float64}, params::Vector{Float64})
return data - gaussian(x, params)
end
function step(x::Vector{Float64}, data::Vector{Float64}, params::Vector{Float64})
residuals = res(x, data, params)
jacobian_operation = op(myjacobian(x, params))
temp = jacobian_operation*residuals
return params + temp
end
N = 2000
x = collect(range(start = -100, stop = 100, length= N))
params = vec([5.65, 25.5, 37.23])
data = gaussian(x, params)
ini = vec([0.9, 5., 5.0])
@time for i in range(start = 1, step = 1, length = 5000)
ini = step(x, data, ini)
end
ini
| [
"Your Julia code doing a number of things that aren't idiomatic and are worsening your performance. This won't be a full overview, but it should give you a good idea to start.\nThe first thing is passing params as a Vector is a bad idea. This means it will have to be heap allocated, and the compiler doesn't know how long it is. Instead, use a Tuple which will allow for a lot more optimization. Secondly, don't make gaussian act on a Vector of xs. Instead, write the scalar version and broadcast it. Specifically, with these changes, you will have\nfunction gaussian(x::Number, params::NTuple{3, Float64})\n amp, mu, sigma = params\n \n # The next 2 lines should probably be done outside this function, but I'll leave them here for now.\n amplitude = amp/(abs(sigma)*sqrt(2*pi))\n arg = ((x-mu)/sigma)\n return amplitude*exp(-0.5*(arg^2))\nend\n\n",
"One straightforward way to speed this up is to use ForwardDiff not Zygote, since you are taking a gradient of a vector of length 3, many times. Here this gets me from 16 to 3.5 seconds, with the last factor of 2 involving Chunk(3) to improve type-stability. Perhaps this can be improved further.\nfunction myjacobian(x::Vector, params)\n # return rand(eltype(x), length(x), length(params)) # with no gradient, takes 0.5s\n output = zeros(eltype(x), length(x), length(params))\n config = ForwardDiff.GradientConfig(nothing, params, ForwardDiff.Chunk(3))\n for (i, xi) in enumerate(x)\n # grad = gradient(p->gaussian(xi, p), params)[1] # original, takes 16s\n # grad = ForwardDiff.gradient(p-> gaussian(xi, p)) # ForwardDiff, takes 7s\n grad = ForwardDiff.gradient(p-> gaussian(xi, p), params, config) # takes 3.5s\n copyto!(view(output,i,:), grad) # this allows params::Tuple, OK for Zygote, no help\n end\n return output\nend\n# This needs gaussian.(x, Ref(params)) elsewhere to use on many x, same params\nfunction gaussian(x::Real, params)\n # amp, mu, sigma = params # with params::Vector this is slower, 19 sec\n amp = params[1]\n mu = params[2]\n sigma = params[3] # like this, 16 sec\n T = typeof(x) # avoids having (2*pi)::Float64 promote everything\n amplitude = amp/(abs(sigma)*sqrt(2*T(pi)))\n arg = (x-mu)/sigma\n return amplitude * exp(-(arg^2)/2)\nend\n\nHowever, this is still computing many small gradient arrays in a loop. It could easily compute one big gradient array instead.\nWhile in general Julia is happy to compile loops to something fast, loops that make individual arrays tend to be a bad idea. And this is especially true for Zygote, which is fastest on matlab-ish whole-array code.\nHere's how this looks, it gets me under 1s for the whole program:\nfunction gaussian(x::Real, amp::Real, mu::Real, sigma::Real)\n T = typeof(x)\n amplitude = amp/(abs(sigma)*sqrt(2*T(pi)))\n arg = (x-mu)/sigma\n return amplitude * exp(-(arg^2)/2)\nend\nfunction myjacobian2(x::Vector, params) # with this, 0.9s\n amp = fill(params[1], length(x))\n mu = fill(params[2], length(x))\n sigma = fill(params[3], length(x)) # use same sigma & different x value at each row:\n grads = gradient((amp, mu, sigma) -> sum(gaussian.(x, amp, mu, sigma)), amp, mu, sigma)\n hcat(grads...)\nend\n# Check that it agrees:\nmyjacobian2(x, params) ≈ myjacobian(x, params)\n\nWhile this has little effect on the speed, I think you probably also want op(jac::Matrix) = Hermitian(jac'*jac) \\ jac' rather than inv.\n"
] | [
4,
2
] | [] | [] | [
"jax",
"julia",
"optimization",
"python"
] | stackoverflow_0074678931_jax_julia_optimization_python.txt |
Q:
How can "cursor.callproc" of MySQL be replaced in MariaDB?
I have found examples how to call stored procedures in MySQL from Python using cursor.callproc.
But cursor.callproc seems not to be defined in MariaDB. I am using version 10.3.
How can I solve this?
A:
I am learning how to write stored procedures in MariaDB. I have written a test procedure which inserts data into a MariaDB database. This test works from the MariaDB terminal. Following an example from the internet, I wrote the following simple code:
import mariadb
connection = mariadb.connect(host='localhost',
database='Terrain',
password="xxxxxxx",
port = 3306,
user='admin'
)
cursor = connection.cursor(prepared=True)
args = 10,20,45
cursor.callproc(AddPolygon, args)
When typing cursor. a list with alternatives pops up, but "callproc" is not in this list.
When I try to execute the code I receive the message: object has no attribute 'callproc'
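If callproc really is missing from the installed connector, the procedure can still be invoked with a plain CALL statement through execute(); a minimal sketch, assuming the AddPolygon signature used above:
import mariadb

connection = mariadb.connect(host='localhost', database='Terrain',
                             password='xxxxxxx', port=3306, user='admin')
cursor = connection.cursor()

# MariaDB Connector/Python uses ? placeholders; CALL works on any connector
cursor.execute("CALL AddPolygon(?, ?, ?)", (10, 20, 45))
connection.commit()  # persist any INSERT done inside the procedure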
| How can "cursor.callproc" of MySQL be replaced in MariaDB? | I have found examples how to call stored procedures in MySQL from Python using cursor.callproc.
But cursor.callproc seems not to be defined in MariaDB. I am using version 10.3.
How can I solve this?
| [
"I am learning how to write stored procedures in Mariadb. I have done a test procedure which insert data in a Mariadb database. This test works from Mariadb terminal. Trying to follow an example from internet I wrote following simple code:\n import mariadb\n connection = mariadb.connect(host='localhost',\n database='Terrain',\n password=\"xxxxxxx\",\n port = 3306,\n user='admin'\n )\n \n cursor = connection.cursor(prepared=True)\n args = 10,20,45\n cursor.callproc(AddPolygon, args)\n\nWhen typing cursor. a list with alternatives pops up but \"callproc\" is not in this list.\nWhen i try to execute to code I recieve the message: object has no attribute 'callproc'\n"
] | [
0
] | [] | [] | [
"mariadb",
"python",
"stored_procedures"
] | stackoverflow_0074679068_mariadb_python_stored_procedures.txt |
Q:
Function can print tokens, but no lemmas
I have a function to compute occurrence counts for tokens/lemmas in a sentence. I want to take a sentence, remove all the annoying bits (punctuation, space, stop words), count what's left, then divide that number by how many times those tokens appear in the sentence. Token / count. When I run this function, I can get the user_token to populate, but not the user_lemmas.
import spacy
# Tokens/Lemmas Count: divided by word count
user_lemmas = [token.lemma_ for token in sentence if token_isolation(token)]
user_token = [token for token in sentence if token_isolation(token)]
def function(sentence, user_X):
# Word Count
count = 0
for token in sentence:
if not (token.is_space or token.is_punct):
count += 1
# Word occurrence count
occurrence = 0
> occurrence = len([i for i in user_tokens if i in sentence ])
> occurrence = len([i for i in user_lemmas if i in sentence ])
# Calculation
result = occurrence / count
return result, count, occurrence
Error:
TypeError: Argument 'other' has incorrect type (expected spacy.tokens.token.Token, got str)
I checked both outputs' types, and I get the same answer: lists. But two different outputs (token: [text, text] lemma: ['text', 'text']). I tried converting the output of lemmas to different formats but I continue to get the same type error.
Edit:
Error occurs when I try to run the occurrence = statement.
When I run the following:
result_token = score_sentence_by_token(sentence, interesting_token)
result_lemma = score_sentence_by_lemma(sentence, interesting_lemmas)
token goes through giving me a score, but lemma gets a type error.
A:
See the following lines:
user_lemmas = [token.lemma_ for token in sentence if token_isolation(token)]
user_token = [token for token in sentence if token_isolation(token)]
the user_lemmas is a list of strings (picked up from lemma_ string attribute), the user_token is a list of spacy tokens, i.e., spacy objects. The sentence is the spacy document object.
When you test: if i in sentence the "in" operator is supported only for the spacy token objects, not strings.
Thus you cannot do "Word" in sentence as this is not supported. You need to test spacy token in sentence which is fullfilled with user_token as it is spacy token and not a string.
Minimal example to produce the error:
import spacy
nlp = spacy.load("en_core_web_sm")
doc = nlp("Apple is looking at buying U.K. startup for $1 billion")
contained = "Apple" in doc
>>> TypeError: Argument 'other' has incorrect type (expected spacy.tokens.token.Token, got str)
Example of code that compares tokens by the lemmas:
import spacy
def compare_token(first_token, second_token, compare_by_lemma):
"""Compares two spacy tokens, first compares POS tag, then lemma/form match.
Switch between lemma/form by bool `compare_by_lemma`.
"""
if not first_token.pos_ == second_token.pos_:
return False
if compare_by_lemma:
if first_token.lemma_ == second_token.lemma_:
return True
else:
if first_token.text == second_token.text:
return True
return False
nlp = spacy.load("en_core_web_sm")
doc = nlp("Apple is looking at buying U.K. startup for $1 billion, while bought other company yesterday.")
buying_token = doc[4]
for token in doc:
is_match = compare_token(buying_token, token, compare_by_lemma=True) # <<< notice compare_by_lemma=True, try False
if is_match:
print(f"{token.text}\t{is_match}")
>>> buying True
>>> bought True
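Applying this back to the original occurrence count: compare strings with strings, i.e. test each token's lemma_ attribute against the lemma list. A minimal runnable sketch under that assumption:
import spacy

nlp = spacy.load("en_core_web_sm")
sentence = nlp("Apple is buying a startup while it bought another company yesterday.")
user_lemmas = ["buy", "company"]

# token.lemma_ is a str, so membership in a list of strings is well defined
occurrence = sum(1 for token in sentence if token.lemma_ in user_lemmas)
print(occurrence)
# expected: 3 ("buying", "bought" and "company" all match by lemma)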
| Function can print tokens, but no lemmas | I have a function to compute occurrence counts for tokens/lemmas in a sentence. I want to take a sentence, remove all the annoying bits (punctuation, space, stop words), count what's left, then divide that number by how many times those tokens appear in the sentence. Token / count. When I run this function, I can get the user_token to populate, but not the user_lemmas.
import spacy
# Tokens/Lemmas Count: divided by word count
user_lemmas = [token.lemma_ for token in sentence if token_isolation(token)]
user_token = [token for token in sentence if token_isolation(token)]
def function(sentence, user_X):
# Word Count
count = 0
for token in sentence:
if not (token.is_space or token.is_punct):
count += 1
# Word occurrence count
occurrence = 0
> occurrence = len([i for i in user_tokens if i in sentence ])
> occurrence = len([i for i in user_lemmas if i in sentence ])
# Calculation
result = occurrence / count
return result, count, occurrence
Error:
TypeError: Argument 'other' has incorrect type (expected spacy.tokens.token.Token, got str)
I checked both outputs' types, and I get the same answer: lists. But two different outputs (token: [text, text] lemma: ['text', 'text']). I tried converting the output of lemmas to different formats but I continue to get the same type error.
Edit:
Error occurs when I try to run the occurrence = statement.
When I run the following:
result_token = score_sentence_by_token(sentence, interesting_token)
result_lemma = score_sentence_by_lemma(sentence, interesting_lemmas)
token goes through giving me a score, but lemma gets a type error.
| [
"See the following lines:\nuser_lemmas = [token.lemma_ for token in sentence if token_isolation(token)]\nuser_token = [token for token in sentence if token_isolation(token)]\n\nthe user_lemmas is a list of strings (picked up from lemma_ string attribute), the user_token is a list of spacy tokens, i.e., spacy objects. The sentence is the spacy document object.\nWhen you test: if i in sentence the \"in\" operator is supported only for the spacy token objects, not strings.\nThus you cannot do \"Word\" in sentence as this is not supported. You need to test spacy token in sentence which is fullfilled with user_token as it is spacy token and not a string.\n\nMinimal example to produce the error:\nimport spacy\nnlp = spacy.load(\"en_core_web_sm\")\ndoc = nlp(\"Apple is looking at buying U.K. startup for $1 billion\")\n\ncontained = \"Apple\" in doc\n\n>>> TypeError: Argument 'other' has incorrect type (expected spacy.tokens.token.Token, got str)\n\n\nExample of code that compares tokens by the lemmas:\nimport spacy\n\ndef compare_token(first_token, second_token, compare_by_lemma):\n \"\"\"Compares two spacy tokens, first compares POS tag, then lemma/form match.\n Switch between lemma/form by bool `compare_by_lemma`.\n \"\"\"\n if not first_token.pos_ == second_token.pos_:\n return False\n\n if compare_by_lemma:\n if first_token.lemma_ == second_token.lemma_:\n return True\n else:\n if first_token.text == second_token.text:\n return True\n return False\n\n\nnlp = spacy.load(\"en_core_web_sm\")\ndoc = nlp(\"Apple is looking at buying U.K. startup for $1 billion, while bought other company yesterday.\")\nbuying_token = doc[4]\n\n\nfor token in doc:\n is_match = compare_token(buying_token, token, compare_by_lemma=True) # <<< notice compare_by_lemma=True, try False\n if is_match:\n print(f\"{token.text}\\t{is_match}\")\n>>> buying True\n>>> bought True\n\n"
] | [
0
] | [] | [] | [
"nlp",
"python"
] | stackoverflow_0074679117_nlp_python.txt |
Q:
Get row when value is higher than a given row value in Pandas
Sorry for the confusing title, I'm trying to figure out something that's a bit hard to explain.
I have a dataframe that looks like this (link to csv)
time value is_critical
0:00 1 false
0:01 9 true
0:02 2 false
0:03 4 false
0:04 6 true
0:05 5 false
0:06 1 false
0:07 4 false
0:08 8 true
0:09 7 false
0:10 10 false
And I want to compute another dataframe with all the critical values and the time at which the value returned to or surpassed the critical value. So the new dataframe would look something like this:
time value return_to_critical
0:01 9 0:10
0:04 6 0:08
0:08 8 0:10
How can I do this? Thanks!
A:
It's a bit messy, and not very efficient but here's a solution:
In [3]: df[df["is_critical"]].apply(lambda critical_row: df["time"][(df["time"] > critical_row["time"]) & (df["value"] >= critical_row["value"])].min(), axis=1)
Out[3]:
1 0:10
4 0:08
8 0:10
dtype: object
Works by first filtering out any non-critical rows, then applying a boolean expression to each row of that result: "values in the dataframe where the value is greater than or equal to the current value, and the time is greater than the current time" where "current" refers to each row in the filtered data.
You can clean up a little:
def time_of_return_to_critical(df, critical_row):
mask = (df.time > critical_row.time) & (df.value >= critical_row.value)
return df["time"][mask].min()
df[df.is_critical].apply(lambda row: time_of_return_to_critical(df, row), axis=1)
Note that the .min() is a brittle solution. You should convert the "time" column to a proper datetime or timestamp data type because right now it's only "working" as a string comparator.
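A sketch of that conversion, so the comparison and .min() operate on real times instead of strings:
import pandas as pd

df = pd.DataFrame({"time": ["0:00", "0:01", "0:10"], "value": [1, 9, 10]})
# Parse the "H:MM" strings as datetimes; .dt.time keeps only the clock time
df["time"] = pd.to_datetime(df["time"], format="%H:%M").dt.time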
| Get row when value is higher than a given row value in Pandas | Sorry for the confusing title, I'm trying to figure out something that's a bit hard to explain.
I have a dataframe that looks like this (link to csv)
time value is_critical
0:00 1 false
0:01 9 true
0:02 2 false
0:03 4 false
0:04 6 true
0:05 5 false
0:06 1 false
0:07 4 false
0:08 8 true
0:09 7 false
0:10 10 false
And I want to compute another dataframe with all the critical values and the time at which the value returned to or surpassed the critical value. So the new dataframe would look something like this:
time value return_to_critical
0:01 9 0:10
0:04 6 0:08
0:08 8 0:10
How can I do this? Thanks!
| [
"It's a bit messy, and not very efficient but here's a solution:\nIn [3]: df[df[\"is_critical\"]].apply(lambda critical_row: df[\"time\"][(df[\"time\"] > critical_row[\"time\"]) & (df[\"value\"] >= critical_row[\"value\"])].min(), axis=1)\nOut[3]:\n1 0:10\n4 0:08\n8 0:10\ndtype: object\n\nWorks by first filtering out any non-critical rows, then applying a boolean expression to each row of that result: \"values in the dataframe where the value is greater than or equal to the current value, and the time is greater than the current time\" where \"current\" refers to each row in the filtered data.\nYou can clean up a little:\ndef time_of_return_to_critical(df, critical_row):\n mask = (df.time > critical_row.time) & (df.value >= critical_row.value)\n return df[\"time\"][mask].min()\n\n\ndf[df.is_critical].apply(lambda row: time_of_return_to_critical(df, row), axis=1)\n\nNote that the .min() is a brittle solution. You should convert the \"time\" column to a proper datetime or timestamp data type because right now it's only \"working\" as a string comparator.\n"
] | [
0
] | [] | [] | [
"pandas",
"python"
] | stackoverflow_0074680131_pandas_python.txt |
Q:
add a clear button to the GUI which clears the output formatted by the user
I am trying to clear the output displayed. When clicked, the CLEAR button should 'clear' or remove any text written in the Entry box and any text displayed on the Label. While attempting this on my own I tried using the delete method and del, both of which did not remove the output when the button is pressed.
from tkinter import *
import random
def generate_name_box():
n = input_string_var.get()
s = ""
s += "+" + "-"*len(n) + "+\n"
s += "|" + n + "|\n"
s += "+" + "-"*len(n) + "+\n"
name_string_var.set(s)
def clear_name_box():
#MAIN
#Generate holding structures for GUI
root = Tk()
mainframe = Frame(root)
#Other variables
font_colour = '#EE34A2'
#Create the widgets and associated Vars
name_string_var = StringVar()
name_label = Label(mainframe, text = "", font = ("Courier", 50), textvariable=name_string_var, fg=font_colour)
instruction_label = Label(mainframe, text="Enter your name", font=("Courier", 20))
greeting_button = Button(mainframe, text ="FORMAT", font=("Courier", 20), command=generate_name_box)
clear_button = Button(mainframe, text="CLEAR", font=("Courier",20), command= clear_name_box)
input_string_var = StringVar()
input_entry = Entry(mainframe, textvariable=input_string_var)
#Grid the widgets
#############
root.minsize(450, 400)
mainframe.grid(padx = 50, pady = 50)
instruction_label.grid(row = 1, column = 1, sticky=W)
input_entry.grid(row = 2, column = 1, sticky=W)
greeting_button.grid(row = 3, column = 1, ipadx=55, ipady=10, sticky=W)
clear_button.grid(row=4,column = 1, ipadx= 55, ipady= 20, sticky=W)
name_label.grid(row = 5, column = 1, sticky=W)
root.mainloop()
A:
To clear the text in the Entry widget, you can use the delete method and specify the indices of the characters that you want to delete. For example, to delete all the text in the Entry widget, you can use the following code:
def clear_name_box():
input_entry.delete(0, 'end')
This code will delete all the text from the beginning (index 0) to the end ('end') of the text in the Entry widget.
To clear the text in the Label widget, you can use the config method to change the text attribute of the Label to an empty string. For example:
def clear_name_box():
input_entry.delete(0, 'end')
name_label.config(text="")
This code will clear the text in both the Entry and Label widgets when the clear_name_box function is called.
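One hedged caveat for the code in the question: name_label is bound to name_string_var through textvariable, so the variable can override config(text=""). Resetting the StringVars directly is the more reliable clear here:
def clear_name_box():
    # both widgets are driven by StringVars in the question's code,
    # so clearing the variables clears the widgets
    input_string_var.set("")
    name_string_var.set("")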
| add a clear button to the GUI which clears the output formatted by the user | I am trying to clear the output displayed. When clicked, the CLEAR button should 'clear' or remove any text written in the Entry box and any text displayed on the Label. While attempting this on my own I tried using the delete method and del, both of which did not remove the output when the button is pressed.
from tkinter import *
import random
def generate_name_box():
n = input_string_var.get()
s = ""
s += "+" + "-"*len(n) + "+\n"
s += "|" + n + "|\n"
s += "+" + "-"*len(n) + "+\n"
name_string_var.set(s)
def clear_name_box():
#MAIN
#Generate holding structures for GUI
root = Tk()
mainframe = Frame(root)
#Other variables
font_colour = '#EE34A2'
#Create the widgets and associated Vars
name_string_var = StringVar()
name_label = Label(mainframe, text = "", font = ("Courier", 50), textvariable=name_string_var, fg=font_colour)
instruction_label = Label(mainframe, text="Enter your name", font=("Courier", 20))
greeting_button = Button(mainframe, text ="FORMAT", font=("Courier", 20), command=generate_name_box)
clear_button = Button(mainframe, text="CLEAR", font=("Courier",20), command= clear_name_box)
input_string_var = StringVar()
input_entry = Entry(mainframe, textvariable=input_string_var)
#Grid the widgets
#############
root.minsize(450, 400)
mainframe.grid(padx = 50, pady = 50)
instruction_label.grid(row = 1, column = 1, sticky=W)
input_entry.grid(row = 2, column = 1, sticky=W)
greeting_button.grid(row = 3, column = 1, ipadx=55, ipady=10, sticky=W)
clear_button.grid(row=4,column = 1, ipadx= 55, ipady= 20, sticky=W)
name_label.grid(row = 5, column = 1, sticky=W)
root.mainloop()
| [
"To clear the text in the Entry widget, you can use the delete method and specify the indices of the characters that you want to delete. For example, to delete all the text in the Entry widget, you can use the following code:\ndef clear_name_box():\n input_entry.delete(0, 'end')\n\nThis code will delete all the text from the beginning (index 0) to the end ('end') of the text in the Entry widget.\nTo clear the text in the Label widget, you can use the config method to change the text attribute of the Label to an empty string. For example:\ndef clear_name_box():\n input_entry.delete(0, 'end')\n name_label.config(text=\"\")\n\nThis code will clear the text in both the Entry and Label widgets when the clear_name_box function is called.\n"
] | [
0
] | [] | [] | [
"python"
] | stackoverflow_0074680220_python.txt |
Q:
Django forms: how to use optional arguments in form class
I'm building a Django web-app which has page create and edit functionality. I create the page and edit the pages using 2 arguments: page title and page contents.
Since the edit and create code is very similar, except that the edit code doesn't let you change the title of the page, I want to make some code that can do both depending on the input.
This is the current code I'm using right now.
class createPageForm(forms.Form):
page_name = forms.CharField()
page_contents = forms.CharField(widget=forms.Textarea())
class editPageForm(forms.Form):
page_name = forms.CharField(disabled=True)
page_contents = forms.CharField(widget=forms.Textarea())
I know that if I wasn't using classes, but functions I could do something like this:
def PageForm(forms.Form, disabled=False):
page_name = forms.CharField(disabled=disabled)
page_contents = forms.CharField(widget=forms.Textarea())
PageForm(disabled=True)
PageForm(disabled=False)
That is the kind of functionality I'm looking for^^
I tried the following:
class PageForm(forms.Form):
def __init__(self, disabled=False):
self.page_name = forms.CharField(disabled=disabled)
self.page_contents = forms.CharField(widget=forms.Textarea())
class PageForm(forms.Form, disabled=False):
page_name = forms.CharField(disabled=disabled)
page_contents = forms.CharField(widget=forms.Textarea())
Both didn't work and got different errors I couldn't get around. I was hoping someone could lead me in the right direction, since I'm not very familiar with classes.
A:
You can work with:
class CreatePageForm(forms.Form):
page_name = forms.CharField()
page_contents = forms.CharField(widget=forms.Textarea())
def __init__(self, *args, disabled=False, **kwargs):
super().__init__(*args, **kwargs)
        self.fields['page_name'].disabled = disabled
and call it with:
CreatePageForm(disabled=True)
A:
PrintOrderDetails(orderNum: 31, productName: "Red Mug", sellerName: "Gift Shop");
PrintOrderDetails(productName: "Red Mug", sellerName: "Gift Shop", orderNum: 31);
| Django forms: how to use optional arguments in form class | I'm building a Django web-app which has page create and edit functionality. I create the page and edit the pages using 2 arguments: page title and page contents.
Since the edit and create code is very similar, except that the edit code doesn't let you change the title of the page, I want to make some code that can do both depending on the input.
This is the current code I'm using right now.
class createPageForm(forms.Form):
page_name = forms.CharField()
page_contents = forms.CharField(widget=forms.Textarea())
class editPageForm(forms.Form):
page_name = forms.CharField(disabled=True)
page_contents = forms.CharField(widget=forms.Textarea())
I know that if I wasn't using classes, but functions I could do something like this:
def PageForm(forms.Form, disabled=False):
page_name = forms.CharField(disabled=disabled)
page_contents = forms.CharField(widget=forms.Textarea())
PageForm(disabled=True)
PageForm(disabled=False)
That is the kind of functionality I'm looking for^^
I tried the following:
class PageForm(forms.Form):
def __init__(self, disabled=False):
self.page_name = forms.CharField(disabled=disabled)
self.page_contents = forms.CharField(widget=forms.Textarea())
class PageForm(forms.Form, disabled=False):
page_name = forms.CharField(disabled=disabled)
page_contents = forms.CharField(widget=forms.Textarea())
Both didn't work and got different errors I couldn't get around. I was hoping someone could lead me in the right direction, since I'm not very familiar with classes.
| [
"You can work with:\nclass CreatePageForm(forms.Form):\n page_name = forms.CharField()\n page_contents = forms.CharField(widget=forms.Textarea())\n\n def __init__(self, *args, disabled=False, **kwargs):\n super().__init__(*args, **kwargs)\n self.fields['page_contents'].disabled = disabled\nand call it with:\nCreatePageForm(disabled=True)\n\n",
"PrintOrderDetails(orderNum: 31, productName: \"Red Mug\", sellerName: \"Gift Shop\");\nPrintOrderDetails(productName: \"Red Mug\", sellerName: \"Gift Shop\", orderNum: 31);\n"
] | [
0,
0
] | [] | [] | [
"django",
"forms",
"python"
] | stackoverflow_0074680120_django_forms_python.txt |
Q:
Problem with pymunk while running in Virtual Studio
I tried using the code below so I can run a simulation of an object hitting the ground, but it just prints:
draw_polygon ([Vec2d(55.0, -4779353554820.233), Vec2d(55.0, -4779353554810.233), Vec2d(45.0, -4779353554810.233), Vec2d(45.0, -4779353554820.233)], 0.0, SpaceDebugColor(r=44.0, g=62.0, b=80.0, a=255.0), SpaceDebugColor(r=52.0, g=152.0, b=219.0, a=255.0))
import pymunk
# These commands set up the space for our simulation
space=pymunk.Space()
space.gravity = 0,-9.80665
body = pymunk.Body()
body.position= 50,100
# These commands create a box that attaches to the body and configure its settings
poly = pymunk.Poly.create_box(body)
poly.mass = 10
space.add(body, poly)
#Creates and prints the scene
print_options = pymunk.SpaceDebugDrawOptions()
speed=int(input("Speed:"))
while True:
space.step(speed)
space.debug_draw(print_options)
I'm trying to run this in Visual Studio, but it just prints:
draw_polygon ([Vec2d(55.0, -4779353554820.233), Vec2d(55.0, -4779353554810.233), Vec2d(45.0, -4779353554810.233), Vec2d(45.0, -4779353554820.233)], 0.0, SpaceDebugColor(r=44.0, g=62.0, b=80.0, a=255.0), SpaceDebugColor(r=52.0, g=152.0, b=219.0, a=255.0))
Is there any package for a graphical environment?
A:
Yes, by default the debug drawing will just print out the result (it's made this way so that you can use it without installing anything else, and you can even run it from the terminal).
However, it also comes with helper modules for the two libraries pygame and pyglet, documented here:
http://www.pymunk.org/en/latest/pymunk.pygame_util.html
and here
http://www.pymunk.org/en/latest/pymunk.pyglet_util.html
Both work in more or less the same way, just use their implementation of the SpaceDebugDrawOptions class instead of the default one.
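A minimal sketch of the pygame variant, adapted from the code in the question (window size, start position and frame rate are illustrative):
import pygame
import pymunk
import pymunk.pygame_util

pymunk.pygame_util.positive_y_is_up = True  # match pymunk's y-up convention

pygame.init()
screen = pygame.display.set_mode((600, 600))
clock = pygame.time.Clock()
draw_options = pymunk.pygame_util.DrawOptions(screen)

space = pymunk.Space()
space.gravity = 0, -9.80665
body = pymunk.Body()
body.position = 300, 500
poly = pymunk.Poly.create_box(body)
poly.mass = 10
space.add(body, poly)

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
    screen.fill((255, 255, 255))
    space.debug_draw(draw_options)  # draws the shapes instead of printing them
    space.step(1 / 60)              # small, fixed time step
    pygame.display.flip()
    clock.tick(60)
pygame.quit()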
| Problem with pymunk while running in Virtual Studio | I tried using the code so I can run a simulation of an object hitting on the ground but it just says
draw_polygon ([Vec2d(55.0, -4779353554820.233), Vec2d(55.0, -4779353554810.233), Vec2d(45.0, -4779353554810.233), Vec2d(45.0, -4779353554820.233)], 0.0, SpaceDebugColor(r=44.0, g=62.0, b=80.0, a=255.0), SpaceDebugColor(r=52.0, g=152.0, b=219.0, a=255.0))
import pymunk
#This comands sets the scene for our prosomition
space=pymunk.Space()
space.gravity = 0,-9.80665
body = pymunk.Body()
body.position= 50,100
#This comands create a box that attaches to the body and creates its settings
poly = pymunk.Poly.create_box(body)
poly.mass = 10
space.add(body, poly)
#Creates and prints the scene
print_options = pymunk.SpaceDebugDrawOptions()
speed=int(input("Speed:"))
while True:
space.step(speed)
space.debug_draw(print_options)
Im trying to run this on my visual studio but it's just saying:
draw_polygon ([Vec2d(55.0, -4779353554820.233), Vec2d(55.0, -4779353554810.233), Vec2d(45.0, -4779353554810.233), Vec2d(45.0, -4779353554820.233)], 0.0, SpaceDebugColor(r=44.0, g=62.0, b=80.0, a=255.0), SpaceDebugColor(r=52.0, g=152.0, b=219.0, a=255.0))
Is there any package for an graphical enviroment ?
| [
"Yes, by default the debug drawing will just print out the result (its made in this way so that you can use it without installing anything else and even run it from the terminal).\nHowever, it also comes with a module for the two libraries pygame and pyglet that are documented here:\nhttp://www.pymunk.org/en/latest/pymunk.pygame_util.html\nand here\nhttp://www.pymunk.org/en/latest/pymunk.pyglet_util.html\nBoth work in more or less the same way, just use their implementation of the SpaceDebugDrawOptions class instead of the default one.\n"
] | [
0
] | [] | [] | [
"pymunk",
"python"
] | stackoverflow_0074670356_pymunk_python.txt |
Q:
How can I make a dictionary (dict) from separate lists of keys and values?
I want to combine these:
keys = ['name', 'age', 'food']
values = ['Monty', 42, 'spam']
Into a single dictionary:
{'name': 'Monty', 'age': 42, 'food': 'spam'}
A:
Like this:
keys = ['a', 'b', 'c']
values = [1, 2, 3]
dictionary = dict(zip(keys, values))
print(dictionary) # {'a': 1, 'b': 2, 'c': 3}
Voila :-) The pairwise dict constructor and zip function are awesomely useful.
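One caveat worth noting: if the two lists differ in length, zip silently stops at the shorter one. On Python 3.10+ you can ask for an error instead (a sketch):
keys = ['a', 'b', 'c']
values = [1, 2]  # one value missing
dictionary = dict(zip(keys, values, strict=True))  # raises ValueError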
A:
Imagine that you have:
keys = ('name', 'age', 'food')
values = ('Monty', 42, 'spam')
What is the simplest way to produce the following dictionary?
dict = {'name' : 'Monty', 'age' : 42, 'food' : 'spam'}
Most performant, dict constructor with zip
new_dict = dict(zip(keys, values))
In Python 3, zip now returns a lazy iterator, and this is now the most performant approach.
dict(zip(keys, values)) does require the one-time global lookup each for dict and zip, but it doesn't form any unnecessary intermediate data-structures or have to deal with local lookups in function application.
Runner-up, dict comprehension:
A close runner-up to using the dict constructor is to use the native syntax of a dict comprehension (not a list comprehension, as others have mistakenly put it):
new_dict = {k: v for k, v in zip(keys, values)}
Choose this when you need to map or filter based on the keys or value.
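For instance, a filtered variant might look like this (a sketch; the condition is illustrative):
new_dict = {k: v for k, v in zip(keys, values) if v is not None}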
In Python 2, zip returns a list; to avoid creating an unnecessary list, use izip instead (aliasing it to zip reduces code changes when you move to Python 3).
from itertools import izip as zip
So that is still (2.7):
new_dict = {k: v for k, v in zip(keys, values)}
Python 2, ideal for <= 2.6
izip from itertools becomes zip in Python 3. izip is better than zip for Python 2 (because it avoids the unnecessary list creation), and ideal for 2.6 or below:
from itertools import izip
new_dict = dict(izip(keys, values))
Result for all cases:
In all cases:
>>> new_dict
{'age': 42, 'name': 'Monty', 'food': 'spam'}
Explanation:
If we look at the help on dict we see that it takes a variety of forms of arguments:
>>> help(dict)
class dict(object)
| dict() -> new empty dictionary
| dict(mapping) -> new dictionary initialized from a mapping object's
| (key, value) pairs
| dict(iterable) -> new dictionary initialized as if via:
| d = {}
| for k, v in iterable:
| d[k] = v
| dict(**kwargs) -> new dictionary initialized with the name=value pairs
| in the keyword argument list. For example: dict(one=1, two=2)
The optimal approach is to use an iterable while avoiding creating unnecessary data structures. In Python 2, zip creates an unnecessary list:
>>> zip(keys, values)
[('name', 'Monty'), ('age', 42), ('food', 'spam')]
In Python 3, the equivalent would be:
>>> list(zip(keys, values))
[('name', 'Monty'), ('age', 42), ('food', 'spam')]
and Python 3's zip merely creates an iterable object:
>>> zip(keys, values)
<zip object at 0x7f0e2ad029c8>
Since we want to avoid creating unnecessary data structures, we usually want to avoid Python 2's zip (since it creates an unnecessary list).
Less performant alternatives:
This is a generator expression being passed to the dict constructor:
generator_expression = ((k, v) for k, v in zip(keys, values))
dict(generator_expression)
or equivalently:
dict((k, v) for k, v in zip(keys, values))
And this is a list comprehension being passed to the dict constructor:
dict([(k, v) for k, v in zip(keys, values)])
In the first two cases, an extra layer of non-operative (thus unnecessary) computation is placed over the zip iterable, and in the case of the list comprehension, an extra list is unnecessarily created. I would expect all of them to be less performant, and certainly not more-so.
Performance review:
In 64 bit Python 3.8.2 provided by Nix, on Ubuntu 16.04, ordered from fastest to slowest:
>>> min(timeit.repeat(lambda: dict(zip(keys, values))))
0.6695233230129816
>>> min(timeit.repeat(lambda: {k: v for k, v in zip(keys, values)}))
0.6941362579818815
>>> min(timeit.repeat(lambda: {keys[i]: values[i] for i in range(len(keys))}))
0.8782548159942962
>>>
>>> min(timeit.repeat(lambda: dict([(k, v) for k, v in zip(keys, values)])))
1.077607496001292
>>> min(timeit.repeat(lambda: dict((k, v) for k, v in zip(keys, values))))
1.1840861019445583
dict(zip(keys, values)) wins even with small sets of keys and values, but for larger sets, the differences in performance will become greater.
A commenter said:
min seems like a bad way to compare performance. Surely mean and/or max would be much more useful indicators for real usage.
We use min because these algorithms are deterministic. We want to know the performance of the algorithms under the best conditions possible.
If the operating system hangs for any reason, it has nothing to do with what we're trying to compare, so we need to exclude those kinds of results from our analysis.
If we used mean, those kinds of events would skew our results greatly, and if we used max we will only get the most extreme result - the one most likely affected by such an event.
A commenter also says:
In python 3.6.8, using mean values, the dict comprehension is indeed still faster, by about 30% for these small lists. For larger lists (10k random numbers), the dict call is about 10% faster.
I presume we mean dict(zip(... with 10k random numbers. That does sound like a fairly unusual use case. It does make sense that the most direct calls would dominate in large datasets, and I wouldn't be surprised if OS hangs dominated given how long it would take to run that test, further skewing your numbers. And if you use mean or max, I would consider your results meaningless.
Let's use a more realistic size on our top examples:
import numpy
import timeit
l1 = list(numpy.random.random(100))
l2 = list(numpy.random.random(100))
And we see here that dict(zip(... does indeed run faster for larger datasets by about 20%.
>>> min(timeit.repeat(lambda: {k: v for k, v in zip(l1, l2)}))
9.698965263989521
>>> min(timeit.repeat(lambda: dict(zip(l1, l2))))
7.9965161079890095
A:
Try this:
>>> import itertools
>>> keys = ('name', 'age', 'food')
>>> values = ('Monty', 42, 'spam')
>>> adict = dict(itertools.izip(keys,values))
>>> adict
{'food': 'spam', 'age': 42, 'name': 'Monty'}
In Python 2, it's also more economical in memory consumption compared to zip.
A:
keys = ('name', 'age', 'food')
values = ('Monty', 42, 'spam')
out = dict(zip(keys, values))
Output:
{'food': 'spam', 'age': 42, 'name': 'Monty'}
A:
You can also use dictionary comprehensions in Python ≥ 2.7:
>>> keys = ('name', 'age', 'food')
>>> values = ('Monty', 42, 'spam')
>>> {k: v for k, v in zip(keys, values)}
{'food': 'spam', 'age': 42, 'name': 'Monty'}
A:
A more natural way is to use dictionary comprehension
keys = ('name', 'age', 'food')
values = ('Monty', 42, 'spam')
dict = {keys[i]: values[i] for i in range(len(keys))}
A:
If you need to transform keys or values before creating a dictionary then a generator expression could be used. Example:
>>> adict = dict((str(k), v) for k, v in zip(['a', 1, 'b'], [2, 'c', 3]))
Take a look Code Like a Pythonista: Idiomatic Python.
A:
with Python 3.x, goes for dict comprehensions
keys = ('name', 'age', 'food')
values = ('Monty', 42, 'spam')
dic = {k:v for k,v in zip(keys, values)}
print(dic)
More on dict comprehensions here, an example is there:
>>> print {i : chr(65+i) for i in range(4)}
{0 : 'A', 1 : 'B', 2 : 'C', 3 : 'D'}
A:
For those who need simple code and aren’t familiar with zip:
List1 = ['This', 'is', 'a', 'list']
List2 = ['Put', 'this', 'into', 'dictionary']
This can be done by one line of code:
d = {List1[n]: List2[n] for n in range(len(List1))}
A:
you can use this below code:
dict(zip(['name', 'age', 'food'], ['Monty', 42, 'spam']))
But make sure the two lists have the same length; if they don't, the zip function truncates to the shorter one.
A:
2018-04-18
The best solution is still:
In [92]: keys = ('name', 'age', 'food')
...: values = ('Monty', 42, 'spam')
...:
In [93]: dt = dict(zip(keys, values))
In [94]: dt
Out[94]: {'age': 42, 'food': 'spam', 'name': 'Monty'}
Transpose it:
lst = [('name', 'Monty'), ('age', 42), ('food', 'spam')]
keys, values = zip(*lst)
In [101]: keys
Out[101]: ('name', 'age', 'food')
In [102]: values
Out[102]: ('Monty', 42, 'spam')
A:
Here is also an example of adding list values to your dictionary
list1 = ["Name", "Surname", "Age"]
list2 = [["Cyd", "JEDD", "JESS"], ["DEY", "AUDIJE", "PONGARON"], [21, 32, 47]]
dic = dict(zip(list1, list2))
print(dic)
Always make sure your keys (list1) are passed as the first parameter.
{'Name': ['Cyd', 'JEDD', 'JESS'], 'Surname': ['DEY', 'AUDIJE', 'PONGARON'], 'Age': [21, 32, 47]}
A:
I had this doubt while trying to solve a graph-related problem. I needed to define an empty adjacency list and wanted to initialize all the nodes with an empty list, and I wondered whether a zip operation would be worth it over simple key-value assignment; after all, most of the time the time factor is an important tiebreaker. So I ran timeit for both approaches.
import timeit
def dictionary_creation(n_nodes):
dummy_dict = dict()
for node in range(n_nodes):
dummy_dict[node] = []
return dummy_dict
def dictionary_creation_1(n_nodes):
keys = list(range(n_nodes))
values = [[] for i in range(n_nodes)]
graph = dict(zip(keys, values))
return graph
def wrapper(func, *args, **kwargs):
def wrapped():
return func(*args, **kwargs)
return wrapped
n_nodes = 10_000_000  # the size the timings below were run with
iteration = wrapper(dictionary_creation, n_nodes)
shorthand = wrapper(dictionary_creation_1, n_nodes)
for trial in range(1, 8):
    print(f'Iteration: {timeit.timeit(iteration, number=trial)}\nShorthand: {timeit.timeit(shorthand, number=trial)}')
For n_nodes = 10,000,000
I get,
Iteration: 2.825081646999024
Shorthand: 3.535717916001886
Iteration: 5.051560923002398
Shorthand: 6.255070794999483
Iteration: 6.52859034499852
Shorthand: 8.221581164998497
Iteration: 8.683652416999394
Shorthand: 12.599181543999293
Iteration: 11.587241565001023
Shorthand: 15.27298851100204
Iteration: 14.816342867001367
Shorthand: 17.162912737003353
Iteration: 16.645022411001264
Shorthand: 19.976680120998935
You can clearly see that after a certain point, the iteration approach at the n-th step overtakes the time taken by the shorthand approach at the (n-1)-th step.
A:
It can be done in the following way.
keys = ['name', 'age', 'food']
values = ['Monty', 42, 'spam']
dict = {}
for i in range(len(keys)):
dict[keys[i]] = values[i]
print(dict)
{'name': 'Monty', 'age': 42, 'food': 'spam'}
A:
Summing up all the answers:
l = [1, 5, 8, 9]
ll = [3, 7, 10, 11]
zip:
dict(zip(l,ll)) # {1: 3, 5: 7, 8: 10, 9: 11}
#if you want to play with key or value @recommended
{k:v*10 for k, v in zip(l, ll)} #{1: 30, 5: 70, 8: 100, 9: 110}
counter:
d = {}
c=0
for k in l:
d[k] = ll[c] #setting up keys from the second list values
c += 1
print(d)
{1: 3, 5: 7, 8: 10, 9: 11}
enumerate:
d = {}
for i,k in enumerate(l):
d[k] = ll[i]
print(d)
{1: 3, 5: 7, 8: 10, 9: 11}
A:
Solution as dictionary comprehension with enumerate:
dict = {item : values[index] for index, item in enumerate(keys)}
Solution as for loop with enumerate:
dict = {}
for index, item in enumerate(keys):
dict[item] = values[index]
A:
If you are working with more than 1 set of values and wish to have a list of dicts you can use this:
def as_dict_list(data: list, columns: list):
return [dict((zip(columns, row))) for row in data]
A real-life example would be a list of tuples from a db query paired with a tuple of columns from the same query; the other answers only cover the 1-to-1 case.
A:
keys = ['name', 'age', 'food']
values = ['Monty', 42, 'spam']
dic = {}
c = 0
for i in keys:
dic[i] = values[c]
c += 1
print(dic)
{'name': 'Monty', 'age': 42, 'food': 'spam'}
A:
import pprint

# Sample inputs inferred from the printed output below.
p = ['A', 'B', 'C']
q = [5, 2, 7]
r = ['M', 'F', 'M']
s = ['Sovabazaar', 'Shyambazaar', 'Bagbazaar', 'Hatkhola']

def makeDictUsingAlternateLists1(**rest):
    print("*rest.keys() : ", *rest.keys())
    print("rest.keys() : ", rest.keys())
    print("*rest.values() : ", *rest.values())
    print("**rest.keys() : ", rest.keys())
    print("**rest.values() : ", rest.values())
    [print(a) for a in zip(*rest.values())]

    [print(dict(zip(rest.keys(), a))) for a in zip(*rest.values())]
    print("...")

    finalRes = [dict(zip(rest.keys(), a)) for a in zip(*rest.values())]
    return finalRes

l = makeDictUsingAlternateLists1(p=p, q=q, r=r, s=s)
pprint.pprint(l)
"""
*rest.keys() : p q r s
rest.keys() : dict_keys(['p', 'q', 'r', 's'])
*rest.values() : ['A', 'B', 'C'] [5, 2, 7] ['M', 'F', 'M'] ['Sovabazaar', 'Shyambazaar', 'Bagbazaar', 'Hatkhola']
**rest.keys() : dict_keys(['p', 'q', 'r', 's'])
**rest.values() : dict_values([['A', 'B', 'C'], [5, 2, 7], ['M', 'F', 'M'], ['Sovabazaar', 'Shyambazaar', 'Bagbazaar', 'Hatkhola']])
('A', 5, 'M', 'Sovabazaar')
('B', 2, 'F', 'Shyambazaar')
('C', 7, 'M', 'Bagbazaar')
{'p': 'A', 'q': 5, 'r': 'M', 's': 'Sovabazaar'}
{'p': 'B', 'q': 2, 'r': 'F', 's': 'Shyambazaar'}
{'p': 'C', 'q': 7, 'r': 'M', 's': 'Bagbazaar'}
...
[{'p': 'A', 'q': 5, 'r': 'M', 's': 'Sovabazaar'},
{'p': 'B', 'q': 2, 'r': 'F', 's': 'Shyambazaar'},
{'p': 'C', 'q': 7, 'r': 'M', 's': 'Bagbazaar'}]
"""
| How can I make a dictionary (dict) from separate lists of keys and values? | I want to combine these:
keys = ['name', 'age', 'food']
values = ['Monty', 42, 'spam']
Into a single dictionary:
{'name': 'Monty', 'age': 42, 'food': 'spam'}
| [
"Like this:\nkeys = ['a', 'b', 'c']\nvalues = [1, 2, 3]\ndictionary = dict(zip(keys, values))\nprint(dictionary) # {'a': 1, 'b': 2, 'c': 3}\n\nVoila :-) The pairwise dict constructor and zip function are awesomely useful.\n",
"\nImagine that you have:\nkeys = ('name', 'age', 'food')\nvalues = ('Monty', 42, 'spam')\n\nWhat is the simplest way to produce the following dictionary ?\ndict = {'name' : 'Monty', 'age' : 42, 'food' : 'spam'}\n\n\nMost performant, dict constructor with zip\nnew_dict = dict(zip(keys, values))\n\nIn Python 3, zip now returns a lazy iterator, and this is now the most performant approach.\ndict(zip(keys, values)) does require the one-time global lookup each for dict and zip, but it doesn't form any unnecessary intermediate data-structures or have to deal with local lookups in function application.\nRunner-up, dict comprehension:\nA close runner-up to using the dict constructor is to use the native syntax of a dict comprehension (not a list comprehension, as others have mistakenly put it):\nnew_dict = {k: v for k, v in zip(keys, values)}\n\nChoose this when you need to map or filter based on the keys or value.\nIn Python 2, zip returns a list, to avoid creating an unnecessary list, use izip instead (aliased to zip can reduce code changes when you move to Python 3).\nfrom itertools import izip as zip\n\nSo that is still (2.7):\nnew_dict = {k: v for k, v in zip(keys, values)}\n\nPython 2, ideal for <= 2.6\nizip from itertools becomes zip in Python 3. izip is better than zip for Python 2 (because it avoids the unnecessary list creation), and ideal for 2.6 or below:\nfrom itertools import izip\nnew_dict = dict(izip(keys, values))\n\nResult for all cases:\nIn all cases:\n>>> new_dict\n{'age': 42, 'name': 'Monty', 'food': 'spam'}\n\nExplanation:\nIf we look at the help on dict we see that it takes a variety of forms of arguments:\n\n>>> help(dict)\n\nclass dict(object)\n | dict() -> new empty dictionary\n | dict(mapping) -> new dictionary initialized from a mapping object's\n | (key, value) pairs\n | dict(iterable) -> new dictionary initialized as if via:\n | d = {}\n | for k, v in iterable:\n | d[k] = v\n | dict(**kwargs) -> new dictionary initialized with the name=value pairs\n | in the keyword argument list. For example: dict(one=1, two=2)\n\n\nThe optimal approach is to use an iterable while avoiding creating unnecessary data structures. In Python 2, zip creates an unnecessary list:\n>>> zip(keys, values)\n[('name', 'Monty'), ('age', 42), ('food', 'spam')]\n\nIn Python 3, the equivalent would be:\n>>> list(zip(keys, values))\n[('name', 'Monty'), ('age', 42), ('food', 'spam')]\n\nand Python 3's zip merely creates an iterable object:\n>>> zip(keys, values)\n<zip object at 0x7f0e2ad029c8>\n\nSince we want to avoid creating unnecessary data structures, we usually want to avoid Python 2's zip (since it creates an unnecessary list).\nLess performant alternatives:\nThis is a generator expression being passed to the dict constructor:\ngenerator_expression = ((k, v) for k, v in zip(keys, values))\ndict(generator_expression)\n\nor equivalently:\ndict((k, v) for k, v in zip(keys, values))\n\nAnd this is a list comprehension being passed to the dict constructor:\ndict([(k, v) for k, v in zip(keys, values)])\n\nIn the first two cases, an extra layer of non-operative (thus unnecessary) computation is placed over the zip iterable, and in the case of the list comprehension, an extra list is unnecessarily created. 
I would expect all of them to be less performant, and certainly not more-so.\nPerformance review:\nIn 64 bit Python 3.8.2 provided by Nix, on Ubuntu 16.04, ordered from fastest to slowest:\n>>> min(timeit.repeat(lambda: dict(zip(keys, values))))\n0.6695233230129816\n>>> min(timeit.repeat(lambda: {k: v for k, v in zip(keys, values)}))\n0.6941362579818815\n>>> min(timeit.repeat(lambda: {keys[i]: values[i] for i in range(len(keys))}))\n0.8782548159942962\n>>> \n>>> min(timeit.repeat(lambda: dict([(k, v) for k, v in zip(keys, values)])))\n1.077607496001292\n>>> min(timeit.repeat(lambda: dict((k, v) for k, v in zip(keys, values))))\n1.1840861019445583\n\ndict(zip(keys, values)) wins even with small sets of keys and values, but for larger sets, the differences in performance will become greater.\nA commenter said:\n\nmin seems like a bad way to compare performance. Surely mean and/or max would be much more useful indicators for real usage.\n\nWe use min because these algorithms are deterministic. We want to know the performance of the algorithms under the best conditions possible. \nIf the operating system hangs for any reason, it has nothing to do with what we're trying to compare, so we need to exclude those kinds of results from our analysis.\nIf we used mean, those kinds of events would skew our results greatly, and if we used max we will only get the most extreme result - the one most likely affected by such an event.\nA commenter also says:\n\nIn python 3.6.8, using mean values, the dict comprehension is indeed still faster, by about 30% for these small lists. For larger lists (10k random numbers), the dict call is about 10% faster. \n\nI presume we mean dict(zip(... with 10k random numbers. That does sound like a fairly unusual use case. It does makes sense that the most direct calls would dominate in large datasets, and I wouldn't be surprised if OS hangs are dominating given how long it would take to run that test, further skewing your numbers. And if you use mean or max I would consider your results meaningless.\nLet's use a more realistic size on our top examples:\nimport numpy\nimport timeit\nl1 = list(numpy.random.random(100))\nl2 = list(numpy.random.random(100))\n\nAnd we see here that dict(zip(... does indeed run faster for larger datasets by about 20%.\n>>> min(timeit.repeat(lambda: {k: v for k, v in zip(l1, l2)}))\n9.698965263989521\n>>> min(timeit.repeat(lambda: dict(zip(l1, l2))))\n7.9965161079890095\n\n",
"Try this:\n>>> import itertools\n>>> keys = ('name', 'age', 'food')\n>>> values = ('Monty', 42, 'spam')\n>>> adict = dict(itertools.izip(keys,values))\n>>> adict\n{'food': 'spam', 'age': 42, 'name': 'Monty'}\n\nIn Python 2, it's also more economical in memory consumption compared to zip.\n",
"keys = ('name', 'age', 'food')\nvalues = ('Monty', 42, 'spam')\nout = dict(zip(keys, values))\n\nOutput:\n{'food': 'spam', 'age': 42, 'name': 'Monty'}\n\n",
"You can also use dictionary comprehensions in Python ≥ 2.7:\n>>> keys = ('name', 'age', 'food')\n>>> values = ('Monty', 42, 'spam')\n>>> {k: v for k, v in zip(keys, values)}\n{'food': 'spam', 'age': 42, 'name': 'Monty'}\n\n",
"A more natural way is to use dictionary comprehension \nkeys = ('name', 'age', 'food')\nvalues = ('Monty', 42, 'spam') \ndict = {keys[i]: values[i] for i in range(len(keys))}\n\n",
"If you need to transform keys or values before creating a dictionary then a generator expression could be used. Example:\n>>> adict = dict((str(k), v) for k, v in zip(['a', 1, 'b'], [2, 'c', 3])) \n\nTake a look Code Like a Pythonista: Idiomatic Python.\n",
"with Python 3.x, goes for dict comprehensions\nkeys = ('name', 'age', 'food')\nvalues = ('Monty', 42, 'spam')\n\ndic = {k:v for k,v in zip(keys, values)}\n\nprint(dic)\n\nMore on dict comprehensions here, an example is there:\n>>> print {i : chr(65+i) for i in range(4)}\n {0 : 'A', 1 : 'B', 2 : 'C', 3 : 'D'}\n\n",
"For those who need simple code and aren’t familiar with zip:\nList1 = ['This', 'is', 'a', 'list']\nList2 = ['Put', 'this', 'into', 'dictionary']\n\nThis can be done by one line of code:\nd = {List1[n]: List2[n] for n in range(len(List1))}\n\n",
"you can use this below code:\ndict(zip(['name', 'age', 'food'], ['Monty', 42, 'spam']))\n\nBut make sure that length of the lists will be same.if length is not same.then zip function turncate the longer one.\n",
"\n2018-04-18\n\nThe best solution is still:\nIn [92]: keys = ('name', 'age', 'food')\n...: values = ('Monty', 42, 'spam')\n...: \n\nIn [93]: dt = dict(zip(keys, values))\nIn [94]: dt\nOut[94]: {'age': 42, 'food': 'spam', 'name': 'Monty'}\n\nTranpose it:\n lst = [('name', 'Monty'), ('age', 42), ('food', 'spam')]\n keys, values = zip(*lst)\n In [101]: keys\n Out[101]: ('name', 'age', 'food')\n In [102]: values\n Out[102]: ('Monty', 42, 'spam')\n\n",
"Here is also an example of adding a list value in you dictionary\nlist1 = [\"Name\", \"Surname\", \"Age\"]\nlist2 = [[\"Cyd\", \"JEDD\", \"JESS\"], [\"DEY\", \"AUDIJE\", \"PONGARON\"], [21, 32, 47]]\ndic = dict(zip(list1, list2))\nprint(dic)\n\nalways make sure the your \"Key\"(list1) is always in the first parameter.\n{'Name': ['Cyd', 'JEDD', 'JESS'], 'Surname': ['DEY', 'AUDIJE', 'PONGARON'], 'Age': [21, 32, 47]}\n\n",
"I had this doubt while I was trying to solve a graph-related problem. The issue I had was I needed to define an empty adjacency list and wanted to initialize all the nodes with an empty list, that's when I thought how about I check if it is fast enough, I mean if it will be worth doing a zip operation rather than simple assignment key-value pair. After all most of the times, the time factor is an important ice breaker. So I performed timeit operation for both approaches.\nimport timeit\ndef dictionary_creation(n_nodes):\n dummy_dict = dict()\n for node in range(n_nodes):\n dummy_dict[node] = []\n return dummy_dict\n\n\ndef dictionary_creation_1(n_nodes):\n keys = list(range(n_nodes))\n values = [[] for i in range(n_nodes)]\n graph = dict(zip(keys, values))\n return graph\n\n\ndef wrapper(func, *args, **kwargs):\n def wrapped():\n return func(*args, **kwargs)\n return wrapped\n\niteration = wrapper(dictionary_creation, n_nodes)\nshorthand = wrapper(dictionary_creation_1, n_nodes)\n\nfor trail in range(1, 8):\n print(f'Itertion: {timeit.timeit(iteration, number=trails)}\\nShorthand: {timeit.timeit(shorthand, number=trails)}')\n\nFor n_nodes = 10,000,000\nI get,\nIteration: 2.825081646999024\nShorthand: 3.535717916001886\nIteration: 5.051560923002398\nShorthand: 6.255070794999483\nIteration: 6.52859034499852\nShorthand: 8.221581164998497\nIteration: 8.683652416999394\nShorthand: 12.599181543999293\nIteration: 11.587241565001023\nShorthand: 15.27298851100204\nIteration: 14.816342867001367\nShorthand: 17.162912737003353\nIteration: 16.645022411001264\nShorthand: 19.976680120998935\nYou can clearly see after a certain point, iteration approach at n_th step overtakes the time taken by shorthand approach at n-1_th step.\n",
"It can be done by the following way.\nkeys = ['name', 'age', 'food']\nvalues = ['Monty', 42, 'spam'] \n\ndict = {}\n\nfor i in range(len(keys)):\n dict[keys[i]] = values[i]\n \nprint(dict)\n\n{'name': 'Monty', 'age': 42, 'food': 'spam'}\n\n",
"All answers sum up:\nl = [1, 5, 8, 9]\nll = [3, 7, 10, 11]\n\nzip:\ndict(zip(l,ll)) # {1: 3, 5: 7, 8: 10, 9: 11}\n\n#if you want to play with key or value @recommended\n\n{k:v*10 for k, v in zip(l, ll)} #{1: 30, 5: 70, 8: 100, 9: 110}\n\ncounter:\nd = {}\nc=0\nfor k in l:\n d[k] = ll[c] #setting up keys from the second list values\n c += 1\nprint(d)\n{1: 3, 5: 7, 8: 10, 9: 11}\n\n\nenumerate:\nd = {}\nfor i,k in enumerate(l):\n d[k] = ll[i]\nprint(d)\n{1: 3, 5: 7, 8: 10, 9: 11}\n\n",
"Solution as dictionary comprehension with enumerate:\ndict = {item : values[index] for index, item in enumerate(keys)}\n\nSolution as for loop with enumerate:\ndict = {}\nfor index, item in enumerate(keys):\n dict[item] = values[index]\n\n",
"If you are working with more than 1 set of values and wish to have a list of dicts you can use this:\ndef as_dict_list(data: list, columns: list):\n return [dict((zip(columns, row))) for row in data]\n\nReal-life example would be a list of tuples from a db query paired to a tuple of columns from the same query. Other answers only provided for 1 to 1.\n",
"keys = ['name', 'age', 'food']\nvalues = ['Monty', 42, 'spam']\ndic = {}\nc = 0\nfor i in keys:\n dic[i] = values[c]\n c += 1\n\nprint(dic)\n{'name': 'Monty', 'age': 42, 'food': 'spam'}\n\n",
" import pprint\n def makeDictUsingAlternateLists1(**rest):\n print(\"*rest.keys() : \",*rest.keys())\n print(\"rest.keys() : \",rest.keys())\n print(\"*rest.values() : \",*rest.values())\n print(\"**rest.keys() : \",rest.keys())\n print(\"**rest.values() : \",rest.values())\n [print(a) for a in zip(*rest.values())]\n \n [ print(dict(zip(rest.keys(),a))) for a in zip(*rest.values())]\n print(\"...\")\n \n \n finalRes= [ dict( zip( rest.keys(),a)) for a in zip(*rest.values())] \n return finalRes\n \n l = makeDictUsingAlternateLists1(p=p,q=q,r=r,s=s)\n pprint.pprint(l) \n\"\"\"\n*rest.keys() : p q r s\nrest.keys() : dict_keys(['p', 'q', 'r', 's'])\n*rest.values() : ['A', 'B', 'C'] [5, 2, 7] ['M', 'F', 'M'] ['Sovabazaar', 'Shyambazaar', 'Bagbazaar', 'Hatkhola']\n**rest.keys() : dict_keys(['p', 'q', 'r', 's'])\n**rest.values() : dict_values([['A', 'B', 'C'], [5, 2, 7], ['M', 'F', 'M'], ['Sovabazaar', 'Shyambazaar', 'Bagbazaar', 'Hatkhola']])\n('A', 5, 'M', 'Sovabazaar')\n('B', 2, 'F', 'Shyambazaar')\n('C', 7, 'M', 'Bagbazaar')\n{'p': 'A', 'q': 5, 'r': 'M', 's': 'Sovabazaar'}\n{'p': 'B', 'q': 2, 'r': 'F', 's': 'Shyambazaar'}\n{'p': 'C', 'q': 7, 'r': 'M', 's': 'Bagbazaar'}\n...\n[{'p': 'A', 'q': 5, 'r': 'M', 's': 'Sovabazaar'},\n {'p': 'B', 'q': 2, 'r': 'F', 's': 'Shyambazaar'},\n {'p': 'C', 'q': 7, 'r': 'M', 's': 'Bagbazaar'}]\n\"\"\"\n\n"
] | [
2791,
220,
134,
40,
31,
19,
15,
11,
10,
3,
3,
2,
2,
1,
1,
0,
0,
0,
0
] | [
"method without zip function\nl1 = [1,2,3,4,5]\nl2 = ['a','b','c','d','e']\nd1 = {}\nfor l1_ in l1:\n for l2_ in l2:\n d1[l1_] = l2_\n l2.remove(l2_)\n break \n\nprint (d1)\n\n\n{1: 'd', 2: 'b', 3: 'e', 4: 'a', 5: 'c'}\n\n",
"Although there are multiple ways of doing this but i think most fundamental way of approaching it; creating a loop and dictionary and store values into that dictionary. In the recursive approach the idea is still same it but instead of using a loop, the function called itself until it reaches to the end. Of course there are other approaches like using dict(zip(key, value)) and etc. These aren't the most effective solutions.\ny = [1,2,3,4]\nx = [\"a\",\"b\",\"c\",\"d\"]\n\n# This below is a brute force method\nobj = {}\nfor i in range(len(y)):\n obj[y[i]] = x[i]\nprint(obj)\n\n# Recursive approach \nobj = {}\ndef map_two_lists(a,b,j=0):\n if j < len(a):\n obj[b[j]] = a[j]\n j +=1\n map_two_lists(a, b, j)\n return obj\n \n\n\nres = map_two_lists(x,y)\nprint(res)\n\n\nBoth the results should print\n{1: 'a', 2: 'b', 3: 'c', 4: 'd'} \n\n"
] | [
-1,
-1
] | [
"dictionary",
"list",
"python"
] | stackoverflow_0000209840_dictionary_list_python.txt |
Q:
modify fasta file with a function using biopython
I need to run this command on thousands of FASTA files, so I'm wondering if there is a function to speed up the process
from Bio import SeqIO
new= open("new.fasta", "w")
for rec in SeqIO.parse("old.fasta","fasta"):
print(rec.id)
print(rec.seq.reverse_complement())
new.write(">rc_"+rec.id+"\n")
new.write(str(rec.seq.reverse_complement())+"\n")
new.close()
A:
I rewrote your code into a function that can be called with each filename you have, possibly collected into a list using os.listdir().
from Bio import SeqIO
def parse_file(filename):
new_name = f"rc_{filename}"
with open(new_name, "w") as new:
for rec in SeqIO.parse(filename, "fasta"):
print(rec_id:=rec.id)
print(rev_comp:=str(rec.seq.reverse_complement()))
new.write(f">rc_{rec_id}\n{rev_comp}\n")
I used f-strings to create both the new filename and the strings written to that file. I also used the "walrus operator" to assign the values of rec.id and rec.seq.reverse_complement() to temp variables so we don't have to run those operations again when we write the data. This will save compute cycles and time over the long run. However, use of := means the code will only run under Python 3.8 and later.
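A minimal driver for the thousands of files could then be (a sketch, assuming the FASTA files live in the current directory):
import os

for filename in os.listdir("."):
    if filename.endswith((".fasta", ".fa")):
        parse_file(filename)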
| modify fasta file with a function using biopython | I should do this command for thounsands of fasta file, so I'm wondering if there is a function to accelerate the process
from Bio import SeqIO
new= open("new.fasta", "w")
for rec in SeqIO.parse("old.fasta","fasta"):
print(rec.id)
print(rec.seq.reverse_complement())
new.write(">rc_"+rec.id+"\n")
new.write(str(rec.seq.reverse_complement())+"\n")
new.close()
| [
"I rewrote you code into a function that can be called using each filename you have, possibly collected into a list using os.listdir().\nfrom Bio import SeqIO\n\ndef parse_file(filename):\n new_name = f\"rc_{filename}\"\n with open(new_name, \"w\") as new:\n for rec in SeqIO.parse(filename, \"fasta\"):\n print(rec_id:=rec.id)\n print(rev_comp:=str(rec.seq.reverse_complement()))\n new.write(f\">rc_{rec_id}\\n{rev_comp}\\n\")\n\nI used f-strings to create both the new filename and the strings written to that file. I also used the \"walrus operator\" to assign the values of rec.id and rec.seq.reverse_complement() to temp variables so we don't have to run those operations again when we write the data. This will save compute cycles and time over the long run. However, use of := means the code will only run under Python 3.8 and later.\n"
] | [
0
] | [] | [] | [
"biopython",
"python"
] | stackoverflow_0074671147_biopython_python.txt |
Q:
How do I store a variable in a function so I can access it from a different file
I am trying to make a program that allows you to select a day, and then store a value for that day using a separate file. However, I can't find a way to store the selected day in a variable that I can use.
from tkinter import *
from tkcalendar import *
main = Tk()
main.title('Calendar')
main.geometry('600x400')
cal = Calendar(main, selectmode='day')
cal.pack()
def set_date():
my_label.config(text=cal.get_date())
today = cal.get_date()
print(today)
my_button = Button(main, text='Get Date',command=set_date)
my_button.pack(pady=20)
my_label = Label(main,text="Haha")
my_label.pack(pady=20)
main.mainloop()
If I store the variable inside the function set_date(), it stores the date that is selected, but I can't import it from a separate file. And if I store the variable outside of the function set_date(), it only stores the current date and not the one selected.
A:
I can't tell exactly what you want because of how you worded it, but I'm pretty sure this is what you want
def set_date():
my_label.config(text=cal.get_date())
today = cal.get_date()
return today
If you then import this function in another file and call it like this
selected_date = set_date()
This isn't quite what you want though, as it will modify the label each time you want to get the date, so you will want to add this separate function to get the selected date:
def get_date():
today = cal.get_date()
return today
| How do I store a variable in a function so I can access it from a different file | I am trying to make a program that allows you to select a day, and then store a value for the day with a separate file. However, I can't find a way to store the selected day in a variable that I can use.
from tkinter import *
from tkcalendar import *
main = Tk()
main.title('Calendar')
main.geometry('600x400')
cal = Calendar(main, selectmode='day')
cal.pack()
def set_date():
my_label.config(text=cal.get_date())
today = cal.get_date()
print(today)
my_button = Button(main, text='Get Date',command=set_date)
my_button.pack(pady=20)
my_label = Label(main,text="Haha")
my_label.pack(pady=20)
main.mainloop()
If I store the vairable inside the function set_date() it stores the date that is selected but I can't import it on a separate file. And if I store the variable outside of the function set_date() it only stores the current date and not the one selected.
| [
"I can't tell exactly what you want because of how you worded it, but I'm pretty sure this is what you want\ndef set_date():\n my_label.config(text=cal.get_date())\n today = cal.get_date()\n return today\n\nIf you then import this function in another file and call it like this\nselected_date = set_date()\n\nThis isn't quite what you want though as it will modify the label each time you want to get the date, so you will want to add this function to get the selected date that you want.\ndef get_date():\n today = cal.get_date()\n return today\n\n"
] | [
1
] | [
"a = 5\n\ndef set_a(val):\n global a\n a = val\n \nprint(a)\nset_a(0)\nprint(a)\n\nWhat you are doing is a very bad practice (you can use the global keyword before your variable to update it, but never do this). Instead either store it in a mutable data object for example a dictionary or as a pickle file(depending on your application).\nprogram_variables = {'a':5}\n\ndef set_a(val):\n program_variables['a'] = val\n \nprint(program_variables['a'])\nset_a(0)\nprint(program_variables['a'])\n\nif you need pickles, you can search for tutorials on using it as well\n"
] | [
-2
] | [
"python",
"tkcalendar",
"tkinter"
] | stackoverflow_0074680264_python_tkcalendar_tkinter.txt |
Q:
Pythonic way of checking if a condition holds for any element of a list
I have a list in Python, and I want to check if any elements are negative. Is there a simple function or syntax I can use to apply the "is negative" check to all the elements, and see if any of them is negative? I looked through the documentation and couldn't find anything similar. The best I could come up with was:
if (True in [t < 0 for t in x]):
# do something
I find this rather inelegant. Is there a better way to do this in Python?
Existing answers here use the built-in function any to do the iteration. See How do Python's any and all functions work? for an explanation of any and its counterpart, all.
If the condition you want to check is "is found in another container", see How to check if one of the following items is in a list? and its counterpart, How to check if all of the following items are in a list?. Using any and all will work, but more efficient solutions are possible.
A:
any():
if any(t < 0 for t in x):
# do something
Also, if you're going to use "True in ...", make it a generator expression so it doesn't take O(n) memory:
if True in (t < 0 for t in x):
A:
Use any().
if any(t < 0 for t in x):
# do something
A:
Python has a built in any() function for exactly this purpose.
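A quick demonstration on a sample list (the values are illustrative):
x = [3, -1, 4]
print(any(t < 0 for t in x))  # True
print(all(t < 0 for t in x))  # False; all() is the counterpart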
| Pythonic way of checking if a condition holds for any element of a list | I have a list in Python, and I want to check if any elements are negative. Is there a simple function or syntax I can use to apply the "is negative" check to all the elements, and see if any of them is negative? I looked through the documentation and couldn't find anything similar. The best I could come up with was:
if (True in [t < 0 for t in x]):
# do something
I find this rather inelegant. Is there a better way to do this in Python?
Existing answers here use the built-in function any to do the iteration. See How do Python's any and all functions work? for an explanation of any and its counterpart, all.
If the condition you want to check is "is found in another container", see How to check if one of the following items is in a list? and its counterpart, How to check if all of the following items are in a list?. Using any and all will work, but more efficient solutions are possible.
| [
"any():\nif any(t < 0 for t in x):\n # do something\n\nAlso, if you're going to use \"True in ...\", make it a generator expression so it doesn't take O(n) memory:\nif True in (t < 0 for t in x):\n\n",
"Use any().\nif any(t < 0 for t in x):\n # do something\n\n",
"Python has a built in any() function for exactly this purpose.\n"
] | [
246,
37,
11
] | [
"a=x.copy()\na.sort()\nif a[0]<0:\n # do something\n\n"
] | [
-1
] | [
"list",
"python"
] | stackoverflow_0001342601_list_python.txt |
Q:
Instapy problem,why doesnt the code work?
from instapy import InstaPy
from instapy import smart_run
import time
my_username = '_georgekazaras'
my_password = 'mypassword'
def job():
session = InstaPy(username=my_username,
password=my_password)
with smart_run(session):
session.set_relationship_bounds(enabled=True,
delimit_by_numbers=True,
max_followers=90000000000000,
min_followers=1,
min_following=30)
session.set_do_follow(True, precentage=100)
session.set_dont_like(['tag1', 'tag2', 'tag3', 'tag4', 'tag5', 'tag6', 'tag7'])
session.like_by_tags(['cars', 'chess', 'sports'])
job()
I tried moving the code out of the function to see if it would help, but it didn't. Is the library not working anymore, is it maybe because of my Instagram settings, or is it something in the code?
A:
You have a typo in set_do_follow (percentage, not precentage).
| Instapy problem,why doesnt the code work? | from instapy import InstaPy
from instapy import smart_run
import time
my_username = '_georgekazaras'
my_password = 'mypassword'
def job():
session = InstaPy(username=my_username,
password=my_password)
with smart_run(session):
session.set_relationship_bounds(enabled=True,
delimit_by_numbers=True,
max_followers=90000000000000,
min_followers=1,
min_following=30)
session.set_do_follow(True, precentage=100)
session.set_dont_like(['tag1', 'tag2', 'tag3', 'tag4', 'tag5', 'tag6', 'tag7'])
session.like_by_tags(['cars', 'chess', 'sports'])
job()
I tried getting the code out of the function to see if it helps but it didnt, is the library not working anymore ,is iit maybe because of my instagram settings or is it something about the code ?
| [
"You have a typo in set_do_follow(percentage not precentage)\n"
] | [
0
] | [] | [] | [
"function",
"instagram",
"instapy",
"module",
"python"
] | stackoverflow_0074680374_function_instagram_instapy_module_python.txt |
Q:
How to scale QImage to a small size with good quality
I have eight different videos, and I am trying to show them in a split window. The video quality is 720p, but I need to fit each one into a small frame. When I resize the video with p = convert_to_Qt_format.scaled(256, 450, Qt.KeepAspectRatio) to 256x450, the quality is poor. How can I resize it while keeping good quality? What do you suggest?
@pyqtSlot(list)
def update_image(self, cv_img = []):
"""Updates the image_label with a new opencv image"""
qt_img = []
for i in range(0,8):
# qt_img.append(0)
qt_img.append(self.convert_cv_qt(cv_img[i]))
self.ui.video1.setPixmap(qt_img[0])
self.ui.video2.setPixmap(qt_img[1])
self.ui.video3.setPixmap(qt_img[2])
self.ui.video4.setPixmap(qt_img[3])
self.ui.video5.setPixmap(qt_img[4])
self.ui.video6.setPixmap(qt_img[5])
self.ui.video7.setPixmap(qt_img[6])
self.ui.video8.setPixmap(qt_img[7])
def convert_cv_qt(self, cv_img):
"""Convert from an opencv image to QPixmap"""
rgb_image = cv2.cvtColor(cv_img, cv2.COLOR_BGR2RGB)
h, w, ch = rgb_image.shape
bytes_per_line = ch * w
convert_to_Qt_format = QtGui.QImage(rgb_image.data, w, h, bytes_per_line, QtGui.QImage.Format_RGB888)
p = convert_to_Qt_format.scaled(256, 450, Qt.KeepAspectRatio)
return QPixmap.fromImage(p)
A:
Note that QImage::scaled() has an optional parameter transformMode that defaults to Qt::FastTransformation. If you pass Qt::SmoothTransformation the results should be better, because bilinear filtering is used.
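Applied to the conversion method in the question, it is a one-argument change (a sketch that keeps the original target size):
p = convert_to_Qt_format.scaled(256, 450, Qt.KeepAspectRatio, Qt.SmoothTransformation)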
| How to scale QImage to a small size with good quality | I have eight different videos. And I am trying to show these videos in a split window. My video quality is 720p. But I need to fit in small frame. When I resize the video with p = convert_to_Qt_format.scaled(256, 450, Qt.KeepAspectRatio) as a 256x450 I couldn't get good quality of video. How can I resize as a good quality. What is your suggestion to me?
@pyqtSlot(list)
def update_image(self, cv_img = []):
"""Updates the image_label with a new opencv image"""
qt_img = []
for i in range(0,8):
# qt_img.append(0)
qt_img.append(self.convert_cv_qt(cv_img[i]))
self.ui.video1.setPixmap(qt_img[0])
self.ui.video2.setPixmap(qt_img[1])
self.ui.video3.setPixmap(qt_img[2])
self.ui.video4.setPixmap(qt_img[3])
self.ui.video5.setPixmap(qt_img[4])
self.ui.video6.setPixmap(qt_img[5])
self.ui.video7.setPixmap(qt_img[6])
self.ui.video8.setPixmap(qt_img[7])
def convert_cv_qt(self, cv_img):
"""Convert from an opencv image to QPixmap"""
rgb_image = cv2.cvtColor(cv_img, cv2.COLOR_BGR2RGB)
h, w, ch = rgb_image.shape
bytes_per_line = ch * w
convert_to_Qt_format = QtGui.QImage(rgb_image.data, w, h, bytes_per_line, QtGui.QImage.Format_RGB888)
p = convert_to_Qt_format.scaled(256, 450, Qt.KeepAspectRatio)
return QPixmap.fromImage(p)
| [
"Note that QImage::scaled() has an optional parameter transformMode that defaults to Qt::FastTransformation. If you pass Qt::SmoothTransformation the results should be better, because bilinear filtering is used.\n"
] | [
0
] | [] | [] | [
"pyqt5",
"python",
"qimage",
"qt",
"video_processing"
] | stackoverflow_0074679910_pyqt5_python_qimage_qt_video_processing.txt |
Q:
How to match group of lines between two matches?
So, I have a result of a tool that goes like this:
>Cluster 1
0 1967nt, >001126F:363892-365859... *
1 1676nt, >Aag2_family_100_all/000015F:2300484-2302160... at -/100.00%
2 1544nt, >Aag2_family_100_all/000453F:1675071-1676615... at +/100.00%
3 1208nt, >Aag2_family_100_all/000453F:1675260-1676468... at +/100.00%
4 1676nt, >Aag2_family_100_all/001252F:481349-483025... at -/100.00%
5 1676nt, >Aag2_family_100_all/001305F:490050-491726... at -/100.00%
6 1676nt, >Aag2_family_100_all/001828F:112497-114173... at -/100.00%
7 206nt, >Aag2_family_100_all/002989F:21276-21482... at +/100.00%
>Cluster 2
0 1902nt, >000723F:1251286-1253188... *
1 1863nt, >Aag2_family_100_all/000723F:1251295-1253158... at +/100.00%
2 800nt, >Aag2_family_100_all/000723F:1252107-1252907... at +/100.00%
And I'm trying to match all the lines in groups, like:
0 1967nt, >001126F:363892-365859... *
1 1676nt, >Aag2_family_100_all/000015F:2300484-2302160... at -/100.00%
2 1544nt, >Aag2_family_100_all/000453F:1675071-1676615... at +/100.00%
3 1208nt, >Aag2_family_100_all/000453F:1675260-1676468... at +/100.00%
4 1676nt, >Aag2_family_100_all/001252F:481349-483025... at -/100.00%
5 1676nt, >Aag2_family_100_all/001305F:490050-491726... at -/100.00%
6 1676nt, >Aag2_family_100_all/001828F:112497-114173... at -/100.00%
7 206nt, >Aag2_family_100_all/002989F:21276-21482... at +/100.00%
And store them in a variable, relating the line starting with 0 to the other lines, like this:
000723F:1251286-1253188 = [">Aag2_family_100_all/000723F:1251295-1253158, >Aag2_family_100_all/000723F:1252107-1252907"]
I'm trying this in Python using the re library.
I tried working on it line by line, but I think my logic is wrong; I'm a real rookie with regex.
import re
result = []
with open("cluster_whit_eefinder.clstr", "r") as cluster:
for line in cluster:
if re.search(r'>Cluster.*', line):
print(line)
A:
I think you missed .read() for cluster.
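For the grouping itself, a plain line-by-line pass may be simpler than one big regex. A minimal sketch (the capture pattern is an assumption based on the sample output shown above):
import re

clusters = {}       # representative id -> list of member ids
current_rep = None

with open("cluster_whit_eefinder.clstr") as cluster:
    for line in cluster:
        if line.startswith(">Cluster"):
            current_rep = None                 # a new group starts
            continue
        m = re.search(r">(\S+?)\.\.\.", line)  # id before the "..."
        if not m:
            continue
        if line.rstrip().endswith("*"):        # the representative line
            current_rep = m.group(1)
            clusters[current_rep] = []
        elif current_rep is not None:
            clusters[current_rep].append(m.group(1))

print(clusters)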
| How to match group of lines between two matches? | So, I have a result of a tool that goes like this:
>Cluster 1
0 1967nt, >001126F:363892-365859... *
1 1676nt, >Aag2_family_100_all/000015F:2300484-2302160... at -/100.00%
2 1544nt, >Aag2_family_100_all/000453F:1675071-1676615... at +/100.00%
3 1208nt, >Aag2_family_100_all/000453F:1675260-1676468... at +/100.00%
4 1676nt, >Aag2_family_100_all/001252F:481349-483025... at -/100.00%
5 1676nt, >Aag2_family_100_all/001305F:490050-491726... at -/100.00%
6 1676nt, >Aag2_family_100_all/001828F:112497-114173... at -/100.00%
7 206nt, >Aag2_family_100_all/002989F:21276-21482... at +/100.00%
>Cluster 2
0 1902nt, >000723F:1251286-1253188... *
1 1863nt, >Aag2_family_100_all/000723F:1251295-1253158... at +/100.00%
2 800nt, >Aag2_family_100_all/000723F:1252107-1252907... at +/100.00%
And I'm trying to match all the lines in groups, like:
0 1967nt, >001126F:363892-365859... *
1 1676nt, >Aag2_family_100_all/000015F:2300484-2302160... at -/100.00%
2 1544nt, >Aag2_family_100_all/000453F:1675071-1676615... at +/100.00%
3 1208nt, >Aag2_family_100_all/000453F:1675260-1676468... at +/100.00%
4 1676nt, >Aag2_family_100_all/001252F:481349-483025... at -/100.00%
5 1676nt, >Aag2_family_100_all/001305F:490050-491726... at -/100.00%
6 1676nt, >Aag2_family_100_all/001828F:112497-114173... at -/100.00%
7 206nt, >Aag2_family_100_all/002989F:21276-21482... at +/100.00%
And deposit in an variable, and relate the line starting with 0 with the others line like:
000723F:1251286-1253188 = [">Aag2_family_100_all/000723F:1251295-1253158, >Aag2_family_100_all/000723F:1252107-1252907"]
I'm trying on python using the re library.
I tried working on with line by line, but I think my logic is wrong, really rookie on regex
import re
result = []
with open("cluster_whit_eefinder.clstr", "r") as cluster:
for line in cluster:
if re.search(r'>Cluster.*', line):
print(line)
| [
"I think you missed .read() for cluster.\n"
] | [
0
] | [] | [] | [
"python",
"python_re"
] | stackoverflow_0074680389_python_python_re.txt |
Q:
Obtaining data from both token and word objects in a Stanza Document / Sentence
I am using a Stanford STANZA pipeline on some (italian) text.
Problem I'm grappling with is that I need data from BOTH the Token and Word objects.
While I'm able to access one or the other separately I'm not wrapping my head on how to get data from both in a single loop over the Document -> Sentence
Specifically I need both some Word data (such as lemma, upos and head) but I also need to know the corresponding start and end position, which in my understanding I can find in the token.start_char and token.end_char.
Here's my code to test what I've achieved:
import stanza
IN_TXT = '''Il paziente Rossi e' stato ricoverato presso il nostro reparto a seguito di accesso
al pronto soccorso con diagnosi sospetta di aneurisma aorta
addominale sottorenale. In data 12/11/2022 e' stato sottoposto ad asportazione dell'aneurisma
con anastomosi aorto aortica con protesi in dacron da 20mm. Paziente dimesso in data odierna in
condizioni stabili.'''
stanza.download('it', verbose=False)
it_nlp = stanza.Pipeline('it', processors='tokenize,lemma,pos,depparse,ner',
verbose=False, use_gpu=False)
it_doc = it_nlp(IN_TXT)
# iterate through the Token objects
T = 0
for token in it_doc.iter_tokens():
T += 1
token_id = 'T' + str((T))
token_start = token.start_char
token_end = token.end_char
token_text = token.text
print(f"{token_id}\t{token_start} {token_end} {token_text}")
# iterate through Word objects
print(*[f'word: {word.text}\t\t\tupos: {word.upos}\txpos: {word.xpos}\tfeats: {word.feats if word.feats else "_"}' for sent in it_doc.sentences for word in sent.words], sep='\n')
Here is the documentation of these objects: https://stanfordnlp.github.io/stanza/data_objects.html
A:
To access data from both the Word and Token objects in a single loop, you can simply loop through the Sentence objects in the document, and then within each sentence loop through the Word objects. For each Word object, you can access its associated Token object through the .parent attribute (Stanza calls it parent because a single token can cover several words). Here is an example of how you might do this:
for sentence in it_doc.sentences:
for word in sentence.words:
# Get the Word object's data
word_text = word.text
word_upos = word.upos
word_xpos = word.xpos
word_feats = word.feats
# Get the Token object's data
        token = word.parent
token_start = token.start_char
token_end = token.end_char
token_text = token.text
# Use the data as needed
print(f"Word: {word_text}\nUPOS: {word_upos}\nXPOS: {word_xpos}\nFeats: {word_feats}\nToken: {token_text}\nToken start: {token_start}\nToken end: {token_end}")
Alternatively, you can access the Token object directly from the Sentence object, using the sentence.tokens property, which is a list of Token objects. Here is an example of how you might do this:
for sentence in it_doc.sentences:
# Get the Sentence object's tokens
tokens = sentence.tokens
for token in tokens:
token_start = token.start_char
token_end = token.end_char
token_text = token.text
# Use the data as needed
print(f"Token: {token_text}\nToken start: {token_start}\nToken end: {token_end}")
Either of these approaches should allow you to access data from both the Word and Token objects in a single loop.
A:
I just discovered the zip function which returns an iterator of tuples in Python 3.
Therefore to iterate in parallel through the Words and Tokens of a sentence you can code:
for sentence in it_doc.sentences:
for t, w in zip(sentence.tokens, sentence.words):
print(f"Text->{w.text}\tLemma->{w.lemma}\tStart->{t.start_char}\tStop->{t.end_char}")
| Obtaining data from both token and word objects in a Stanza Document / Sentence | I am using a Stanford STANZA pipeline on some (italian) text.
Problem I'm grappling with is that I need data from BOTH the Token and Word objects.
While I'm able to access one or the other separately I'm not wrapping my head on how to get data from both in a single loop over the Document -> Sentence
Specifically I need both some Word data (such as lemma, upos and head) but I also need to know the corresponding start and end position, which in my understanding I can find in the token.start_char and token.end_char.
Here's my code to test what I've achieved:
import stanza
IN_TXT = '''Il paziente Rossi e' stato ricoverato presso il nostro reparto a seguito di accesso
al pronto soccorso con diagnosi sospetta di aneurisma aorta
addominale sottorenale. In data 12/11/2022 e' stato sottoposto ad asportazione dell'aneurisma
con anastomosi aorto aortica con protesi in dacron da 20mm. Paziente dimesso in data odierna in
condizioni stabili.'''
stanza.download('it', verbose=False)
it_nlp = stanza.Pipeline('it', processors='tokenize,lemma,pos,depparse,ner',
verbose=False, use_gpu=False)
it_doc = it_nlp(IN_TXT)
# iterate through the Token objects
T = 0
for token in it_doc.iter_tokens():
T += 1
token_id = 'T' + str((T))
token_start = token.start_char
token_end = token.end_char
token_text = token.text
print(f"{token_id}\t{token_start} {token_end} {token_text}")
# iterate through Word objects
print(*[f'word: {word.text}\t\t\tupos: {word.upos}\txpos: {word.xpos}\tfeats: {word.feats if word.feats else "_"}' for sent in it_doc.sentences for word in sent.words], sep='\n')
Here is the documentation of these objects: https://stanfordnlp.github.io/stanza/data_objects.html
| [
"To access data from both the Word and Token objects in a single loop, you can simply loop through the Sentence objects in the document, and then within each sentence loop through the Word objects. For each Word object, you can access its associated Token object through the .token attribute. Here is an example of how you might do this:\nfor sentence in it_doc.sentences:\n for word in sentence.words:\n # Get the Word object's data\n word_text = word.text\n word_upos = word.upos\n word_xpos = word.xpos\n word_feats = word.feats\n\n # Get the Token object's data\n token = word.token\n token_start = token.start_char\n token_end = token.end_char\n token_text = token.text\n \n # Use the data as needed\n print(f\"Word: {word_text}\\nUPOS: {word_upos}\\nXPOS: {word_xpos}\\nFeats: {word_feats}\\nToken: {token_text}\\nToken start: {token_start}\\nToken end: {token_end}\")\n\nAlternatively, you can access the Token object directly from the Sentence object, using the sentence.tokens property, which is a list of Token objects. Here is an example of how you might do this:\nfor sentence in it_doc.sentences:\n # Get the Sentence object's tokens\n tokens = sentence.tokens\n \n for token in tokens:\n token_start = token.start_char\n token_end = token.end_char\n token_text = token.text\n\n # Use the data as needed\n print(f\"Token: {token_text}\\nToken start: {token_start}\\nToken end: {token_end}\")\n\nEither of these approaches should allow you to access data from both the Word and Token objects in a single loop.\n",
"I just discovered the zip function which returns an iterator of tuples in Python 3.\nTherefore to iterate in parallel through the Words and Tokens of a sentence you can code:\nfor sentence in it_doc.sentences:\n for t, w in zip(sentence.tokens, sentence.words):\n print(f\"Text->{w.text}\\tLemma->{w.lemma}\\tStart->{t.start_char}\\tStop->{t.end_char}\")\n\n"
] | [
0,
0
] | [] | [] | [
"nlp",
"python",
"stanford_nlp"
] | stackoverflow_0074668152_nlp_python_stanford_nlp.txt |
Q:
Python Add 2 Lists (Arrays) in Python
How can I add 2 numbers in a list?
I am trying to add 2 numbers in an array, but it just shows None in the response box.
The code looks like this:
def add2NumberArrays(a,b):
res = []
for i in range(0,len(a)):
return res.append(a[i] + b[i])
a = [4,4,7]
b = [2,1,2]
print(add2NumberArrays(a,b))
Why does this return None? Please help.
Edits
My code now looks like this:
def add2NumberArrays(a,b):
for i in range(0,len(a)):
res = []
ans = res.append(a[i]+b[i])
return ans
a = [4,4,7]
b = [2,1,2]
print(add2NumberArrays(a,b))
A:
You can use itertools.zip_longest to handle the two lists having different lengths.
from itertools import zip_longest
def add2NumberArrays(a,b):
# Explanation:
# list(zip_longest(a, b, fillvalue=0))
# [(4, 2), (4, 1), (7, 2), (0, 2)]
return [i+j for i,j in zip_longest(a, b, fillvalue=0)]
a = [4,4,7]
b = [2,1,2,2]
print(add2NumberArrays(a,b))
# [6, 5, 9, 2]
Your code needs to change as below:
def add2NumberArrays(a,b):
res = []
for i in range(0,len(a)):
res.append(a[i]+b[i])
return res
# As a list comprehension
# return [a[i]+b[i] for i in range(0,len(a))]
a = [4,4,7]
b = [2,1,2]
print(add2NumberArrays(a,b))
# [6, 5, 9]
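For completeness, the reason the original code prints None: list.append mutates the list in place and returns None, and the return inside the for loop exits on the very first iteration. A quick demonstration:
res = []
print(res.append(1))  # None -- append modifies res in place and returns None
print(res)            # [1]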
| Python Add 2 Lists (Arrays) in Python | How can I add 2 numbers in a list?
I am trying to add 2 numbers in an array, but it just shows None in the response box.
The code looks like this:
def add2NumberArrays(a,b):
res = []
for i in range(0,len(a)):
return res.append(a[i] + b[i])
a = [4,4,7]
b = [2,1,2]
print(add2NumberArrays(a,b))
Why does this return None? Please help.
Edits
My code now looks like this:
def add2NumberArrays(a,b):
for i in range(0,len(a)):
res = []
ans = res.append(a[i]+b[i])
return ans
a = [4,4,7]
b = [2,1,2]
print(add2NumberArrays(a,b))
| [
"You can use itertools.zip_longest to handle different length of two lists.\nfrom itertools import zip_longest\ndef add2NumberArrays(a,b):\n # Explanation: \n # list(zip_longest(a, b, fillvalue=0))\n # [(4, 2), (4, 1), (7, 2), (0, 2)]\n return [i+j for i,j in zip_longest(a, b, fillvalue=0)]\n \n\na = [4,4,7]\nb = [2,1,2,2]\n\nprint(add2NumberArrays(a,b))\n# [6, 5, 9, 2]\n\nYour code need to change like below:\ndef add2NumberArrays(a,b):\n res = []\n for i in range(0,len(a)):\n res.append(a[i]+b[i])\n return res\n\n # As a list comprehension\n # return [a[i]+b[i] for i in range(0,len(a))]\n\n\na = [4,4,7]\nb = [2,1,2]\n\nprint(add2NumberArrays(a,b))\n# [6, 5, 9]\n\n"
] | [
0
] | [] | [] | [
"python"
] | stackoverflow_0074680339_python.txt |
Q:
Stop urllib.request from raising exceptions on HTTP errors
Python's urllib.request.urlopen() will raise an exception if the HTTP status code of the request is not OK (e.g., 404).
This is because the default opener uses the HTTPDefaultErrorHandler class:
A class which defines a default handler for HTTP error responses; all responses are turned into HTTPError exceptions.
Even if you build your own opener, it (un)helpfully includes the HTTPDefaultErrorHandler for you implicitly.
If, however, you don't want Python to raise an exception if you get a non-OK response, it's unclear how to disable this behavior.
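The per-call workaround I know of is catching the exception, since HTTPError is itself a file-like response object (a sketch):
from urllib.request import urlopen
from urllib.error import HTTPError

def open_no_raise(url):
    try:
        return urlopen(url)
    except HTTPError as response:
        return response  # has .code, .headers and .read() like a normal response

But I'd rather switch the behavior off in the opener itself than wrap every call.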
| Stop urllib.request from raising exceptions on HTTP errors | Python's urllib.request.urlopen() will raise an exception if the HTTP status code of the request is not OK (e.g., 404).
This is because the default opener uses the HTTPDefaultErrorHandler class:
A class which defines a default handler for HTTP error responses; all responses are turned into HTTPError exceptions.
Even if you build your own opener, it (un)helpfully includes the HTTPDefaultErrorHandler for you implicitly.
If, however, you don't want Python to raise an exception if you get a non-OK response, it's unclear how to disable this behavior.
| [] | [] | [
"I'll be very clear, I'm answering this question to test GPT. That's not my answer, I don't know the anwser. But I asked GPT.\nIf this offends anyone, feel free to delete my message.\n\nTo disable the automatic raising of exceptions for non-OK HTTP status codes, you can create your own HTTPErrorProcessor class that extends the urllib.request.HTTPDefaultErrorHandler class and overrides the http_error_default() method, like this:\nfrom urllib.request import HTTPErrorProcessor, HTTPDefaultErrorHandler\nclass MyHTTPErrorProcessor(HTTPErrorProcessor, HTTPDefaultErrorHandler):\n def http_error_default(self, req, fp, code, msg, headers):\n # do nothing and return the response object\n return fp\n\nThen, when you create an OpenerDirector object and add the handler for HTTP errors, you can use your custom MyHTTPErrorProcessor class instead of the default HTTPDefaultErrorHandler class, like this:\nfrom urllib.request import build_opener, HTTPHandler, OpenerDirector\n\n# create an OpenerDirector object\nopener = build_opener(HTTPHandler)\n\n# add the custom HTTPErrorProcessor to the opener\nopener.add_handler(MyHTTPErrorProcessor())\n\nNow, when you make a request using this opener, Python will not automatically raise an exception for non-OK HTTP status codes. You can check the HTTP status code of the response and handle it as needed, like this:\n# make a request using the opener\nresponse = opener.open(\"https://www.example.com\")\n\n# check the HTTP status code of the response\nif response.code != 200:\n print(\"Received non-OK HTTP status code:\", response.code)\nelse:\n # process the response\n ...\n\nPlease note that this solution only applies to the urllib.request module in Python 2. In Python 3, the urllib.request module has been split into several smaller modules, and the urllib.request.urlopen() function has been moved to the urllib.request.urlopen() function in the urllib.request module. The solution for disabling the automatic raising of exceptions for non-OK HTTP status codes in Python 3 is similar, but the syntax and the classes used are slightly different.\n"
] | [
-2
] | [
"python",
"urllib"
] | stackoverflow_0074680393_python_urllib.txt |
Q:
python keyboard - how to deal with multiple keys?
I'm trying to make a simple bot using the Python keyboard lib.
I can send key presses with keyboard.send; however, if the player presses and holds w for movement, keyboard.send is not working.
What should I do?
A:
If you want to simulate a key press and hold using the keyboard library in Python, you can use the keyboard.press_and_release method instead of the keyboard.send method. This method simulates a press and release of a key, so if you want to simulate holding down a key, you can use a loop to repeatedly call this method.
Here's an example of how you can use the keyboard.press_and_release method to simulate holding down the "W" key:
import keyboard
# simulate holding down the "W" key
while True:
keyboard.press_and_release('w')
This code will simulate pressing and releasing the "W" key repeatedly, which should have the same effect as holding down the key. You can modify this code to suit your specific use case.
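Note that repeatedly calling press_and_release taps the key rather than holding it down. For a genuine sustained hold, the keyboard library also provides press and release, which keep the key down until it is explicitly released (a minimal sketch):
import keyboard
import time

keyboard.press('w')    # key goes down and stays down
time.sleep(2)          # hold for two seconds
keyboard.release('w')  # key comes back up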
| python keyboard - how to deal with multiple keys? | I'm trying to make a simple bot using the Python keyboard lib.
I can send key presses with keyboard.send; however, if the player presses and holds w for movement, keyboard.send is not working.
What should I do?
| [
"If you want to simulate a key press and hold using the keyboard library in Python, you can use the keyboard.press_and_release method instead of the keyboard.send method. This method simulates a press and release of a key, so if you want to simulate holding down a key, you can use a loop to repeatedly call this method.\nHere's an example of how you can use the keyboard.press_and_release method to simulate holding down the \"W\" key:\nimport keyboard\n\n# simulate holding down the \"W\" key\nwhile True:\n keyboard.press_and_release('w')\n\nThis code will simulate pressing and releasing the \"W\" key repeatedly, which should have the same effect as holding down the key. You can modify this code to suit your specific use case.\n"
] | [
0
] | [] | [] | [
"keyboard",
"python"
] | stackoverflow_0074680454_keyboard_python.txt |
Q:
Navigate in JSON with multiple keys
I'm trying to get a key from a JSON document on a website using the following code:
import json
import requests
from bs4 import BeautifulSoup
url = input('Enter url:')
html = requests.get(url)
soup = BeautifulSoup(html.text,'html.parser')
data = json.loads(soup.find('script', type='application/json').text)
print(data)
print("####################################")
And here is the JSON:
{"props": {
"XYZ": {
"ABC": [
{
"current": "sold",
"location": "FD",
"type": "d",
"uid": "01020633"
}
],
"searchTerm": "asd"
}
}}
I'm able to load the page, find the JSON, and print all data. The question is, how can I print only the information from the current key? Will something like the following work?
print(data['props']['XYZ']['ABC']['current'])
A:
You can access it like this:
data["props"]["XYZ"]["ABC"][0]["current"]
Why? Key current is inside a list of dictionaries. ABC is of type list, and we access the elements using their location in the list (0 in your example).
A:
As the other answers have already explained, you need to add [0] between ['ABC'] and ['current'] because the value that corresponds to the "ABC" key is a list containing a dictionary with the "current" key, so you can access it with
data["props"]["XYZ"]["ABC"][0]["current"]
Also, if you have a very complex and nested data structure and you want to quickly locate a key, you can use a set of helper functions; for example, with a getNestedVal helper:
getNestedVal(data, 'current')
prints sold, and
getNestedVal(data, 'current', 'just_expr', 'data')
prints data["props"]["XYZ"]["ABC"][0]["current"] so that you can copy it from the terminal and use in your code. (It's not the best idea to use it other than for just figuring out data structure since it can use up quite a bit of time and memory.)
A:
You can access current like below:
data["props"]["XYZ"]["ABC"][0]["current"]
you can get a list of dictionaries by accessing the "ABC" key's value.
So whenever you see a curly {} bracket, it is a dictionary, and you can access the value of a particular key using
dict[key] or dict.get(key, default_value),
and whenever you see a square [] bracket, it is a list, and you can access its elements by index: list[0], list[1].
So accessing a list element and accessing a dictionary key's value are two different things. I hope you understand my point.
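As a side note, if any level of the structure might be missing, a defensive variant using dict.get avoids a KeyError or IndexError (a sketch using only plain dict/list access):
abc = data.get("props", {}).get("XYZ", {}).get("ABC", [])
current = abc[0].get("current") if abc else None
print(current)  # sold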
| Navigate in JSON with multiple keys | I'm trying to get a key from a JSON document on a website using the following code:
import json
import requests
from bs4 import BeautifulSoup
url = input('Enter url:')
html = requests.get(url)
soup = BeautifulSoup(html.text,'html.parser')
data = json.loads(soup.find('script', type='application/json').text)
print(data)
print("####################################")
And here is the JSON:
{"props": {
"XYZ": {
"ABC": [
{
"current": "sold",
"location": "FD",
"type": "d",
"uid": "01020633"
}
],
"searchTerm": "asd"
}
}}
I'm able to load the page, find the JSON, and print all data. The question is, how can I print only the information from the current key? Will something like the following work?
print(data['props']['XYZ']['ABC']['current'])
| [
"You can access it like this:\ndata[\"props\"][\"XYZ\"][\"ABC\"][0][\"current\"]\n\nWhy? Key current is inside a list of dictionaries. ABC is of type list, and we access the elements using their location in the list (0 in your example).\n",
"As the other answers have already explained, you need to add [0] between ['ABC'] and ['current'] because the value that corresponds to the \"ABC\" key is a list containing a dictionary with the \"current\" key, so you can access it with\ndata[\"props\"][\"XYZ\"][\"ABC\"][0][\"current\"]\n\n\nAlso, if you have a very complex and nested data structure and you want to quickly you can use this set of functions\ngetNestedVal(data, 'current')\n\nprints sold, and\ngetNestedVal(data, 'current', 'just_expr', 'data')\n\nprints data[\"props\"][\"XYZ\"][\"ABC\"][0][\"current\"] so that you can copy it from the terminal and use in your code. (It's not the best idea to use it other than for just figuring out data structure since it can use up quite a bit of time and memory.)\n",
"You can access current like below:\ndata[\"props\"][\"XYZ\"][\"ABC\"][0][\"current\"]\n\nyou can get a list of dictionaries by accessing \"ABC\" key's value.\nSo whenever you get curly {} bracket it shows that it is a dictionary and you can access value of particular keys using\ndict[key] or dict.get(key,default_value)\nand whenever you get square [] bracket it shows that it is list and you can access its element by index list[0],list[1].\nSo, accessing list element and dictionary's key's value is two different things. I hope you understood my point.\n"
] | [
1,
1,
0
] | [] | [] | [
"beautifulsoup",
"json",
"python",
"python_3.x",
"python_requests"
] | stackoverflow_0074678361_beautifulsoup_json_python_python_3.x_python_requests.txt |
Q:
Conflicts with relationship between tables
I've been constantly getting a warning in the console, and I'm going crazy from how much I've read without being able to resolve it:
SAWarning: relationship 'Book.users' will copy column user.uid to column user_book.uid, which conflicts with relationship(s): 'User.books' (copies user.uid to user_book.uid). If this is not intention, consider if these relationships should be linked with back_populates, or if viewonly=True should be applied to one or more if they are read-only. For the less common case that foreign key constraints are partially overlapping, the orm.foreign() annotation can be used to isolate the columns that should be written towards. The 'overlaps' parameter may be used to remove this warning.
The tables the console cites in this notice are as follows:
user_book = db.Table('user_book',
db.Column('uid', db.Integer, db.ForeignKey('user.uid'), primary_key=True),
db.Column('bid', db.Text, db.ForeignKey('book.bid'), primary_key=True),
db.Column('date_added', db.DateTime(timezone=True), server_default=db.func.now())
)
class User(db.Model):
__tablename__ = 'user'
uid = db.Column(db.Integer, primary_key=True)
email = db.Column(db.String(25), nullable=False)
hash = db.Column(db.String(), nullable=False)
first_name = db.Column(db.String(30), nullable=True)
last_name = db.Column(db.String(80), nullable=True)
books = db.relationship('Book', secondary=user_book)
class Book(db.Model):
__tablename__ = 'book'
bid = db.Column(db.Text, primary_key=True)
title = db.Column(db.Text, nullable=False)
authors = db.Column(db.Text, nullable=False)
thumbnail = db.Column(db.Text, nullable=True)
users = db.relationship('User', secondary=user_book)
I use the user_book table to show the user the books he has added.
What am I missing? I take this opportunity to ask: semantically, are the relationships between the tables and the foreign keys set up correctly?
A:
As the warning message suggests, you are missing the back_populates= attributes in your relationships:
class User(db.Model):
# …
books = db.relationship('Book', secondary=user_book, back_populates="users")
# …
class Book(db.Model):
# …
users = db.relationship('User', secondary=user_book, back_populates="books")
# …
A:
I kind of figured this out.
Take the code from the official tutorial:
from sqlalchemy import Column, ForeignKey, Integer, String, Table
from sqlalchemy.orm import declarative_base, relationship
Base = declarative_base()
class User(Base):
__tablename__ = "user"
id = Column(Integer, primary_key=True)
name = Column(String(64))
kw = relationship("Keyword", secondary=lambda: user_keyword_table)
def __init__(self, name):
self.name = name
class Keyword(Base):
__tablename__ = "keyword"
id = Column(Integer, primary_key=True)
keyword = Column("keyword", String(64))
def __init__(self, keyword):
self.keyword = keyword
user_keyword_table = Table(
"user_keyword",
Base.metadata,
Column("user_id", Integer, ForeignKey("user.id"), primary_key=True),
Column("keyword_id", Integer, ForeignKey("keyword.id"), primary_key=True),
)
Doesn't it make you wonder why the relationship only exists in the User class rather than in both classes?
The thing is, it automatically creates the reverse relationship in the Keyword class (a backref='users'-like parameter is required, I suppose?)
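For reference, the backref shorthand alluded to above would look like this in the question's models (a sketch; backref tells SQLAlchemy to generate the reverse relationship on the other class automatically):
class User(db.Model):
    # ...
    books = db.relationship('Book', secondary=user_book, backref='users')
    # Book.users is created automatically; no relationship is declared on Book.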
| Conflicts with relationship between tables | I've been constantly getting a warning in the console, and I'm going crazy from how much I've read without being able to resolve it:
SAWarning: relationship 'Book.users' will copy column user.uid to column user_book.uid, which conflicts with relationship(s): 'User.books' (copies user.uid to user_book.uid). If this is not intention, consider if these relationships should be linked with back_populates, or if viewonly=True should be applied to one or more if they are read-only. For the less common case that foreign key constraints are partially overlapping, the orm.foreign() annotation can be used to isolate the columns that should be written towards. The 'overlaps' parameter may be used to remove this warning.
The tables the console cites in this notice are as follows:
user_book = db.Table('user_book',
db.Column('uid', db.Integer, db.ForeignKey('user.uid'), primary_key=True),
db.Column('bid', db.Text, db.ForeignKey('book.bid'), primary_key=True),
db.Column('date_added', db.DateTime(timezone=True), server_default=db.func.now())
)
class User(db.Model):
__tablename__ = 'user'
uid = db.Column(db.Integer, primary_key=True)
email = db.Column(db.String(25), nullable=False)
hash = db.Column(db.String(), nullable=False)
first_name = db.Column(db.String(30), nullable=True)
last_name = db.Column(db.String(80), nullable=True)
books = db.relationship('Book', secondary=user_book)
class Book(db.Model):
__tablename__ = 'book'
bid = db.Column(db.Text, primary_key=True)
title = db.Column(db.Text, nullable=False)
authors = db.Column(db.Text, nullable=False)
thumbnail = db.Column(db.Text, nullable=True)
users = db.relationship('User', secondary=user_book)
I use the user_book table to show the user the books he has added.
What am I missing? I take this opportunity to ask: semantically, are the relationships between the tables and the foreign keys set up correctly?
| [
"As the warning message suggests, you are missing the back_populates= attributes in your relationships:\nclass User(db.Model):\n# …\n books = db.relationship('Book', secondary=user_book, back_populates=\"users\")\n# …\n \nclass Book(db.Model):\n# …\n users = db.relationship('User', secondary=user_book, back_populates=\"books\")\n# …\n\n",
"I kind of figure this out.\nAs the code in official tutorial.\nfrom sqlalchemy import Column, ForeignKey, Integer, String, Table\nfrom sqlalchemy.orm import declarative_base, relationship\n\nBase = declarative_base()\n\n\nclass User(Base):\n __tablename__ = \"user\"\n id = Column(Integer, primary_key=True)\n name = Column(String(64))\n kw = relationship(\"Keyword\", secondary=lambda: user_keyword_table)\n\n def __init__(self, name):\n self.name = name\n\n\nclass Keyword(Base):\n __tablename__ = \"keyword\"\n id = Column(Integer, primary_key=True)\n keyword = Column(\"keyword\", String(64))\n\n def __init__(self, keyword):\n self.keyword = keyword\n\n\nuser_keyword_table = Table(\n \"user_keyword\",\n Base.metadata,\n Column(\"user_id\", Integer, ForeignKey(\"user.id\"), primary_key=True),\n Column(\"keyword_id\", Integer, ForeignKey(\"keyword.id\"), primary_key=True),\n)\n\nDoesn't it make you wander why the relationship only exists in User class rather than both class ?\nThe thing is, it automatically creates the reverse relationship in Keyword class (a \"backref='users' liked parameter is required I supposed ?)\n"
] | [
6,
1
] | [] | [] | [
"postgresql",
"python",
"sqlalchemy"
] | stackoverflow_0068322485_postgresql_python_sqlalchemy.txt |
Q:
Convert a string of characters into a hex variable input
I have output as below:
Hello
I use list() to split it into characters:
MS=list(MS)
output:
['H', 'e', 'l', 'l', 'o']
I am trying to turn it into input in hex format, as shown below:
p = (0x48, 0x65, 0x6c, 0x6c, 0x6f)
I tried to use the code below to convert it to hex:
p = tuple(hex(x) for x in MS)
However, it did not work. Is there anyone who knows how to do this?
The error message
TypeError: 'str' object cannot be interpreted as an integer
A:
The hex() function expects an integer; that's why it complains about the str object.
You can use help(hex) to obtain the documentation: "Return the hexadecimal representation of an integer."
In the loop you can convert each letter to an integer thanks to the ord() function, which "returns the Unicode code point for a one-character string".
p = tuple(hex(ord(x)) for x in MS) # ('0x48', '0x65', '0x6c', '0x6c', '0x6f')
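As a quick sanity check, you can reverse the conversion with int(h, 16) and chr():
p = ('0x48', '0x65', '0x6c', '0x6c', '0x6f')
print(''.join(chr(int(h, 16)) for h in p))  # Hello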
| Convert a string of characters into a hex variable input | I have output as below:
Hello
I use "list" to separate it to characters
MS=list(MS)
output:
['H', 'e', 'l', 'l', 'o']
I am trying to turn it into input in hex format, as shown below:
p = (0x48, 0x65, 0x6c, 0x6c, 0x6f)
I tried to use the code below to convert it to hex:
p = tuple(hex(x) for x in MS)
However, it did not work. Is there anyone who knows how to do this?
The error message
TypeError: 'str' object cannot be interpreted as an integer
| [
"The hex() function expects an integer, that's the reason why it complains about the str object.\nYou can use help(hex) to obtain the documentation : \"Return the hexadecimal representation of an integer.\"\nIn the loop you can convert each letter to an integer thanks to the ord() function that \"returns the Unicode code point for a one-character string\".\np = tuple(hex(ord(x)) for x in MS) # ('0x48', '0x65', '0x6c', '0x6c', '0x6f')\n\n"
] | [
0
] | [] | [] | [
"hex",
"python"
] | stackoverflow_0074679916_hex_python.txt |
Q:
How to write numbers and strings to csv file in Python
I'm new to coding so this may seem a trifle basic ...
I'm trying to write three data elements to each record of a csv file. Two of the elements (flow_temp and return_temp) are floating point numbers while the third (flame) is a string ("on" or "off").
Here is my write statement:
f.write(str(flow_temp)+","+str(return_temp)+flame+"\n")
and here is the error:
TypeError: can only concatenate str (not "bytes") to str
If I remove flame from the write statement the error goes.
I have also tried csv.write but couldn't get that to work either!
Mike
EDIT: here is the complete code. It uses a command called ebusctl to read the three data elements from a bus every 10 seconds.
from subprocess import check_output
from datetime import datetime
import threading
with open("sandbox.csv","w", encoding="utf-8") as f:
f.write("Flow,Return,Flame"+"\n")
def read_ebus():
threading.Timer(10.0, read_ebus).start()
cmd = ["/usr/bin/ebusctl", "read", "-f"]
flow_temp = float(check_output([*cmd, "FlowTemp", "temp"]))
return_temp = float(check_output([*cmd, "ReturnTemp", "temp"]))
flame = check_output([*cmd, "Flame"])
with open("sandbox.csv","a", encoding="utf8") as f:
f.write(datetime.today().strftime('%Y-%m-%d'+"," '%H:%M:%S')+",")
f.write(str(flow_temp)+","+str(return_temp)+flame+"\n")
read_ebus()
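Update: the culprit seems to be that check_output returns bytes, so flame is a bytes object. Decoding it (and adding the comma separator that was also missing before flame) fixes the concatenation, a sketch:
flame = check_output([*cmd, "Flame"]).decode("utf-8").strip()
f.write(str(flow_temp) + "," + str(return_temp) + "," + flame + "\n")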
| How to write numbers and strings to csv file in Python | I'm new to coding so this may seem a trifle basic ...
I'm trying to write three data elements to each record of a csv file. Two of the elements (flow_temp and return_temp) are floating point numbers while the third (flame) is a string ("on" or "off").
Here is my write statement:
f.write(str(flow_temp)+","+str(return_temp)+flame+"\n")
and here is the error:
TypeError: can only concatenate str (not "bytes") to str
If I remove flame from the write statement the error goes.
I have also tried csv.write but couldn't get that to work either!
Mike
EDIT: here is the complete code. It uses a command called ebusctl to read the three data elements from a bus every 10 seconds.
from subprocess import check_output
from datetime import datetime
import threading
with open("sandbox.csv","w", encoding="utf-8") as f:
f.write("Flow,Return,Flame"+"\n")
def read_ebus():
threading.Timer(10.0, read_ebus).start()
cmd = ["/usr/bin/ebusctl", "read", "-f"]
flow_temp = float(check_output([*cmd, "FlowTemp", "temp"]))
return_temp = float(check_output([*cmd, "ReturnTemp", "temp"]))
flame = check_output([*cmd, "Flame"])
with open("sandbox.csv","a", encoding="utf8") as f:
f.write(datetime.today().strftime('%Y-%m-%d'+"," '%H:%M:%S')+",")
f.write(str(flow_temp)+","+str(return_temp)+flame+"\n")
read_ebus()
| [] | [] | [
"Try to add str() to flame as well\nf.write(str(flow_temp)+\",\"+str(return_temp)+str(flame)+\"\\n\")\n\nor you could alternativaly write row using csv lib in python\nimport csv\n\n# create a csv.writer object\nwriter = csv.writer(f)\n\n# write the data to the CSV file\nwriter.writerow([flow_temp, return_temp, flame])\n\n"
] | [
-1
] | [
"concatenation",
"csv",
"python"
] | stackoverflow_0074680497_concatenation_csv_python.txt |
Q:
lambda function returning a list of None elements
Does anyone know why the function fills the list with None?
I can't find the problem; everything looks right.
my_lis = []
l = lambda m : [my_lis.append(x) for x in range(m)]
l(10)
output : [None, None, None, None, None, None, None, None, None, None]
If I print x instead of appending, I get 1 to 10 and the None list at the end.
Anyway, I'm trying to get a list of numbers this way.
A:
A simple list comprehension
lst = [i**2 for i in range(3)]
is interpreted as:
lst = []
for i in range(3):
lst.append(i**2)
Now back to your example: So your code is currently like this:
my_lis = []
def l(m):
result = []
for x in range(m):
result.append(my_lis.append(x))
return result
print(l(10)) # [None, None, None, None, None, None, None, None, None, None]
print(my_lis) # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
So basically you're filling my_lis when you call my_lis.append(), but .append() is an in-place method: it just adds the item to the list, its return value is None, and you're filling result with Nones. Indeed, result is what the list comprehension hands you after all.
As per request in comment:
You basically don't need extra my_lis list. The list comprehension inside the lambda gives you the final result, so:
l = lambda m: [x for x in range(m)]
print(l(10))
Now [x for x in range(m)] is pointless and slower here; you can call list directly on range(m):
l = lambda m: list(range(m))
print(l(10))
| lambda function returning a list of None elements | Does anyone know why the function fills the list with None?
I can't find the problem; everything looks right.
my_lis = []
l = lambda m : [my_lis.append(x) for x in range(m)]
l(10)
output : [None, None, None, None, None, None, None, None, None, None]
If I print x instead of appending, I get 1 to 10 and the None list at the end.
Anyway, I'm trying to get a list of numbers this way.
| [
"A simple list comprehension\nlst = [i**2 for i in range(3)]\n\nis interpreted as:\nlst = []\nfor i in range(3):\n lst.append(i**2)\n\nNow back to your example: So your code is currently like this:\nmy_lis = []\n\ndef l(m):\n result = []\n for x in range(m):\n result.append(my_lis.append(x))\n return result\n\n\nprint(l(10)) # [None, None, None, None, None, None, None, None, None, None]\nprint(my_lis) # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\n\nSo basically you're filling the my_lis when you call my_lis.append(), but .append() is an in-place method, It just adds item to the list but its return value is None and you're filling result with Nones. Indeed result is the what list comprehension hands you after-all.\n\nAs per request in comment:\nYou basically don't need extra my_lis list. The list comprehension inside the lambda gives you the final result, so:\nl = lambda m: [x for x in range(m)]\nprint(l(10))\n\nNow [x for x in range(m)] is pointless and slower here, You can directly call list on range(m):\nl = lambda m: list(range(m))\nprint(l(10))\n\n"
] | [
2
] | [] | [] | [
"lambda",
"python"
] | stackoverflow_0074680509_lambda_python.txt |
Q:
robobrowser won't change cookies
I have a POST request sent to the server from RoboBrowser, and the server responds with no data. The response headers are as follows (this is the response from the Chrome browser and it's the way it's supposed to be):
Cache-Control:no-cache, no-store, must-revalidate
Content-Length:335
Content-Type:application/json; charset=utf-8
Date:Wed, 02 Aug 2017 17:01:17 GMT
Expires:-1
lg:2673
Pragma:no-cache
Server:Unknown
Set-Cookie:BrandingUserLocationGroupID=ic4DUh/NXVp8VOKAtyDgbA==; expires=Fri, 01-Sep-2017 17:01:16 GMT; path=/; secure; HttpOnly
Set-Cookie:.AIRWATCHAUTH=A69C1A5EE8A5F3626385F35DA1B104EE7DFF5E5AF549DDB02EE8ED53931A0585C0FBB8299E3FC7B428A982B9826EF68390E659F4A74DCE00E195601F400D6E69F53907DADED4194F32DD08A72BA212DCCD0D23AB7C5BD56171E6C55EF1BE90849E9C81B2DAE23B05CA6E361326F44604; expires=Thu, 03-Aug-2017 17:01:17 GMT; path=/; secure; HttpOnly
Strict-Transport-Security:max-age=31536000;includeSubDomains
user:5679
X-Content-Type-Options:nosniff
x-download-options:noopen
x-frame-options:SAMEORIGIN
X-XSS-Protection:1; mode=block
It looks like the server is resetting cookies, but my RoboBrowser instance does not respond to/refresh the new cookies.
Basically, the website is trying to switch sessions/change cookies, I think, but my Python RoboBrowser does not reflect that, or does not allow them to change, for some reason.
Here is my POST request and response:
browser=RoboBrowser()
browser.session.headers['X-Requested-With']='XMLHttpRequest'
browser.open('https://example.com/test/Users/set-role?id='+role_id+'&__RequestVerificationToken='+token,method='POST')
print browser.response.content
This gives me the following error message:
{"RedirectUrl":null,"IsSuccess":false,"Message":"Save Failed","CustomMessage":null,"Errors":[{"Key":"","Value":["An error has occurred. This error has automatically been saved for further analysis. Please contact technical support."]}],"Messages":{},"HasView":false,"ViewHtml":null,"ViewUrl":null,"IsValidationException":false,"IsValidationWarning":false,"ReloadPage":false,"IsSessionExpired":false,"Script":null,"NextWizardUrl":null,"PreviousWizardUrl":null,"ShowDialog":false}
Does anyone know how to get robobrowser to respond to new cookies?
I've added a screenshot of the cookie from Developer Tools in Chrome.
The highlighted red box is where the change occurs once the link is clicked.
A:
You can use the update_state() method and pass in the new cookies that were set in the server response.
For example:
browser=RoboBrowser()
browser.session.headers['X-Requested-With']='XMLHttpRequest'
browser.open('https://example.com/test/Users/set-role?id='+role_id+'&__RequestVerificationToken='+token,method='POST')
# Update cookies with new values from the server response
browser.cookies = browser.response.cookies
print browser.response.content
Alternatively, you can use the open() method again with the same URL to automatically update the cookies with the new values from the server response.
browser=RoboBrowser()
browser.session.headers['X-Requested-With']='XMLHttpRequest'
browser.open('https://example.com/test/Users/set-role?id='+role_id+'&__RequestVerificationToken='+token,method='POST')
# Open the same URL again to update cookies with new values
browser.open('https://example.com/test/Users/set-role?id='+role_id+'&__RequestVerificationToken='+token,method='POST')
print browser.response.content
You can also use the cookies property of the RoboBrowser instance to manually access and update the cookies.
browser=RoboBrowser()
browser.session.headers['X-Requested-With']='XMLHttpRequest'
browser.open('https://example.com/test/Users/set-role?id='+role_id+'&__RequestVerificationToken='+token,method='POST')
# Update cookies with new values from the server response
browser.cookies = browser.response.cookies
print browser.response.content
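A simpler thing worth ruling out first: RoboBrowser delegates cookie handling to the underlying requests.Session, which already applies Set-Cookie headers automatically. Passing in your own session makes the cookie jar easy to inspect (a sketch, assuming RoboBrowser's documented session= keyword):
import requests
from robobrowser import RoboBrowser

session = requests.Session()   # requests stores Set-Cookie values here automatically
browser = RoboBrowser(session=session, parser='html.parser')
browser.open('https://example.com/test')
print(session.cookies.get_dict())   # inspect what the server actually set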
| robobrowser won't change cookies | I have a POST request sent to the server from RoboBrowser, and the server responds with no data. The response headers are as follows (this is the response from the Chrome browser and it's the way it's supposed to be):
Cache-Control:no-cache, no-store, must-revalidate
Content-Length:335
Content-Type:application/json; charset=utf-8
Date:Wed, 02 Aug 2017 17:01:17 GMT
Expires:-1
lg:2673
Pragma:no-cache
Server:Unknown
Set-Cookie:BrandingUserLocationGroupID=ic4DUh/NXVp8VOKAtyDgbA==; expires=Fri, 01-Sep-2017 17:01:16 GMT; path=/; secure; HttpOnly
Set-Cookie:.AIRWATCHAUTH=A69C1A5EE8A5F3626385F35DA1B104EE7DFF5E5AF549DDB02EE8ED53931A0585C0FBB8299E3FC7B428A982B9826EF68390E659F4A74DCE00E195601F400D6E69F53907DADED4194F32DD08A72BA212DCCD0D23AB7C5BD56171E6C55EF1BE90849E9C81B2DAE23B05CA6E361326F44604; expires=Thu, 03-Aug-2017 17:01:17 GMT; path=/; secure; HttpOnly
Strict-Transport-Security:max-age=31536000;includeSubDomains
user:5679
X-Content-Type-Options:nosniff
x-download-options:noopen
x-frame-options:SAMEORIGIN
X-XSS-Protection:1; mode=block
It looks like the server is resetting cookies, but my RoboBrowser instance does not respond to/refresh the new cookies.
Basically, the website is trying to switch sessions/change cookies, I think, but my Python RoboBrowser does not reflect that, or does not allow them to change, for some reason.
Here is my POST request and response:
browser=RoboBrowser()
browser.session.headers['X-Requested-With']='XMLHttpRequest'
browser.open('https://example.com/test/Users/set-role?id='+role_id+'&__RequestVerificationToken='+token,method='POST')
print browser.response.content
This gives me the following error message:
{"RedirectUrl":null,"IsSuccess":false,"Message":"Save Failed","CustomMessage":null,"Errors":[{"Key":"","Value":["An error has occurred. This error has automatically been saved for further analysis. Please contact technical support."]}],"Messages":{},"HasView":false,"ViewHtml":null,"ViewUrl":null,"IsValidationException":false,"IsValidationWarning":false,"ReloadPage":false,"IsSessionExpired":false,"Script":null,"NextWizardUrl":null,"PreviousWizardUrl":null,"ShowDialog":false}
Does anyone know how to get robobrowser to respond to new cookies?
I've added a screenshot of the cookie from Developer Tools in Chrome.
The highlighted red box is where the change occurs once the link is clicked.
| [
"You can use the update_state() method and pass in the new cookies that were set in the server response. \nFor example:\nbrowser=RoboBrowser()\nbrowser.session.headers['X-Requested-With']='XMLHttpRequest'\nbrowser.open('https://example.com/test/Users/set-role?id='+role_id+'&__RequestVerificationToken='+token,method='POST')\n\n# Update cookies with new values from the server response\nbrowser.cookies = browser.response.cookies\n\nprint browser.response.content\n\nAlternatively, you can use the open() method again with the same URL to automatically update the cookies with the new values from the server response.\nbrowser=RoboBrowser()\nbrowser.session.headers['X-Requested-With']='XMLHttpRequest'\nbrowser.open('https://example.com/test/Users/set-role?id='+role_id+'&__RequestVerificationToken='+token,method='POST')\n\n# Open the same URL again to update cookies with new values\nbrowser.open('https://example.com/test/Users/set-role?id='+role_id+'&__RequestVerificationToken='+token,method='POST')\n\nprint browser.response.content\n\nYou can also use the cookies property of the RoboBrowser instance to manually access and update the cookies.\nbrowser=RoboBrowser()\nbrowser.session.headers['X-Requested-With']='XMLHttpRequest'\nbrowser.open('https://example.com/test/Users/set-role?id='+role_id+'&__RequestVerificationToken='+token,method='POST')\n\n# Update cookies with new values from the server response\nbrowser.cookies = browser.response.cookies\n\nprint browser.response.content\n\n"
] | [
0
] | [] | [] | [
"http",
"python",
"python_2.7",
"python_requests",
"urllib2"
] | stackoverflow_0045467986_http_python_python_2.7_python_requests_urllib2.txt |
Q:
How to make Python CUDA atomicAdd work with long int
How can I make Python CUDA atomicAdd work with long int? I tried the code below, but it does not work whenever I use long *result_count or atomicAdd(&InitianCount,1);, failing with compilation errors such as the one below.
import os
_path = r"C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.33.31629\bin\Hostx64\x64"
if os.system("cl.exe"):
os.environ['PATH'] += ';' + _path
if os.system("cl.exe"):
raise RuntimeError("cl.exe still not found, path probably incorrect")
import pycuda.driver as cuda
import pycuda.autoinit
from pycuda.compiler import SourceModule
import pandas as pd
import numpy as np
InitianCount = 0
record_result = np.zeros((100000000, 4)).astype(np.float32)
record_result_gpu = cuda.mem_alloc(record_result.nbytes)
# result_count = np.int64(0)
result_count = np.zeros(1, dtype=np.int64)
result_count_gpu = cuda.mem_alloc(result_count.nbytes)
cuda.memcpy_htod(result_count_gpu, result_count)
print('result_count.nbytes is ' + str(result_count.nbytes))
mod = SourceModule("""
#include <cstdlib>
__global__ void test_cuda_LongIntArray(long InitianCount, int *result_count, float *record_result)
// __global__ void test_cuda_LongIntArray(long InitianCount, long *result_count, float *record_result)
{
long result_index;
result_index = atomicAdd(result_count,1);
// result_index = atomicAdd(&InitianCount,1);
}
""")
func = mod.get_function("test_cuda_LongIntArray")
func( np.int64(InitianCount), result_count_gpu, record_result_gpu, block=(4,16,16))
record_result = np.empty((100000000, 4), dtype=np.float32)
cuda.memcpy_dtoh(record_result, record_result_gpu)
print('record_result is with dimension ' + str(len(record_result)) + ' x ' + str(len(record_result[0])))
print(record_result)
record_result_gpu.free()
'cl.exe' is not recognized as an internal or external command,
operable program or batch file.
Microsoft (R) C/C++ Optimizing Compiler Version 19.33.31629 for x64
Copyright (C) Microsoft Corporation. All rights reserved.
result_count.nbytes is 8
Traceback (most recent call last):
File C:\PythonProjects\TradeAnalysis\Test\TestCUDAUtilisationBrute_Long_a.py:34 in <module>
mod = SourceModule("""
File ~\anaconda3\lib\site-packages\pycuda\compiler.py:352 in __init__
cubin = compile(
File ~\anaconda3\lib\site-packages\pycuda\compiler.py:301 in compile
return compile_plain(source, options, keep, nvcc, cache_dir, target)
File ~\anaconda3\lib\site-packages\pycuda\compiler.py:154 in compile_plain
raise CompileError(
CompileError: nvcc compilation of C:\Users\henry\AppData\Local\Temp\tmpjggfucw2\kernel.cu failed
A:
First, on windows, long (or long int) is a (signed) 32-bit (integer) type. So when asking for "how to use atomics with long" while at the same time allocating for 64-bit types in your code:
result_count = np.zeros(1, dtype=np.int64)
begs the question are you really asking about how to use atomics with long, or are you really asking about how to use atomics on a 64-bit integer type? I'll assume you want 64-bit atomics. In CUDA, atomicAdd for 64-bit integer types is only supported for unsigned types. So we will choose to use unsigned long long here, as it is unambiguously 64-bits unsigned on either windows or linux (for all currently supported CUDA activity using 64-bit OS's).
Second, to address your attempt in comments, you cannot do atomics on a local variable. When you use this variable:
__global__ void test_cuda_LongIntArray(long InitianCount,
^^^^^^^^^^^^
in a thread, that is effectively thread-local. Atomics work on global (or shared) variables. A global variable will generally be passed to a kernel using a pointer, perhaps like so:
__global__ void test_cuda_LongIntArray(long *InitianCount,
If we aim for that (albeit using the result_count example), we can create something like this:
$ cat t36.py
import pycuda.driver as cuda
import pycuda.autoinit
from pycuda.compiler import SourceModule
import numpy as np
InitianCount = 0
record_result = np.zeros((10000000, 4)).astype(np.float32)
record_result_gpu = cuda.mem_alloc(record_result.nbytes)
# result_count = np.int64(0)
result_count = np.zeros(1, dtype=np.uint64)
result_count_gpu = cuda.mem_alloc(result_count.nbytes)
cuda.memcpy_htod(result_count_gpu, result_count)
print('result_count.nbytes is ' + str(result_count.nbytes))
mod = SourceModule("""
#include <cstdlib>
__global__ void test_cuda_LongIntArray(long long InitianCount, unsigned long long *result_count, float *record_result)
// __global__ void test_cuda_LongIntArray(long InitianCount, long *result_count, float *record_result)
{
unsigned long long result_index;
result_index = atomicAdd(result_count,1);
// result_index = atomicAdd(&InitianCount,1);
}
""")
func = mod.get_function("test_cuda_LongIntArray")
func( np.int64(InitianCount), result_count_gpu, record_result_gpu, block=(4,16,16))
cuda.memcpy_dtoh(record_result, record_result_gpu)
print('record_result is with dimension ' + str(len(record_result)) + ' x ' + str(len(record_result[0])))
print(record_result)
record_result_gpu.free()
$ python t36.py
result_count.nbytes is 8
record_result is with dimension 10000000 x 4
[[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
...,
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]]
$
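If a signed 64-bit counter is genuinely required, a common workaround (a sketch; it relies on two's-complement addition producing the same bit pattern for signed and unsigned operands) is to reinterpret the pointer so the unsigned long long overload applies:
mod_signed = SourceModule("""
    __global__ void bump(long long *counter)
    {
        // reinterpret: the sum's bit pattern is identical either way
        atomicAdd((unsigned long long *)counter, 1ULL);
    }
    """)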
| How to make Python CUDA atomicAdd work with long int | How can I make Python CUDA atomicAdd work with long int? I tried the code below, but it does not work whenever I use long *result_count or atomicAdd(&InitianCount,1);, failing with compilation errors such as the one below.
import os
_path = r"C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.33.31629\bin\Hostx64\x64"
if os.system("cl.exe"):
os.environ['PATH'] += ';' + _path
if os.system("cl.exe"):
raise RuntimeError("cl.exe still not found, path probably incorrect")
import pycuda.driver as cuda
import pycuda.autoinit
from pycuda.compiler import SourceModule
import pandas as pd
import numpy as np
InitianCount = 0
record_result = np.zeros((100000000, 4)).astype(np.float32)
record_result_gpu = cuda.mem_alloc(record_result.nbytes)
# result_count = np.int64(0)
result_count = np.zeros(1, dtype=np.int64)
result_count_gpu = cuda.mem_alloc(result_count.nbytes)
cuda.memcpy_htod(result_count_gpu, result_count)
print('result_count.nbytes is ' + str(result_count.nbytes))
mod = SourceModule("""
#include <cstdlib>
__global__ void test_cuda_LongIntArray(long InitianCount, int *result_count, float *record_result)
// __global__ void test_cuda_LongIntArray(long InitianCount, long *result_count, float *record_result)
{
long result_index;
result_index = atomicAdd(result_count,1);
// result_index = atomicAdd(&InitianCount,1);
}
""")
func = mod.get_function("test_cuda_LongIntArray")
func( np.int64(InitianCount), result_count_gpu, record_result_gpu, block=(4,16,16))
record_result = np.empty((100000000, 4), dtype=np.float32)
cuda.memcpy_dtoh(record_result, record_result_gpu)
print('record_result is with dimension ' + str(len(record_result)) + ' x ' + str(len(record_result[0])))
print(record_result)
record_result_gpu.free()
'cl.exe' is not recognized as an internal or external command,
operable program or batch file.
Microsoft (R) C/C++ Optimizing Compiler Version 19.33.31629 for x64
Copyright (C) Microsoft Corporation. All rights reserved.
result_count.nbytes is 8
Traceback (most recent call last):
File C:\PythonProjects\TradeAnalysis\Test\TestCUDAUtilisationBrute_Long_a.py:34 in <module>
mod = SourceModule("""
File ~\anaconda3\lib\site-packages\pycuda\compiler.py:352 in __init__
cubin = compile(
File ~\anaconda3\lib\site-packages\pycuda\compiler.py:301 in compile
return compile_plain(source, options, keep, nvcc, cache_dir, target)
File ~\anaconda3\lib\site-packages\pycuda\compiler.py:154 in compile_plain
raise CompileError(
CompileError: nvcc compilation of C:\Users\henry\AppData\Local\Temp\tmpjggfucw2\kernel.cu failed
| [
"First, on windows, long (or long int) is a (signed) 32-bit (integer) type. So when asking for \"how to use atomics with long\" while at the same time allocating for 64-bit types in your code:\nresult_count = np.zeros(1, dtype=np.int64)\n\nbegs the question are you really asking about how to use atomics with long, or are you really asking about how to use atomics on a 64-bit integer type? I'll assume you want 64-bit atomics. In CUDA, atomicAdd for 64-bit integer types is only supported for unsigned types. So we will choose to use unsigned long long here, as it is unambiguously 64-bits unsigned on either windows or linux (for all currently supported CUDA activity using 64-bit OS's).\nSecond, to address your attempt in comments, you cannot do atomics on a local variable. When you use this variable:\n__global__ void test_cuda_LongIntArray(long InitianCount,\n ^^^^^^^^^^^^\n\nin a thread, that is effectively thread-local. Atomics work on global (or shared) variables. A global variable will generally be passed to a kernel using a pointer, perhaps like so:\n__global__ void test_cuda_LongIntArray(long *InitianCount,\n\nIf we aim for that (albeit using the result_count example), we can create something like this:\n$ cat t36.py\nimport pycuda.driver as cuda\nimport pycuda.autoinit\nfrom pycuda.compiler import SourceModule\nimport numpy as np\n\nInitianCount = 0\n\nrecord_result = np.zeros((10000000, 4)).astype(np.float32)\nrecord_result_gpu = cuda.mem_alloc(record_result.nbytes)\n\n# result_count = np.int64(0)\nresult_count = np.zeros(1, dtype=np.uint64)\nresult_count_gpu = cuda.mem_alloc(result_count.nbytes)\ncuda.memcpy_htod(result_count_gpu, result_count)\n\nprint('result_count.nbytes is ' + str(result_count.nbytes))\n\nmod = SourceModule(\"\"\"\n #include <cstdlib>\n\n __global__ void test_cuda_LongIntArray(long long InitianCount, unsigned long long *result_count, float *record_result)\n// __global__ void test_cuda_LongIntArray(long InitianCount, long *result_count, float *record_result)\n {\n unsigned long long result_index;\n result_index = atomicAdd(result_count,1);\n// result_index = atomicAdd(&InitianCount,1);\n }\n \"\"\")\n\nfunc = mod.get_function(\"test_cuda_LongIntArray\")\nfunc( np.int64(InitianCount), result_count_gpu, record_result_gpu, block=(4,16,16))\n\ncuda.memcpy_dtoh(record_result, record_result_gpu)\n\nprint('record_result is with dimension ' + str(len(record_result)) + ' x ' + str(len(record_result[0])))\nprint(record_result)\n\nrecord_result_gpu.free()\n$ python t36.py\nresult_count.nbytes is 8\nrecord_result is with dimension 10000000 x 4\n[[ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n ...,\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]\n [ 0. 0. 0. 0.]]\n$\n\n"
] | [
1
] | [] | [] | [
"cuda",
"python"
] | stackoverflow_0074678342_cuda_python.txt |
Q:
How to disable Teacher Forcing RNN model
I have the following teacher-forcing RNN model, where I'm implicitly passing the entire input sequence (inputs = ids[:, i:i+seq_length]) to the model at once.
What should I modify to disable teacher-forcing training and get the original model back?
ids = corpus.get_data('data/train.txt', batch_size)
model = RNNLM(vocab_size, embed_size, hidden_size, num_layers).to(device)
# Loss and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
# Truncated backpropagation
def detach(states):
return [state.detach() for state in states]
# Train the model
for epoch in range(num_epochs):
# Set initial hidden and cell states
states = (torch.zeros(num_layers, batch_size, hidden_size).to(device),
torch.zeros(num_layers, batch_size, hidden_size).to(device))
for i in range(0, ids.size(1) - seq_length, seq_length):
# Get mini-batch inputs and targets
inputs = ids[:, i:i+seq_length].to(device)
targets = ids[:, (i+1):(i+1)+seq_length].to(device)
# Forward pass
states = detach(states)
outputs, states = model(inputs, states)
loss = criterion(outputs, targets.reshape(-1))
# Backward and optimize
optimizer.zero_grad()
loss.backward()
clip_grad_norm_(model.parameters(), 0.5)
optimizer.step()
step = (i+1) // seq_length
if step % 100 == 0:
print ('Epoch [{}/{}], Step[{}/{}], Loss: {:.4f}, Perplexity: {:5.2f}'
.format(epoch+1, num_epochs, step, num_batches, loss.item(), np.exp(loss.item())))
I tried to pass the inputs and targets in different ways, but nothing works. I'm kind of confused about what the inputs and targets should be for the original model.
A:
To disable teacher forcing in the model, you need to modify the code that generates the input and target sequences. Currently, the input sequence is constructed by taking a contiguous block of seq_length tokens from the ids tensor, starting at position i and ending at position i + seq_length. The target sequence is constructed by taking a contiguous block of seq_length tokens from the ids tensor, starting at position i + 1 and ending at position (i + 1) + seq_length.
To disable teacher forcing, you need to construct the input and target sequences differently. Instead of using a fixed length sequence of tokens, you should use a single token as the input and the corresponding target token. This means that you will need to loop through the ids tensor one token at a time, instead of using a fixed length block of tokens. Here is how you could modify the code to do this:
# Train the model
for epoch in range(num_epochs):
    # Set initial hidden and cell states
    states = (torch.zeros(num_layers, batch_size, hidden_size).to(device),
              torch.zeros(num_layers, batch_size, hidden_size).to(device))

    # Loop through the tokens in the input sequence one at a time
    # (stop one short so the target index i+1 stays in range)
    for i in range(0, ids.size(1) - 1):
        # Get the current input and target tokens, kept 2-D (batch x 1)
        inputs = ids[:, i:i+1].to(device)
        targets = ids[:, i+1:i+2].to(device)

        # Forward pass
        states = detach(states)
        outputs, states = model(inputs, states)
        loss = criterion(outputs, targets.reshape(-1))

        # Backward and optimize
        optimizer.zero_grad()
        loss.backward()
        clip_grad_norm_(model.parameters(), 0.5)
        optimizer.step()

        if i % 100 == 0:
            print('Epoch [{}/{}], Step [{}], Loss: {:.4f}'
                  .format(epoch+1, num_epochs, i, loss.item()))
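At generation time, fully removing teacher forcing means feeding the model's own predictions back in as the next input (a rough sketch; greedy argmax decoding assumed, and I'm assuming outputs has shape (batch, vocab_size) as in the single-step loop above):
inputs = ids[:, :1].to(device)   # seed with the first token
for _ in range(seq_length):
    states = detach(states)
    outputs, states = model(inputs, states)
    inputs = outputs.argmax(dim=-1, keepdim=True)   # feed the prediction back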
| How to disable Teacher Forcing RNN model | I have the following teacher-forcing RNN model, where I'm implicitly passing the entire input sequence (inputs = ids[:, i:i+seq_length]) to the model at once.
What should I modify to disable teacher-forcing training and get the original model back?
ids = corpus.get_data('data/train.txt', batch_size)
model = RNNLM(vocab_size, embed_size, hidden_size, num_layers).to(device)
# Loss and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
# Truncated backpropagation
def detach(states):
return [state.detach() for state in states]
# Train the model
for epoch in range(num_epochs):
# Set initial hidden and cell states
states = (torch.zeros(num_layers, batch_size, hidden_size).to(device),
torch.zeros(num_layers, batch_size, hidden_size).to(device))
for i in range(0, ids.size(1) - seq_length, seq_length):
# Get mini-batch inputs and targets
inputs = ids[:, i:i+seq_length].to(device)
targets = ids[:, (i+1):(i+1)+seq_length].to(device)
# Forward pass
states = detach(states)
outputs, states = model(inputs, states)
loss = criterion(outputs, targets.reshape(-1))
# Backward and optimize
optimizer.zero_grad()
loss.backward()
clip_grad_norm_(model.parameters(), 0.5)
optimizer.step()
step = (i+1) // seq_length
if step % 100 == 0:
print ('Epoch [{}/{}], Step[{}/{}], Loss: {:.4f}, Perplexity: {:5.2f}'
.format(epoch+1, num_epochs, step, num_batches, loss.item(), np.exp(loss.item())))
I tried to pass the inputs and targets in different ways, but nothing works. I'm kind of confused about what the inputs and targets should be for the original model.
| [
"To disable teacher forcing in the model, you need to modify the code that generates the input and target sequences. Currently, the input sequence is constructed by taking a contiguous block of seq_length tokens from the ids tensor, starting at position i and ending at position i + seq_length. The target sequence is constructed by taking a contiguous block of seq_length tokens from the ids tensor, starting at position i + 1 and ending at position (i + 1) + seq_length.\nTo disable teacher forcing, you need to construct the input and target sequences differently. Instead of using a fixed length sequence of tokens, you should use a single token as the input and the corresponding target token. This means that you will need to loop through the ids tensor one token at a time, instead of using a fixed length block of tokens. Here is how you could modify the code to do this:\n#Train the model\nfor epoch in range(num_epochs):\n # Set initial hidden and cell states\n states = (torch.zeros(num_layers, batch_size, hidden_size).to(device),\n torch.zeros(num_layers, batch_size, hidden_size).to(device))\n \n # Loop through the tokens in the input sequence one at a time\n for i in range(0, ids.size(1)):\n # Get the current input and target tokens\n input = ids[:, i].to(device)\n target = ids[:, i+1].to(device)\n \n # Forward pass\n states = detach(states)\n outputs, states = model(input, states)\n loss = criterion(outputs, target)\n \n # Backward and optimize\n optimizer.zero_grad()\n loss.backward()\n clip_grad_norm_(model.parameters(), 0.5)\n optimizer.step()\n\n if i % 100 == 0:\n\n`\n"
] | [
0
] | [] | [] | [
"python",
"recurrent_neural_network"
] | stackoverflow_0074680569_python_recurrent_neural_network.txt |
Q:
Are Python coroutines stackless or stackful?
I've seen conflicting views on whether Python coroutines (I primarily mean async/await) are stackless or stackful.
Some sources say they're stackful:
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2020/p2074r0.pdf
'Python coroutines are stackful.'
How do coroutines in Python compare to those in Lua?
Yes, Python coroutines are stackful, first-class and asymmetric.
While others seem to imply they're stackless, e.g. https://gamelisp.rs/reference/coroutines.html
GameLisp's coroutines follow the model set by Rust, Python, C# and C++. Our coroutines are "stackless"
In general, my understanding has always been that any meaningful async/await implementation implies stackless coroutines, while stackful ones are basically fibers (userspace threads, often switched more or less cooperatively), like goroutines, Boost.Coroutine, and apparently those in Lua.
Is my understanding correct? Or do Python coroutines somehow fundamentally differ from those in say C++, and are stackful? Or do the authors of the source above mean different things?
A:
It seems that the sources are using different terminology and definitions for "stackful" and "stackless" coroutines.
In the first source, "stackful" means that the coroutine has its own stack, which is separate from the calling function's stack. This allows the coroutine to have its own local variables and execution state, and to return to the calling function at a later time.
In the second source, "stackless" means that the coroutine does not have its own stack, and instead shares the stack with the calling function. This means that the coroutine cannot return to the calling function at a later time, and must instead be resumed from the same point in the calling function's stack.
So, both sources are correct, but they are talking about different aspects of coroutines. Python coroutines are stackful in the sense that they have their own stack, but they may also be considered stackless in the sense that they share the stack with the calling function.
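One concrete way to see what each source is pointing at: every Python coroutine object carries its own suspended frame, allocated on the heap rather than on the C call stack, and you can inspect it directly:
import inspect

async def inner():
    return 42

async def outer():
    return await inner()

coro = outer()
print(type(coro.cr_frame))                # <class 'frame'> -- the coroutine's own frame
print(inspect.getcoroutinestate(coro))    # CORO_CREATED
coro.close()                              # avoid a "never awaited" warning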
| Are Python coroutines stackless or stackful? | I've seen conflicting views on whether Python coroutines (I primarily mean async/await) are stackless or stackful.
Some sources say they're stackful:
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2020/p2074r0.pdf
'Python coroutines are stackful.'
How do coroutines in Python compare to those in Lua?
Yes, Python coroutines are stackful, first-class and asymmetric.
While others seem to imply they're stackless, e.g. https://gamelisp.rs/reference/coroutines.html
GameLisp's coroutines follow the model set by Rust, Python, C# and C++. Our coroutines are "stackless"
In general, my understanding has always been that any meaningful async/await implementation implies stackless coroutines, while stackful ones are basically fibers (userspace threads, often switched more or less cooperatively), like goroutines, Boost.Coroutine, and apparently those in Lua.
Is my understanding correct? Or do Python coroutines somehow fundamentally differ from those in say C++, and are stackful? Or do the authors of the source above mean different things?
| [
"It seems that the sources are using different terminology and definitions for \"stackful\" and \"stackless\" coroutines.\nIn the first source, \"stackful\" means that the coroutine has its own stack, which is separate from the calling function's stack. This allows the coroutine to have its own local variables and execution state, and to return to the calling function at a later time.\nIn the second source, \"stackless\" means that the coroutine does not have its own stack, and instead shares the stack with the calling function. This means that the coroutine cannot return to the calling function at a later time, and must instead be resumed from the same point in the calling function's stack.\nSo, both sources are correct, but they are talking about different aspects of coroutines. Python coroutines are stackful in the sense that they have their own stack, but they may also be considered stackless in the sense that they share the stack with the calling function.\n"
] | [
0
] | [] | [] | [
"coroutine",
"python",
"python_asyncio"
] | stackoverflow_0070339355_coroutine_python_python_asyncio.txt |