Dataset schema: content, title, question, answers, answers_scores, non_answers, non_answers_scores, tags, name.
Q:
Unable to plot 2 classes in Linear Discriminant Analysis in Python using sklearn
Thanks for reading my question - I would greatly appreciate any input!
I am currently working on an LDA problem in Python - I'm a little new to ML, so that might be one reason why I am running into this problem. Regardless, here it is:
I have a classification problem, for short we'll call it T and non-T. I have a dataframe called PODall which contains my data and their labels (0 (non-T) vs 1 (T)).
I have used the sklearn LDA module to run this analysis. I am able to get a classification accuracy, etc., just unable to actually plot my data for visualization.
I have borrowed code from https://sebastianraschka.com/Articles/2014_python_lda.html#principal-component-analysis-vs-linear-discriminant-analysis, to be able to visualise my data, namely the plotting function:
X_lda_sklearn = sklearn_lda.fit_transform(X, y)

def plot_scikit_lda(X, title):
    ax = plt.subplot(111)
    for label, marker, color in zip(
            range(1, 4), ('^', 's', 'o'), ('blue', 'red', 'green')):
        plt.scatter(x=X[:, 0][y == label],
                    y=X[:, 1][y == label] * -1,  # flip the figure
                    marker=marker,
                    color=color,
                    alpha=0.5,
                    label=label_dict[label])

    plt.xlabel('LD1')
    plt.ylabel('LD2')

    leg = plt.legend(loc='upper right', fancybox=True)
    leg.get_frame().set_alpha(0.5)
    plt.title(title)

    # hide axis ticks
    plt.tick_params(axis="both", which="both", bottom="off", top="off",
                    labelbottom="on", left="off", right="off", labelleft="on")

    # remove axis spines
    ax.spines["top"].set_visible(False)
    ax.spines["right"].set_visible(False)
    ax.spines["bottom"].set_visible(False)
    ax.spines["left"].set_visible(False)

    plt.grid()
    plt.tight_layout()
    plt.show()

plot_step_lda()
plot_scikit_lda(X_lda_sklearn, title='Default LDA via scikit-learn')
When I run this, I get an error saying that X is a one-dimensional array, so the X[:,1] indexing fails.
If I add one more class, i.e. "pre-T", "T", and "post-T", I am able to plot this visualisation.
If I need to clarify my problem, please let me know!!
Thanks!
~CJ
A:
This answer is probably too late to help the OP, but maybe it'll be useful for others: as per the scikit-learn documentation, LDA always produces fewer dimensions than the number of classes in the data.
When the number of components is not specified, it defaults to the highest value possible, that is:
n_components int, default=None
Number of components (<= min(n_classes - 1, n_features)) for
dimensionality reduction. If None, will be set to min(n_classes - 1, n_features).
n_components parameter description
So if you're using LDA on a 2-class problem, you get only one dimension, which you can plot by setting the y parameter of the scatter plot to, for example, an array of zeros (or any other constant value).
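A minimal sketch of that idea, assuming X and y are NumPy arrays holding the features and the 0/1 labels from the question:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# With 2 classes, fit_transform returns a single column: shape (n_samples, 1)
X_lda = LinearDiscriminantAnalysis().fit_transform(X, y)

for label, marker, color in zip((0, 1), ('^', 'o'), ('blue', 'red')):
    plt.scatter(X_lda[y == label, 0],           # position on the single discriminant axis
                np.zeros((y == label).sum()),   # constant y just spreads the points on a line
                marker=marker, color=color, alpha=0.5, label=label)
plt.xlabel('LD1')
plt.yticks([])
plt.legend()
plt.show()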
Tags: python, scikit_learn
Q:
discord login button selenium
I'm trying to auto-login to my Discord account and stay online with Python and Selenium.
The error occurs on this line:
driver.find_element(By.XPATH, '//*[@id="app-mount"]/div[2]/div/div[2]/div/div/form/div/div/div[1]/div[3]/button[2]').click()
This is my code:
import time
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By

# Discord credentials
username = "email"
password = "password"

# initialize the Chrome driver
driver = webdriver.Chrome("chromedriver")

# head to the Discord login page
driver.get("https://discord.com/login")
time.sleep(3)

# find the username/email field and send the username to the input field
driver.find_element(By.NAME, 'email').send_keys(username)

# find the password input field and insert the password as well
driver.find_element(By.NAME, 'password').send_keys(password)
time.sleep(10)

# click the login button
driver.find_element(By.XPATH, '//*[@id="app-mount"]/div[2]/div/div[2]/div/div/form/div/div/div[1]/div[3]/button[2]').click()

# wait for the ready state to be complete
WebDriverWait(driver=driver, timeout=10).until(
    lambda x: x.execute_script("return document.readyState === 'complete'")
)

error_message = "Incorrect username or password."
# get the errors (if there are any)
errors = driver.find_elements(By.CLASS_NAME, "flash-error")
# if we find that error message within errors, then login failed
if any(error_message in e.text for e in errors):
    print("[!] Login failed")
else:
    print("[+] Login su")
I didn't find any help on the web.
A:
The problem is that you are selecting the wrong XPath. Here's how to find the correct one:
1. Open the Discord login page
2. Use Inspect to find the element
3. Right-click on the element and select Copy > Copy XPath
Here's your correct XPATH: //*[@id="app-mount"]/div[2]/div/div[1]/div/div/div/div/form/div[2]/div/div[1]/div[2]/button[2]
Here's your final code:
import time
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By

# Discord credentials
username = "email"
password = "password"

# initialize the Chrome driver
driver = webdriver.Chrome("chromedriver")

# head to the Discord login page
driver.get("https://discord.com/login")
time.sleep(3)

# find the username/email field and send the username to the input field
driver.find_element(By.NAME, 'email').send_keys(username)

# find the password input field and insert the password as well
driver.find_element(By.NAME, 'password').send_keys(password)
time.sleep(10)

# click the login button
driver.find_element(By.XPATH, '//*[@id="app-mount"]/div[2]/div/div[1]/div/div/div/div/form/div[2]/div/div[1]/div[2]/button[2]').click()

# wait for the ready state to be complete
WebDriverWait(driver=driver, timeout=10).until(
    lambda x: x.execute_script("return document.readyState === 'complete'")
)

error_message = "Incorrect username or password."
# get the errors (if there are any)
errors = driver.find_elements(By.CLASS_NAME, "flash-error")
# if we find that error message within errors, then login failed
if any(error_message in e.text for e in errors):
    print("[!] Login failed")
else:
    print("[+] Login su")
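As a side note, an absolute XPath copied from devtools tends to break whenever Discord changes its page structure. A more resilient sketch, assuming the login button is the form's submit button (an assumption, not verified against Discord's current markup):
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC

# driver as created in the code above; wait until the submit button is
# clickable instead of sleeping for a fixed time
login_button = WebDriverWait(driver, 10).until(
    EC.element_to_be_clickable((By.CSS_SELECTOR, "button[type='submit']"))
)
login_button.click()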
Tags: python, selenium
Q:
Python - Need Help Web Scraping Dynamic Website
I'm pretty new to web scraping and would appreciate any advice for the scenarios below:
I'm trying to produce a home loans listing table using data from https://www.canstar.com.au/home-loans/
I'm mainly trying to get listings values like the ones below:
Homestar Finance | Star Essentials P&I 80% | Variable
Unloan | Home Loan LVR <80% | Variable
TicToc Home Loans | Live-in Variable P&I | Variable
ubank | Neat Home Loan Owner Occupied P&I 70-80% | Variable
and push them into a nested table
results = [[Homestar Finance, Star Essentials P&I 80%, Variable], etc, etc]
For my first attempt, I used BeautifulSoup entirely and practiced on an offline version of the site.
import pandas as pd
from bs4 import BeautifulSoup

with open('/local/path/canstar.html', 'r') as canstar_offline:
    content = canstar_offline.read()

results = [['Affiliate', 'Product Name', 'Product Type']]

soup = BeautifulSoup(content, 'lxml')

for listing in soup.find_all('div', class_='table-cards-container'):
    for listing1 in listing.find_all('a'):
        if listing1.text.strip() != 'More details' and listing1.text.strip() != '':
            results.append(listing1.text.strip().split(' | '))

df = pd.DataFrame(results[1:], columns=results[0]).to_dict('list')
df2 = pd.DataFrame(df)
print(df2)
I pretty much got very close to what I wanted, but unfortunately it doesn't work for the actual site because it looks like I'm getting blocked for repeated requests.
So I tried this again with Selenium, but now I'm stuck.
I tried to reuse as much of the transferable filtering logic from BS as I could, but with Selenium I can't get anywhere close to what I had.
import time
from selenium import webdriver
from selenium.webdriver.common.by import By

url = 'https://www.canstar.com.au/home-loans'
results = []

driver = webdriver.Chrome()
driver.get(url)
# content = driver.page_source
# soup = BeautifulSoup(content)
time.sleep(3)

tables = driver.find_elements(By.CLASS_NAME, 'table-cards-container')
for table in tables:
    listing = table.find_element(By.TAG_NAME, 'a')
    print(listing.text)
This version (above) only returns one listing (I'm trying to get the entire table through iteration)
import time
from selenium import webdriver
from selenium.webdriver.common.by import By

url = 'https://www.canstar.com.au/home-loans'
results = []

driver = webdriver.Chrome()
driver.get(url)
# content = driver.page_source
# soup = BeautifulSoup(content)
time.sleep(3)

tables = driver.find_elements(By.CLASS_NAME, 'table-cards-container')
for table in tables:
    # listing = table.find_element(By.TAG_NAME, 'a')
    print(table.text)
This version (above) looks like it gets all the text from the 'table-cards-container' class, but I'm unable to filter through it to just get the listings.
A:
I think you can try something like this; I hope the comments in the code explain what it is doing.
# Needed libs
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# Initiate the driver and navigate
driver = webdriver.Chrome()
url = 'https://www.canstar.com.au/home-loans'
driver.get(url)

# We save the loans list
loans = WebDriverWait(driver, 30).until(EC.presence_of_all_elements_located((By.XPATH, "//cnslib-table-card")))

# We loop once per loan (XPath indices are 1-based, so go up to len(loans) inclusive)
for i in range(1, len(loans) + 1):
    # With this XPath I save the title of the loan
    loan_title = WebDriverWait(driver, 30).until(EC.presence_of_element_located((By.XPATH, f"((//cnslib-table-card)[{i}]//a)[1]"))).text
    print(loan_title)
    # With this XPath I save the first percentage we see for the loan
    loan_first_percentage = WebDriverWait(driver, 30).until(EC.presence_of_element_located((By.XPATH, f"((//cnslib-table-card)[{i}]//span)[1]"))).text
    print(loan_first_percentage)
    # With this XPath I save the second percentage we see for the loan
    loan_second_percentage = WebDriverWait(driver, 30).until(EC.presence_of_element_located((By.XPATH, f"((//cnslib-table-card)[{i}]//span)[3]"))).text
    print(loan_second_percentage)
    # With this XPath I save the amount we see for the loan
    loan_amount = WebDriverWait(driver, 30).until(EC.presence_of_element_located((By.XPATH, f"((//cnslib-table-card)[{i}]//span)[5]"))).text
    print(loan_amount)
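Continuing from the code above, to build the nested table asked for in the question, the same loop can append to a list instead of printing. A small sketch, assuming the card titles contain ' | ' separators as in the question's BeautifulSoup version:
import pandas as pd

results = [['Affiliate', 'Product Name', 'Product Type']]
for i in range(1, len(loans) + 1):
    title = WebDriverWait(driver, 30).until(
        EC.presence_of_element_located((By.XPATH, f"((//cnslib-table-card)[{i}]//a)[1]"))).text
    # e.g. 'Homestar Finance | Star Essentials P&I 80% | Variable' -> 3 columns
    results.append(title.split(' | '))

df = pd.DataFrame(results[1:], columns=results[0])
print(df)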
Tags: beautifulsoup, dynamic, python, selenium, web_scraping
Q:
Add to the list, a value of a column of the current row of a DataFrame only if the previous rows pass the test
A brief example of my CSV file (there is no way to publish the complete file because of the character limit):
market_name,runner_name,odds,result,back
First Half Goals 0.5,Over 0.5 Goals,1.7,WINNER,0.6545
Over/Under 6.5 Goals,Under 6.5 Goals,1.01,WINNER,0.00935
Over/Under 0.5 Goals,Over 0.5 Goals,1.71,WINNER,0.66385
Over/Under 2.5 Goals,Under 2.5 Goals,1.41,WINNER,0.3833499999999999
Over/Under 1.5 Goals,Over 1.5 Goals,1.25,WINNER,0.23375
Over/Under 4.5 Goals,Under 4.5 Goals,1.34,WINNER,0.3179
First Half Goals 0.5,Under 0.5 Goals,1.96,WINNER,0.8976000000000001
Over/Under 1.5 Goals,Over 1.5 Goals,1.6,WINNER,0.5610000000000002
Over/Under 2.5 Goals,Over 2.5 Goals,1.21,WINNER,0.1963499999999999
Over/Under 3.5 Goals,Over 3.5 Goals,1.18,WINNER,0.1682999999999999
Over/Under 3.5 Goals,Under 3.5 Goals,1.98,LOSER,-1.0
Over/Under 2.5 Goals,Over 2.5 Goals,1.09,WINNER,0.08415
Over/Under 3.5 Goals,Over 3.5 Goals,2.02,LOSER,-1.0
Over/Under 1.5 Goals,Under 1.5 Goals,3.15,LOSER,-1.0
Over/Under 3.5 Goals,Over 3.5 Goals,1.44,LOSER,-1.0
Over/Under 2.5 Goals,Over 2.5 Goals,1.7,WINNER,0.6545
Over/Under 1.5 Goals,Over 1.5 Goals,1.24,WINNER,0.2244
Over/Under 4.5 Goals,Over 4.5 Goals,2.06,WINNER,0.9911
Over/Under 3.5 Goals,Under 3.5 Goals,2.0,WINNER,0.935
Over/Under 1.5 Goals,Under 1.5 Goals,1.41,WINNER,0.3833499999999999
Over/Under 7.5 Goals,Under 7.5 Goals,1.27,WINNER,0.25245
Over/Under 5.5 Goals,Under 5.5 Goals,1.5,WINNER,0.4675
Over/Under 4.5 Goals,Under 4.5 Goals,1.29,WINNER,0.27115
Over/Under 1.5 Goals,Over 1.5 Goals,1.15,WINNER,0.1402499999999999
Over/Under 1.5 Goals,Over 1.5 Goals,1.53,WINNER,0.49555
Over/Under 1.5 Goals,Over 1.5 Goals,1.57,WINNER,0.53295
First Half Goals 0.5,Over 0.5 Goals,1.44,WINNER,0.4114
Over/Under 0.5 Goals,Over 0.5 Goals,2.06,WINNER,0.9911
First Half Goals 0.5,Under 0.5 Goals,2.32,WINNER,1.2342
First Half Goals 0.5,Under 0.5 Goals,1.87,WINNER,0.8134500000000001
Over/Under 2.5 Goals,Under 2.5 Goals,1.2,WINNER,0.1869999999999999
First Half Goals 0.5,Under 0.5 Goals,1.08,WINNER,0.0748
First Half Goals 0.5,Over 0.5 Goals,2.02,WINNER,0.9537
Over/Under 1.5 Goals,Under 1.5 Goals,1.69,WINNER,0.64515
Over/Under 0.5 Goals,Over 0.5 Goals,1.25,LOSER,-1.0
Over/Under 0.5 Goals,Under 0.5 Goals,4.7,WINNER,3.4595
First Half Goals 0.5,Over 0.5 Goals,1.74,WINNER,0.6919000000000001
First Half Goals 1.5,Under 1.5 Goals,1.41,LOSER,-1.0
Over/Under 0.5 Goals,Under 0.5 Goals,4.3,LOSER,-1.0
First Half Goals 0.5,Over 0.5 Goals,2.44,WINNER,1.3464
Over/Under 1.5 Goals,Over 1.5 Goals,1.6,WINNER,0.5610000000000002
First Half Goals 0.5,Under 0.5 Goals,1.77,WINNER,0.7199500000000001
First Half Goals 1.5,Under 1.5 Goals,1.88,LOSER,-1.0
Over/Under 3.5 Goals,Over 3.5 Goals,1.93,LOSER,-1.0
Over/Under 0.5 Goals,Under 0.5 Goals,1.62,WINNER,0.5797000000000001
Over/Under 3.5 Goals,Under 3.5 Goals,1.93,WINNER,0.86955
First Half Goals 0.5,Under 0.5 Goals,1.4,LOSER,-1.0
First Half Goals 1.5,Under 1.5 Goals,1.31,WINNER,0.28985
First Half Goals 0.5,Under 0.5 Goals,1.23,WINNER,0.21505
Over/Under 1.5 Goals,Under 1.5 Goals,1.75,WINNER,0.70125
Over/Under 2.5 Goals,Under 2.5 Goals,1.24,WINNER,0.2244
Over/Under 3.5 Goals,Under 3.5 Goals,1.1,WINNER,0.0935
First Half Goals 0.5,Under 0.5 Goals,1.13,WINNER,0.1215499999999999
Over/Under 5.5 Goals,Under 5.5 Goals,1.15,WINNER,0.1402499999999999
Over/Under 1.5 Goals,Over 1.5 Goals,1.4,WINNER,0.3739999999999999
Over/Under 0.5 Goals,Under 0.5 Goals,3.7,LOSER,-1.0
First Half Goals 0.5,Over 0.5 Goals,1.7,LOSER,-1.0
Over/Under 1.5 Goals,Under 1.5 Goals,4.0,WINNER,2.805
First Half Goals 0.5,Over 0.5 Goals,1.73,LOSER,-1.0
Over/Under 3.5 Goals,Over 3.5 Goals,1.17,WINNER,0.1589499999999999
First Half Goals 0.5,Over 0.5 Goals,1.97,WINNER,0.90695
Over/Under 0.5 Goals,Under 0.5 Goals,5.6,WINNER,4.301
Over/Under 1.5 Goals,Over 1.5 Goals,1.44,WINNER,0.4114
First Half Goals 0.5,Over 0.5 Goals,1.75,WINNER,0.70125
First Half Goals 0.5,Over 0.5 Goals,1.87,WINNER,0.8134500000000001
Over/Under 0.5 Goals,Over 0.5 Goals,2.02,WINNER,0.9537
First Half Goals 0.5,Under 0.5 Goals,1.75,LOSER,-1.0
First Half Goals 0.5,Under 0.5 Goals,1.61,WINNER,0.5703500000000001
Over/Under 0.5 Goals,Under 0.5 Goals,2.26,LOSER,-1.0
Over/Under 2.5 Goals,Over 2.5 Goals,2.02,LOSER,-1.0
First Half Goals 0.5,Under 0.5 Goals,1.8,LOSER,-1.0
First Half Goals 1.5,Under 1.5 Goals,1.4,WINNER,0.3739999999999999
Over/Under 2.5 Goals,Under 2.5 Goals,2.32,LOSER,-1.0
First Half Goals 0.5,Under 0.5 Goals,1.27,WINNER,0.25245
First Half Goals 0.5,Under 0.5 Goals,1.5,LOSER,-1.0
Over/Under 2.5 Goals,Under 2.5 Goals,1.06,WINNER,0.0561
Over/Under 4.5 Goals,Over 4.5 Goals,3.3,LOSER,-1.0
Over/Under 1.5 Goals,Over 1.5 Goals,1.18,WINNER,0.1682999999999999
Over/Under 1.5 Goals,Over 1.5 Goals,1.41,LOSER,-1.0
First Half Goals 0.5,Over 0.5 Goals,1.9,WINNER,0.8414999999999999
First Half Goals 0.5,Under 0.5 Goals,1.04,WINNER,0.0374
First Half Goals 1.5,Over 1.5 Goals,2.02,LOSER,-1.0
Over/Under 1.5 Goals,Over 1.5 Goals,1.73,WINNER,0.68255
Over/Under 4.5 Goals,Over 4.5 Goals,1.47,WINNER,0.43945
Over/Under 2.5 Goals,Over 2.5 Goals,1.33,WINNER,0.3085500000000001
Over/Under 1.5 Goals,Over 1.5 Goals,2.08,LOSER,-1.0
First Half Goals 1.5,Under 1.5 Goals,1.33,WINNER,0.3085500000000001
First Half Goals 1.5,Under 1.5 Goals,1.42,WINNER,0.3926999999999999
First Half Goals 0.5,Under 0.5 Goals,1.25,LOSER,-1.0
First Half Goals 0.5,Under 0.5 Goals,1.5,WINNER,0.4675
First Half Goals 0.5,Over 0.5 Goals,2.02,WINNER,0.9537
First Half Goals 0.5,Under 0.5 Goals,1.5,LOSER,-1.0
First Half Goals 0.5,Under 0.5 Goals,1.98,LOSER,-1.0
First Half Goals 0.5,Under 0.5 Goals,1.11,WINNER,0.1028500000000001
First Half Goals 0.5,Under 0.5 Goals,1.03,WINNER,0.02805
First Half Goals 0.5,Over 0.5 Goals,2.42,WINNER,1.3277
First Half Goals 0.5,Under 0.5 Goals,1.23,LOSER,-1.0
Over/Under 4.5 Goals,Over 4.5 Goals,4.7,LOSER,-1.0
Over/Under 1.5 Goals,Over 1.5 Goals,1.71,WINNER,0.66385
Over/Under 2.5 Goals,Over 2.5 Goals,1.67,WINNER,0.62645
Over/Under 1.5 Goals,Under 1.5 Goals,1.54,WINNER,0.5049
First Half Goals 1.5,Under 1.5 Goals,1.46,WINNER,0.4301
First Half Goals 1.5,Under 1.5 Goals,1.13,WINNER,0.1215499999999999
First Half Goals 0.5,Under 0.5 Goals,1.2,WINNER,0.1869999999999999
Over/Under 1.5 Goals,Over 1.5 Goals,1.84,WINNER,0.7854000000000001
First Half Goals 0.5,Over 0.5 Goals,1.68,WINNER,0.6358
Over/Under 1.5 Goals,Over 1.5 Goals,1.73,LOSER,-1.0
Over/Under 1.5 Goals,Under 1.5 Goals,1.27,LOSER,-1.0
First Half Goals 0.5,Under 0.5 Goals,1.73,LOSER,-1.0
Over/Under 1.5 Goals,Over 1.5 Goals,1.24,WINNER,0.2244
First Half Goals 0.5,Under 0.5 Goals,2.0,LOSER,-1.0
First Half Goals 0.5,Under 0.5 Goals,1.9,WINNER,0.8414999999999999
First Half Goals 1.5,Under 1.5 Goals,1.33,WINNER,0.3085500000000001
First Half Goals 0.5,Over 0.5 Goals,2.02,WINNER,0.9537
First Half Goals 0.5,Under 0.5 Goals,1.51,LOSER,-1.0
Over/Under 2.5 Goals,Over 2.5 Goals,1.23,WINNER,0.21505
Over/Under 2.5 Goals,Over 2.5 Goals,1.29,WINNER,0.27115
First Half Goals 1.5,Over 1.5 Goals,2.02,WINNER,0.9537
Over/Under 3.5 Goals,Over 3.5 Goals,1.68,LOSER,-1.0
Over/Under 1.5 Goals,Under 1.5 Goals,1.18,WINNER,0.1682999999999999
Over/Under 1.5 Goals,Over 1.5 Goals,1.16,WINNER,0.1495999999999999
First Half Goals 0.5,Under 0.5 Goals,2.3,LOSER,-1.0
Over/Under 1.5 Goals,Over 1.5 Goals,1.77,LOSER,-1.0
First Half Goals 0.5,Under 0.5 Goals,1.94,WINNER,0.8789
First Half Goals 0.5,Under 0.5 Goals,1.87,LOSER,-1.0
Over/Under 3.5 Goals,Over 3.5 Goals,1.14,WINNER,0.1308999999999999
Over/Under 2.5 Goals,Over 2.5 Goals,1.24,LOSER,-1.0
Over/Under 0.5 Goals,Over 0.5 Goals,1.21,WINNER,0.1963499999999999
First Half Goals 1.5,Over 1.5 Goals,1.91,WINNER,0.85085
Over/Under 2.5 Goals,Over 2.5 Goals,1.77,LOSER,-1.0
Over/Under 0.5 Goals,Over 0.5 Goals,1.82,WINNER,0.7667
Over/Under 1.5 Goals,Under 1.5 Goals,2.14,LOSER,-1.0
First Half Goals 0.5,Under 0.5 Goals,1.16,WINNER,0.1495999999999999
First Half Goals 1.5,Under 1.5 Goals,1.11,WINNER,0.1028500000000001
Over/Under 0.5 Goals,Over 0.5 Goals,1.71,LOSER,-1.0
First Half Goals 1.5,Over 1.5 Goals,1.85,LOSER,-1.0
Over/Under 1.5 Goals,Over 1.5 Goals,1.48,WINNER,0.4488
First Half Goals 1.5,Under 1.5 Goals,1.87,WINNER,0.8134500000000001
Over/Under 1.5 Goals,Over 1.5 Goals,1.26,WINNER,0.2431
Over/Under 1.5 Goals,Over 1.5 Goals,1.18,WINNER,0.1682999999999999
Over/Under 1.5 Goals,Over 1.5 Goals,1.47,LOSER,-1.0
Over/Under 2.5 Goals,Over 2.5 Goals,1.58,LOSER,-1.0
Over/Under 1.5 Goals,Over 1.5 Goals,1.46,WINNER,0.4301
Over/Under 1.5 Goals,Over 1.5 Goals,1.83,WINNER,0.7760500000000001
Over/Under 2.5 Goals,Over 2.5 Goals,2.58,LOSER,-1.0
Over/Under 1.5 Goals,Under 1.5 Goals,1.42,LOSER,-1.0
First Half Goals 2.5,Under 2.5 Goals,1.42,WINNER,0.3926999999999999
First Half Goals 0.5,Over 0.5 Goals,2.1,LOSER,-1.0
First Half Goals 0.5,Over 0.5 Goals,2.04,WINNER,0.9724
Over/Under 1.5 Goals,Under 1.5 Goals,1.26,WINNER,0.2431
First Half Goals 0.5,Under 0.5 Goals,1.4,LOSER,-1.0
First Half Goals 0.5,Under 0.5 Goals,1.95,WINNER,0.88825
First Half Goals 1.5,Under 1.5 Goals,1.32,LOSER,-1.0
First Half Goals 1.5,Under 1.5 Goals,1.29,LOSER,-1.0
First Half Goals 0.5,Under 0.5 Goals,1.16,LOSER,-1.0
Over/Under 1.5 Goals,Over 1.5 Goals,1.3,WINNER,0.2805000000000001
First Half Goals 0.5,Over 0.5 Goals,1.45,LOSER,-1.0
Over/Under 2.5 Goals,Over 2.5 Goals,1.58,WINNER,0.5423000000000001
Over/Under 2.5 Goals,Under 2.5 Goals,1.76,WINNER,0.7106
Over/Under 0.5 Goals,Over 0.5 Goals,1.54,WINNER,0.5049
Over/Under 4.5 Goals,Under 4.5 Goals,1.63,LOSER,-1.0
First Half Goals 0.5,Under 0.5 Goals,1.18,WINNER,0.1682999999999999
Over/Under 2.5 Goals,Over 2.5 Goals,1.71,WINNER,0.66385
Over/Under 2.5 Goals,Under 2.5 Goals,1.95,LOSER,-1.0
Over/Under 2.5 Goals,Over 2.5 Goals,2.8,WINNER,1.6829999999999998
Over/Under 3.5 Goals,Under 3.5 Goals,2.2,WINNER,1.1220000000000003
Over/Under 1.5 Goals,Over 1.5 Goals,2.16,LOSER,-1.0
Over/Under 2.5 Goals,Under 2.5 Goals,1.5,LOSER,-1.0
Over/Under 1.5 Goals,Over 1.5 Goals,1.27,WINNER,0.25245
Over/Under 6.5 Goals,Under 6.5 Goals,2.0,LOSER,-1.0
Over/Under 3.5 Goals,Over 3.5 Goals,2.06,WINNER,0.9911
Over/Under 3.5 Goals,Under 3.5 Goals,1.9,LOSER,-1.0
First Half Goals 0.5,Over 0.5 Goals,2.08,WINNER,1.0098
Over/Under 1.5 Goals,Over 1.5 Goals,1.54,LOSER,-1.0
Over/Under 3.5 Goals,Over 3.5 Goals,1.4,LOSER,-1.0
Over/Under 3.5 Goals,Under 3.5 Goals,3.35,WINNER,2.1972500000000004
Over/Under 2.5 Goals,Over 2.5 Goals,1.44,WINNER,0.4114
Over/Under 5.5 Goals,Over 5.5 Goals,1.33,WINNER,0.3085500000000001
Over/Under 1.5 Goals,Under 1.5 Goals,1.94,WINNER,0.8789
Over/Under 3.5 Goals,Under 3.5 Goals,1.01,WINNER,0.00935
First Half Goals 1.5,Under 1.5 Goals,1.78,WINNER,0.7293000000000001
First Half Goals 0.5,Over 0.5 Goals,1.63,WINNER,0.58905
First Half Goals 0.5,Over 0.5 Goals,1.75,LOSER,-1.0
First Half Goals 0.5,Under 0.5 Goals,2.38,LOSER,-1.0
First Half Goals 1.5,Under 1.5 Goals,1.37,LOSER,-1.0
First Half Goals 1.5,Under 1.5 Goals,1.4,WINNER,0.3739999999999999
First Half Goals 2.5,Under 2.5 Goals,1.01,WINNER,0.00935
Over/Under 1.5 Goals,Under 1.5 Goals,1.93,LOSER,-1.0
Over/Under 2.5 Goals,Over 2.5 Goals,1.14,WINNER,0.1308999999999999
First Half Goals 0.5,Over 0.5 Goals,1.47,WINNER,0.43945
Over/Under 3.5 Goals,Over 3.5 Goals,1.55,WINNER,0.5142500000000001
Over/Under 1.5 Goals,Over 1.5 Goals,1.1,WINNER,0.0935
First Half Goals 0.5,Under 0.5 Goals,1.4,WINNER,0.3739999999999999
Over/Under 2.5 Goals,Over 2.5 Goals,3.0,LOSER,-1.0
Over/Under 2.5 Goals,Under 2.5 Goals,1.04,WINNER,0.0374
First Half Goals 0.5,Under 0.5 Goals,1.28,WINNER,0.2618
First Half Goals 1.5,Under 1.5 Goals,1.32,WINNER,0.2992000000000001
Over/Under 0.5 Goals,Over 0.5 Goals,1.33,WINNER,0.3085500000000001
Over/Under 0.5 Goals,Over 0.5 Goals,1.31,LOSER,-1.0
First Half Goals 0.5,Over 0.5 Goals,1.38,WINNER,0.3552999999999999
First Half Goals 2.5,Under 2.5 Goals,1.3,WINNER,0.2805000000000001
Over/Under 1.5 Goals,Over 1.5 Goals,1.3,WINNER,0.2805000000000001
First Half Goals 0.5,Under 0.5 Goals,1.65,WINNER,0.6077499999999999
First Half Goals 0.5,Under 0.5 Goals,1.39,WINNER,0.3646499999999999
First Half Goals 0.5,Over 0.5 Goals,1.71,LOSER,-1.0
First Half Goals 0.5,Over 0.5 Goals,1.85,WINNER,0.7947500000000002
First Half Goals 0.5,Under 0.5 Goals,1.72,WINNER,0.6732
Over/Under 0.5 Goals,Under 0.5 Goals,1.66,WINNER,0.6171
Over/Under 0.5 Goals,Under 0.5 Goals,2.92,LOSER,-1.0
Over/Under 1.5 Goals,Over 1.5 Goals,2.5,WINNER,1.4025
Over/Under 1.5 Goals,Under 1.5 Goals,1.31,LOSER,-1.0
First Half Goals 0.5,Under 0.5 Goals,1.22,WINNER,0.2057
Over/Under 2.5 Goals,Over 2.5 Goals,1.08,WINNER,0.0748
Over/Under 1.5 Goals,Over 1.5 Goals,1.17,WINNER,0.1589499999999999
Over/Under 0.5 Goals,Over 0.5 Goals,1.24,WINNER,0.2244
Over/Under 1.5 Goals,Over 1.5 Goals,1.58,LOSER,-1.0
Over/Under 2.5 Goals,Under 2.5 Goals,1.66,LOSER,-1.0
Over/Under 3.5 Goals,Under 3.5 Goals,1.47,WINNER,0.43945
Over/Under 3.5 Goals,Over 3.5 Goals,1.12,LOSER,-1.0
First Half Goals 0.5,Under 0.5 Goals,2.16,LOSER,-1.0
Over/Under 1.5 Goals,Under 1.5 Goals,3.25,WINNER,2.10375
Over/Under 1.5 Goals,Under 1.5 Goals,3.1,WINNER,1.9635
Over/Under 0.5 Goals,Over 0.5 Goals,1.48,WINNER,0.4488
Over/Under 2.5 Goals,Over 2.5 Goals,1.21,WINNER,0.1963499999999999
Over/Under 3.5 Goals,Over 3.5 Goals,1.23,WINNER,0.21505
Over/Under 2.5 Goals,Under 2.5 Goals,4.7,LOSER,-1.0
Over/Under 3.5 Goals,Over 3.5 Goals,1.29,WINNER,0.27115
Over/Under 3.5 Goals,Under 3.5 Goals,1.05,WINNER,0.04675
First Half Goals 0.5,Under 0.5 Goals,2.04,LOSER,-1.0
First Half Goals 0.5,Under 0.5 Goals,1.15,WINNER,0.1402499999999999
Over/Under 3.5 Goals,Over 3.5 Goals,1.43,WINNER,0.4020499999999999
Over/Under 0.5 Goals,Over 0.5 Goals,1.49,LOSER,-1.0
Over/Under 1.5 Goals,Over 1.5 Goals,1.28,WINNER,0.2618
Over/Under 1.5 Goals,Under 1.5 Goals,5.4,LOSER,-1.0
Over/Under 1.5 Goals,Over 1.5 Goals,1.45,WINNER,0.4207499999999999
Over/Under 1.5 Goals,Under 1.5 Goals,1.19,WINNER,0.1776499999999999
Over/Under 2.5 Goals,Under 2.5 Goals,1.38,WINNER,0.3552999999999999
First Half Goals 0.5,Under 0.5 Goals,1.42,LOSER,-1.0
Over/Under 0.5 Goals,Over 0.5 Goals,1.33,LOSER,-1.0
Over/Under 0.5 Goals,Over 0.5 Goals,1.3,WINNER,0.2805000000000001
Over/Under 3.5 Goals,Over 3.5 Goals,1.28,WINNER,0.2618
Over/Under 1.5 Goals,Under 1.5 Goals,2.94,LOSER,-1.0
Over/Under 2.5 Goals,Under 2.5 Goals,1.95,LOSER,-1.0
Over/Under 0.5 Goals,Under 0.5 Goals,1.87,WINNER,0.8134500000000001
Over/Under 1.5 Goals,Under 1.5 Goals,1.27,WINNER,0.25245
Over/Under 4.5 Goals,Over 4.5 Goals,1.09,WINNER,0.08415
Over/Under 3.5 Goals,Under 3.5 Goals,1.01,WINNER,0.00935
Over/Under 1.5 Goals,Over 1.5 Goals,2.14,LOSER,-1.0
Over/Under 3.5 Goals,Over 3.5 Goals,1.02,WINNER,0.0187
Over/Under 4.5 Goals,Over 4.5 Goals,1.17,WINNER,0.1589499999999999
Over/Under 5.5 Goals,Over 5.5 Goals,1.24,WINNER,0.2244
First Half Goals 0.5,Under 0.5 Goals,1.4,LOSER,-1.0
Over/Under 2.5 Goals,Over 2.5 Goals,1.35,LOSER,-1.0
First Half Goals 0.5,Over 0.5 Goals,1.51,LOSER,-1.0
First Half Goals 1.5,Over 1.5 Goals,2.62,LOSER,-1.0
First Half Goals 1.5,Under 1.5 Goals,1.81,WINNER,0.7573500000000001
Over/Under 0.5 Goals,Over 0.5 Goals,1.2,LOSER,-1.0
Over/Under 1.5 Goals,Over 1.5 Goals,1.36,WINNER,0.3366000000000001
First Half Goals 0.5,Over 0.5 Goals,1.63,LOSER,-1.0
Over/Under 1.5 Goals,Over 1.5 Goals,1.8,WINNER,0.7480000000000001
First Half Goals 0.5,Under 0.5 Goals,1.68,WINNER,0.6358
Over/Under 0.5 Goals,Under 0.5 Goals,1.27,WINNER,0.25245
Over/Under 1.5 Goals,Over 1.5 Goals,1.59,LOSER,-1.0
First Half Goals 0.5,Under 0.5 Goals,1.59,WINNER,0.5516500000000001
Over/Under 2.5 Goals,Over 2.5 Goals,1.62,WINNER,0.5797000000000001
Over/Under 0.5 Goals,Over 0.5 Goals,1.64,LOSER,-1.0
Over/Under 4.5 Goals,Over 4.5 Goals,1.34,WINNER,0.3179
Over/Under 4.5 Goals,Over 4.5 Goals,1.34,WINNER,0.3179
First Half Goals 0.5,Under 0.5 Goals,1.6,WINNER,0.5610000000000002
Over/Under 1.5 Goals,Over 1.5 Goals,2.1,LOSER,-1.0
Over/Under 5.5 Goals,Under 5.5 Goals,1.52,LOSER,-1.0
Over/Under 2.5 Goals,Under 2.5 Goals,1.19,WINNER,0.1776499999999999
Over/Under 0.5 Goals,Over 0.5 Goals,1.2,WINNER,0.1869999999999999
Over/Under 1.5 Goals,Over 1.5 Goals,1.1,WINNER,0.0935
Over/Under 3.5 Goals,Under 3.5 Goals,1.27,LOSER,-1.0
Over/Under 7.5 Goals,Under 7.5 Goals,1.38,LOSER,-1.0
Over/Under 1.5 Goals,Over 1.5 Goals,1.87,LOSER,-1.0
Over/Under 1.5 Goals,Under 1.5 Goals,2.14,LOSER,-1.0
Over/Under 3.5 Goals,Under 3.5 Goals,1.22,LOSER,-1.0
First Half Goals 0.5,Under 0.5 Goals,1.5,WINNER,0.4675
First Half Goals 0.5,Under 0.5 Goals,1.12,WINNER,0.1122000000000001
First Half Goals 2.5,Under 2.5 Goals,1.27,WINNER,0.25245
First Half Goals 0.5,Under 0.5 Goals,1.21,WINNER,0.1963499999999999
Over/Under 2.5 Goals,Over 2.5 Goals,1.19,WINNER,0.1776499999999999
Over/Under 2.5 Goals,Over 2.5 Goals,1.25,WINNER,0.23375
Over/Under 2.5 Goals,Under 2.5 Goals,2.48,LOSER,-1.0
First Half Goals 1.5,Over 1.5 Goals,1.72,WINNER,0.6732
First Half Goals 0.5,Over 0.5 Goals,1.62,LOSER,-1.0
First Half Goals 0.5,Over 0.5 Goals,1.52,LOSER,-1.0
First Half Goals 0.5,Under 0.5 Goals,2.38,WINNER,1.2903
First Half Goals 0.5,Under 0.5 Goals,1.82,WINNER,0.7667
First Half Goals 0.5,Under 0.5 Goals,2.24,WINNER,1.1594000000000002
Over/Under 0.5 Goals,Over 0.5 Goals,1.28,WINNER,0.2618
Over/Under 0.5 Goals,Under 0.5 Goals,4.0,LOSER,-1.0
First Half Goals 0.5,Over 0.5 Goals,1.77,LOSER,-1.0
Over/Under 0.5 Goals,Under 0.5 Goals,3.15,WINNER,2.01025
Over/Under 0.5 Goals,Over 0.5 Goals,1.66,LOSER,-1.0
First Half Goals 1.5,Under 1.5 Goals,1.15,WINNER,0.1402499999999999
First Half Goals 0.5,Under 0.5 Goals,2.42,LOSER,-1.0
First Half Goals 1.5,Under 1.5 Goals,1.28,LOSER,-1.0
Over/Under 2.5 Goals,Under 2.5 Goals,1.28,LOSER,-1.0
Over/Under 1.5 Goals,Over 1.5 Goals,1.55,LOSER,-1.0
Over/Under 2.5 Goals,Over 2.5 Goals,1.51,WINNER,0.47685
First Half Goals 1.5,Under 1.5 Goals,1.32,LOSER,-1.0
Over/Under 3.5 Goals,Under 3.5 Goals,1.33,WINNER,0.3085500000000001
First Half Goals 0.5,Under 0.5 Goals,2.18,LOSER,-1.0
Over/Under 2.5 Goals,Over 2.5 Goals,1.82,LOSER,-1.0
First Half Goals 0.5,Over 0.5 Goals,1.63,LOSER,-1.0
Over/Under 1.5 Goals,Over 1.5 Goals,1.29,WINNER,0.27115
First Half Goals 1.5,Over 1.5 Goals,1.47,WINNER,0.43945
First Half Goals 1.5,Over 1.5 Goals,1.76,WINNER,0.7106
First Half Goals 0.5,Under 0.5 Goals,1.3,WINNER,0.2805000000000001
First Half Goals 2.5,Under 2.5 Goals,1.54,WINNER,0.5049
Over/Under 4.5 Goals,Over 4.5 Goals,1.12,WINNER,0.1122000000000001
Over/Under 0.5 Goals,Over 0.5 Goals,1.28,LOSER,-1.0
Over/Under 4.5 Goals,Over 4.5 Goals,1.16,WINNER,0.1495999999999999
Over/Under 0.5 Goals,Under 0.5 Goals,1.95,WINNER,0.88825
Over/Under 0.5 Goals,Under 0.5 Goals,2.36,LOSER,-1.0
First Half Goals 0.5,Under 0.5 Goals,1.57,WINNER,0.53295
First Half Goals 0.5,Under 0.5 Goals,1.44,LOSER,-1.0
Over/Under 3.5 Goals,Under 3.5 Goals,1.86,WINNER,0.8041000000000001
Over/Under 2.5 Goals,Under 2.5 Goals,2.18,LOSER,-1.0
First Half Goals 1.5,Under 1.5 Goals,1.22,WINNER,0.2057
Over/Under 0.5 Goals,Over 0.5 Goals,1.23,WINNER,0.21505
First Half Goals 0.5,Over 0.5 Goals,2.26,LOSER,-1.0
Over/Under 2.5 Goals,Over 2.5 Goals,1.45,LOSER,-1.0
First Half Goals 1.5,Under 1.5 Goals,1.58,WINNER,0.5423000000000001
First Half Goals 1.5,Under 1.5 Goals,1.3,LOSER,-1.0
First Half Goals 1.5,Under 1.5 Goals,1.25,WINNER,0.23375
First Half Goals 0.5,Over 0.5 Goals,1.32,WINNER,0.2992000000000001
Over/Under 0.5 Goals,Under 0.5 Goals,1.22,WINNER,0.2057
First Half Goals 1.5,Over 1.5 Goals,2.44,LOSER,-1.0
Over/Under 3.5 Goals,Over 3.5 Goals,1.21,LOSER,-1.0
Over/Under 0.5 Goals,Over 0.5 Goals,1.29,WINNER,0.27115
Over/Under 0.5 Goals,Over 0.5 Goals,1.72,WINNER,0.6732
Over/Under 0.5 Goals,Under 0.5 Goals,2.5,LOSER,-1.0
Over/Under 3.5 Goals,Over 3.5 Goals,1.99,WINNER,0.92565
Over/Under 3.5 Goals,Over 3.5 Goals,1.65,LOSER,-1.0
First Half Goals 1.5,Under 1.5 Goals,1.54,WINNER,0.5049
Over/Under 1.5 Goals,Under 1.5 Goals,1.67,LOSER,-1.0
First Half Goals 0.5,Under 0.5 Goals,1.43,WINNER,0.4020499999999999
First Half Goals 0.5,Under 0.5 Goals,1.46,WINNER,0.4301
First Half Goals 1.5,Under 1.5 Goals,1.2,WINNER,0.1869999999999999
First Half Goals 2.5,Under 2.5 Goals,1.16,WINNER,0.1495999999999999
Over/Under 4.5 Goals,Over 4.5 Goals,1.16,WINNER,0.1495999999999999
Over/Under 1.5 Goals,Over 1.5 Goals,1.23,WINNER,0.21505
Over/Under 1.5 Goals,Over 1.5 Goals,1.24,WINNER,0.2244
Over/Under 4.5 Goals,Over 4.5 Goals,1.19,WINNER,0.1776499999999999
Over/Under 1.5 Goals,Over 1.5 Goals,1.61,WINNER,0.5703500000000001
Over/Under 3.5 Goals,Over 3.5 Goals,1.47,LOSER,-1.0
Over/Under 3.5 Goals,Over 3.5 Goals,1.83,WINNER,0.7760500000000001
Over/Under 0.5 Goals,Under 0.5 Goals,2.0,WINNER,0.935
First Half Goals 2.5,Under 2.5 Goals,1.46,LOSER,-1.0
Over/Under 1.5 Goals,Over 1.5 Goals,2.36,WINNER,1.2716
Over/Under 3.5 Goals,Under 3.5 Goals,1.6,LOSER,-1.0
Over/Under 3.5 Goals,Under 3.5 Goals,1.68,WINNER,0.6358
Over/Under 2.5 Goals,Under 2.5 Goals,1.52,WINNER,0.4862
Over/Under 3.5 Goals,Under 3.5 Goals,1.3,WINNER,0.2805000000000001
First Half Goals 0.5,Over 0.5 Goals,1.82,WINNER,0.7667
Over/Under 1.5 Goals,Under 1.5 Goals,4.1,LOSER,-1.0
First Half Goals 0.5,Over 0.5 Goals,1.75,WINNER,0.70125
First Half Goals 0.5,Under 0.5 Goals,1.48,LOSER,-1.0
Over/Under 0.5 Goals,Under 0.5 Goals,1.53,LOSER,-1.0
Over/Under 4.5 Goals,Under 4.5 Goals,1.22,WINNER,0.2057
First Half Goals 1.5,Under 1.5 Goals,1.86,WINNER,0.8041000000000001
Over/Under 0.5 Goals,Under 0.5 Goals,1.73,WINNER,0.68255
Over/Under 0.5 Goals,Under 0.5 Goals,1.2,WINNER,0.1869999999999999
Over/Under 2.5 Goals,Under 2.5 Goals,1.58,WINNER,0.5423000000000001
First Half Goals 0.5,Under 0.5 Goals,1.28,WINNER,0.2618
Over/Under 2.5 Goals,Over 2.5 Goals,1.28,LOSER,-1.0
Over/Under 3.5 Goals,Over 3.5 Goals,1.23,WINNER,0.21505
Over/Under 2.5 Goals,Over 2.5 Goals,1.31,WINNER,0.28985
Over/Under 1.5 Goals,Under 1.5 Goals,3.05,LOSER,-1.0
Over/Under 0.5 Goals,Over 0.5 Goals,1.16,WINNER,0.1495999999999999
Over/Under 1.5 Goals,Over 1.5 Goals,1.24,WINNER,0.2244
Over/Under 0.5 Goals,Over 0.5 Goals,1.21,WINNER,0.1963499999999999
Over/Under 0.5 Goals,Over 0.5 Goals,1.45,WINNER,0.4207499999999999
Over/Under 0.5 Goals,Over 0.5 Goals,1.4,LOSER,-1.0
Over/Under 0.5 Goals,Over 0.5 Goals,2.2,LOSER,-1.0
Over/Under 2.5 Goals,Over 2.5 Goals,1.56,WINNER,0.5236000000000001
Over/Under 0.5 Goals,Over 0.5 Goals,2.74,LOSER,-1.0
First Half Goals 1.5,Under 1.5 Goals,1.18,WINNER,0.1682999999999999
Over/Under 1.5 Goals,Over 1.5 Goals,1.21,WINNER,0.1963499999999999
First Half Goals 1.5,Under 1.5 Goals,2.18,LOSER,-1.0
First Half Goals 1.5,Under 1.5 Goals,1.05,WINNER,0.04675
Over/Under 1.5 Goals,Over 1.5 Goals,1.27,WINNER,0.25245
First Half Goals 0.5,Over 0.5 Goals,1.89,WINNER,0.83215
First Half Goals 0.5,Over 0.5 Goals,1.8,LOSER,-1.0
Over/Under 1.5 Goals,Over 1.5 Goals,1.9,LOSER,-1.0
Over/Under 0.5 Goals,Under 0.5 Goals,1.75,WINNER,0.70125
Over/Under 1.5 Goals,Over 1.5 Goals,1.3,WINNER,0.2805000000000001
First Half Goals 1.5,Over 1.5 Goals,1.7,WINNER,0.6545
First Half Goals 0.5,Over 0.5 Goals,1.78,LOSER,-1.0
First Half Goals 1.5,Under 1.5 Goals,1.84,LOSER,-1.0
First Half Goals 0.5,Under 0.5 Goals,1.97,WINNER,0.90695
Over/Under 1.5 Goals,Over 1.5 Goals,1.23,WINNER,0.21505
Over/Under 2.5 Goals,Over 2.5 Goals,1.3,LOSER,-1.0
First Half Goals 2.5,Under 2.5 Goals,1.3,WINNER,0.2805000000000001
Over/Under 1.5 Goals,Over 1.5 Goals,1.38,LOSER,-1.0
Over/Under 1.5 Goals,Under 1.5 Goals,2.2,LOSER,-1.0
This CSV file contains the results of my investments.
The column containing the profit/loss is the one named back.
And I want to test it like this:
Let's assume that I want to see the total profit only for the investments I would make if, according to some filters, this investment pattern was profitable in the previous records.
Example:
The 51st investment is 'market_name' → Over/Under 2.5 Goals, 'runner_name' → Under 2.5 Goals and 'odds' → 1.24
So I want to sum the profit/loss of the 50 previous investments that match these same filters; if that sum is greater than zero, then I make investment 51.
And so on for each of the lines: for the 100th investment, I check whether the previous 99 investments, filtered by the cited options, were profitable; if so, I add the back value of the 100th row to the list for the final sum.
So I created this code:
import pandas as pd

df = pd.read_csv('test.csv')
df = df[df['result'].notnull()]

matches = []

for number in range(len(df)):
    try:
        dfilter = df[:number]
        filter = dfilter[(dfilter['market_name'] == df['market_name'][number+1]) & (dfilter['runner_name'] == df['runner_name'][number+1]) & (dfilter['odds'] == df['odds'][number+1])]
        back_sum = filter['back'].sum()
        if back_sum > 0:
            matches.append(df['back'][number+1])
    except:
        pass

print(sum(matches))
But the final sum delivers a result that does not match my real results from actual investing.
I can't find where the flaw is in the code because it looks correct to me visually.
A:
The slice df[:number] takes elements up to, but not including, number. And when referring to the current line, you must use number, not number+1. This can be checked, for example, by printing df[:3], which returns all the rows before the third one (rows 0, 1 and 2).
But if you use loc, slicing is not exclusive but inclusive (you should not forget about this). That is, df.loc[:3, :] selects rows up to and including label 3.
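A quick illustration of the difference on a toy frame:
import pandas as pd

d = pd.DataFrame({'a': range(5)})
print(d[:3])      # positional slice, exclusive: rows 0, 1, 2
print(d.loc[:3])  # label-based slice, inclusive: rows 0, 1, 2, 3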
That is, you need this:
for number in range(len(df)):
    try:
        dfilter = df[:number]
        filter = dfilter[(dfilter['market_name'] == df['market_name'][number]) &
                         (dfilter['runner_name'] == df['runner_name'][number]) &
                         (dfilter['odds'] == df['odds'][number])]
        back_sum = filter['back'].sum()
        if back_sum > 0:
            matches.append(df['back'][number])
    except:
        pass
If the dataframe is large, the loop will be slow. I can recommend a list comprehension, which is faster than an explicit loop. Below I build a column 'invest' whose values are True or False depending on back_sum.
df = pd.read_csv('test.csv')
# df = df[df['result'].notnull()].reset_index(drop=True)
"""
Decide whether you need all the indexes of the original dataframe.
If you need the filtered dataframe instead, uncomment the line
df = df[df['result'].notnull()].reset_index(drop=True) above.
"""

def my_func(i):
    dfilter = df[:i]
    filter = dfilter[(dfilter['market_name'] == df['market_name'][i]) &
                     (dfilter['runner_name'] == df['runner_name'][i]) &
                     (dfilter['odds'] == df['odds'][i])]
    back_sum = filter['back'].sum()
    # True means the prior matching investments were profitable
    return back_sum > 0

df['invest'] = [my_func(i) for i in range(len(df))]
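For larger files, the row-by-row filtering can be replaced by a grouped cumulative sum. A vectorized sketch of the same rule, assuming the same test.csv: within each (market_name, runner_name, odds) group, the cumulative sum of back minus the current row's back is exactly the profit of all previous matching rows.
import pandas as pd

df = pd.read_csv('test.csv')
df = df[df['result'].notnull()].reset_index(drop=True)

keys = ['market_name', 'runner_name', 'odds']
# cumsum within each group includes the current row,
# so subtracting it leaves the sum over previous matching rows only
prior_profit = df.groupby(keys)['back'].cumsum() - df['back']

df['invest'] = prior_profit > 0
print(df.loc[df['invest'], 'back'].sum())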
Over/Under 1.5 Goals,Under 1.5 Goals,3.25,WINNER,2.10375
Over/Under 1.5 Goals,Under 1.5 Goals,3.1,WINNER,1.9635
Over/Under 0.5 Goals,Over 0.5 Goals,1.48,WINNER,0.4488
Over/Under 2.5 Goals,Over 2.5 Goals,1.21,WINNER,0.1963499999999999
Over/Under 3.5 Goals,Over 3.5 Goals,1.23,WINNER,0.21505
Over/Under 2.5 Goals,Under 2.5 Goals,4.7,LOSER,-1.0
Over/Under 3.5 Goals,Over 3.5 Goals,1.29,WINNER,0.27115
Over/Under 3.5 Goals,Under 3.5 Goals,1.05,WINNER,0.04675
First Half Goals 0.5,Under 0.5 Goals,2.04,LOSER,-1.0
First Half Goals 0.5,Under 0.5 Goals,1.15,WINNER,0.1402499999999999
Over/Under 3.5 Goals,Over 3.5 Goals,1.43,WINNER,0.4020499999999999
Over/Under 0.5 Goals,Over 0.5 Goals,1.49,LOSER,-1.0
Over/Under 1.5 Goals,Over 1.5 Goals,1.28,WINNER,0.2618
Over/Under 1.5 Goals,Under 1.5 Goals,5.4,LOSER,-1.0
Over/Under 1.5 Goals,Over 1.5 Goals,1.45,WINNER,0.4207499999999999
Over/Under 1.5 Goals,Under 1.5 Goals,1.19,WINNER,0.1776499999999999
Over/Under 2.5 Goals,Under 2.5 Goals,1.38,WINNER,0.3552999999999999
First Half Goals 0.5,Under 0.5 Goals,1.42,LOSER,-1.0
Over/Under 0.5 Goals,Over 0.5 Goals,1.33,LOSER,-1.0
Over/Under 0.5 Goals,Over 0.5 Goals,1.3,WINNER,0.2805000000000001
Over/Under 3.5 Goals,Over 3.5 Goals,1.28,WINNER,0.2618
Over/Under 1.5 Goals,Under 1.5 Goals,2.94,LOSER,-1.0
Over/Under 2.5 Goals,Under 2.5 Goals,1.95,LOSER,-1.0
Over/Under 0.5 Goals,Under 0.5 Goals,1.87,WINNER,0.8134500000000001
Over/Under 1.5 Goals,Under 1.5 Goals,1.27,WINNER,0.25245
Over/Under 4.5 Goals,Over 4.5 Goals,1.09,WINNER,0.08415
Over/Under 3.5 Goals,Under 3.5 Goals,1.01,WINNER,0.00935
Over/Under 1.5 Goals,Over 1.5 Goals,2.14,LOSER,-1.0
Over/Under 3.5 Goals,Over 3.5 Goals,1.02,WINNER,0.0187
Over/Under 4.5 Goals,Over 4.5 Goals,1.17,WINNER,0.1589499999999999
Over/Under 5.5 Goals,Over 5.5 Goals,1.24,WINNER,0.2244
First Half Goals 0.5,Under 0.5 Goals,1.4,LOSER,-1.0
Over/Under 2.5 Goals,Over 2.5 Goals,1.35,LOSER,-1.0
First Half Goals 0.5,Over 0.5 Goals,1.51,LOSER,-1.0
First Half Goals 1.5,Over 1.5 Goals,2.62,LOSER,-1.0
First Half Goals 1.5,Under 1.5 Goals,1.81,WINNER,0.7573500000000001
Over/Under 0.5 Goals,Over 0.5 Goals,1.2,LOSER,-1.0
Over/Under 1.5 Goals,Over 1.5 Goals,1.36,WINNER,0.3366000000000001
First Half Goals 0.5,Over 0.5 Goals,1.63,LOSER,-1.0
Over/Under 1.5 Goals,Over 1.5 Goals,1.8,WINNER,0.7480000000000001
First Half Goals 0.5,Under 0.5 Goals,1.68,WINNER,0.6358
Over/Under 0.5 Goals,Under 0.5 Goals,1.27,WINNER,0.25245
Over/Under 1.5 Goals,Over 1.5 Goals,1.59,LOSER,-1.0
First Half Goals 0.5,Under 0.5 Goals,1.59,WINNER,0.5516500000000001
Over/Under 2.5 Goals,Over 2.5 Goals,1.62,WINNER,0.5797000000000001
Over/Under 0.5 Goals,Over 0.5 Goals,1.64,LOSER,-1.0
Over/Under 4.5 Goals,Over 4.5 Goals,1.34,WINNER,0.3179
Over/Under 4.5 Goals,Over 4.5 Goals,1.34,WINNER,0.3179
First Half Goals 0.5,Under 0.5 Goals,1.6,WINNER,0.5610000000000002
Over/Under 1.5 Goals,Over 1.5 Goals,2.1,LOSER,-1.0
Over/Under 5.5 Goals,Under 5.5 Goals,1.52,LOSER,-1.0
Over/Under 2.5 Goals,Under 2.5 Goals,1.19,WINNER,0.1776499999999999
Over/Under 0.5 Goals,Over 0.5 Goals,1.2,WINNER,0.1869999999999999
Over/Under 1.5 Goals,Over 1.5 Goals,1.1,WINNER,0.0935
Over/Under 3.5 Goals,Under 3.5 Goals,1.27,LOSER,-1.0
Over/Under 7.5 Goals,Under 7.5 Goals,1.38,LOSER,-1.0
Over/Under 1.5 Goals,Over 1.5 Goals,1.87,LOSER,-1.0
Over/Under 1.5 Goals,Under 1.5 Goals,2.14,LOSER,-1.0
Over/Under 3.5 Goals,Under 3.5 Goals,1.22,LOSER,-1.0
First Half Goals 0.5,Under 0.5 Goals,1.5,WINNER,0.4675
First Half Goals 0.5,Under 0.5 Goals,1.12,WINNER,0.1122000000000001
First Half Goals 2.5,Under 2.5 Goals,1.27,WINNER,0.25245
First Half Goals 0.5,Under 0.5 Goals,1.21,WINNER,0.1963499999999999
Over/Under 2.5 Goals,Over 2.5 Goals,1.19,WINNER,0.1776499999999999
Over/Under 2.5 Goals,Over 2.5 Goals,1.25,WINNER,0.23375
Over/Under 2.5 Goals,Under 2.5 Goals,2.48,LOSER,-1.0
First Half Goals 1.5,Over 1.5 Goals,1.72,WINNER,0.6732
First Half Goals 0.5,Over 0.5 Goals,1.62,LOSER,-1.0
First Half Goals 0.5,Over 0.5 Goals,1.52,LOSER,-1.0
First Half Goals 0.5,Under 0.5 Goals,2.38,WINNER,1.2903
First Half Goals 0.5,Under 0.5 Goals,1.82,WINNER,0.7667
First Half Goals 0.5,Under 0.5 Goals,2.24,WINNER,1.1594000000000002
Over/Under 0.5 Goals,Over 0.5 Goals,1.28,WINNER,0.2618
Over/Under 0.5 Goals,Under 0.5 Goals,4.0,LOSER,-1.0
First Half Goals 0.5,Over 0.5 Goals,1.77,LOSER,-1.0
Over/Under 0.5 Goals,Under 0.5 Goals,3.15,WINNER,2.01025
Over/Under 0.5 Goals,Over 0.5 Goals,1.66,LOSER,-1.0
First Half Goals 1.5,Under 1.5 Goals,1.15,WINNER,0.1402499999999999
First Half Goals 0.5,Under 0.5 Goals,2.42,LOSER,-1.0
First Half Goals 1.5,Under 1.5 Goals,1.28,LOSER,-1.0
Over/Under 2.5 Goals,Under 2.5 Goals,1.28,LOSER,-1.0
Over/Under 1.5 Goals,Over 1.5 Goals,1.55,LOSER,-1.0
Over/Under 2.5 Goals,Over 2.5 Goals,1.51,WINNER,0.47685
First Half Goals 1.5,Under 1.5 Goals,1.32,LOSER,-1.0
Over/Under 3.5 Goals,Under 3.5 Goals,1.33,WINNER,0.3085500000000001
First Half Goals 0.5,Under 0.5 Goals,2.18,LOSER,-1.0
Over/Under 2.5 Goals,Over 2.5 Goals,1.82,LOSER,-1.0
First Half Goals 0.5,Over 0.5 Goals,1.63,LOSER,-1.0
Over/Under 1.5 Goals,Over 1.5 Goals,1.29,WINNER,0.27115
First Half Goals 1.5,Over 1.5 Goals,1.47,WINNER,0.43945
First Half Goals 1.5,Over 1.5 Goals,1.76,WINNER,0.7106
First Half Goals 0.5,Under 0.5 Goals,1.3,WINNER,0.2805000000000001
First Half Goals 2.5,Under 2.5 Goals,1.54,WINNER,0.5049
Over/Under 4.5 Goals,Over 4.5 Goals,1.12,WINNER,0.1122000000000001
Over/Under 0.5 Goals,Over 0.5 Goals,1.28,LOSER,-1.0
Over/Under 4.5 Goals,Over 4.5 Goals,1.16,WINNER,0.1495999999999999
Over/Under 0.5 Goals,Under 0.5 Goals,1.95,WINNER,0.88825
Over/Under 0.5 Goals,Under 0.5 Goals,2.36,LOSER,-1.0
First Half Goals 0.5,Under 0.5 Goals,1.57,WINNER,0.53295
First Half Goals 0.5,Under 0.5 Goals,1.44,LOSER,-1.0
Over/Under 3.5 Goals,Under 3.5 Goals,1.86,WINNER,0.8041000000000001
Over/Under 2.5 Goals,Under 2.5 Goals,2.18,LOSER,-1.0
First Half Goals 1.5,Under 1.5 Goals,1.22,WINNER,0.2057
Over/Under 0.5 Goals,Over 0.5 Goals,1.23,WINNER,0.21505
First Half Goals 0.5,Over 0.5 Goals,2.26,LOSER,-1.0
Over/Under 2.5 Goals,Over 2.5 Goals,1.45,LOSER,-1.0
First Half Goals 1.5,Under 1.5 Goals,1.58,WINNER,0.5423000000000001
First Half Goals 1.5,Under 1.5 Goals,1.3,LOSER,-1.0
First Half Goals 1.5,Under 1.5 Goals,1.25,WINNER,0.23375
First Half Goals 0.5,Over 0.5 Goals,1.32,WINNER,0.2992000000000001
Over/Under 0.5 Goals,Under 0.5 Goals,1.22,WINNER,0.2057
First Half Goals 1.5,Over 1.5 Goals,2.44,LOSER,-1.0
Over/Under 3.5 Goals,Over 3.5 Goals,1.21,LOSER,-1.0
Over/Under 0.5 Goals,Over 0.5 Goals,1.29,WINNER,0.27115
Over/Under 0.5 Goals,Over 0.5 Goals,1.72,WINNER,0.6732
Over/Under 0.5 Goals,Under 0.5 Goals,2.5,LOSER,-1.0
Over/Under 3.5 Goals,Over 3.5 Goals,1.99,WINNER,0.92565
Over/Under 3.5 Goals,Over 3.5 Goals,1.65,LOSER,-1.0
First Half Goals 1.5,Under 1.5 Goals,1.54,WINNER,0.5049
Over/Under 1.5 Goals,Under 1.5 Goals,1.67,LOSER,-1.0
First Half Goals 0.5,Under 0.5 Goals,1.43,WINNER,0.4020499999999999
First Half Goals 0.5,Under 0.5 Goals,1.46,WINNER,0.4301
First Half Goals 1.5,Under 1.5 Goals,1.2,WINNER,0.1869999999999999
First Half Goals 2.5,Under 2.5 Goals,1.16,WINNER,0.1495999999999999
Over/Under 4.5 Goals,Over 4.5 Goals,1.16,WINNER,0.1495999999999999
Over/Under 1.5 Goals,Over 1.5 Goals,1.23,WINNER,0.21505
Over/Under 1.5 Goals,Over 1.5 Goals,1.24,WINNER,0.2244
Over/Under 4.5 Goals,Over 4.5 Goals,1.19,WINNER,0.1776499999999999
Over/Under 1.5 Goals,Over 1.5 Goals,1.61,WINNER,0.5703500000000001
Over/Under 3.5 Goals,Over 3.5 Goals,1.47,LOSER,-1.0
Over/Under 3.5 Goals,Over 3.5 Goals,1.83,WINNER,0.7760500000000001
Over/Under 0.5 Goals,Under 0.5 Goals,2.0,WINNER,0.935
First Half Goals 2.5,Under 2.5 Goals,1.46,LOSER,-1.0
Over/Under 1.5 Goals,Over 1.5 Goals,2.36,WINNER,1.2716
Over/Under 3.5 Goals,Under 3.5 Goals,1.6,LOSER,-1.0
Over/Under 3.5 Goals,Under 3.5 Goals,1.68,WINNER,0.6358
Over/Under 2.5 Goals,Under 2.5 Goals,1.52,WINNER,0.4862
Over/Under 3.5 Goals,Under 3.5 Goals,1.3,WINNER,0.2805000000000001
First Half Goals 0.5,Over 0.5 Goals,1.82,WINNER,0.7667
Over/Under 1.5 Goals,Under 1.5 Goals,4.1,LOSER,-1.0
First Half Goals 0.5,Over 0.5 Goals,1.75,WINNER,0.70125
First Half Goals 0.5,Under 0.5 Goals,1.48,LOSER,-1.0
Over/Under 0.5 Goals,Under 0.5 Goals,1.53,LOSER,-1.0
Over/Under 4.5 Goals,Under 4.5 Goals,1.22,WINNER,0.2057
First Half Goals 1.5,Under 1.5 Goals,1.86,WINNER,0.8041000000000001
Over/Under 0.5 Goals,Under 0.5 Goals,1.73,WINNER,0.68255
Over/Under 0.5 Goals,Under 0.5 Goals,1.2,WINNER,0.1869999999999999
Over/Under 2.5 Goals,Under 2.5 Goals,1.58,WINNER,0.5423000000000001
First Half Goals 0.5,Under 0.5 Goals,1.28,WINNER,0.2618
Over/Under 2.5 Goals,Over 2.5 Goals,1.28,LOSER,-1.0
Over/Under 3.5 Goals,Over 3.5 Goals,1.23,WINNER,0.21505
Over/Under 2.5 Goals,Over 2.5 Goals,1.31,WINNER,0.28985
Over/Under 1.5 Goals,Under 1.5 Goals,3.05,LOSER,-1.0
Over/Under 0.5 Goals,Over 0.5 Goals,1.16,WINNER,0.1495999999999999
Over/Under 1.5 Goals,Over 1.5 Goals,1.24,WINNER,0.2244
Over/Under 0.5 Goals,Over 0.5 Goals,1.21,WINNER,0.1963499999999999
Over/Under 0.5 Goals,Over 0.5 Goals,1.45,WINNER,0.4207499999999999
Over/Under 0.5 Goals,Over 0.5 Goals,1.4,LOSER,-1.0
Over/Under 0.5 Goals,Over 0.5 Goals,2.2,LOSER,-1.0
Over/Under 2.5 Goals,Over 2.5 Goals,1.56,WINNER,0.5236000000000001
Over/Under 0.5 Goals,Over 0.5 Goals,2.74,LOSER,-1.0
First Half Goals 1.5,Under 1.5 Goals,1.18,WINNER,0.1682999999999999
Over/Under 1.5 Goals,Over 1.5 Goals,1.21,WINNER,0.1963499999999999
First Half Goals 1.5,Under 1.5 Goals,2.18,LOSER,-1.0
First Half Goals 1.5,Under 1.5 Goals,1.05,WINNER,0.04675
Over/Under 1.5 Goals,Over 1.5 Goals,1.27,WINNER,0.25245
First Half Goals 0.5,Over 0.5 Goals,1.89,WINNER,0.83215
First Half Goals 0.5,Over 0.5 Goals,1.8,LOSER,-1.0
Over/Under 1.5 Goals,Over 1.5 Goals,1.9,LOSER,-1.0
Over/Under 0.5 Goals,Under 0.5 Goals,1.75,WINNER,0.70125
Over/Under 1.5 Goals,Over 1.5 Goals,1.3,WINNER,0.2805000000000001
First Half Goals 1.5,Over 1.5 Goals,1.7,WINNER,0.6545
First Half Goals 0.5,Over 0.5 Goals,1.78,LOSER,-1.0
First Half Goals 1.5,Under 1.5 Goals,1.84,LOSER,-1.0
First Half Goals 0.5,Under 0.5 Goals,1.97,WINNER,0.90695
Over/Under 1.5 Goals,Over 1.5 Goals,1.23,WINNER,0.21505
Over/Under 2.5 Goals,Over 2.5 Goals,1.3,LOSER,-1.0
First Half Goals 2.5,Under 2.5 Goals,1.3,WINNER,0.2805000000000001
Over/Under 1.5 Goals,Over 1.5 Goals,1.38,LOSER,-1.0
Over/Under 1.5 Goals,Under 1.5 Goals,2.2,LOSER,-1.0
This CSV file contains the results of my investments.
The column containing the profit/loss is the column named back.
And I want to test it like this:
Let's assume that I want to see the total profit only for the investments I would make if, according to some filters, this investment pattern was profitable in previous records.
Example:
The 51st investment is 'market_name' → Over/Under 2.5 Goals, 'runner_name' → Under 2.5 Goals and 'odds' → 1.24
So I want to sum the profit/loss of the previous 50 investments that match these same filters; if that sum is greater than zero, then I make investment 51.
And so on for each of the lines: for the 100th investment, I check whether the previous 99 investments, filtered by the cited options, were profitable; if so, I add the back column of the 100th to the list for the final sum.
So I created this code:
import pandas as pd
df = pd.read_csv('test.csv')
df = df[df['result'].notnull()]
matches = []
for number in range(len(df)):
try:
dfilter = df[:number]
filter = dfilter[(dfilter['market_name'] == df['market_name'][number+1]) & (dfilter['runner_name'] == df['runner_name'][number+1]) & (dfilter['odds'] == df['odds'][number+1])]
back_sum = filter['back'].sum()
if back_sum > 0:
matches.append(df['back'][number+1])
except:
pass
print(sum(matches))
But the final sum delivers a result that does not match my real results from investing.
I can't find the flaw in the code because it looks correct to me visually.
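For reference, a minimal sketch of the corrected loop, following the fix pointed out in the answer below: the slice df[:number] already excludes row number, so the current row must be indexed with number rather than number+1:
import pandas as pd

df = pd.read_csv('test.csv')
# reset the index so positional and label-based indexing agree after filtering
df = df[df['result'].notnull()].reset_index(drop=True)

matches = []
for number in range(len(df)):
    history = df[:number]  # all rows strictly before the current one
    same = history[(history['market_name'] == df['market_name'][number]) &
                   (history['runner_name'] == df['runner_name'][number]) &
                   (history['odds'] == df['odds'][number])]
    if same['back'].sum() > 0:
        matches.append(df['back'][number])
print(sum(matches))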
| [
"Slice df[:number] means to take elements up to number. And when referring to the current line, you must use number, not number+1. This can be checked, for example, print df[:3] and get all the lines up to the third one.\nBut if you use loc, then operations through the slice will not be up to, but inclusive (you should not forget about this). That is, with df.loc[:3, :] rows will be selected, including the third one.\nThat is, you need this:\nfor number in range(len(df)):\n try:\n dfilter = df[:number]\n filter = dfilter[(dfilter['market_name'] == df['market_name'][number]) &\n (dfilter['runner_name'] == df['runner_name'][number]) & (dfilter['odds'] == df['odds'][number])]\n\n back_sum = filter['back'].sum()\n if back_sum > 0:\n matches.append(df['back'][number])\n except:\n pass\n\nIf the dataframe is large, the loop will be slow. I can recommend List comprehension, which is many times faster than a loop. Below I made a column 'invest', where, depending on back_sum, the values will be True or False.\ndf = pd.read_csv('test.csv')\n\n#df = df[df['result'].notnull()].reset_index(drop=True)\n\"\"\"\nis whether all indexes of the original dataframe are needed,\nif not, then you can add this line and use the filtered dataframe(if you need a filtered dataframe,\nuncomment this line df = df[df['result'].notnull()].reset_index(drop=True))\n\"\"\"\n\ndef my_func(i):\n dfilter = df[:i]\n filter = dfilter[(dfilter['market_name'] == df['market_name'][i]) &\n (dfilter['runner_name'] == df['runner_name'][i]) & (dfilter['odds'] == df['odds'][i])]\n back_sum = filter['back'].sum()\n aaa = True\n if back_sum <= 0:\n aaa = False\n\n return aaa\n\ndf['invest'] = [my_func(i) for i in range(len(df))]\n\n"
] | [
1
] | [] | [] | [
"pandas",
"python"
] | stackoverflow_0074671716_pandas_python.txt |
Q:
How to import a module that uses another module in the grandparent directories in Python
I'm trying to run a python file that imports a module using other modules in the grandparent folder. The file structure is:
directory_0
|-- directory_1
|   `-- directory_2
|       |-- __init__.py  (defines method A and imports method B from file_2.py)
|       `-- file_1.py
`-- directory_3
    `-- file_2.py  (defines method B)
I want to run file_1.py, which imports method A defined in __init__.py, and __init__.py in turn imports method B from file_2.py.
I'm currently in /directory_0/directory_1/directory_2 running the command python file_1.py. It throws ModuleNotFoundError: No module named directory_3.file_2
How do I make it run, and from which path should I run this script (file_1.py)?
A:
You probably need an __init__.py in all of directories 1-3.
Then try the following (relative import) syntax:
from ...directory_3 import file_2
A:
My final approach was to import it this way:
in file_1.py:
from directory_1.directory_2 import A
and then move to a path from which all child modules are reachable, so file_1.py can run as a module:
from /directory_0, run python -m directory_1.directory_2.file_1
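For completeness, a minimal sketch of what the two files can contain under this layout (A and B stand for the method names from the question; this assumes directory_0 is the current directory, which is exactly what running python -m from there guarantees):
# directory_0/directory_1/directory_2/__init__.py
from directory_3.file_2 import B  # absolute import works because directory_0 is on sys.path

def A():
    return B()


# directory_0/directory_1/directory_2/file_1.py
from directory_1.directory_2 import A

if __name__ == '__main__':
    A()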
| How to import a module that uses another module in the grandparent directories in Python | I'm trying to run a python file that imports a module using other modules in the grandparent folder. The file structure is:
directory_0
|-- directory_1
|   `-- directory_2
|       |-- __init__.py  (defines method A and imports method B from file_2.py)
|       `-- file_1.py
`-- directory_3
    `-- file_2.py  (defines method B)
I want to run file_1.py, which imports method A defined in __init__.py, and __init__.py in turn imports method B from file_2.py.
I'm currently in /directory_0/directory_1/directory_2 running the command python file_1.py. It throws ModuleNotFoundError: No module named directory_3.file_2
How do I make it run, and from which path should I run this script (file_1.py)?
| [
"Probably you need __init__.py in all 1-3 directories.\nTry to use next syntaxis\nimport ...directory_3.file_2\n",
"my final approach is import in the way:\nin file_1.py:\nfrom directory_1.directory_2 import A\n\nand jump to the path that can access all children modules to run file_1.py as a module:\nfrom /directory_0 run by python -m directory_1.directory_2.file_1\n"
] | [
0,
0
] | [] | [] | [
"init",
"python",
"python_3.x",
"python_import",
"relative_import"
] | stackoverflow_0074674599_init_python_python_3.x_python_import_relative_import.txt |
Q:
executing the operation written in a column pandas
I have a series of columns with numbers to put into
different formulas (in my example I use only sum and product).
And the final column should give me the result of the formula (I get "None" instead).
In my example, if it is written "2 + 1" I would simply like to get 3 as the result of my operation.
Can you suggest the right solution, please?
import pandas as pd
operation = ["+", "*", "+", "*"]
op_number = ["Op1", "Op2", "Op3", "Op4"]
number_1 =[1,3,5,6]
number_2 =[2,4,2,3]
operation = pd.Series(operation)
op_number = pd.Series(op_number)
number_1 = pd.Series(number_1)
number_2 = pd.Series(number_2)
frame = { 'operation': operation, 'op_number': op_number,
'number_1' : number_1, 'number_2':number_2}
s1 = pd.DataFrame(frame)
s1['derived'] = s1['number_1'].astype(str) + " " + s1['operation'] + " " + s1['number_2'].astype(str)
s1['result'] = s1['derived'].apply(lambda x : exec(x) )
s1
A:
I would not recommend using pandas for this. However, if you want the solution in pandas, here it is:
You are using exec(), which executes the statement but always returns None.
Hence, replace exec() with eval().
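A minimal illustration of the difference:
result = eval("2 + 1")   # eval evaluates an expression and returns its value: 3
print(exec("2 + 1"))     # exec runs a statement and always returns None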
Here's your updated code:
import pandas as pd
operation = ["+", "*", "+", "*"]
op_number = ["Op1", "Op2", "Op3", "Op4"]
number_1 =[1,3,5,6]
number_2 =[2,4,2,3]
operation = pd.Series(operation)
op_number = pd.Series(op_number)
number_1 = pd.Series(number_1)
number_2 = pd.Series(number_2)
frame = { 'operation': operation, 'op_number': op_number,
'number_1' : number_1, 'number_2':number_2}
s1 = pd.DataFrame(frame)
s1['derived'] = s1['number_1'].astype(str) + " " + s1['operation'] + " " + s1['number_2'].astype(str)
s1['result'] = s1['derived'].apply(lambda x : eval(x) )
print(s1)
Thanks to @MaxShawabkeh
| executing the operation written in a column pandas | I have a series of columns with numbers to put into
different formulas (in my example I use only sum and product).
And the final column should give me the result of the formula (I get "None" instead).
In my example, if it is written "2 + 1" I would simply like to get 3 as the result of my operation.
Can you suggest the right solution, please?
import pandas as pd
operation = ["+", "*", "+", "*"]
op_number = ["Op1", "Op2", "Op3", "Op4"]
number_1 =[1,3,5,6]
number_2 =[2,4,2,3]
operation = pd.Series(operation)
op_number = pd.Series(op_number)
number_1 = pd.Series(number_1)
number_2 = pd.Series(number_2)
frame = { 'operation': operation, 'op_number': op_number,
'number_1' : number_1, 'number_2':number_2}
s1 = pd.DataFrame(frame)
s1['derived'] = s1['number_1'].astype(str) + " " + s1['operation'] + " " + s1['number_2'].astype(str)
s1['result'] = s1['derived'].apply(lambda x : exec(x) )
s1
| [
"I would not recommend using pandas for this. However, if you want the solution in pandas then here is it:\nYou are doing exec() which works well but it always returns None.\nHence, replace exec() with eval().\nHere's your updated code:\nimport pandas as pd\noperation = [\"+\", \"*\", \"+\", \"*\"]\nop_number = [\"Op1\", \"Op2\", \"Op3\", \"Op4\"]\nnumber_1 =[1,3,5,6] \nnumber_2 =[2,4,2,3] \noperation = pd.Series(operation)\nop_number = pd.Series(op_number)\nnumber_1 = pd.Series(number_1)\nnumber_2 = pd.Series(number_2)\nframe = { 'operation': operation, 'op_number': op_number,\n 'number_1' : number_1, 'number_2':number_2} \ns1 = pd.DataFrame(frame)\ns1['derived'] = s1['number_1'].astype(str) + \" \" + s1['operation'] + \" \" + s1['number_2'].astype(str) \ns1['result'] = s1['derived'].apply(lambda x : eval(x) )\nprint(s1)\n\nThanks to @MaxShawabkeh\n"
] | [
0
] | [] | [] | [
"array_formulas",
"formula",
"pandas",
"python"
] | stackoverflow_0074675328_array_formulas_formula_pandas_python.txt |
Q:
How to count cells that are within 2 values, in a range of cells in a pandas dataframe?
I have a dataframe that looks like this:
col1
0 10
1 5
2 8
3 12
4 13
5 6
6 9
7 11
8 10
9 3
10 21
11 18
12 14
13 16
14 30
15 45
16 31
17 40
18 38
For each cell in 'col1' I calculate a range of values:
df['df_min'] = df.col1 - df.col1 * 0.2
df['df_max'] = df.col1 + df.col1 * 0.2
For each cell there's a range, I would like to count how many cells in 'col1' in the past xx cells (3 in this example) are within that range, but without a loop as it takes a very long time with my actual model.
I'm trying to achieve this result:
col1 df_min df_max counter
0 10 8.0 12.0 -1
1 5 4.0 6.0 -1
2 8 6.4 9.6 -1
3 12 9.6 14.4 1
4 13 10.4 15.6 1
5 6 4.8 7.2 0
6 9 7.2 10.8 0
7 11 8.8 13.2 2
8 10 8.0 12.0 2
9 3 2.4 3.6 0
10 21 16.8 25.2 0
11 18 14.4 21.6 1
12 14 11.2 16.8 0
13 16 12.8 19.2 2
14 30 24.0 36.0 0
15 45 36.0 54.0 0
16 31 24.8 37.2 1
17 40 32.0 48.0 1
18 38 30.4 45.6 3
Here's the (messy) code that I could come up with, but I'd really like a faster solution, if possible. Any help would be gladly appreciated.
df = pd.DataFrame({"col1":[10, 5, 8, 12, 13, 6, 9, 11, 10, 3, 21, 18, 14, 16, 30, 45, 31, 40, 38]})
back = 3 # number of cells to check back
df['df_min'] = df.col1 - df.col1 * 0.2
df['df_max'] = df.col1 + df.col1 * 0.2
l = []
for window in df.col1.rolling(window=back+1, center=False, closed='right'):
if window.empty:
pass
else:
a = window.iloc[-1]
range_min = a - a * 0.2
range_max = a + a * 0.2
c = 0
if len(window) == back+1:
for b in window:
if (b >= range_min and b <= range_max):
c += 1
            c = c-1 # subtract 1 because the window includes the tested value, which is always in range
l.append(c)
df1 = pd.DataFrame(l, columns=['counter'])
df = df.join(df1)
print(df)
A:
A loop combined with vectorized operations
Code
df['df_min'] = df.col1 - df.col1 * 0.2
df['df_max'] = df.col1 + df.col1 * 0.2
n = 3
s = pd.Series(dtype='float')
for i in range(0, n):
s1 = df.col1.shift(i+1).ge(df['df_min']) & df.col1.shift(i+1).le(df['df_max'])
s = s.add(s1, fill_value=0)
s[:n] = -1
df['counter'] = s
output(df):
col1 df_min df_max counter
0 10 8.0 12.0 -1.0
1 5 4.0 6.0 -1.0
2 8 6.4 9.6 -1.0
3 12 9.6 14.4 1.0
4 13 10.4 15.6 1.0
5 6 4.8 7.2 0.0
6 9 7.2 10.8 0.0
7 11 8.8 13.2 2.0
8 10 8.0 12.0 2.0
9 3 2.4 3.6 0.0
10 21 16.8 25.2 0.0
11 18 14.4 21.6 1.0
12 14 11.2 16.8 0.0
13 16 12.8 19.2 2.0
14 30 24.0 36.0 0.0
15 45 36.0 54.0 0.0
16 31 24.8 37.2 1.0
17 40 32.0 48.0 1.0
18 38 30.4 45.6 3.0
I don't know your dataset. However, when testing with 1,000,000 rows and n = 10, this code takes only 0.4 sec.
test example
import numpy as np
df = pd.DataFrame(np.random.randint(20,100, 1000000), columns=['col1'])
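To reproduce that timing yourself, here is a rough benchmark sketch (the exact figure will of course vary by machine):
import time
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randint(20, 100, 1000000), columns=['col1'])
df['df_min'] = df.col1 - df.col1 * 0.2
df['df_max'] = df.col1 + df.col1 * 0.2

n = 10
start = time.perf_counter()
s = pd.Series(dtype='float')
for i in range(0, n):
    s1 = df.col1.shift(i+1).ge(df['df_min']) & df.col1.shift(i+1).le(df['df_max'])
    s = s.add(s1, fill_value=0)
s[:n] = -1
df['counter'] = s
print(f"{time.perf_counter() - start:.2f} seconds")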
A:
Another possible solution, based on pandas.core.window.rolling.Rolling.apply. From the tests I did on my computer, this solution appears to be very fast and efficient, even for large datasets.
n = 3
df['counter'] = (df['col1'].rolling(n+1)
.apply(lambda x:
np.sum((x[:n] >= (x[n] * 0.8)) & (x[:n] <= (x[n] * 1.2))),
raw=True).fillna(-1).astype(int))
Output:
col1 df_min df_max counter
0 10 8.0 12.0 -1
1 5 4.0 6.0 -1
2 8 6.4 9.6 -1
3 12 9.6 14.4 1
4 13 10.4 15.6 1
5 6 4.8 7.2 0
6 9 7.2 10.8 0
7 11 8.8 13.2 2
8 10 8.0 12.0 2
9 3 2.4 3.6 0
10 21 16.8 25.2 0
11 18 14.4 21.6 1
12 14 11.2 16.8 0
13 16 12.8 19.2 2
14 30 24.0 36.0 0
15 45 36.0 54.0 0
16 31 24.8 37.2 1
17 40 32.0 48.0 1
18 38 30.4 45.6 3
| How to count cells that are within 2 values, in a range of cells in a pandas dataframe? | I have a dataframe that looks like this:
col1
0 10
1 5
2 8
3 12
4 13
5 6
6 9
7 11
8 10
9 3
10 21
11 18
12 14
13 16
14 30
15 45
16 31
17 40
18 38
For each cell in 'col1' I calculate a range of values:
df['df_min'] = df.col1 - df.col1 * 0.2
df['df_max'] = df.col1 + df.col1 * 0.2
For each cell there's a range, I would like to count how many cells in 'col1' in the past xx cells (3 in this example) are within that range, but without a loop as it takes a very long time with my actual model.
I'm trying to achieve this result:
col1 df_min df_max counter
0 10 8.0 12.0 -1
1 5 4.0 6.0 -1
2 8 6.4 9.6 -1
3 12 9.6 14.4 1
4 13 10.4 15.6 1
5 6 4.8 7.2 0
6 9 7.2 10.8 0
7 11 8.8 13.2 2
8 10 8.0 12.0 2
9 3 2.4 3.6 0
10 21 16.8 25.2 0
11 18 14.4 21.6 1
12 14 11.2 16.8 0
13 16 12.8 19.2 2
14 30 24.0 36.0 0
15 45 36.0 54.0 0
16 31 24.8 37.2 1
17 40 32.0 48.0 1
18 38 30.4 45.6 3
Here's the (messy) code that I could come up with, but I'd really like a faster solution, if possible. Any help would be gladly appreciated.
df = pd.DataFrame({"col1":[10, 5, 8, 12, 13, 6, 9, 11, 10, 3, 21, 18, 14, 16, 30, 45, 31, 40, 38]})
back = 3 # number of cells to check back
df['df_min'] = df.col1 - df.col1 * 0.2
df['df_max'] = df.col1 + df.col1 * 0.2
l = []
for window in df.col1.rolling(window=back+1, center=False, closed='right'):
if window.empty:
pass
else:
a = window.iloc[-1]
range_min = a - a * 0.2
range_max = a + a * 0.2
c = 0
if len(window) == back+1:
for b in window:
if (b >= range_min and b <= range_max):
c += 1
            c = c-1 # subtract 1 because the window includes the tested value, which is always in range
l.append(c)
df1 = pd.DataFrame(l, columns=['counter'])
df = df.join(df1)
print(df)
| [
"loop with vectorization operation\nCode\ndf['df_min'] = df.col1 - df.col1 * 0.2\ndf['df_max'] = df.col1 + df.col1 * 0.2\nn = 3\ns = pd.Series(dtype='float')\nfor i in range(0, n):\n s1 = df.col1.shift(i+1).ge(df['df_min']) & df.col1.shift(i+1).le(df['df_max'])\n s = s.add(s1, fill_value=0)\ns[:n] = -1\ndf['counter'] = s\n\noutput(df):\n col1 df_min df_max counter\n0 10 8.0 12.0 -1.0\n1 5 4.0 6.0 -1.0\n2 8 6.4 9.6 -1.0\n3 12 9.6 14.4 1.0\n4 13 10.4 15.6 1.0\n5 6 4.8 7.2 0.0\n6 9 7.2 10.8 0.0\n7 11 8.8 13.2 2.0\n8 10 8.0 12.0 2.0\n9 3 2.4 3.6 0.0\n10 21 16.8 25.2 0.0\n11 18 14.4 21.6 1.0\n12 14 11.2 16.8 0.0\n13 16 12.8 19.2 2.0\n14 30 24.0 36.0 0.0\n15 45 36.0 54.0 0.0\n16 31 24.8 37.2 1.0\n17 40 32.0 48.0 1.0\n18 38 30.4 45.6 3.0\n\n\ni don know your dataset. However, when im testing with 1,000,000 rows and n = 10, this code takes only 0.4sec.\n\ntest example\nimport numpy as np\ndf = pd.DataFrame(np.random.randint(20,100, 1000000), columns=['col1'])\n\n",
"Another possible solution, based on pandas.core.window.rolling.Rolling.apply. From the tests I did on my computer, this solution appears to be very fast and efficient, even for large datasets.\nn = 3\n\ndf['counter'] = (df['col1'].rolling(n+1)\n .apply(lambda x: \n np.sum((x[:n] >= (x[n] * 0.8)) & (x[:n] <= (x[n] * 1.2))), \n raw=True).fillna(-1).astype(int))\n\nOutput:\n col1 df_min df_max counter\n0 10 8.0 12.0 -1\n1 5 4.0 6.0 -1\n2 8 6.4 9.6 -1\n3 12 9.6 14.4 1\n4 13 10.4 15.6 1\n5 6 4.8 7.2 0\n6 9 7.2 10.8 0\n7 11 8.8 13.2 2\n8 10 8.0 12.0 2\n9 3 2.4 3.6 0\n10 21 16.8 25.2 0\n11 18 14.4 21.6 1\n12 14 11.2 16.8 0\n13 16 12.8 19.2 2\n14 30 24.0 36.0 0\n15 45 36.0 54.0 0\n16 31 24.8 37.2 1\n17 40 32.0 48.0 1\n18 38 30.4 45.6 3\n\n"
] | [
1,
0
] | [] | [] | [
"dataframe",
"for_loop",
"optimization",
"pandas",
"python"
] | stackoverflow_0074673689_dataframe_for_loop_optimization_pandas_python.txt |
Q:
How to get the contents of last line in tkinter text widget (Python 3)
I am working on a virtual console, which would use the system's built-in commands, perform the action, and display the output on the next line of the console. This is all working, but how do I get the contents of the last line, and only the last line, in the tkinter text widget? Thanks in advance. I am working in python 3.
I have tried using text.get(text.linestart, text.lineend), to no avail. Have these been deprecated? It spits out an error saying AttributeError: 'Text' object has no attribute 'linestart'
A:
You can apply modifiers to the text widget indices, such as linestart and lineend, as well as adding and subtracting characters. The index after the last character is "end".
Putting that all together, you can get the start of the last line with "end-1c linestart".
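A small self-contained sketch (the widget and the inserted text here are just for illustration):
import tkinter as tk

root = tk.Tk()
text = tk.Text(root)
text.insert("end", "first line\nsecond line\nlast line")
# "end" sits just past the automatic trailing newline; "end-1c" backs up one
# character onto the last real line, and linestart/lineend snap to its bounds
print(text.get("end-1c linestart", "end-1c lineend"))  # -> last line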
A:
The Text widget has a see(index) method.
text.see(END) will scroll the text to the last line.
A:
To get information from the last line you can use
text.get("end-1c linestart", "end-1c lineend")
| How to get the contents of last line in tkinter text widget (Python 3) | I am working on a virtual console, which would use the system's built-in commands, perform the action, and display the output on the next line of the console. This is all working, but how do I get the contents of the last line, and only the last line, in the tkinter text widget? Thanks in advance. I am working in python 3.
I have tried using text.get(text.linestart, text.lineend), to no avail. Have these been deprecated? It spits out an error saying AttributeError: 'Text' object has no attribute 'linestart'
| [
"You can apply modifiers to the text widget indicies, such as linestart and lineend as well as adding and subtracting characters. The index after the last character is \"end\".\nPutting that all together, you can get the start of the last line with \"end-1c linestart\".\n",
"Test widget has a see(index) method.\ntext.see(END) will scroll the text to the last line. \n",
"To get information from the last line you can use\ntext.get(\"end-1c linestart\", \"end-1c lineend\")\n"
] | [
1,
0,
0
] | [] | [] | [
"console",
"python",
"python_3.x",
"tkinter"
] | stackoverflow_0040251259_console_python_python_3.x_tkinter.txt |
Q:
Where does this Python script go in the __init__.py file?
So I am trying to create my first Azure function. I am currently following a tutorial online. I managed to create an environment and install the necessary packages via Visual Studio Code. I have a Python script which looks like this:
import sqlalchemy
import pandas as pd
sqlcon = sqlalchemy.create_engine('mssql://LBCUCRDBS100TL/AdventureWorks?driver=ODBC+Driver+17+for+SQL+Server')
path=r'C:\Users\H\Desktop\Correspondence\Correspondence_Received_with_Types.xlsx'
df=pd.read_excel(path)
df.to_sql('Correspondence',sqlcon,if_exists='append', index=False)
SQLQuery1="UPDATE Correspondence SET [Response By] = REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE (REPLACE(REPLACE([Response By], 'Junaid Dar', 'Robson de Souza') , 'Jade Buckingham','Rivaldo Vitor Borba Ferreira'), 'Danielle Wallace','Ronaldo de Assis Moreira'), 'Sadiya Nurmahomed','Wayne Mark Rooney'),'Tendai Giwa','Joseph John Cole'), 'Tiffany Heslop', 'Lionel Andrés Messi'), 'Gareth Budiar','Steven George Gerrard'),'Jeneen Nicholson','Samuel Eto''o Fils'), 'James Beaton','Carlos Alberto Tevez'), 'Kirsty Moore','Vincent Jean Mpoy Kompany'),'Kelly McDonald','Leroy Aziz Sané'), 'Muzaffer Mehmet','Gareth Frank Bale'), 'Charmaine Banerji','Ronaldo Luís Nazário de Lima'),'Paul Hunt','Luís Carlos Almeida de Cunha'), 'Jessica Ararat-David', 'Dimitar Ivanov Berbatov'), 'Anita Hayler','José María Gutiérrez Hernández'), 'Vishal Chandel','Diego Armando Maradona'), 'Saleema Panjwani','Edson Arantes do Nascimento'), 'Mohammed Uddin','Thierry Daniel Henry'),'Edward Ford','Nicolas Sébastien Anelka'), 'Gillian Sutherland', 'Zinedine Yazid Zidane'), 'Abdul Jimoh','Gabriel Omar Batistuta'),'Lee Woolward','Emmanuel Laurent Petit'), 'Antonia Akintaju','Robert Emmanuel Pires'),'Maria Dayang','Roberto Carlos da Silva Rocha'), 'Latisha McFarlane','David Robert Joseph Beckham'), 'Janine Townsend', 'Francesco Totti'), 'Vaughan Anderson-Moore','Alessandro Del Piero')"
SQLQuery2="UPDATE Correspondence SET [CPZ Name] = REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE (REPLACE(REPLACE([CPZ Name], 'North Camberwell', 'Dorne') , 'South Rotherhithe','King’s Landing'), 'Thorburn Square','Dragonstone'), 'All Non CPZ highway south of South Circular Road (A205)','Braavos'),'Housing Estates', 'Crownlands'), 'Parks Car Park', 'Winterfell'), 'South East Walworth', 'The Iron Islands'),'R - North Peckham','Casterly Rock'), 'CCTV Camera','Lannisport'),'Herne Hill', 'Volantis'),'South Camberwell', 'The Westerlands'),'South Bermondsey','Riverrun'), 'Rotherhithe', 'Highgarden'), ' Bankside','Storm’s End'), 'Trafalgar','The Kingsroad'), 'North Dulwich and Denmark Hill', 'Ashemark'), 'All Non CPZ highway north of South Circular Road (A205)','Blackmont'),'North Dulwich and Denmark Hill', 'Asshai'),'Newington', 'Doune Castle'), 'Peckham Road South', 'Horn Hill'),'South East Bermondsey', 'Banefort'), 'East dulwich','Red Keep'), ' East Camberwell', 'South Park'), 'Car Parks', 'Springfield'), 'Borough', 'Hogwarts'), 'Lucas Gardens', 'Neverland'), 'Peckham West', 'Gotham City'), ' East Camberwell', 'Wonderland'), 'Dog Kennel Hill', 'Stars Hollow'), 'West Walworth', 'Jurassic Park'), 'Bermondsey', 'Atlantis'), 'South Walworth', 'Asgard'), 'The Grange', 'The Shire'), 'London Bridge', 'Emerald City'), 'Walworth','Hogsmeade'), 'Peckham','Mordor'), 'Camberwell','Lilliput')"
SQLQuery3="UPDATE Correspondence SET [CPZ Code] = CASE WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'A' THEN 'A1' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'B' THEN 'A2' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'C1' THEN 'C1' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'C2' THEN 'C2' WHEN LEN([CPZ Code]) = 3 AND [CPZ Code] = 'CAM' THEN 'D1' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'CP' THEN 'D2' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'D' THEN 'E1' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'E' THEN 'E2' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'EC' THEN 'F1' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'ED' THEN 'F2' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'F' THEN 'G1' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'G' THEN 'G2' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'GR' THEN 'H1' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'H' THEN 'H2' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'HE' THEN 'I1' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'HH' THEN 'I2' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'J' THEN 'J1' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'K' THEN 'J2' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'L' THEN 'K1' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'LG' THEN 'K2' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'M1' THEN 'L1' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'M2' THEN 'L2' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'N' THEN 'M1' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'NC' THEN 'M2' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'P' THEN 'N1' WHEN LEN([CPZ Code]) = 3 AND [CPZ Code] = 'PCP' THEN 'N2' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'PR' THEN 'O1' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'PW' THEN 'O2' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'Q' THEN 'P1' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'R' THEN 'P2' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'SB' THEN 'Q1' WHEN LEN([CPZ Code]) = 3 AND [CPZ Code] = 'SEB' THEN 'Q2' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'T' THEN 'R1' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'TS' THEN 'R2' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'Z' THEN 'S1' ELSE [CPZ Code] END"
SQLQuery4="UPDATE Correspondence SET [Ticket #]=CONCAT('A3',STUFF([Ticket #],1,2,'')) "
sqlcon.execute(SQLQuery1)
sqlcon.execute(SQLQuery2)
sqlcon.execute(SQLQuery3)
sqlcon.execute(SQLQuery4)
The default __init__.py file looks like this:
import logging
import azure.functions as func
def main(req: func.HttpRequest) -> func.HttpResponse:
logging.info('Python HTTP trigger function processed a request')
name = req.params.get('name')
if not name:
try:
req_body = req.get_json()
except ValueError:
pass
else:
name = req_body.get('name')
if name:
return func.HttpResponse(f"Hello {name}!")
else:
return func.HttpResponse(
"Please pass a name in the query string or in the request body",
status_code=400
)
I pasted my code in the azure function like so:
import logging
import azure.functions as func
def main(req: func.HttpRequest) -> func.HttpResponse:
logging.info('Python HTTP trigger function processed a request')
import sqlalchemy
import pandas as pd
sqlcon = sqlalchemy.create_engine('mssql://LBCUCRDBS100TL/AdventureWorks?driver=ODBC+Driver+17+for+SQL+Server')
path=r'C:\Users\H\Desktop\Correspondence\Correspondence_Received_with_Types.xlsx'
df=pd.read_excel(path)
df.to_sql('Correspondence',sqlcon,if_exists='append', index=False)
SQLQuery1="UPDATE Correspondence SET [Response By] = REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE (REPLACE(REPLACE([Response By], 'Junaid Dar', 'Robson de Souza') , 'Jade Buckingham','Rivaldo Vitor Borba Ferreira'), 'Danielle Wallace','Ronaldo de Assis Moreira'), 'Sadiya Nurmahomed','Wayne Mark Rooney'),'Tendai Giwa','Joseph John Cole'), 'Tiffany Heslop', 'Lionel Andrés Messi'), 'Gareth Budiar','Steven George Gerrard'),'Jeneen Nicholson','Samuel Eto''o Fils'), 'James Beaton','Carlos Alberto Tevez'), 'Kirsty Moore','Vincent Jean Mpoy Kompany'),'Kelly McDonald','Leroy Aziz Sané'), 'Muzaffer Mehmet','Gareth Frank Bale'), 'Charmaine Banerji','Ronaldo Luís Nazário de Lima'),'Paul Hunt','Luís Carlos Almeida de Cunha'), 'Jessica Ararat-David', 'Dimitar Ivanov Berbatov'), 'Anita Hayler','José María Gutiérrez Hernández'), 'Vishal Chandel','Diego Armando Maradona'), 'Saleema Panjwani','Edson Arantes do Nascimento'), 'Mohammed Uddin','Thierry Daniel Henry'),'Edward Ford','Nicolas Sébastien Anelka'), 'Gillian Sutherland', 'Zinedine Yazid Zidane'), 'Abdul Jimoh','Gabriel Omar Batistuta'),'Lee Woolward','Emmanuel Laurent Petit'), 'Antonia Akintaju','Robert Emmanuel Pires'),'Maria Dayang','Roberto Carlos da Silva Rocha'), 'Latisha McFarlane','David Robert Joseph Beckham'), 'Janine Townsend', 'Francesco Totti'), 'Vaughan Anderson-Moore','Alessandro Del Piero')"
SQLQuery2="UPDATE Correspondence SET [CPZ Name] = REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE (REPLACE(REPLACE([CPZ Name], 'North Camberwell', 'Dorne') , 'South Rotherhithe','King’s Landing'), 'Thorburn Square','Dragonstone'), 'All Non CPZ highway south of South Circular Road (A205)','Braavos'),'Housing Estates', 'Crownlands'), 'Parks Car Park', 'Winterfell'), 'South East Walworth', 'The Iron Islands'),'R - North Peckham','Casterly Rock'), 'CCTV Camera','Lannisport'),'Herne Hill', 'Volantis'),'South Camberwell', 'The Westerlands'),'South Bermondsey','Riverrun'), 'Rotherhithe', 'Highgarden'), ' Bankside','Storm’s End'), 'Trafalgar','The Kingsroad'), 'North Dulwich and Denmark Hill', 'Ashemark'), 'All Non CPZ highway north of South Circular Road (A205)','Blackmont'),'North Dulwich and Denmark Hill', 'Asshai'),'Newington', 'Doune Castle'), 'Peckham Road South', 'Horn Hill'),'South East Bermondsey', 'Banefort'), 'East dulwich','Red Keep'), ' East Camberwell', 'South Park'), 'Car Parks', 'Springfield'), 'Borough', 'Hogwarts'), 'Lucas Gardens', 'Neverland'), 'Peckham West', 'Gotham City'), ' East Camberwell', 'Wonderland'), 'Dog Kennel Hill', 'Stars Hollow'), 'West Walworth', 'Jurassic Park'), 'Bermondsey', 'Atlantis'), 'South Walworth', 'Asgard'), 'The Grange', 'The Shire'), 'London Bridge', 'Emerald City'), 'Walworth','Hogsmeade'), 'Peckham','Mordor'), 'Camberwell','Lilliput')"
SQLQuery3="UPDATE Correspondence SET [CPZ Code] = CASE WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'A' THEN 'A1' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'B' THEN 'A2' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'C1' THEN 'C1' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'C2' THEN 'C2' WHEN LEN([CPZ Code]) = 3 AND [CPZ Code] = 'CAM' THEN 'D1' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'CP' THEN 'D2' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'D' THEN 'E1' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'E' THEN 'E2' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'EC' THEN 'F1' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'ED' THEN 'F2' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'F' THEN 'G1' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'G' THEN 'G2' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'GR' THEN 'H1' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'H' THEN 'H2' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'HE' THEN 'I1' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'HH' THEN 'I2' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'J' THEN 'J1' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'K' THEN 'J2' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'L' THEN 'K1' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'LG' THEN 'K2' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'M1' THEN 'L1' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'M2' THEN 'L2' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'N' THEN 'M1' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'NC' THEN 'M2' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'P' THEN 'N1' WHEN LEN([CPZ Code]) = 3 AND [CPZ Code] = 'PCP' THEN 'N2' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'PR' THEN 'O1' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'PW' THEN 'O2' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'Q' THEN 'P1' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'R' THEN 'P2' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'SB' THEN 'Q1' WHEN LEN([CPZ Code]) = 3 AND [CPZ Code] = 'SEB' THEN 'Q2' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'T' THEN 'R1' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'TS' THEN 'R2' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'Z' THEN 'S1' ELSE [CPZ Code] END"
SQLQuery4="UPDATE Correspondence SET [Ticket #]=CONCAT('A3',STUFF([Ticket #],1,2,'')) "
sqlcon.execute(SQLQuery1)
sqlcon.execute(SQLQuery2)
sqlcon.execute(SQLQuery3)
sqlcon.execute(SQLQuery4)
name = req.params.get('name')
if not name:
try:
req_body = req.get_json()
except ValueError:
pass
else:
name = req_body.get('name')
if name:
return func.HttpResponse(f"Hello {name}!")
else:
return func.HttpResponse(
"Please pass a name in the query string or in the request body",
status_code=400
)
When editing the code in Visual Studio Code I do not get any errors, but when I call the Azure function nothing happens. I was expecting it to access the SQL database and update the data.
A:
Based on your code, the problem is likely how the code is structured. Make sure you place all the import statements at the top and then run the Azure Function. Also make sure you are actually sending the HTTP request to the function and passing the name parameter when calling it, as your code expects. Below is how it should look:
import logging
import sqlalchemy
import pandas as pd
import azure.functions as func
def main(req: func.HttpRequest) -> func.HttpResponse:
logging.info('Python HTTP trigger function processed a request')
name = req.params.get('name')
sqlcon = sqlalchemy.create_engine('mssql://LBCUCRDBS100TL/AdventureWorks?driver=ODBC+Driver+17+for+SQL+Server')
path=r'C:\Users\H\Desktop\Correspondence\Correspondence_Received_with_Types.xlsx'
df=pd.read_excel(path)
df.to_sql('Correspondence',sqlcon,if_exists='append', index=False)
SQLQuery1="UPDATE Correspondence SET [Response By] = REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE (REPLACE(REPLACE([Response By], 'Junaid Dar', 'Robson de Souza') , 'Jade Buckingham','Rivaldo Vitor Borba Ferreira'), 'Danielle Wallace','Ronaldo de Assis Moreira'), 'Sadiya Nurmahomed','Wayne Mark Rooney'),'Tendai Giwa','Joseph John Cole'), 'Tiffany Heslop', 'Lionel Andrés Messi'), 'Gareth Budiar','Steven George Gerrard'),'Jeneen Nicholson','Samuel Eto''o Fils'), 'James Beaton','Carlos Alberto Tevez'), 'Kirsty Moore','Vincent Jean Mpoy Kompany'),'Kelly McDonald','Leroy Aziz Sané'), 'Muzaffer Mehmet','Gareth Frank Bale'), 'Charmaine Banerji','Ronaldo Luís Nazário de Lima'),'Paul Hunt','Luís Carlos Almeida de Cunha'), 'Jessica Ararat-David', 'Dimitar Ivanov Berbatov'), 'Anita Hayler','José María Gutiérrez Hernández'), 'Vishal Chandel','Diego Armando Maradona'), 'Saleema Panjwani','Edson Arantes do Nascimento'), 'Mohammed Uddin','Thierry Daniel Henry'),'Edward Ford','Nicolas Sébastien Anelka'), 'Gillian Sutherland', 'Zinedine Yazid Zidane'), 'Abdul Jimoh','Gabriel Omar Batistuta'),'Lee Woolward','Emmanuel Laurent Petit'), 'Antonia Akintaju','Robert Emmanuel Pires'),'Maria Dayang','Roberto Carlos da Silva Rocha'), 'Latisha McFarlane','David Robert Joseph Beckham'), 'Janine Townsend', 'Francesco Totti'), 'Vaughan Anderson-Moore','Alessandro Del Piero')"
SQLQuery2="UPDATE Correspondence SET [CPZ Name] = REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE (REPLACE(REPLACE([CPZ Name], 'North Camberwell', 'Dorne') , 'South Rotherhithe','King’s Landing'), 'Thorburn Square','Dragonstone'), 'All Non CPZ highway south of South Circular Road (A205)','Braavos'),'Housing Estates', 'Crownlands'), 'Parks Car Park', 'Winterfell'), 'South East Walworth', 'The Iron Islands'),'R - North Peckham','Casterly Rock'), 'CCTV Camera','Lannisport'),'Herne Hill', 'Volantis'),'South Camberwell', 'The Westerlands'),'South Bermondsey','Riverrun'), 'Rotherhithe', 'Highgarden'), ' Bankside','Storm’s End'), 'Trafalgar','The Kingsroad'), 'North Dulwich and Denmark Hill', 'Ashemark'), 'All Non CPZ highway north of South Circular Road (A205)','Blackmont'),'North Dulwich and Denmark Hill', 'Asshai'),'Newington', 'Doune Castle'), 'Peckham Road South', 'Horn Hill'),'South East Bermondsey', 'Banefort'), 'East dulwich','Red Keep'), ' East Camberwell', 'South Park'), 'Car Parks', 'Springfield'), 'Borough', 'Hogwarts'), 'Lucas Gardens', 'Neverland'), 'Peckham West', 'Gotham City'), ' East Camberwell', 'Wonderland'), 'Dog Kennel Hill', 'Stars Hollow'), 'West Walworth', 'Jurassic Park'), 'Bermondsey', 'Atlantis'), 'South Walworth', 'Asgard'), 'The Grange', 'The Shire'), 'London Bridge', 'Emerald City'), 'Walworth','Hogsmeade'), 'Peckham','Mordor'), 'Camberwell','Lilliput')"
SQLQuery3="UPDATE Correspondence SET [CPZ Code] = CASE WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'A' THEN 'A1' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'B' THEN 'A2' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'C1' THEN 'C1' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'C2' THEN 'C2' WHEN LEN([CPZ Code]) = 3 AND [CPZ Code] = 'CAM' THEN 'D1' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'CP' THEN 'D2' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'D' THEN 'E1' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'E' THEN 'E2' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'EC' THEN 'F1' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'ED' THEN 'F2' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'F' THEN 'G1' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'G' THEN 'G2' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'GR' THEN 'H1' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'H' THEN 'H2' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'HE' THEN 'I1' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'HH' THEN 'I2' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'J' THEN 'J1' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'K' THEN 'J2' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'L' THEN 'K1' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'LG' THEN 'K2' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'M1' THEN 'L1' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'M2' THEN 'L2' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'N' THEN 'M1' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'NC' THEN 'M2' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'P' THEN 'N1' WHEN LEN([CPZ Code]) = 3 AND [CPZ Code] = 'PCP' THEN 'N2' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'PR' THEN 'O1' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'PW' THEN 'O2' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'Q' THEN 'P1' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'R' THEN 'P2' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'SB' THEN 'Q1' WHEN LEN([CPZ Code]) = 3 AND [CPZ Code] = 'SEB' THEN 'Q2' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'T' THEN 'R1' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'TS' THEN 'R2' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'Z' THEN 'S1' ELSE [CPZ Code] END"
SQLQuery4="UPDATE Correspondence SET [Ticket #]=CONCAT('A3',STUFF([Ticket #],1,2,'')) "
sqlcon.execute(SQLQuery1)
sqlcon.execute(SQLQuery2)
sqlcon.execute(SQLQuery3)
sqlcon.execute(SQLQuery4)
if not name:
try:
req_body = req.get_json()
except ValueError:
pass
else:
name = req_body.get('name')
if name:
return func.HttpResponse(f"Hello {name}!")
else:
return func.HttpResponse(
"Please pass a name in the query string or in the request body",
status_code=400
)
To run the Azure Function and make a call to your HTTP function
You can use the URL below to call your function.
http://localhost:7071/api/HttpTrigger1?name=someName
If you are calling it through Power Automate, make sure you pass a name to the function.
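For a quick test from Python while the function host is running locally, something like this should work (function name as in the URL above):
import requests

resp = requests.get("http://localhost:7071/api/HttpTrigger1",
                    params={"name": "someName"})
print(resp.status_code, resp.text)  # expected: 200 Hello someName!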
| Where does this Python script go in the __init__.py file? | So I am trying to create my first Azure function. I am currently following a tutorial online. I managed to create an environment and install the necessary packages via Visual Studio Code. I have a Python script which looks like this:
import sqlalchemy
import pandas as pd
sqlcon = sqlalchemy.create_engine('mssql://LBCUCRDBS100TL/AdventureWorks?driver=ODBC+Driver+17+for+SQL+Server')
path=r'C:\Users\H\Desktop\Correspondence\Correspondence_Received_with_Types.xlsx'
df=pd.read_excel(path)
df.to_sql('Correspondence',sqlcon,if_exists='append', index=False)
SQLQuery1="UPDATE Correspondence SET [Response By] = REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE (REPLACE(REPLACE([Response By], 'Junaid Dar', 'Robson de Souza') , 'Jade Buckingham','Rivaldo Vitor Borba Ferreira'), 'Danielle Wallace','Ronaldo de Assis Moreira'), 'Sadiya Nurmahomed','Wayne Mark Rooney'),'Tendai Giwa','Joseph John Cole'), 'Tiffany Heslop', 'Lionel Andrés Messi'), 'Gareth Budiar','Steven George Gerrard'),'Jeneen Nicholson','Samuel Eto''o Fils'), 'James Beaton','Carlos Alberto Tevez'), 'Kirsty Moore','Vincent Jean Mpoy Kompany'),'Kelly McDonald','Leroy Aziz Sané'), 'Muzaffer Mehmet','Gareth Frank Bale'), 'Charmaine Banerji','Ronaldo Luís Nazário de Lima'),'Paul Hunt','Luís Carlos Almeida de Cunha'), 'Jessica Ararat-David', 'Dimitar Ivanov Berbatov'), 'Anita Hayler','José María Gutiérrez Hernández'), 'Vishal Chandel','Diego Armando Maradona'), 'Saleema Panjwani','Edson Arantes do Nascimento'), 'Mohammed Uddin','Thierry Daniel Henry'),'Edward Ford','Nicolas Sébastien Anelka'), 'Gillian Sutherland', 'Zinedine Yazid Zidane'), 'Abdul Jimoh','Gabriel Omar Batistuta'),'Lee Woolward','Emmanuel Laurent Petit'), 'Antonia Akintaju','Robert Emmanuel Pires'),'Maria Dayang','Roberto Carlos da Silva Rocha'), 'Latisha McFarlane','David Robert Joseph Beckham'), 'Janine Townsend', 'Francesco Totti'), 'Vaughan Anderson-Moore','Alessandro Del Piero')"
SQLQuery2="UPDATE Correspondence SET [CPZ Name] = REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE (REPLACE(REPLACE([CPZ Name], 'North Camberwell', 'Dorne') , 'South Rotherhithe','King’s Landing'), 'Thorburn Square','Dragonstone'), 'All Non CPZ highway south of South Circular Road (A205)','Braavos'),'Housing Estates', 'Crownlands'), 'Parks Car Park', 'Winterfell'), 'South East Walworth', 'The Iron Islands'),'R - North Peckham','Casterly Rock'), 'CCTV Camera','Lannisport'),'Herne Hill', 'Volantis'),'South Camberwell', 'The Westerlands'),'South Bermondsey','Riverrun'), 'Rotherhithe', 'Highgarden'), ' Bankside','Storm’s End'), 'Trafalgar','The Kingsroad'), 'North Dulwich and Denmark Hill', 'Ashemark'), 'All Non CPZ highway north of South Circular Road (A205)','Blackmont'),'North Dulwich and Denmark Hill', 'Asshai'),'Newington', 'Doune Castle'), 'Peckham Road South', 'Horn Hill'),'South East Bermondsey', 'Banefort'), 'East dulwich','Red Keep'), ' East Camberwell', 'South Park'), 'Car Parks', 'Springfield'), 'Borough', 'Hogwarts'), 'Lucas Gardens', 'Neverland'), 'Peckham West', 'Gotham City'), ' East Camberwell', 'Wonderland'), 'Dog Kennel Hill', 'Stars Hollow'), 'West Walworth', 'Jurassic Park'), 'Bermondsey', 'Atlantis'), 'South Walworth', 'Asgard'), 'The Grange', 'The Shire'), 'London Bridge', 'Emerald City'), 'Walworth','Hogsmeade'), 'Peckham','Mordor'), 'Camberwell','Lilliput')"
SQLQuery3="UPDATE Correspondence SET [CPZ Code] = CASE WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'A' THEN 'A1' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'B' THEN 'A2' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'C1' THEN 'C1' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'C2' THEN 'C2' WHEN LEN([CPZ Code]) = 3 AND [CPZ Code] = 'CAM' THEN 'D1' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'CP' THEN 'D2' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'D' THEN 'E1' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'E' THEN 'E2' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'EC' THEN 'F1' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'ED' THEN 'F2' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'F' THEN 'G1' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'G' THEN 'G2' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'GR' THEN 'H1' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'H' THEN 'H2' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'HE' THEN 'I1' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'HH' THEN 'I2' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'J' THEN 'J1' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'K' THEN 'J2' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'L' THEN 'K1' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'LG' THEN 'K2' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'M1' THEN 'L1' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'M2' THEN 'L2' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'N' THEN 'M1' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'NC' THEN 'M2' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'P' THEN 'N1' WHEN LEN([CPZ Code]) = 3 AND [CPZ Code] = 'PCP' THEN 'N2' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'PR' THEN 'O1' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'PW' THEN 'O2' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'Q' THEN 'P1' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'R' THEN 'P2' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'SB' THEN 'Q1' WHEN LEN([CPZ Code]) = 3 AND [CPZ Code] = 'SEB' THEN 'Q2' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'T' THEN 'R1' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'TS' THEN 'R2' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'Z' THEN 'S1' ELSE [CPZ Code] END"
SQLQuery4="UPDATE Correspondence SET [Ticket #]=CONCAT('A3',STUFF([Ticket #],1,2,'')) "
sqlcon.execute(SQLQuery1)
sqlcon.execute(SQLQuery2)
sqlcon.execute(SQLQuery3)
sqlcon.execute(SQLQuery4)
The default __init__.py file looks like this:
import logging
import azure.functions as func
def main(req: func.HttpRequest) -> func.HttpResponse:
logging.info('Python HTTP trigger function processed a request')
name = req.params.get('name')
if not name:
try:
req_body = req.get_json()
except ValueError:
pass
else:
name = req_body.get('name')
if name:
return func.HttpResponse(f"Hello {name}!")
else:
return func.HttpResponse(
"Please pass a name in the query string or in the request body",
status_code=400
)
I pasted my code in the azure function like so:
import logging
import azure.functions as func
def main(req: func.HttpRequest) -> func.HttpResponse:
logging.info('Python HTTP trigger function processed a request')
import sqlalchemy
import pandas as pd
sqlcon = sqlalchemy.create_engine('mssql://LBCUCRDBS100TL/AdventureWorks?driver=ODBC+Driver+17+for+SQL+Server')
path=r'C:\Users\H\Desktop\Correspondence\Correspondence_Received_with_Types.xlsx'
df=pd.read_excel(path)
df.to_sql('Correspondence',sqlcon,if_exists='append', index=False)
SQLQuery1="UPDATE Correspondence SET [Response By] = REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE (REPLACE(REPLACE([Response By], 'Junaid Dar', 'Robson de Souza') , 'Jade Buckingham','Rivaldo Vitor Borba Ferreira'), 'Danielle Wallace','Ronaldo de Assis Moreira'), 'Sadiya Nurmahomed','Wayne Mark Rooney'),'Tendai Giwa','Joseph John Cole'), 'Tiffany Heslop', 'Lionel Andrés Messi'), 'Gareth Budiar','Steven George Gerrard'),'Jeneen Nicholson','Samuel Eto''o Fils'), 'James Beaton','Carlos Alberto Tevez'), 'Kirsty Moore','Vincent Jean Mpoy Kompany'),'Kelly McDonald','Leroy Aziz Sané'), 'Muzaffer Mehmet','Gareth Frank Bale'), 'Charmaine Banerji','Ronaldo Luís Nazário de Lima'),'Paul Hunt','Luís Carlos Almeida de Cunha'), 'Jessica Ararat-David', 'Dimitar Ivanov Berbatov'), 'Anita Hayler','José María Gutiérrez Hernández'), 'Vishal Chandel','Diego Armando Maradona'), 'Saleema Panjwani','Edson Arantes do Nascimento'), 'Mohammed Uddin','Thierry Daniel Henry'),'Edward Ford','Nicolas Sébastien Anelka'), 'Gillian Sutherland', 'Zinedine Yazid Zidane'), 'Abdul Jimoh','Gabriel Omar Batistuta'),'Lee Woolward','Emmanuel Laurent Petit'), 'Antonia Akintaju','Robert Emmanuel Pires'),'Maria Dayang','Roberto Carlos da Silva Rocha'), 'Latisha McFarlane','David Robert Joseph Beckham'), 'Janine Townsend', 'Francesco Totti'), 'Vaughan Anderson-Moore','Alessandro Del Piero')"
SQLQuery2="UPDATE Correspondence SET [CPZ Name] = REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE (REPLACE(REPLACE([CPZ Name], 'North Camberwell', 'Dorne') , 'South Rotherhithe','King’s Landing'), 'Thorburn Square','Dragonstone'), 'All Non CPZ highway south of South Circular Road (A205)','Braavos'),'Housing Estates', 'Crownlands'), 'Parks Car Park', 'Winterfell'), 'South East Walworth', 'The Iron Islands'),'R - North Peckham','Casterly Rock'), 'CCTV Camera','Lannisport'),'Herne Hill', 'Volantis'),'South Camberwell', 'The Westerlands'),'South Bermondsey','Riverrun'), 'Rotherhithe', 'Highgarden'), ' Bankside','Storm’s End'), 'Trafalgar','The Kingsroad'), 'North Dulwich and Denmark Hill', 'Ashemark'), 'All Non CPZ highway north of South Circular Road (A205)','Blackmont'),'North Dulwich and Denmark Hill', 'Asshai'),'Newington', 'Doune Castle'), 'Peckham Road South', 'Horn Hill'),'South East Bermondsey', 'Banefort'), 'East dulwich','Red Keep'), ' East Camberwell', 'South Park'), 'Car Parks', 'Springfield'), 'Borough', 'Hogwarts'), 'Lucas Gardens', 'Neverland'), 'Peckham West', 'Gotham City'), ' East Camberwell', 'Wonderland'), 'Dog Kennel Hill', 'Stars Hollow'), 'West Walworth', 'Jurassic Park'), 'Bermondsey', 'Atlantis'), 'South Walworth', 'Asgard'), 'The Grange', 'The Shire'), 'London Bridge', 'Emerald City'), 'Walworth','Hogsmeade'), 'Peckham','Mordor'), 'Camberwell','Lilliput')"
SQLQuery3="UPDATE Correspondence SET [CPZ Code] = CASE WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'A' THEN 'A1' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'B' THEN 'A2' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'C1' THEN 'C1' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'C2' THEN 'C2' WHEN LEN([CPZ Code]) = 3 AND [CPZ Code] = 'CAM' THEN 'D1' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'CP' THEN 'D2' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'D' THEN 'E1' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'E' THEN 'E2' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'EC' THEN 'F1' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'ED' THEN 'F2' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'F' THEN 'G1' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'G' THEN 'G2' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'GR' THEN 'H1' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'H' THEN 'H2' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'HE' THEN 'I1' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'HH' THEN 'I2' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'J' THEN 'J1' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'K' THEN 'J2' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'L' THEN 'K1' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'LG' THEN 'K2' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'M1' THEN 'L1' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'M2' THEN 'L2' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'N' THEN 'M1' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'NC' THEN 'M2' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'P' THEN 'N1' WHEN LEN([CPZ Code]) = 3 AND [CPZ Code] = 'PCP' THEN 'N2' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'PR' THEN 'O1' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'PW' THEN 'O2' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'Q' THEN 'P1' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'R' THEN 'P2' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'SB' THEN 'Q1' WHEN LEN([CPZ Code]) = 3 AND [CPZ Code] = 'SEB' THEN 'Q2' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'T' THEN 'R1' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'TS' THEN 'R2' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'Z' THEN 'S1' ELSE [CPZ Code] END"
SQLQuery4="UPDATE Correspondence SET [Ticket #]=CONCAT('A3',STUFF([Ticket #],1,2,'')) "
sqlcon.execute(SQLQuery1)
sqlcon.execute(SQLQuery2)
sqlcon.execute(SQLQuery3)
sqlcon.execute(SQLQuery4)
name = req.params.get('name')
if not name:
try:
req_body = req.get_json()
except ValueError:
pass
else:
name = req_body.get('name')
if name:
return func.HttpResponse(f"Hello {name}!")
else:
return func.HttpResponse(
"Please pass a name in the query string or in the request body",
status_code=400
)
When editing the code in Visual Studio Code I do not actually get an error, but when I call the Azure function nothing happens. I was expecting it to access the SQL database and update the data.
| [
"Based on your code, this might be because for the code you have used. Make sure you place all the import statements at the top and run the azure functions. Make sure you are actually sending the HTTP request by hitting the function and giving the name while calling the function as per your code. Below is how it should look like,\nimport logging\nimport sqlalchemy\nimport pandas as pd\nimport azure.function as func\n\ndef main(req: func.HttpRequest) -> func.HttpResponse:\n logging.info('Python HTTP trigger function processed a request')\n \n name = req.params.get('name')\n \n sqlcon = sqlalchemy.create_engine('mssql://LBCUCRDBS100TL/AdventureWorks?driver=ODBC+Driver+17+for+SQL+Server') \n\n path=r'C:\\Users\\H\\Desktop\\Correspondence\\Correspondence_Received_with_Types.xlsx'\n\n df=pd.read_excel(path)\n\n df.to_sql('Correspondence',sqlcon,if_exists='append', index=False)\n\n SQLQuery1=\"UPDATE Correspondence SET [Response By] = REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE (REPLACE(REPLACE([Response By], 'Junaid Dar', 'Robson de Souza') , 'Jade Buckingham','Rivaldo Vitor Borba Ferreira'), 'Danielle Wallace','Ronaldo de Assis Moreira'), 'Sadiya Nurmahomed','Wayne Mark Rooney'),'Tendai Giwa','Joseph John Cole'), 'Tiffany Heslop', 'Lionel Andrés Messi'), 'Gareth Budiar','Steven George Gerrard'),'Jeneen Nicholson','Samuel Eto''o Fils'), 'James Beaton','Carlos Alberto Tevez'), 'Kirsty Moore','Vincent Jean Mpoy Kompany'),'Kelly McDonald','Leroy Aziz Sané'), 'Muzaffer Mehmet','Gareth Frank Bale'), 'Charmaine Banerji','Ronaldo Luís Nazário de Lima'),'Paul Hunt','Luís Carlos Almeida de Cunha'), 'Jessica Ararat-David', 'Dimitar Ivanov Berbatov'), 'Anita Hayler','José María Gutiérrez Hernández'), 'Vishal Chandel','Diego Armando Maradona'), 'Saleema Panjwani','Edson Arantes do Nascimento'), 'Mohammed Uddin','Thierry Daniel Henry'),'Edward Ford','Nicolas Sébastien Anelka'), 'Gillian Sutherland', 'Zinedine Yazid Zidane'), 'Abdul Jimoh','Gabriel Omar Batistuta'),'Lee Woolward','Emmanuel Laurent Petit'), 'Antonia Akintaju','Robert Emmanuel Pires'),'Maria Dayang','Roberto Carlos da Silva Rocha'), 'Latisha McFarlane','David Robert Joseph Beckham'), 'Janine Townsend', 'Francesco Totti'), 'Vaughan Anderson-Moore','Alessandro Del Piero')\"\n SQLQuery2=\"UPDATE Correspondence SET [CPZ Name] = REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE (REPLACE(REPLACE([CPZ Name], 'North Camberwell', 'Dorne') , 'South Rotherhithe','King’s Landing'), 'Thorburn Square','Dragonstone'), 'All Non CPZ highway south of South Circular Road (A205)','Braavos'),'Housing Estates', 'Crownlands'), 'Parks Car Park', 'Winterfell'), 'South East Walworth', 'The Iron Islands'),'R - North Peckham','Casterly Rock'), 'CCTV Camera','Lannisport'),'Herne Hill', 'Volantis'),'South Camberwell', 'The Westerlands'),'South Bermondsey','Riverrun'), 'Rotherhithe', 'Highgarden'), ' Bankside','Storm’s End'), 'Trafalgar','The Kingsroad'), 'North Dulwich and Denmark Hill', 'Ashemark'), 'All Non CPZ highway north of South Circular Road (A205)','Blackmont'),'North Dulwich and Denmark Hill', 'Asshai'),'Newington', 'Doune Castle'), 'Peckham Road South', 'Horn Hill'),'South 
East Bermondsey', 'Banefort'), 'East dulwich','Red Keep'), ' East Camberwell', 'South Park'), 'Car Parks', 'Springfield'), 'Borough', 'Hogwarts'), 'Lucas Gardens', 'Neverland'), 'Peckham West', 'Gotham City'), ' East Camberwell', 'Wonderland'), 'Dog Kennel Hill', 'Stars Hollow'), 'West Walworth', 'Jurassic Park'), 'Bermondsey', 'Atlantis'), 'South Walworth', 'Asgard'), 'The Grange', 'The Shire'), 'London Bridge', 'Emerald City'), 'Walworth','Hogsmeade'), 'Peckham','Mordor'), 'Camberwell','Lilliput')\"\n SQLQuery3=\"UPDATE Correspondence SET [CPZ Code] = CASE WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'A' THEN 'A1' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'B' THEN 'A2' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'C1' THEN 'C1' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'C2' THEN 'C2' WHEN LEN([CPZ Code]) = 3 AND [CPZ Code] = 'CAM' THEN 'D1' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'CP' THEN 'D2' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'D' THEN 'E1' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'E' THEN 'E2' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'EC' THEN 'F1' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'ED' THEN 'F2' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'F' THEN 'G1' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'G' THEN 'G2' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'GR' THEN 'H1' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'H' THEN 'H2' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'HE' THEN 'I1' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'HH' THEN 'I2' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'J' THEN 'J1' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'K' THEN 'J2' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'L' THEN 'K1' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'LG' THEN 'K2' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'M1' THEN 'L1' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'M2' THEN 'L2' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'N' THEN 'M1' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'NC' THEN 'M2' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'P' THEN 'N1' WHEN LEN([CPZ Code]) = 3 AND [CPZ Code] = 'PCP' THEN 'N2' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'PR' THEN 'O1' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'PW' THEN 'O2' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'Q' THEN 'P1' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'R' THEN 'P2' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'SB' THEN 'Q1' WHEN LEN([CPZ Code]) = 3 AND [CPZ Code] = 'SEB' THEN 'Q2' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'T' THEN 'R1' WHEN LEN([CPZ Code]) = 2 AND [CPZ Code] = 'TS' THEN 'R2' WHEN LEN([CPZ Code]) = 1 AND [CPZ Code] = 'Z' THEN 'S1' ELSE [CPZ Code] END\"\n SQLQuery4=\"UPDATE Correspondence SET [Ticket #]=CONCAT('A3',STUFF([Ticket #],1,2,'')) \"\n\n sqlcon.execute(SQLQuery1)\n sqlcon.execute(SQLQuery2)\n sqlcon.execute(SQLQuery3)\n sqlcon.execute(SQLQuery4)\n\n if not name:\n try:\n req_body = req.get_json()\n except ValueError:\n pass\n else:\n name = req_body.get('name')\n \n if name:\n return func.HttpResponse(f\"Hello {name}!\")\n else:\n return func.HttpResponse(\n \"Please pass a name in the query string or in the request body\",\n status_code=400\n )\n\nTo run azure function and make a call to your HTTP Function\n\nYou can use the below url to call your function.\nhttp://localhost:7071/api/HttpTrigger1?name=someName\n\nIf you are running through power automate make sure you pass a name to the function to get it called.\n"
] | [
0
] | [] | [] | [
"azure",
"azure_functions",
"function",
"python"
] | stackoverflow_0074359362_azure_azure_functions_function_python.txt |
Q:
Move 3d plot on the xy plane
I am trying to plot my data, but my 3d plot is out of bounds, meaning it sits above the z-axis 0 point. I want it to be on the xy plane, meaning an offset of -160. Is there a way of adding an offset? (Please check MyImage to visualise what I am trying to do.)
My code:
ax = plt.figure().add_subplot(projection='3d')
ax.set(xlim=(-3, 2), ylim=(0.25, 2), zlim=(-160, 0), xlabel='CV1', ylabel='CV2', zlabel='free energy (kJ/mol)')
ax.plot_surface(xvals, yvals, zvals, edgecolor='royalblue', lw=0.8, rstride=1, cstride=1,alpha=0.3, cmap= 'plasma')
ax.contour(xvals, yvals, zvals, zdir='z', offset= -160, cmap='plasma')
plt.savefig('myplot.png')
plt.show()
(The x_vals, y_vals, z_vals are reshaped ndarrays; their shape is (101, 101).)
MyImage
I searched the documentation of the plot_surface function but could not find a way to add some offset.
A:
plot_surface() in Matplotlib does not accept an offset or z-offset parameter, so the usual workaround is to shift the z data itself before plotting. Here is a sketch under that assumption: subtracting 160 from zvals moves the whole surface down onto the same plane as the projected contours.
ax = plt.figure().add_subplot(projection='3d')
ax.set(xlim=(-3, 2), ylim=(0.25, 2), zlim=(-160, 0), xlabel='CV1', ylabel='CV2', zlabel='free energy (kJ/mol)')
# Shift the surface down by 160 so it rests on the z = -160 floor
z_offset = -160
ax.plot_surface(xvals, yvals, zvals + z_offset, edgecolor='royalblue', lw=0.8, rstride=1, cstride=1, alpha=0.3, cmap='plasma')
ax.contour(xvals, yvals, zvals + z_offset, zdir='z', offset=-160, cmap='plasma')
plt.savefig('myplot.png')
plt.show()
If the surface should touch the floor exactly regardless of its original range, compute the shift from the data instead: z_offset = -160 - zvals.min().
| Move 3d plot on the xy plane | I am trying to plot my data, but my 3d plot is out of bounds, meaning it sits above the z-axis 0 point. I want it to be on the xy plane, meaning an offset of -160. Is there a way of adding an offset? (Please check MyImage to visualise what I am trying to do.)
My code:
ax = plt.figure().add_subplot(projection='3d')
ax.set(xlim=(-3, 2), ylim=(0.25, 2), zlim=(-160, 0), xlabel='CV1', ylabel='CV2', zlabel='free energy (kJ/mol)')
ax.plot_surface(xvals, yvals, zvals, edgecolor='royalblue', lw=0.8, rstride=1, cstride=1,alpha=0.3, cmap= 'plasma')
ax.contour(xvals, yvals, zvals, zdir='z', offset= -160, cmap='plasma')
plt.savefig('myplot.png')
plt.show()
(The x_vals, y_vals, z_vals are reshaped ndarrays; their shape is (101, 101).)
MyImage
I searched the documentation of the plot_surface function but could not find a way to add some offset.
| [
"To add an offset to your 3D plot in Matplotlib, you can use the zoffset parameter of the plot_surface() function. This parameter specifies the z-coordinate at which the surface is drawn.\nHere's an example of how you can use the zoffset parameter to set the offset of your plot:\nax = plt.figure().add_subplot(projection='3d')\nax.set(xlim=(-3, 2), ylim=(0.25, 2), zlim=(-160, 0), xlabel='CV1', ylabel='CV2', zlabel='free energy (kJ/mol)')\n\n# Create a Poly3DCollection object for the surface plot\nsurface = ax.plot_surface(xvals, yvals, zvals, edgecolor='royalblue', lw=0.8, rstride=1, cstride=1, alpha=0.3, cmap= 'plasma')\n\n# Use the set_zoffset() method to set the z-offset of the surface plot\nsurface.set_zoffset(-160)\n\nax.contour(xvals, yvals, zvals, zdir='z', offset= -160, cmap='plasma')\nplt.savefig('myplot.png')\nplt.show()\n\nAlternatively, you can also use the offset_z parameter of the plot_surface() function to set the z-offset of the surface plot.\nax = plt.figure().add_subplot(projection='3d')\nax.set(xlim=(-3, 2), ylim=(0.25, 2), zlim=(-160, 0), xlabel='CV1', ylabel='CV2', zlabel='free energy (kJ/mol)')\n\n# Use the offset_z parameter to set the z-offset of the surface plot\nax.plot_surface(xvals, yvals, zvals, edgecolor='royalblue', lw=0.8, rstride=1, cstride=1, alpha=0.3, cmap= 'plasma', offset_z=-160)\n\nax.contour(xvals, yvals, zvals, zdir='z', offset= -160, cmap='plasma')\nplt.savefig('myplot.png')\nplt.show()\n\n"
] | [
0
] | [] | [] | [
"3d",
"matplotlib",
"move",
"plot",
"python"
] | stackoverflow_0074675441_3d_matplotlib_move_plot_python.txt |
Q:
Simplify if conditions in python
Is there a neat way to simplify this if statement?
I need n to be >= 2 and <= 100 and value to increase by one for every seventh step (except the first one which should be between 2 and 7 and last one which should be between 98 and 100).
if n >= 2 and n <= 7:
value = 1
elif n > 7 and n <= 14:
value = 2
elif n > 14 and n <= 21:
value = 3
elif n > 21 and n <= 28:
value = 4
elif n > 28 and n <= 35:
value = 5
elif n > 35 and n <= 42:
value = 6
elif n > 42 and n <= 49:
value = 7
elif n > 49 and n <= 56:
value = 8
elif n > 56 and n <= 63:
value = 9
elif n > 63 and n <= 70:
value = 10
elif n > 70 and n <= 77:
value = 11
elif n > 77 and n <= 84:
value = 12
elif n > 84 and n <= 91:
value = 13
elif n > 91 and n <= 98:
value = 14
elif n > 98 and n <= 100:
value = 15
EDIT:
Furthermore I'm having trouble finding the smallest number possible from the following question:
Matchsticks are ideal tools to represent numbers. A common way to represent the ten decimal digits with matchsticks is the following:
This is identical to how numbers are displayed on an ordinary alarm clock. With a given number of matchsticks you can generate a wide range of numbers. We are wondering what the smallest and largest numbers are that can be created by using all your matchsticks.
Input:
On the first line one positive number: the number of testcases, at most 100. After that per testcase:
One line with an integer n (2 <= n <= 100): the number of matchsticks you have.
Output:
Per testcase:
One line with the smallest and largest numbers you can create, separated by a single space. Both numbers should be positive and contain no leading zeroes.
I've tried multiple different ways to try to solve this problem, I'm currently trying to:
find the minimal number of digits (value) for the smallest number:
#Swifty
def values(n):
if 2 <= n <= 100:
value = (n + 6) // 7
minimum(n, value)
Now I want to send value to a function minimum(), which should generate all the different combinations of numbers that are value digits long. I want to store all these numbers in a list and then take min() to get the smallest one. I'm not getting this part to work and would appreciate some inspiration.
Something to remember is that the number can't start with a 0.
A:
if 2 <= n <= 100:
value = (n+6)//7
Unless I'm mistaken, this should work for part 2:
numdict = {2:1,3:7,4:4,5:2,6:6,7:8,8:10,9:18,10:22,11:20,12:28,13:68}  # 6 sticks -> 6, since the numbers must be positive (no lone 0)
def min_stick(n):
if 2 <= n <= 13:
return numdict[n]
digits = (n + 6) // 7
base = str(numdict[7+n%7])
return int(base+"8"*(digits - len(base)))
And, though this one's a no-brainer:
def max_stick(n):
if n%2:
return int("7"+"1"*((n-3)//2))
return int("1"*(n//2))
A:
Here is one way to simplify the if statement you provided in Python:
value = (n + 6) // 7 if 2 <= n <= 100 else None
This code uses integer division (the "//" operator) on n + 6, which rounds n / 7 up to the next whole number. If the value of "n" is between 2 and 100, this gives the same result as the original if statement (multiples of 7 such as 7 or 14 stay in the lower bucket). Otherwise, it sets "value" to None.
Here is an example of how this code would work:
# n = 14
value = (14 + 6) // 7 # This evaluates to 2
# n = 100
value = (100 + 6) // 7 # This evaluates to 15
# n = 101
# 101 is outside the 2..100 range, so the conditional expression yields None
I hope this helps! Let me know if you have any other questions.
| Simplify if conditions in python | Is there a neat way to simplify this if statement?
I need n to be >= 2 and <= 100 and value to increase by one for every seventh step (except the first one which should be between 2 and 7 and last one which should be between 98 and 100).
if n >= 2 and n <= 7:
value = 1
elif n > 7 and n <= 14:
value = 2
elif n > 14 and n <= 21:
value = 3
elif n > 21 and n <= 28:
value = 4
elif n > 28 and n <= 35:
value = 5
elif n > 35 and n <= 42:
value = 6
elif n > 42 and n <= 49:
value = 7
elif n > 49 and n <= 56:
value = 8
elif n > 56 and n <= 63:
value = 9
elif n > 63 and n <= 70:
value = 10
elif n > 70 and n <= 77:
value = 11
elif n > 77 and n <= 84:
value = 12
elif n > 84 and n <= 91:
value = 13
elif n > 91 and n <= 98:
value = 14
elif n > 98 and n <= 100:
value = 15
EDIT:
Furthermore I'm having trouble finding the smallest number possible from the following question:
Matchsticks are ideal tools to represent numbers. A common way to represent the ten decimal digits with matchsticks is the following:
This is identical to how numbers are displayed on an ordinary alarm clock. With a given number of matchsticks you can generate a wide range of numbers. We are wondering what the smallest and largest numbers are that can be created by using all your matchsticks.
Input:
On the first line one positive number: the number of testcases, at most 100. After that per testcase:
One line with an integer n (2 <= n <= 100): the number of matchsticks you have.
Output:
Per testcase:
One line with the smallest and largest numbers you can create, separated by a single space. Both numbers should be positive and contain no leading zeroes.
I've tried multiple different ways to try to solve this problem, I'm currently trying to:
find the minimal number of digits (value) for the smallest number:
#Swifty
def values(n):
if 2 <= n <= 100:
value = (n + 6) // 7
minimum(n, value)
Now I want to send value to a function minimum(), which should generate all the different combinations of numbers that are value digits long. I want to store all these numbers in a list and then take min() to get the smallest one. I'm not getting this part to work and would appreciate some inspiration.
Something to remember is that the number can't start with a 0.
| [
"if 2 <= n <= 100:\n value = (n+6)//7\n\nUnless I'm mistaken, this should work for part 2:\nnumdict = {2:1,3:7,4:4,5:2,6:0,7:8,8:10,9:18,10:22,11:20,12:28,13:68}\n\ndef min_stick(n):\n if 2 <= n <= 13:\n return numdict[n]\n digits = (n + 6) // 7\n base = str(numdict[7+n%7])\n return int(base+\"8\"*(digits - len(base)))\n\nAnd, though this one's a no-brainer:\ndef max_stick(n):\n if n%2:\n return int(\"7\"+\"1\"*((n-3)//2))\n return int(\"1\"*(n//2))\n\n",
"Here is one way to simplify the if statement you provided in Python:\nvalue = (n // 7) + 1 if 2 <= n <= 100 else None\n\nThis code uses integer division (the \"//\" operator) to divide the value of \"n\" by 7 and add 1 to the result. If the value of \"n\" is less than or equal to 100, this will give the same result as the original if statement. Otherwise, it will set \"value\" to None.\nHere is an example of how this code would work:\n# n = 14\nvalue = (14 // 7) + 1 # This evaluates to 2\n\n# n = 100\nvalue = (100 // 7) + 1 # This evaluates to 15\n\n# n = 101\nvalue = (101 // 7) + 1 # This evaluates to None\n\nI hope this helps! Let me know if you have any other questions.\n"
] | [
1,
0
] | [] | [] | [
"if_statement",
"python",
"python_3.x"
] | stackoverflow_0074674910_if_statement_python_python_3.x.txt |
Q:
tf version problem. Using a `tf.Tensor` as a Python `bool` is not allowed in Graph execution
This is the code that creates the problem.
def cost_func(x=None, y=None):
if not x:
tf.compat.v1.disable_eager_execution()
x = tf.compat.v1.placeholder(tf.float32, shape=[None, 1])
if not y:
tf.compat.v1.disable_eager_execution()
y = tf.compat.v1.placeholder(tf.float32, shape=[None, 1])
# two local minima near (0, 0)
# z = __f1(x, y)
# 3rd local minimum at (-0.5, -0.8)
z = -1 * __f2(x, y, x_mean=-0.5, y_mean=-0.8, x_sig=0.35, y_sig=0.35)
# one steep gaussian trench at (0, 0)
# z -= __f2(x, y, x_mean=0, y_mean=0, x_sig=0.2, y_sig=0.2)
# three steep gaussian trenches
z -= __f2(x, y, x_mean=1.0, y_mean=-0.5, x_sig=0.2, y_sig=0.2)
z -= __f2(x, y, x_mean=-1.0, y_mean=0.5, x_sig=0.2, y_sig=0.2)
z -= __f2(x, y, x_mean=-0.5, y_mean=-0.8, x_sig=0.2, y_sig=0.2)
return x, y, z
My goal is:
For visualizing contour plot, call f() and collect placeholder nodes for fast GPU calc.
To incorporate variables to optimize, pass them in as argument to attach as x and y.
Args:
x: None if placeholder tensor is used as input. Specify x to use x as input tensor.
y: None if placeholder tensor is used as input. Specify y to use y as input tensor.
Returns:
Tuple (x, y, z) where x and y are input tensors and z is output tensor.
The error is:
using a tf.Tensor as a Python bool is not allowed in Graph execution. Use Eager execution or decorate this function with @tf.function.
The fully code is:
from matplotlib import cm
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
# cost function
def cost_func(x=None, y=None):
'''Cost function.
For visualizing contour plot, call f() and collect placeholder nodes for fast GPU calc.
To incorporate variables to optimize, pass them in as argument to attach as x and y.
Args:
x: None if placeholder tensor is used as input. Specify x to use x as input tensor.
y: None if placeholder tensor is used as input. Specify y to use y as input tensor.
Returns:
Tuple (x, y, z) where x and y are input tensors and z is output tensor.
'''
if not x:
tf.compat.v1.disable_eager_execution()
x = tf.compat.v1.placeholder(tf.float32, shape=[None, 1])
if not y:
tf.compat.v1.disable_eager_execution()
y = tf.compat.v1.placeholder(tf.float32, shape=[None, 1])
# two local minima near (0, 0)
# z = __f1(x, y)
# 3rd local minimum at (-0.5, -0.8)
z = -1 * __f2(x, y, x_mean=-0.5, y_mean=-0.8, x_sig=0.35, y_sig=0.35)
# one steep gaussian trench at (0, 0)
# z -= __f2(x, y, x_mean=0, y_mean=0, x_sig=0.2, y_sig=0.2)
# three steep gaussian trenches
z -= __f2(x, y, x_mean=1.0, y_mean=-0.5, x_sig=0.2, y_sig=0.2)
z -= __f2(x, y, x_mean=-1.0, y_mean=0.5, x_sig=0.2, y_sig=0.2)
z -= __f2(x, y, x_mean=-0.5, y_mean=-0.8, x_sig=0.2, y_sig=0.2)
return x, y, z
# noisy hills of the cost function
def __f1(x, y):
return -1 * tf.sin(x * x) * tf.cos(3 * y * y) * tf.exp(-(x * y) * (x * y)) - tf.exp(-(x + y) * (x + y))
# bivar gaussian hills of the cost function
def __f2(x, y, x_mean, y_mean, x_sig, y_sig):
normalizing = 1 / (2 * np.pi * x_sig * y_sig)
x_exp = (-1 * tf.square(x - x_mean)) / (2 * tf.square(x_sig))
y_exp = (-1 * tf.square(y - y_mean)) / (2 * tf.square(y_sig))
return normalizing * tf.exp(x_exp + y_exp)
# pyplot settings
plt.ion()
fig = plt.figure(figsize=(3, 2), dpi=300)
ax = fig.add_subplot(111, projection='3d')
plt.subplots_adjust(left=0, bottom=0, right=1, top=1, wspace=0, hspace=0)
params = {'legend.fontsize': 3,
'legend.handlelength': 3}
plt.rcParams.update(params)
plt.axis('off')
# input (x, y) and output (z) nodes of cost-function graph
x, y, z = cost_func()
# visualize cost function as a contour plot
x_val = y_val = np.arange(-1.5, 1.5, 0.005, dtype=np.float32)
x_val_mesh, y_val_mesh = np.meshgrid(x_val, y_val)
x_val_mesh_flat = x_val_mesh.reshape([-1, 1])
y_val_mesh_flat = y_val_mesh.reshape([-1, 1])
with tf.compat.v1.Session() as sess:
z_val_mesh_flat = sess.run(z, feed_dict={x: x_val_mesh_flat, y: y_val_mesh_flat})
z_val_mesh = z_val_mesh_flat.reshape(x_val_mesh.shape)
levels = np.arange(-10, 1, 0.05)
# ax.contour(x_val_mesh, y_val_mesh, z_val_mesh, levels, alpha=.7, linewidths=0.4)
# ax.plot_wireframe(x_val_mesh, y_val_mesh, z_val_mesh, alpha=.5, linewidths=0.4, antialiased=True)
ax.plot_surface(x_val_mesh, y_val_mesh, z_val_mesh, alpha=.4, cmap=cm.coolwarm)
plt.draw()
# starting location for variables
x_i = 0.75
y_i = 1.0
# create variable pair (x, y) for each optimizer
x_var, y_var = [], []
for i in range(7):
x_var.append(tf.Variable(x_i, [1], dtype=tf.float32))
y_var.append(tf.Variable(y_i, [1], dtype=tf.float32))
# create separate graph for each variable pairs
cost = []
for i in range(7):
cost.append(cost_func(x_var[i], y_var[i])[2])
# define method of gradient descent for each graph
# optimizer label name, learning rate, color
ops_param = np.array([['Adadelta', 50.0, 'b'],
['Adagrad', 0.10, 'g'],
['Adam', 0.05, 'r'],
['Ftrl', 0.5, 'c'],
['GD', 0.05, 'm'],
['Momentum', 0.01, 'y'],
['RMSProp', 0.02, 'k']])
ops = []
ops.append(tf.compat.v1.train.AdadeltaOptimizer(float(ops_param[0, 1])).minimize(cost[0]))
ops.append(tf.compat.v1.train.AdagradOptimizer(float(ops_param[1, 1])).minimize(cost[1]))
ops.append(tf.compat.v1.train.AdamOptimizer(float(ops_param[2, 1])).minimize(cost[2]))
ops.append(tf.compat.v1.train.FtrlOptimizer(float(ops_param[3, 1])).minimize(cost[3]))
ops.append(tf.compat.v1.train.GradientDescentOptimizer(float(ops_param[4, 1])).minimize(cost[4]))
ops.append(tf.compat.v1.train.MomentumOptimizer(float(ops_param[5, 1]), momentum=0.95).minimize(cost[5]))
ops.append(tf.compat.v1.train.RMSPropOptimizer(float(ops_param[6, 1])).minimize(cost[6]))
# 3d plot camera zoom, angle
xlm = ax.get_xlim3d()
ylm = ax.get_ylim3d()
zlm = ax.get_zlim3d()
ax.set_xlim3d(xlm[0] * 0.5, xlm[1] * 0.5)
ax.set_ylim3d(ylm[0] * 0.5, ylm[1] * 0.5)
ax.set_zlim3d(zlm[0] * 0.5, zlm[1] * 0.5)
azm = ax.azim
ele = ax.elev + 40
ax.view_init(elev=ele, azim=azm)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# use last location to draw a line to the current location
last_x, last_y, last_z = [], [], []
plot_cache = [None for _ in range(len(ops))]
# loop each step of the optimization algorithm
steps = 1000
for iter in range(steps):
for i, op in enumerate(ops):
# run a step of optimization and collect new x and y variable values
_, x_val, y_val, z_val = sess.run([op, x_var[i], y_var[i], cost[i]])
# move dot to the current value
if plot_cache[i]:
plot_cache[i].remove()
plot_cache[i] = ax.scatter(x_val, y_val, z_val, s=3, depthshade=True, label=ops_param[i, 0],
color=ops_param[i, 2])
# draw a line from the previous value
if iter == 0:
last_z.append(z_val)
last_x.append(x_i)
last_y.append(y_i)
ax.plot([last_x[i], x_val], [last_y[i], y_val], [last_z[i], z_val], linewidth=0.5, color=ops_param[i, 2])
last_x[i] = x_val
last_y[i] = y_val
last_z[i] = z_val
if iter == 0:
legend = np.vstack((ops_param[:, 0], ops_param[:, 1])).transpose()
plt.legend(plot_cache, legend)
plt.savefig('figures/' + str(iter) + '.png')
print('iteration: {}'.format(iter))
plt.pause(0.0001)
print("done")
If x and y are None, placeholder tensors are created and used as input. I have tried converting x and y to other types (such as string) if they are placeholder tensors, and then outputting them directly if they are not placeholder tensors, but I found that this didn't work. Then I tried not using keyword arguments and instead making the determination within the function, and I found that that didn't work either.
As I understand, I will solve this problem if I use tensorflow v1, but I want to use tensorflow v2. Please help me!!!!
A:
The error is not really about which TensorFlow version you use; it comes from using a tf.Tensor in a Python truth test. Once tf.compat.v1.disable_eager_execution() has been called, the variables you pass in via cost_func(x_var[i], y_var[i]) are graph tensors, and evaluating if not x: on a graph tensor is exactly what the message forbids. Test for a missing argument explicitly with if x is None: (and likewise if y is None:) instead of relying on truthiness.
In TensorFlow 2.x, eager execution is enabled by default, so the alternatives from the error message also apply: keep eager execution on, or decorate the function with @tf.function, which converts a Python function into a TensorFlow graph. It accepts parameters, such as input_signature and autograph, that can be used to control how the graph is generated. For more information, you can refer to the official TensorFlow documentation.
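A minimal sketch of the fix, assuming the rest of cost_func stays exactly as in the question:
def cost_func(x=None, y=None):
    if x is None:  # do not use `if not x:` -- x may be a graph tensor here
        tf.compat.v1.disable_eager_execution()
        x = tf.compat.v1.placeholder(tf.float32, shape=[None, 1])
    if y is None:
        tf.compat.v1.disable_eager_execution()
        y = tf.compat.v1.placeholder(tf.float32, shape=[None, 1])
    # ... build z from x and y exactly as before ...
    return x, y, z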
| tf version problem. Using a `tf.Tensor` as a Python `bool` is not allowed in Graph execution | This is the code that creates the problem.
def cost_func(x=None, y=None):
if not x:
tf.compat.v1.disable_eager_execution()
x = tf.compat.v1.placeholder(tf.float32, shape=[None, 1])
if not y:
tf.compat.v1.disable_eager_execution()
y = tf.compat.v1.placeholder(tf.float32, shape=[None, 1])
# two local minima near (0, 0)
# z = __f1(x, y)
# 3rd local minimum at (-0.5, -0.8)
z = -1 * __f2(x, y, x_mean=-0.5, y_mean=-0.8, x_sig=0.35, y_sig=0.35)
# one steep gaussian trench at (0, 0)
# z -= __f2(x, y, x_mean=0, y_mean=0, x_sig=0.2, y_sig=0.2)
# three steep gaussian trenches
z -= __f2(x, y, x_mean=1.0, y_mean=-0.5, x_sig=0.2, y_sig=0.2)
z -= __f2(x, y, x_mean=-1.0, y_mean=0.5, x_sig=0.2, y_sig=0.2)
z -= __f2(x, y, x_mean=-0.5, y_mean=-0.8, x_sig=0.2, y_sig=0.2)
return x, y, z
My goal is:
For visualizing contour plot, call f() and collect placeholder nodes for fast GPU calc.
To incorporate variables to optimize, pass them in as argument to attach as x and y.
Args:
x: None if placeholder tensor is used as input. Specify x to use x as input tensor.
y: None if placeholder tensor is used as input. Specify y to use y as input tensor.
Returns:
Tuple (x, y, z) where x and y are input tensors and z is output tensor.
The error is:
using a tf.Tensor as a Python bool is not allowed in Graph execution. Use Eager execution or decorate this function with @tf.function.
The fully code is:
from matplotlib import cm
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
# cost function
def cost_func(x=None, y=None):
'''Cost function.
For visualizing contour plot, call f() and collect placeholder nodes for fast GPU calc.
To incorporate variables to optimize, pass them in as argument to attach as x and y.
Args:
x: None if placeholder tensor is used as input. Specify x to use x as input tensor.
y: None if placeholder tensor is used as input. Specify y to use y as input tensor.
Returns:
Tuple (x, y, z) where x and y are input tensors and z is output tensor.
'''
if not x:
tf.compat.v1.disable_eager_execution()
x = tf.compat.v1.placeholder(tf.float32, shape=[None, 1])
if not y:
tf.compat.v1.disable_eager_execution()
y = tf.compat.v1.placeholder(tf.float32, shape=[None, 1])
# two local minima near (0, 0)
# z = __f1(x, y)
# 3rd local minimum at (-0.5, -0.8)
z = -1 * __f2(x, y, x_mean=-0.5, y_mean=-0.8, x_sig=0.35, y_sig=0.35)
# one steep gaussian trench at (0, 0)
# z -= __f2(x, y, x_mean=0, y_mean=0, x_sig=0.2, y_sig=0.2)
# three steep gaussian trenches
z -= __f2(x, y, x_mean=1.0, y_mean=-0.5, x_sig=0.2, y_sig=0.2)
z -= __f2(x, y, x_mean=-1.0, y_mean=0.5, x_sig=0.2, y_sig=0.2)
z -= __f2(x, y, x_mean=-0.5, y_mean=-0.8, x_sig=0.2, y_sig=0.2)
return x, y, z
# noisy hills of the cost function
def __f1(x, y):
return -1 * tf.sin(x * x) * tf.cos(3 * y * y) * tf.exp(-(x * y) * (x * y)) - tf.exp(-(x + y) * (x + y))
# bivar gaussian hills of the cost function
def __f2(x, y, x_mean, y_mean, x_sig, y_sig):
normalizing = 1 / (2 * np.pi * x_sig * y_sig)
x_exp = (-1 * tf.square(x - x_mean)) / (2 * tf.square(x_sig))
y_exp = (-1 * tf.square(y - y_mean)) / (2 * tf.square(y_sig))
return normalizing * tf.exp(x_exp + y_exp)
# pyplot settings
plt.ion()
fig = plt.figure(figsize=(3, 2), dpi=300)
ax = fig.add_subplot(111, projection='3d')
plt.subplots_adjust(left=0, bottom=0, right=1, top=1, wspace=0, hspace=0)
params = {'legend.fontsize': 3,
'legend.handlelength': 3}
plt.rcParams.update(params)
plt.axis('off')
# input (x, y) and output (z) nodes of cost-function graph
x, y, z = cost_func()
# visualize cost function as a contour plot
x_val = y_val = np.arange(-1.5, 1.5, 0.005, dtype=np.float32)
x_val_mesh, y_val_mesh = np.meshgrid(x_val, y_val)
x_val_mesh_flat = x_val_mesh.reshape([-1, 1])
y_val_mesh_flat = y_val_mesh.reshape([-1, 1])
with tf.compat.v1.Session() as sess:
z_val_mesh_flat = sess.run(z, feed_dict={x: x_val_mesh_flat, y: y_val_mesh_flat})
z_val_mesh = z_val_mesh_flat.reshape(x_val_mesh.shape)
levels = np.arange(-10, 1, 0.05)
# ax.contour(x_val_mesh, y_val_mesh, z_val_mesh, levels, alpha=.7, linewidths=0.4)
# ax.plot_wireframe(x_val_mesh, y_val_mesh, z_val_mesh, alpha=.5, linewidths=0.4, antialiased=True)
ax.plot_surface(x_val_mesh, y_val_mesh, z_val_mesh, alpha=.4, cmap=cm.coolwarm)
plt.draw()
# starting location for variables
x_i = 0.75
y_i = 1.0
# create variable pair (x, y) for each optimizer
x_var, y_var = [], []
for i in range(7):
x_var.append(tf.Variable(x_i, [1], dtype=tf.float32))
y_var.append(tf.Variable(y_i, [1], dtype=tf.float32))
# create separate graph for each variable pairs
cost = []
for i in range(7):
cost.append(cost_func(x_var[i], y_var[i])[2])
# define method of gradient descent for each graph
# optimizer label name, learning rate, color
ops_param = np.array([['Adadelta', 50.0, 'b'],
['Adagrad', 0.10, 'g'],
['Adam', 0.05, 'r'],
['Ftrl', 0.5, 'c'],
['GD', 0.05, 'm'],
['Momentum', 0.01, 'y'],
['RMSProp', 0.02, 'k']])
ops = []
ops.append(tf.compat.v1.train.AdadeltaOptimizer(float(ops_param[0, 1])).minimize(cost[0]))
ops.append(tf.compat.v1.train.AdagradOptimizer(float(ops_param[1, 1])).minimize(cost[1]))
ops.append(tf.compat.v1.train.AdamOptimizer(float(ops_param[2, 1])).minimize(cost[2]))
ops.append(tf.compat.v1.train.FtrlOptimizer(float(ops_param[3, 1])).minimize(cost[3]))
ops.append(tf.compat.v1.train.GradientDescentOptimizer(float(ops_param[4, 1])).minimize(cost[4]))
ops.append(tf.compat.v1.train.MomentumOptimizer(float(ops_param[5, 1]), momentum=0.95).minimize(cost[5]))
ops.append(tf.compat.v1.train.RMSPropOptimizer(float(ops_param[6, 1])).minimize(cost[6]))
# 3d plot camera zoom, angle
xlm = ax.get_xlim3d()
ylm = ax.get_ylim3d()
zlm = ax.get_zlim3d()
ax.set_xlim3d(xlm[0] * 0.5, xlm[1] * 0.5)
ax.set_ylim3d(ylm[0] * 0.5, ylm[1] * 0.5)
ax.set_zlim3d(zlm[0] * 0.5, zlm[1] * 0.5)
azm = ax.azim
ele = ax.elev + 40
ax.view_init(elev=ele, azim=azm)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# use last location to draw a line to the current location
last_x, last_y, last_z = [], [], []
plot_cache = [None for _ in range(len(ops))]
# loop each step of the optimization algorithm
steps = 1000
for iter in range(steps):
for i, op in enumerate(ops):
# run a step of optimization and collect new x and y variable values
_, x_val, y_val, z_val = sess.run([op, x_var[i], y_var[i], cost[i]])
# move dot to the current value
if plot_cache[i]:
plot_cache[i].remove()
plot_cache[i] = ax.scatter(x_val, y_val, z_val, s=3, depthshade=True, label=ops_param[i, 0],
color=ops_param[i, 2])
# draw a line from the previous value
if iter == 0:
last_z.append(z_val)
last_x.append(x_i)
last_y.append(y_i)
ax.plot([last_x[i], x_val], [last_y[i], y_val], [last_z[i], z_val], linewidth=0.5, color=ops_param[i, 2])
last_x[i] = x_val
last_y[i] = y_val
last_z[i] = z_val
if iter == 0:
legend = np.vstack((ops_param[:, 0], ops_param[:, 1])).transpose()
plt.legend(plot_cache, legend)
plt.savefig('figures/' + str(iter) + '.png')
print('iteration: {}'.format(iter))
plt.pause(0.0001)
print("done")
If x and y are None, placeholder tensors are created and used as input. I have tried converting x and y to other types (such as string) if they are placeholder tensors, and then outputting them directly if they are not placeholder tensors, but I found that this didn't work. Then I tried not using keyword arguments and instead making the determination within the function, and I found that that didn't work either.
As I understand, I will solve this problem if I use tensorflow v1, but I want to use tensorflow v2. Please help me!!!!
| [
"To resolve this error, you can either use eager execution or decorate the function using @tf.function. Eager execution is enabled by default, so if you're using versions of TensorFlow older than 1.10.0, you may need to explicitly enable it in your code. To enable it, you can add the following line of code:\ntf.enable_eager_execution()\nThe @tf.function decorator allows for the conversion of a Python function into a TensorFlow graph. It accepts parameters, such as input signature and autograph, that can be used to control how the graph is generated. For more information, you can refer to the official TensorFlow documentation\n"
] | [
0
] | [] | [] | [
"python",
"tensorflow"
] | stackoverflow_0074675427_python_tensorflow.txt |
Q:
count number specific value within columns for each row in pandas
Hello I have a dataframe such as :
Species COL1 COL2 COL3 COL4 COL5
SP1 0 0 0 1-2 0-1-2
SP2 1-2 2 0 1 0
SP3 0-1 1 2 0 1-2
and I would like to add new columns to count for each row the number of specific unique values such as :
Species COL1 COL2 COL3 COL4 COL5 count_0 count_1-2 count_0-1-2 count_1 count_2
SP1 0 0 0 1-2 0-1-2 3 1 1 0 0
SP2 1-2 2 0 1 0 2 1 0 1 1
SP3 0-1 1 2 0 1-2 1 1 0 2 1
Does someone have an idea, please?
A:
You can use the value_counts() method in the pandas library to count the number of occurrences of each unique value in each row of your dataframe. Note that the cell values in your example are strings, so the counts have to be looked up with string keys ('0', '1-2', ...); Series.get() returns the fallback 0 when a value does not occur in the row.
# Loop through each row of the dataframe
for index, row in df.iterrows():
    # Drop the Species label so only the COL* values are counted
    series = row.drop('Species')

    # Count the number of occurrences of each unique value in the row
    counts = series.value_counts()

    # Add the count values to the current row of the dataframe
    df.loc[index, 'count_0'] = counts.get('0', 0)
    df.loc[index, 'count_1-2'] = counts.get('1-2', 0)
    df.loc[index, 'count_0-1-2'] = counts.get('0-1-2', 0)
    df.loc[index, 'count_1'] = counts.get('1', 0)
    df.loc[index, 'count_2'] = counts.get('2', 0)
A:
Example
data = {'Species': {0: 'SP1', 1: 'SP2', 2: 'SP3'},
'COL1': {0: '0', 1: '1-2', 2: '0-1'},
'COL2': {0: '0', 1: '2', 2: '1'},
'COL3': {0: '0', 1: '0', 2: '2'},
'COL4': {0: '1-2', 1: '1', 2: '0'},
'COL5': {0: '0-1-2', 1: '0', 2: '1-2'}}
df = pd.DataFrame(data)
Code
df1 = (df.set_index('Species').apply(lambda x: x.value_counts(), axis=1)
.add_prefix('count_').fillna(0).astype('int'))
df1
count_0 count_0-1 count_0-1-2 count_1 count_1-2 count_2
Species
SP1 3 0 1 0 1 0
SP2 2 0 0 1 1 1
SP3 1 1 0 1 1 1
To make the desired output, concatenate df and df1:
pd.concat([df.set_index('Species'), df1], axis=1)
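A quick check of the result (the reset_index() call is only there to bring Species back as a regular column):
out = pd.concat([df.set_index('Species'), df1], axis=1).reset_index()
print(out.columns.tolist())
# ['Species', 'COL1', 'COL2', 'COL3', 'COL4', 'COL5',
#  'count_0', 'count_0-1', 'count_0-1-2', 'count_1', 'count_1-2', 'count_2']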
| count number specific value within columns for each row in pandas | Hello I have a dataframe such as :
Species COL1 COL2 COL3 COL4 COL5
SP1 0 0 0 1-2 0-1-2
SP2 1-2 2 0 1 0
SP3 0-1 1 2 0 1-2
and I would like to add new columns to count for each row the number of specific unique values such as :
Species COL1 COL2 COL3 COL4 COL5 count_0 count_1-2 count_0-1-2 count_1 count_2
SP1 0 0 0 1-2 0-1-2 3 1 1 0 0
SP2 1-2 2 0 1 0 2 1 0 1 1
SP3 0-1 1 2 0 1-2 1 1 0 2 1
Does someone have an idea, please?
| [
"You can use the value_counts() method in the pandas library to count the number of occurrences of each unique value in each row of your dataframe.\n# Loop through each row of the dataframe\nfor index, row in df.iterrows():\n # Create a series object for the current row\n series = pd.Series(row)\n\n # Count the number of occurrences of each unique value in the row\n counts = series.value_counts()\n\n # Add the count values to the current row of the dataframe\n df.loc[index, 'count_0'] = counts[0] if 0 in counts else 0\n df.loc[index, 'count_1-2'] = counts['1-2'] if '1-2' in counts else 0\n df.loc[index, 'count_0-1-2'] = counts['0-1-2'] if '0-1-2' in counts else 0\n df.loc[index, 'count_1'] = counts[1] if 1 in counts else 0\n df.loc[index, 'count_2'] = counts[2] if 2 in counts else 0\n\n",
"Example\ndata = {'Species': {0: 'SP1', 1: 'SP2', 2: 'SP3'},\n 'COL1': {0: '0', 1: '1-2', 2: '0-1'},\n 'COL2': {0: '0', 1: '2', 2: '1'},\n 'COL3': {0: '0', 1: '0', 2: '2'},\n 'COL4': {0: '1-2', 1: '1', 2: '0'},\n 'COL5': {0: '0-1-2', 1: '0', 2: '1-2'}}\ndf = pd.DataFrame(data)\n\nCode\ndf1 = (df.set_index('Species').apply(lambda x: x.value_counts(), axis=1)\n .add_prefix('count_').fillna(0).astype('int'))\n\ndf1\n count_0 count_0-1 count_0-1-2 count_1 count_1-2 count_2\nSpecies \nSP1 3 0 1 0 1 0\nSP2 2 0 0 1 1 1\nSP3 1 1 0 1 1 1\n\nmake desired output\nconcat df & df1\npd.concat([df.set_index('Species'), df1], axis=1)\n\n"
] | [
0,
0
] | [] | [] | [
"pandas",
"python",
"python_3.x"
] | stackoverflow_0074675276_pandas_python_python_3.x.txt |
Q:
Quicksort Algorithm: Need to know the time complexity of the following code and if it is optimized or not
I was practicing the Quicksort algorithm and I suddenly came up with this solution. I need to know the time and space complexity of this algorithm, and whether it is optimized or not.
def quickSort(arr):
if len(arr) < 2:
return arr
else:
pivot_index = 0
swap_index = 0
for i in range(1, len(arr)):
if arr[pivot_index] > arr[i]:
swap_index += 1
if not swap_index == i:
arr[i], arr[swap_index] = arr[swap_index], arr[i]
arr[pivot_index], arr[swap_index] = arr[swap_index], arr[pivot_index]
return quickSort(arr[:swap_index]) + [arr[swap_index]] + quickSort(arr[swap_index + 1:])
print(quickSort([1,3,3,3]))
| Quicksort Algorithm: Need to know the time complexity of the following code and if it is optimized or not | I was practicing the Quicksort algorithm and I suddenly came up with this solution. I need to know the time and space complexity of this algorithm, and whether it is optimized or not.
def quickSort(arr):
if len(arr) < 2:
return arr
else:
pivot_index = 0
swap_index = 0
for i in range(1, len(arr)):
if arr[pivot_index] > arr[i]:
swap_index += 1
if not swap_index == i:
arr[i], arr[swap_index] = arr[swap_index], arr[i]
arr[pivot_index], arr[swap_index] = arr[swap_index], arr[pivot_index]
return quickSort(arr[:swap_index]) + [arr[swap_index]] + quickSort(arr[swap_index + 1:])
print(quickSort([1,3,3,3]))
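One way to sanity-check the growth rate empirically (the sizes below are arbitrary; timeit and random are only used for measurement):
import random
import timeit
for n in (1000, 2000, 4000, 8000):
    data = [random.randint(0, n) for _ in range(n)]
    t = timeit.timeit(lambda: quickSort(data), number=10)
    print(n, round(t, 4))  # slightly-more-than-doubling times suggest O(n log n) on average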
| [] | [] | [
"These pages explain it pretty good. Also with some graph plotting of your time and space measurement of your sort function, you would be able to analyse it by yourself.\nhttps://www.geeksforgeeks.org/time-complexity-and-space-complexity/\nhttps://iq.opengenus.org/time-and-space-complexity-of-quick-sort/\n"
] | [
-1
] | [
"python",
"quicksort"
] | stackoverflow_0074675096_python_quicksort.txt |
Q:
Python Executable Crashes in Conda Environment
Let's say I have two files we'll call test1.py and test2.py, and I want to run both of these files as executables. I'm familiar with the standard procedure of adding a shebang followed by the path to the desired python interpreter and then running chmod u="rwx" file.py.
I also know that when using conda, each environment gets its own unique interpreter with which to run scripts. So naturally, I activate my environment, run which python and add that command's output to my script like so...
test1.py
#!/home/my_name/anaconda3/envs/env_name/bin/python
print("foo")
Which when I run it as ./test1.py gives me the following error...
./test1.py: line 2: syntax error near unexpected token `"foo"'
./test1.py: line 2: `print("foo")'
However simply running python test1.py gives...
foo
Now let's say I return to my base environment and following the same procedure as above, I create the following script...
test2.py
#!/home/my_name/anaconda3/bin/python
print("foo")
This script runs without error and gives the correct output regardless of whether or not I run it as an executable.
What do I need to do in order to run my python scripts without these errors?
EDIT
Running which python in env_name gives
/home/my_name/anaconda3/envs/env_name/bin/python
Whereas running the same command in base gives
/home/my_name/anaconda3/bin/python
A:
I had the same issue when line 1 was empty, and the interpreter was set in line 2. This results in bash assuming that it's a bash script, and as a result, you get a "syntax error" from trying to execute python commands in bash.
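A minimal check, reusing the interpreter path from the question -- the shebang must be the very first bytes of the file, with no blank line (or BOM) above it:
#!/home/my_name/anaconda3/envs/env_name/bin/python
print("foo")  # line 1 is the shebang, so the kernel hands the file to python instead of bash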
| Python Executable Crashes in Conda Environment | Let's say I have two files we'll call test1.py and test2.py, and I want to run both of these files as executables. I'm familiar with the standard procedure of adding a shebang followed by the path to the desired python interpreter and then running chmod u="rwx" file.py.
I also know that when using conda, each environment gets its own unique interpreter with which to run scripts. So naturally, I activate my environment, run which python and add that command's output to my script like so...
test1.py
#!/home/my_name/anaconda3/envs/env_name/bin/python
print("foo")
Which when I run it as ./test1.py gives me the following error...
./test1.py: line 2: syntax error near unexpected token `"foo"'
./test1.py: line 2: `print("foo")'
However simply running python test1.py gives...
foo
Now let's say I return to my base environment and following the same procedure as above, I create the following script...
test2.py
#!/home/my_name/anaconda3/bin/python
print("foo")
This script runs without error and gives the correct output regardless of whether or not I run it as an executable.
What do I need to do in order to run my python scripts without these errors?
EDIT
Running which python in env_name gives
/home/my_name/anaconda3/envs/env_name/bin/python
Whereas running the same command in base gives
/home/my_name/anaconda3/bin/python
| [
"I had the same issue when line 1 was empty, and the interpreter was set in line 2. This results in bash assuming that it's a bash script, and as a result, you get a \"syntax error\" from trying to execute python commands in bash.\n"
] | [
0
] | [] | [] | [
"conda",
"executable",
"python"
] | stackoverflow_0071374828_conda_executable_python.txt |
Q:
Converting an RGB color tuple to a hexadecimal string
I need to convert (0, 128, 64) to something like this "#008040". I'm not sure what to call the latter, making searching difficult.
A:
Use the format operator %:
>>> '#%02x%02x%02x' % (0, 128, 64)
'#008040'
Note that it won't check bounds...
>>> '#%02x%02x%02x' % (0, -1, 9999)
'#00-1270f'
A:
def clamp(x):
return max(0, min(x, 255))
"#{0:02x}{1:02x}{2:02x}".format(clamp(r), clamp(g), clamp(b))
This uses the preferred method of string formatting, as described in PEP 3101. It also uses min() and max to ensure that 0 <= {r,g,b} <= 255.
Update added the clamp function as suggested below.
Update From the title of the question and the context given, it should be obvious that this expects 3 ints in [0,255] and will always return a color when passed 3 such ints. However, from the comments, this may not be obvious to everyone, so let it be explicitly stated:
Provided three int values, this will return a valid hex triplet representing a color. If those values are between [0,255], then it will treat those as RGB values and return the color corresponding to those values.
A:
I have created a full Python program for it; the following functions can convert RGB to hex and vice versa.
def rgb2hex(r,g,b):
return "#{:02x}{:02x}{:02x}".format(r,g,b)
def hex2rgb(hexcode):
return tuple(map(ord,hexcode[1:].decode('hex')))
You can see the full code and tutorial at the following link : RGB to Hex and Hex to RGB conversion using Python
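Note that the decode('hex') call above only exists in Python 2; a Python 3 equivalent (same behaviour, assuming the input has a leading '#') would be:
def hex2rgb(hexcode):
    # '#008040' -> (0, 128, 64): parse each two-character channel as base-16
    return tuple(int(hexcode[i:i + 2], 16) for i in (1, 3, 5))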
A:
This is an old question, but for information: I developed a package with some utilities related to colors and colormaps, and it contains the rgb2hex function you were looking for to convert a triplet into a hex value (which can also be found in many other packages, e.g. matplotlib). It's on pypi
pip install colormap
and then
>>> from colormap import rgb2hex
>>> rgb2hex(0, 128, 64)
'#008040'
Validity of the inputs is checked (values must be between 0 and 255).
A:
I'm truly surprised no one suggested this approach:
For Python 2 and 3:
'#' + ''.join('{:02X}'.format(i) for i in colortuple)
Python 3.6+:
'#' + ''.join(f'{i:02X}' for i in colortuple)
As a function:
def hextriplet(colortuple):
return '#' + ''.join(f'{i:02X}' for i in colortuple)
color = (0, 128, 64)
print(hextriplet(color))
#008040
A:
triplet = (0, 128, 64)
print '#'+''.join(map(chr, triplet)).encode('hex')
or
from struct import pack
print '#'+pack("BBB",*triplet).encode('hex')
python3 is slightly different
from base64 import b16encode
print(b'#'+b16encode(bytes(triplet)))
A:
you can use lambda and f-strings(available in python 3.6+)
rgb2hex = lambda r,g,b: f"#{r:02x}{g:02x}{b:02x}"
hex2rgb = lambda hx: (int(hx[0:2],16),int(hx[2:4],16),int(hx[4:6],16))
usage
rgb2hex(r,g,b) #output = #hexcolor
hex2rgb("#hex") #output = (r,g,b) hexcolor must be in #hex format
A:
In Python 3.6, you can use f-strings to make this cleaner:
rgb = (0,128, 64)
f'#{rgb[0]:02x}{rgb[1]:02x}{rgb[2]:02x}'
Of course you can put that into a function, and as a bonus, values get rounded and converted to int:
def rgb2hex(r,g,b):
return f'#{int(round(r)):02x}{int(round(g)):02x}{int(round(b)):02x}'
rgb2hex(*rgb)
A:
Here is a more complete function for handling situations in which you may have RGB values in the range [0,1] or the range [0,255].
def RGBtoHex(vals, rgbtype=1):
"""Converts RGB values in a variety of formats to Hex values.
@param vals An RGB/RGBA tuple
@param rgbtype Valid values are:
1 - Inputs are in the range 0 to 1
256 - Inputs are in the range 0 to 255
@return A hex string in the form '#RRGGBB' or '#RRGGBBAA'
"""
if len(vals)!=3 and len(vals)!=4:
raise Exception("RGB or RGBA inputs to RGBtoHex must have three or four elements!")
if rgbtype!=1 and rgbtype!=256:
raise Exception("rgbtype must be 1 or 256!")
#Convert from 0-1 RGB/RGBA to 0-255 RGB/RGBA
if rgbtype==1:
vals = [255*x for x in vals]
#Ensure values are rounded integers, convert to hex, and concatenate
return '#' + ''.join(['{:02X}'.format(int(round(x))) for x in vals])
print(RGBtoHex((0.1,0.3, 1)))
print(RGBtoHex((0.8,0.5, 0)))
print(RGBtoHex(( 3, 20,147), rgbtype=256))
print(RGBtoHex(( 3, 20,147,43), rgbtype=256))
A:
Note that this only works with python3.6 and above.
def rgb2hex(color):
"""Converts a list or tuple of color to an RGB string
Args:
color (list|tuple): the list or tuple of integers (e.g. (127, 127, 127))
Returns:
str: the rgb string
"""
return f"#{''.join(f'{hex(c)[2:].upper():0>2}' for c in color)}"
The above is the equivalent of:
def rgb2hex(color):
string = '#'
for value in color:
hex_string = hex(value) # e.g. 0x7f
reduced_hex_string = hex_string[2:] # e.g. 7f
capitalized_hex_string = reduced_hex_string.upper() # e.g. 7F
string += capitalized_hex_string # e.g. #7F7F7F
return string
A:
You can also use bitwise operators, which is pretty efficient, even though I doubt you'd be worried about efficiency with something like this. It's also relatively clean. Note it doesn't clamp or check bounds. This has been supported since at least Python 2.7.17.
hex(r << 16 | g << 8 | b)
And to change it so it starts with a # you can do:
"#" + hex(243 << 16 | 103 << 8 | 67)[2:]
A:
def RGB(red,green,blue): return '#%02x%02x%02x' % (red,green,blue)
background = RGB(0, 128, 64)
I know one-liners in Python aren't necessarily looked upon kindly. But there are times where I can't resist taking advantage of what the Python parser does allow. It's the same answer as Dietrich Epp's solution (the best), but wrapped up in a single line function. So, thank you Dietrich!
I'm using it now with tkinter :-)
A:
There is a package called webcolors. https://github.com/ubernostrum/webcolors
It has a method webcolors.rgb_to_hex
>>> import webcolors
>>> webcolors.rgb_to_hex((12,232,23))
'#0ce817'
A:
''.join('%02x'%i for i in input)
can be used for hex conversion from integer values
A:
If typing the formatting string three times seems a bit verbose...
The combination of bit shifts and an f-string will do the job nicely:
# Example setup.
>>> r, g, b = 0, 0, 195
# Create the hex string.
>>> f'#{r << 16 | g << 8 | b:06x}'
'#0000c3'
This also illustrates a method by which 'leading' zero bits are not dropped, if either the red or green channels are zero.
A:
Use this line
'#%02X%02X%02X' % (r,g,b)
A:
# My course task required doing this without using for loops and other stuff, so here is my bizarre solution lol.
color1 = int(input())
color2 = int(input())
color3 = int(input())
color1 = hex(color1).upper()
color2 = hex(color2).upper()
color3 = hex(color3).upper()
print('#'+ color1[2:].zfill(2)+color2[2:].zfill(2)+color3[2:].zfill(2))
| Converting an RGB color tuple to a hexadecimal string | I need to convert (0, 128, 64) to something like this "#008040". I'm not sure what to call the latter, making searching difficult.
| [
"Use the format operator %:\n>>> '#%02x%02x%02x' % (0, 128, 64)\n'#008040'\n\nNote that it won't check bounds...\n>>> '#%02x%02x%02x' % (0, -1, 9999)\n'#00-1270f'\n\n",
"def clamp(x): \n return max(0, min(x, 255))\n\n\"#{0:02x}{1:02x}{2:02x}\".format(clamp(r), clamp(g), clamp(b))\n\nThis uses the preferred method of string formatting, as described in PEP 3101. It also uses min() and max to ensure that 0 <= {r,g,b} <= 255.\nUpdate added the clamp function as suggested below.\nUpdate From the title of the question and the context given, it should be obvious that this expects 3 ints in [0,255] and will always return a color when passed 3 such ints. However, from the comments, this may not be obvious to everyone, so let it be explicitly stated: \n\nProvided three int values, this will return a valid hex triplet representing a color. If those values are between [0,255], then it will treat those as RGB values and return the color corresponding to those values.\n\n",
"I have created a full python program for it the following functions can convert rgb to hex and vice versa.\ndef rgb2hex(r,g,b):\n return \"#{:02x}{:02x}{:02x}\".format(r,g,b)\n\ndef hex2rgb(hexcode):\n return tuple(map(ord,hexcode[1:].decode('hex')))\n\nYou can see the full code and tutorial at the following link : RGB to Hex and Hex to RGB conversion using Python\n",
"This is an old question but for information, I developed a package with some utilities related to colors and colormaps and contains the rgb2hex function you were looking to convert triplet into hexa value (which can be found in many other packages, e.g. matplotlib). It's on pypi\npip install colormap\n\nand then\n>>> from colormap import rgb2hex\n>>> rgb2hex(0, 128, 64)\n'##008040'\n\nValidity of the inputs is checked (values must be between 0 and 255).\n",
"I'm truly surprised no one suggested this approach:\nFor Python 2 and 3:\n'#' + ''.join('{:02X}'.format(i) for i in colortuple)\n\nPython 3.6+:\n'#' + ''.join(f'{i:02X}' for i in colortuple)\n\nAs a function:\ndef hextriplet(colortuple):\n return '#' + ''.join(f'{i:02X}' for i in colortuple)\n\ncolor = (0, 128, 64)\nprint(hextriplet(color))\n\n#008040\n\n",
"triplet = (0, 128, 64)\nprint '#'+''.join(map(chr, triplet)).encode('hex')\n\nor\nfrom struct import pack\nprint '#'+pack(\"BBB\",*triplet).encode('hex')\n\npython3 is slightly different\nfrom base64 import b16encode\nprint(b'#'+b16encode(bytes(triplet)))\n\n",
"you can use lambda and f-strings(available in python 3.6+)\nrgb2hex = lambda r,g,b: f\"#{r:02x}{g:02x}{b:02x}\"\nhex2rgb = lambda hx: (int(hx[0:2],16),int(hx[2:4],16),int(hx[4:6],16))\n\nusage\nrgb2hex(r,g,b) #output = #hexcolor\n hex2rgb(\"#hex\") #output = (r,g,b) hexcolor must be in #hex format\n",
"In Python 3.6, you can use f-strings to make this cleaner:\nrgb = (0,128, 64)\nf'#{rgb[0]:02x}{rgb[1]:02x}{rgb[2]:02x}'\n\n\nOf course you can put that into a function, and as a bonus, values get rounded and converted to int: \ndef rgb2hex(r,g,b):\n return f'#{int(round(r)):02x}{int(round(g)):02x}{int(round(b)):02x}'\n\nrgb2hex(*rgb)\n\n",
"Here is a more complete function for handling situations in which you may have RGB values in the range [0,1] or the range [0,255].\ndef RGBtoHex(vals, rgbtype=1):\n \"\"\"Converts RGB values in a variety of formats to Hex values.\n\n @param vals An RGB/RGBA tuple\n @param rgbtype Valid valus are:\n 1 - Inputs are in the range 0 to 1\n 256 - Inputs are in the range 0 to 255\n\n @return A hex string in the form '#RRGGBB' or '#RRGGBBAA'\n\"\"\"\n\n if len(vals)!=3 and len(vals)!=4:\n raise Exception(\"RGB or RGBA inputs to RGBtoHex must have three or four elements!\")\n if rgbtype!=1 and rgbtype!=256:\n raise Exception(\"rgbtype must be 1 or 256!\")\n\n #Convert from 0-1 RGB/RGBA to 0-255 RGB/RGBA\n if rgbtype==1:\n vals = [255*x for x in vals]\n\n #Ensure values are rounded integers, convert to hex, and concatenate\n return '#' + ''.join(['{:02X}'.format(int(round(x))) for x in vals])\n\nprint(RGBtoHex((0.1,0.3, 1)))\nprint(RGBtoHex((0.8,0.5, 0)))\nprint(RGBtoHex(( 3, 20,147), rgbtype=256))\nprint(RGBtoHex(( 3, 20,147,43), rgbtype=256))\n\n",
"Note that this only works with python3.6 and above.\ndef rgb2hex(color):\n \"\"\"Converts a list or tuple of color to an RGB string\n\n Args:\n color (list|tuple): the list or tuple of integers (e.g. (127, 127, 127))\n\n Returns:\n str: the rgb string\n \"\"\"\n return f\"#{''.join(f'{hex(c)[2:].upper():0>2}' for c in color)}\"\n\nThe above is the equivalent of:\ndef rgb2hex(color):\n string = '#'\n for value in color:\n hex_string = hex(value) # e.g. 0x7f\n reduced_hex_string = hex_string[2:] # e.g. 7f\n capitalized_hex_string = reduced_hex_string.upper() # e.g. 7F\n string += capitalized_hex_string # e.g. #7F7F7F\n return string\n\n",
"You can also use bit wise operators which is pretty efficient, even though I doubt you'd be worried about efficiency with something like this. It's also relatively clean. Note it doesn't clamp or check bounds. This has been supported since at least Python 2.7.17.\nhex(r << 16 | g << 8 | b)\n\nAnd to change it so it starts with a # you can do:\n\"#\" + hex(243 << 16 | 103 << 8 | 67)[2:]\n\n",
"def RGB(red,green,blue): return '#%02x%02x%02x' % (red,green,blue)\n\nbackground = RGB(0, 128, 64)\n\nI know one-liners in Python aren't necessarily looked upon kindly. But there are times where I can't resist taking advantage of what the Python parser does allow. It's the same answer as Dietrich Epp's solution (the best), but wrapped up in a single line function. So, thank you Dietrich!\nI'm using it now with tkinter :-)\n",
"There is a package called webcolors. https://github.com/ubernostrum/webcolors\nIt has a method webcolors.rgb_to_hex\n>>> import webcolors\n>>> webcolors.rgb_to_hex((12,232,23))\n'#0ce817'\n\n",
"''.join('%02x'%i for i in input)\n\ncan be used for hex conversion from int number\n",
"If typing the formatting string three times seems a bit verbose...\nThe combination of bit shifts and an f-string will do the job nicely:\n# Example setup.\n>>> r, g, b = 0, 0, 195\n\n# Create the hex string.\n>>> f'#{r << 16 | g << 8 | b:06x}'\n'#0000c3'\n\nThis also illustrates a method by which 'leading' zero bits are not dropped, if either the red or green channels are zero.\n",
"Use this line\n'#%02X%02X%02X' % (r,g,b)\n",
"#My course task required doing this without using for loops and other stuff,so here is my bizarre solution lol.\ncolor1 = int(input())\ncolor2 = int(input())\ncolor3 = int(input())\n\ncolor1 = hex(color1).upper()\ncolor2 = hex(color2).upper()\ncolor3 = hex(color3).upper()\n\n\nprint('#'+ color1[2:].zfill(2)+color2[2:].zfill(2)+color3[2:].zfill(2))\n\n"
] | [
257,
67,
24,
23,
12,
10,
6,
5,
4,
4,
3,
2,
1,
0,
0,
0,
0
] | [] | [] | [
"colors",
"hex",
"python",
"rgb"
] | stackoverflow_0003380726_colors_hex_python_rgb.txt |
Q:
length of the longest substring of a given string so that a rearrangement of its characters forms a PALINDROME
Only lower case string as input.
Only words as input
Invalid if characters like "@","#"... are present
Find the length of the longest substring of a given string so that the characters in it can be rearranged to form a palindrome.
Output the length.
I am unable to put it in terms of programming in Python.
Please help.
My line of thinking was to keep an initial counter at 1 (as even a word with completely different letters will have 1 by default),
then add 2 for every matching pair of letters.
Sample input: "letter"
Sample output: 5  # 1 (by default) + 2 (for two "t"s) + 2 (for two "e"s)
A:
You can use this code to figure out the length of the longest substring which can be rearranged to form a palindrome:
def longestSubstring(s: str):
# To keep track of the last
# index of each xor
n = len(s)
index = dict()
# Initialize answer with 0
answer = 0
mask = 0
index[mask] = -1
# Now iterate through each character
# of the string
for i in range(n):
# Convert the character from
# [a, z] to [0, 25]
temp = ord(s[i]) - 97
# Turn the temp-th bit on if
# character occurs odd number
# of times and turn off the temp-th
# bit off if the character occurs
# even number of times
mask ^= (1 << temp)
# If a mask is present in the index
# Therefore a palindrome is
# found from index[mask] to i
if mask in index.keys():
answer = max(answer,
i - index[mask])
# If x is not found then add its
# position in the index dict.
else:
index[mask] = i
# Check for the palindrome of
# odd length
for j in range(26):
# We cancel the occurrence
# of a character if it occurs
# odd number times
mask2 = mask ^ (1 << j)
if mask2 in index.keys():
answer = max(answer,
i - index[mask2])
return answer
This algorithm runs in O(N*26). The XOR mask tracks whether each character has appeared an even or odd number of times: at every position in the string, the mask encodes the parity of every character seen so far. If the same mask has been seen before, the substring between those two positions contains an even count of every character, so its characters can be rearranged into a palindrome. The inner loop over the 26 letters handles odd-length palindromes: it flips each character's parity bit in turn, which is equivalent to allowing exactly one character (the one that would sit in the middle of an odd-length palindrome) to appear an odd number of times.
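A quick sanity check against the sample input from the question (this should print 5, matching the expected output):
print(longestSubstring("letter"))  # 5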
Edit: here is the link to the original code and explanation.
| length of the longest substring of a given string so that a rearrangement of its characters forms a PALINDROME |
Only lower case string as input.
Only words as input
Invalid if characters like "@","#"... are present
Find the length of the longest substring of a given string so that the characters in it can be rearranged to form a palindrome.
Output the length.
I am unable to put it in terms of programming in Python.
Please help.
My line of thinking was to keep an initial counter at 1 (as even a word with completely different letters will have 1 by default),
then add 2 for every matching pair of letters.
Sample input: "letter"
Sample output: 5  # 1 (by default) + 2 (for two "t"s) + 2 (for two "e"s)
| [
"You can use this code to figure out the length of the longest substring which can be rearranged to form a palindrome:\ndef longestSubstring(s: str):\n \n # To keep track of the last\n # index of each xor\n n = len(s)\n index = dict()\n \n # Initialize answer with 0\n answer = 0\n \n mask = 0\n index[mask] = -1\n \n # Now iterate through each character\n # of the string\n for i in range(n):\n \n # Convert the character from\n # [a, z] to [0, 25]\n temp = ord(s[i]) - 97\n \n # Turn the temp-th bit on if\n # character occurs odd number\n # of times and turn off the temp-th\n # bit off if the character occurs\n # even number of times\n mask ^= (1 << temp)\n \n # If a mask is present in the index\n # Therefore a palindrome is\n # found from index[mask] to i\n if mask in index.keys():\n answer = max(answer,\n i - index[mask])\n \n # If x is not found then add its\n # position in the index dict.\n else:\n index[mask] = i\n \n # Check for the palindrome of\n # odd length\n for j in range(26):\n \n # We cancel the occurrence\n # of a character if it occurs\n # odd number times\n mask2 = mask ^ (1 << j)\n if mask2 in index.keys():\n \n answer = max(answer,\n i - index[mask2])\n \n return answer\n\nThis algorithm is basically O(N*26). XOR basically checks if the amount of a certain character is even or odd. For every character in your string there is a certain XOR sequence of every character up to that point which tells you which characters have appeared an odd number of time and which characters have appeared an even number of times. If the same sequence has already been encountered in the past then you know that you have found a palindrome, because you have become back to where you started in the XOR sequence aka there are an even number of every character which appears between this point and the start point. If there is an even number of every character which appear between two points in the string, then you can form a palindrome out of them. The odd length check is just a special case to check for palindromes which are of odd length. It works by just pretending sequentially if a character appears an odd number of times, that it just assumes it to occur an even number of times to handle the special case of the character in the middle of an odd length palindrome.\nEdit: here is the link to the original code and explanation.\n"
] | [
0
] | [] | [] | [
"python"
] | stackoverflow_0074674638_python.txt |
Q:
Python CSV writer creates new line for each data item
I'm running Python 3.9.2 on a Raspberry Pi and I've written a script that will read water temperatures from my boiler and write them to a CSV file. However, each data item in the output file appears on a new line.
Here's my code:
from subprocess import check_output
import csv
header = ['Flow', 'Return']
cmd = ["/usr/bin/ebusctl", "read", "-f"]
data = [
[float(check_output([*cmd, "FlowTemp", "temp"]))],
[float(check_output([*cmd, "ReturnTemp", "temp"]))]
]
with open('sandbox.csv', 'w', encoding='utf8', newline='') as f:
writer = csv.writer(f)
writer.writerow(header)
writer.writerows(data)
And here's the content of sandbox.csv:
Flow,Return
57.19
43.12
How could I fix this?
A:
CSV files contain one record per row.
The writerow method writes one record to the file which it expects to be represented as one list or tuple of values, e.g. for one record a, b, c:
[a, b, c]
The writerows method does the same for multiple records at once, which it expects to be represented as a list or tuple of records, e.g. for two records a, b, c and x, y, z:
[[a, b, c], [x, y, z]]
The data you are passing to writerows is structured like this:
[[Flow], [Return]]
This means that writerows will write two records to the file (each in one line), each containing only one value.
If you intend Flow, Return to be one record (written to one line), your data needs to be structured differently:
Either you create data = [Flow, Return] (a single record) and pass it to writerow.
Or you create data = [[Flow, Return]] (a list of records, containing only one record) and pass it to writerows.
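For example, a minimal sketch of both options, assuming flow and ret are hypothetical variables already holding the two float readings:
# Option 1: one record, written with writerow
data = [flow, ret]
writer.writerow(data)    # -> 57.19,43.12

# Option 2: a list containing one record, written with writerows
data = [[flow, ret]]
writer.writerows(data)   # -> 57.19,43.12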
A:
To fix the issue with each data item appearing on a new line in your CSV file, you will need to modify the way you are writing the data to the file. Currently, you are using the "writerows" method of the CSV writer to write the data, which writes each element of the "data" list as a separate row in the CSV file.
To write the data as a single row in the CSV file, you will need to modify your code to use the "writerow" method of the CSV writer instead. This method takes a single argument, which is a list of values to write to the current row in the CSV file.
Here is how you could modify your code to use the "writerow" method to write the data as a single row in the CSV file:
from subprocess import check_output
import csv
header = ['Flow', 'Return']
cmd = ["/usr/bin/ebusctl", "read", "-f"]
data = [
float(check_output([*cmd, "FlowTemp", "temp"])),
float(check_output([*cmd, "ReturnTemp", "temp"]))
]
with open('sandbox.csv', 'w', encoding='utf8', newline='') as f:
writer = csv.writer(f)
writer.writerow(header)
writer.writerow(data)
In this modified code, the "data" variable is defined as a list of the values to write to the CSV file, rather than a list of lists. This allows the "writerow" method to write the values as a single row in the CSV file, rather than as separate rows.
| Python CSV writer creates new line for each data item | I'm running Python 3.9.2 on a Raspberry Pi and I've written a script that will read water temperatures from my boiler and write them to a CSV file. However, each data item in the output file appears on a new line.
Here's my code:
from subprocess import check_output
import csv
header = ['Flow', 'Return']
cmd = ["/usr/bin/ebusctl", "read", "-f"]
data = [
[float(check_output([*cmd, "FlowTemp", "temp"]))],
[float(check_output([*cmd, "ReturnTemp", "temp"]))]
]
with open('sandbox.csv', 'w', encoding='utf8', newline='') as f:
writer = csv.writer(f)
writer.writerow(header)
writer.writerows(data)
And here's the content of sandbox.csv:
Flow,Return
57.19
43.12
How could I fix this?
| [
"CSV files contain one record per row.\nThe writerow method writes one record to the file which it expects to be represented as one list or tuple of values, e.g. for one record a, b, c:\n[a, b, c]\n\nThe writerows method does the same for multiple records at once, which it expects to be represented as a list or tuple of records, e.g. for two records a, b, c and x, y, z:\n[[a, b, c], [x, y, z]]\n\nThe data you are passing to writerows is structured like this:\n[[Flow], [Return]]\n\nThis means that writerows will write two records to the file (each in one line), each containing only one value.\nIf you intend Flow, Return to be one record (written to one line), your data needs to be structured differently:\n\nEither you create data = [Flow, Return] (a single record) and pass it to writerow.\n\nOr you create data = [[Flow, Return]] (a list of records, containing only one record) and pass it to writerows.\n\n\n",
"To fix the issue with each data item appearing on a new line in your CSV file, you will need to modify the way you are writing the data to the file. Currently, you are using the \"writerows\" method of the CSV writer to write the data, which writes each element of the \"data\" list as a separate row in the CSV file.\nTo write the data as a single row in the CSV file, you will need to modify your code to use the \"writerow\" method of the CSV writer instead. This method takes a single argument, which is a list of values to write to the current row in the CSV file.\nHere is how you could modify your code to use the \"writerow\" method to write the data as a single row in the CSV file:\nfrom subprocess import check_output\nimport csv\n\nheader = ['Flow', 'Return']\ncmd = [\"/usr/bin/ebusctl\", \"read\", \"-f\"]\ndata = [\n float(check_output([*cmd, \"FlowTemp\", \"temp\"])),\n float(check_output([*cmd, \"ReturnTemp\", \"temp\"]))\n]\n\nwith open('sandbox.csv', 'w', encoding='utf8', newline='') as f:\n writer = csv.writer(f)\n writer.writerow(header)\n writer.writerow(data)\n\nIn this modified code, the \"data\" variable is defined as a list of the values to write to the CSV file, rather than a list of lists. This allows the \"writerow\" method to write the values as a single row in the CSV file, rather than as separate rows.\n"
] | [
0,
0
] | [] | [] | [
"csv",
"newline",
"python"
] | stackoverflow_0074675239_csv_newline_python.txt |
Q:
Using VisualStudio+ Python -- how to handle "overriding stdlib module" Pylance(reportShadowedImports) warning?
When running ipynbs in VS Code, I've started noticing Pylance warnings on standard library imports. I am using a conda virtual environment, and I believe the warning is related to that. An example using the glob library reads:
"env\Lib\glob.py" is overriding the stdlib "glob" modulePylance(reportShadowedImports)
So far my notebooks run as expected, but I am curious if this warning is indicative of poor layout or is just stating the obvious more of an "FYI you are not using the base install of python".
I have turned off linting and the problem still persists, and almost nothing comes up when I search for the error "reportShadowedImports".
A:
The reason you find nothing by searching is because this check has just been implemented recently (see Github). I ran into the same problem as you because code.py from Micropython/Circuitpython also overrides the module "code" in stdlib.
The solution is simple, though you then lose out on this specific check. Just add reportShadowedImports to your pyright config. For VS Code, that would be adding it to .vscode/settings.json:
{
"python.languageServer": "Pylance",
[...]
"python.analysis.diagnosticSeverityOverrides": {
"reportShadowedImports": "none"
},
[...]
}
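If you use a standalone pyrightconfig.json instead of VS Code settings, the same override should also work at the top level of that file (assuming a reasonably recent Pyright/Pylance version):
{
  "reportShadowedImports": "none"
}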
| Using VisualStudio+ Python -- how to handle "overriding stdlib module" Pylance(reportShadowedImports) warning? | When running ipynbs in VS Code, I've started noticing Pylance warnings on standard library imports. I am using a conda virtual environment, and I believe the warning is related to that. An example using the glob library reads:
"env\Lib\glob.py" is overriding the stdlib "glob" modulePylance(reportShadowedImports)
So far my notebooks run as expected, but I am curious if this warning is indicative of poor layout or is just stating the obvious more of an "FYI you are not using the base install of python".
I have turned off linting and the problem still persists, and almost nothing comes up when I search for the error "reportShadowedImports".
| [
"The reason you find nothing by searching is because this check has just been implemented recently (see Github). I ran into the same problem as you because code.py from Micropython/Circuitpython also overrides the module \"code\" in stdlib.\nThe solution is simple, though you then loose out on this specific check. Just add reportShadowedImports to your pyright config. For VS Code, that would be adding it to .vscode/settings.json:\n{\n \"python.languageServer\": \"Pylance\",\n [...]\n \"python.analysis.diagnosticSeverityOverrides\": {\n \"reportShadowedImports\": \"none\"\n },\n [...]\n}\n\n"
] | [
0
] | [] | [] | [
"conda",
"pylance",
"python",
"visual_studio"
] | stackoverflow_0074660176_conda_pylance_python_visual_studio.txt |
Q:
Remotely controlling passwords with python
I wrote code in Python that automates a long task for a game. I will convert this code to an exe and send it to people, but I will give them a password with a certain lifetime to use it. At the same time, once I have sent this application out, I have to check the passwords remotely and, if necessary, delete them before they expire. How can I perform this remote password check?
For example, let's say I sent this exe to someone and gave them a password to open it. Is there a way to remotely control that password and delete it before it expires if necessary?
A:
It sounds like you are looking for a way to remotely manage the passwords that are used to access your Python application. There are a few different ways you could do this, depending on your specific needs and the resources that are available to you.
One approach you could take is to create a server that manages the passwords and provides an API for your application to access. Your application could then use this API to authenticate users and check the status of their passwords. If necessary, you could also use this API to remotely delete passwords or disable access for specific users.
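As a rough illustration of that first approach, the client side could phone home before unlocking. This is only a sketch: the URL, the endpoint, and the JSON response shape are made-up placeholders, not a real service.
import sys
import requests  # third-party: pip install requests

def password_is_valid(password):
    # Hypothetical endpoint: the server checks validity/expiry and can revoke.
    resp = requests.post("https://example.com/api/check-password",
                         json={"password": password}, timeout=5)
    resp.raise_for_status()
    return resp.json().get("valid", False)

if __name__ == "__main__":
    if not password_is_valid(input("Password: ")):
        sys.exit("Password invalid, expired, or revoked.")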
Alternatively, you could also use a cloud service such as AWS or Azure to manage the passwords and provide an API for your application to access. This would allow you to take advantage of the scalability and reliability of a cloud platform, and would also provide you with tools for managing and securing the passwords.
I hope this helps! Let me know if you have any other questions.
| Remotely controlling passwords with python | I wrote a code that does a long automatic work for the game with python. I will convert this code to exe and send it to people, but I will give a password with a certain lifetime for them to use. At the same time, when I send this application to everyone, I have to remotely check passwords, if necessary, I have to delete them before they expire. How can I perform this remote password check?
For example, let's say I threw this exe to someone and gave a password to open it. Is there a way to remotely control it and delete it before it expires if necessary?
| [
"It sounds like you are looking for a way to remotely manage the passwords that are used to access your Python application. There are a few different ways you could do this, depending on your specific needs and the resources that are available to you.\nOne approach you could take is to create a server that manages the passwords and provides an API for your application to access. Your application could then use this API to authenticate users and check the status of their passwords. If necessary, you could also use this API to remotely delete passwords or disable access for specific users.\nAlternatively, you could also use a cloud service such as AWS or Azure to manage the passwords and provide an API for your application to access. This would allow you to take advantage of the scalability and reliability of a cloud platform, and would also provide you with tools for managing and securing the passwords.\nI hope this helps! Let me know if you have any other questions.\n"
] | [
0
] | [] | [] | [
"python"
] | stackoverflow_0074674905_python.txt |
Q:
How to change default discount in python using classes and inheritance?
I was trying to change the discount rate of a particular subclass: the default is set at 0 and the subclass changes it to 5, but this is not reflected.
I cannot put the discount scheme on class B alone, because all classes need to have access to it.
class A:
def __init__(self, x, y, discount=0):
self.discount=0
if self.discount>0:
discount = self.discount
else:
discount=0
self.discount=discount
self.x=x
self.y=y
self.discount=discount
discount=5
class B(A):
def __init__ (self,x,y,z):
super().__init__(x,y)
B.z=z
B.discount=5
class C(A):
def __init__ (self,x,y,a):
super().__init__(x,y)
C.a=a
C.discount = 10
a = y*10
one=A(1,2)
print(one.x)
print(one.discount)
two = B(1,2,3)
print(two.x)
print(two.z)
print(two.discount)
three = C(4,5,6)
print(three.x)
print(three.discount)
Output:
1
0
1
3
0
4
0
Tried to do some calculations and integrate methods, but it only works on the method, not on the class; as you can see, the discount is set to 0 and doesn't change.
A:
The constructor of your A class seems quite confusing. If the only thing you need is to check if the discount parameter is greater than 0 and set it to your instance's discount variable, you can simplify your code like this:
class A:
def __init__(self, x, y, discount=0):
self.discount=0
if discount>0:
self.discount = discount
self.x=x
self.y=y
Moreover, when you try to modify an instance variable in a subclass, you can still use self:
class B(A):
def __init__ (self,x,y,z):
super().__init__(x,y)
self.z=z
self.discount=5
class C(A):
def __init__ (self,x,y,a):
super().__init__(x,y)
self.a=a
self.discount = 10
a = y*10
Running the commands again, you'll get:
1
0
1
3
5
4
10
You can find more about instance and class variables here and inheritance here.
| How to change default discount in python using classes and inheritance? | I was trying to change the discount rate of a particular subclass: the default is set at 0 and the subclass changes it to 5, but this is not reflected.
I cannot put the discount scheme on class B alone, because all classes need to have access to it.
class A:
def __init__(self, x, y, discount=0):
self.discount=0
if self.discount>0:
discount = self.discount
else:
discount=0
self.discount=discount
self.x=x
self.y=y
self.discount=discount
discount=5
class B(A):
def __init__ (self,x,y,z):
super().__init__(x,y)
B.z=z
B.discount=5
class C(A):
def __init__ (self,x,y,a):
super().__init__(x,y)
C.a=a
C.discount = 10
a = y*10
one=A(1,2)
print(one.x)
print(one.discount)
two = B(1,2,3)
print(two.x)
print(two.z)
print(two.discount)
three = C(4,5,6)
print(three.x)
print(three.discount)
Output:
1
0
1
3
0
4
0
Tried to do some calculations and integrate methods, but it only works on the method, not on the class; as you can see, the discount is set to 0 and doesn't change.
| [
"The constructor of your A class seems quite confusing. If the only thing you need is to check if the discount parameter is greater than 0 and set it to your instance's discount variable, you can simplify your code like this:\nclass A:\n def __init__(self, x, y, discount=0):\n self.discount=0\n if discount>0:\n self.discount = discount\n self.x=x\n self.y=y\n\nMoreover, when you try to modify an instance variable in a subclass, you can still use self:\nclass B(A):\n def __init__ (self,x,y,z):\n super().__init__(x,y)\n self.z=z\n self.discount=5\n \nclass C(A):\n def __init__ (self,x,y,a):\n super().__init__(x,y) \n self.a=a\n self.discount = 10\n a = y*10\n\nRunning the commands again, you 'll get:\n1\n0\n1\n3\n5\n4\n10\n\nYou can find more about instance and class variables here and inheritance here.\n"
] | [
0
] | [] | [] | [
"attributes",
"class",
"inheritance",
"oop",
"python"
] | stackoverflow_0074675515_attributes_class_inheritance_oop_python.txt |
Q:
Calculate distance between a point and a line segment in latitude and longitude
I have a line segments defined with a start and an end point:
A:
x1 = 10.7196405787775
y1 = 59.9050401935882
B:
x2 = 10.7109989561813
y2 = 59.9018650448204
where x defines longitude and y defines latitude.
I also have a point:
P:
x0 = 10.6542116666667
y0 = 59.429105
How do I compute the shortest distance between the line segment and the point? I know how to do this in Cartesian coordinates, but not in long/lat coordinates.
A:
Here is an implementation of a formula off Wikipedia:
def distance(p0, p1, p2):  # p0 is the point
x0, y0 = p0
x1, y1 = p1
x2, y2 = p2
nom = abs((y2 - y1) * x0 - (x2 - x1) * y0 + x2 * y1 - y2 * x1)
denom = ((y2 - y1)**2 + (x2 - x1) ** 2) ** 0.5
result = nom / denom
return result
print distance((0, 0), (3, 4), (5, 6))
# should probably test less obvious cases
assert 1 == distance((0, 0), (1, 1), (2, 1))
# change 0.001 to whatever accuracy you demand on testing.
# Notice that it's never perfect...
assert 0.5 * (2 ** 0.5) - distance((0, 0), (1, 0), (0, 1)) < 0.001
A:
To compute the shortest distance between a line segment and a point in longitude/latitude coordinates using Python, you can first convert the longitude/latitude coordinates to Cartesian coordinates using a map projection. Then, you can use the formula for the shortest distance between a line segment and a point in Cartesian coordinates to calculate the distance.
Here is an example of how you can do this using the pyproj package to convert the coordinates and the math module to calculate the distance:
import pyproj
import math
# Define the start and end points of the line segment
x1 = 10.7196405787775
y1 = 59.9050401935882
x2 = 10.7109989561813
y2 = 59.9018650448204
# Define the point
x0 = 10.6542116666667
y0 = 59.429105
# Create a map projection object
projection = pyproj.Proj(init="epsg:4326")
# Convert the longitude/latitude coordinates to Cartesian coordinates
x1, y1 = projection(x1, y1)
x2, y2 = projection(x2, y2)
x0, y0 = projection(x0, y0)
# Compute the shortest distance between the line segment and the point
dx = x2 - x1
dy = y2 - y1
t = ((x0 - x1) * dx + (y0 - y1) * dy) / (dx * dx + dy * dy)
if t < 0:
# The shortest distance is the distance between the point and the start point of the line segment
distance = math.sqrt((x1 - x0) ** 2 + (y1 - y0) ** 2)
elif t > 1:
# The shortest distance is the distance between the point and the end point of the line segment
distance = math.sqrt((x2 - x0) ** 2 + (y2 - y0) ** 2)
else:
# The shortest distance is the distance between the point and the line segment
x = x1 + t * dx
y = y1 + t * dy
distance = math.sqrt((x - x0) ** 2 + (y - y0) ** 2)
# Print the distance
print(distance)
In this example, the longitude/latitude coordinates are converted to Cartesian coordinates using the pyproj package and a map projection with the EPSG code 4326 (WGS 84). Then, the formula for the shortest distance between a line segment and a point in Cartesian coordinates is used to calculate the distance.
A:
Using the helpful Python geocoding library geopy, and the formula for the midpoint of a great circle from Chris Veness's geodesy formulae, we can find the distance between a great circle arc and a given point:
from math import sin, cos, atan2, sqrt, degrees, radians, pi
from geopy.distance import great_circle as distance
from geopy.point import Point
def midpoint(a, b):
a_lat, a_lon = radians(a.latitude), radians(a.longitude)
b_lat, b_lon = radians(b.latitude), radians(b.longitude)
delta_lon = b_lon - a_lon
B_x = cos(b_lat) * cos(delta_lon)
B_y = cos(b_lat) * sin(delta_lon)
mid_lat = atan2(
sin(a_lat) + sin(b_lat),
sqrt(((cos(a_lat) + B_x)**2 + B_y**2))
)
mid_lon = a_lon + atan2(B_y, cos(a_lat) + B_x)
# Normalise
mid_lon = (mid_lon + 3*pi) % (2*pi) - pi
return Point(latitude=degrees(mid_lat), longitude=degrees(mid_lon))
Which in this example gives:
# Example:
a = Point(latitude=59.9050401935882, longitude=10.7196405787775)
b = Point(latitude=59.9018650448204, longitude=10.7109989561813)
p = Point(latitude=59.429105, longitude=10.6542116666667)
d = distance(midpoint(a, b), p)
print d.km
# 52.8714586903
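Note that the distance to the arc's midpoint is only an approximation of the shortest distance to the segment. For reference, here is a sketch of Veness's exact cross-track formula for the distance to the great circle through a and b; it assumes a spherical Earth and omits the along-track clamping needed to treat a-b strictly as a segment:
from math import sin, cos, atan2, asin, radians
from geopy.distance import great_circle
from geopy.point import Point

R = 6371.0  # mean Earth radius in km

def initial_bearing(a, b):
    # Forward azimuth from a to b, in radians
    lat1, lat2 = radians(a.latitude), radians(b.latitude)
    dlon = radians(b.longitude - a.longitude)
    return atan2(sin(dlon) * cos(lat2),
                 cos(lat1) * sin(lat2) - sin(lat1) * cos(lat2) * cos(dlon))

def cross_track_km(a, b, p):
    d13 = great_circle(a, p).km / R  # angular distance from a to p
    return abs(asin(sin(d13) * sin(initial_bearing(a, p) - initial_bearing(a, b))) * R)

a = Point(latitude=59.9050401935882, longitude=10.7196405787775)
b = Point(latitude=59.9018650448204, longitude=10.7109989561813)
p = Point(latitude=59.429105, longitude=10.6542116666667)
print(cross_track_km(a, b, p))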
| Calculate distance between a point and a line segment in latitude and longitude | I have a line segments defined with a start and an end point:
A:
x1 = 10.7196405787775
y1 = 59.9050401935882
B:
x2 = 10.7109989561813
y2 = 59.9018650448204
where x defines longitude and y defines latitude.
I also have a point:
P:
x0 = 10.6542116666667
y0 = 59.429105
How do I compute the shortest distance between the line segment and the point? I know how to do this in Cartesian coordinates, but not in long/lat coordinates.
| [
"Here is an implementation of a formula off Wikipedia:\ndef distance(p0, p1, p2): # p3 is the point\n x0, y0 = p0\n x1, y1 = p1\n x2, y2 = p2\n nom = abs((y2 - y1) * x0 - (x2 - x1) * y0 + x2 * y1 - y2 * x1)\n denom = ((y2 - y1)**2 + (x2 - x1) ** 2) ** 0.5\n result = nom / denom\n return result\n\nprint distance((0, 0), (3, 4), (5, 6))\n\n# should probably test less obvious cases\nassert 1 == distance((0, 0), (1, 1), (2, 1))\n# change 0.001 to whatever accuracy you demand on testing.\n# Notice that it's never perfect...\nassert 0.5 * (2 ** 0.5) - distance((0, 0), (1, 0), (0, 1)) < 0.001\n\n",
"To compute the shortest distance between a line segment and a point in longitude/latitude coordinates using Python, you can first convert the longitude/latitude coordinates to Cartesian coordinates using a map projection. Then, you can use the formula for the shortest distance between a line segment and a point in Cartesian coordinates to calculate the distance.\nHere is an example of how you can do this using the pyproj package to convert the coordinates and the math module to calculate the distance:\nimport pyproj\nimport math\n\n# Define the start and end points of the line segment\nx1 = 10.7196405787775\ny1 = 59.9050401935882\nx2 = 10.7109989561813\ny2 = 59.9018650448204\n\n# Define the point\nx0 = 10.6542116666667\ny0 = 59.429105\n\n# Create a map projection object\nprojection = pyproj.Proj(init=\"epsg:4326\")\n\n# Convert the longitude/latitude coordinates to Cartesian coordinates\nx1, y1 = projection(x1, y1)\nx2, y2 = projection(x2, y2)\nx0, y0 = projection(x0, y0)\n\n# Compute the shortest distance between the line segment and the point\ndx = x2 - x1\ndy = y2 - y1\n\nt = ((x0 - x1) * dx + (y0 - y1) * dy) / (dx * dx + dy * dy)\n\nif t < 0:\n # The shortest distance is the distance between the point and the start point of the line segment\n distance = math.sqrt((x1 - x0) ** 2 + (y1 - y0) ** 2)\n\nelif t > 1:\n # The shortest distance is the distance between the point and the end point of the line segment\n distance = math.sqrt((x2 - x0) ** 2 + (y2 - y0) ** 2)\n\nelse:\n # The shortest distance is the distance between the point and the line segment\n x = x1 + t * dx\n y = y1 + t * dy\n distance = math.sqrt((x - x0) ** 2 + (y - y0) ** 2)\n\n# Print the distance\nprint(distance)\n\nIn this example, the longitude/latitude coordinates are converted to Cartesian coordinates using the pyproj package and a map projection with the EPSG code 4326 (WGS 84). Then, the formula for the shortest distance between a line segment and a point in Cartesian coordinates is used to calculate the distance\n",
"Using the helpful Python geocoding library geopy, and the formula for the midpoint of a great circle from Chris Veness's geodesy formulae, we can find the distance between a great circle arc and a given point:\nfrom math import sin, cos, atan2, sqrt, degrees, radians, pi\nfrom geopy.distance import great_circle as distance\nfrom geopy.point import Point\n\n\ndef midpoint(a, b):\n a_lat, a_lon = radians(a.latitude), radians(a.longitude)\n b_lat, b_lon = radians(b.latitude), radians(b.longitude)\n delta_lon = b_lon - a_lon\n B_x = cos(b_lat) * cos(delta_lon)\n B_y = cos(b_lat) * sin(delta_lon)\n mid_lat = atan2(\n sin(a_lat) + sin(b_lat),\n sqrt(((cos(a_lat) + B_x)**2 + B_y**2))\n )\n mid_lon = a_lon + atan2(B_y, cos(a_lat) + B_x)\n # Normalise\n mid_lon = (mid_lon + 3*pi) % (2*pi) - pi\n return Point(latitude=degrees(mid_lat), longitude=degrees(mid_lon))\n\nWhich in this example gives:\n# Example:\na = Point(latitude=59.9050401935882, longitude=10.7196405787775)\nb = Point(latitude=59.9018650448204, longitude=10.7109989561813)\np = Point(latitude=59.429105, longitude=10.6542116666667)\n\nd = distance(midpoint(a, b), p)\nprint d.km\n# 52.8714586903\n\n"
] | [
0,
0,
-1
] | [] | [] | [
"latitude_longitude",
"python"
] | stackoverflow_0027461634_latitude_longitude_python.txt |
Q:
Is it possible to check the return of a function, if true assigning the value to a variable?
For something simple like:
def my_func(x):
if x == 4:
return 25
return False
result = my_func(4)
if result:
print(result)
is it possible to check the result and assign the return value in one line?
Something like:
if my_func(4) => x:
print(x)
A:
Sure, a ternary expression can be used, such as:
return 25 if x == 4 else False
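If the goal is literally to test and assign in one line, Python 3.8 and newer also offer the walrus operator, which matches the shape sketched in the question:
if (result := my_func(4)):
    print(result)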
| Is it possible to check the return of a function, if true assigning the value to a variable? | For something simple like:
def my_func(x):
if x == 4:
return 25
return False
result = my_func(4)
if result:
print(result)
is it possible to check the result and assign the return value in one line?
Something like:
if my_func(4) => x:
print(x)
| [
"Sure, a ternary expression can be used, such as:\nreturn 25 if x == 4 else False\n\n"
] | [
0
] | [] | [] | [
"python",
"python_3.x"
] | stackoverflow_0074675550_python_python_3.x.txt |
Q:
pyenv: python :command not found
I want to use Python3 with pyenv.
$ pyenv root
/Users/asari/.pyenv
$ pyenv versions
system
2.7.15
3.6.2
3.6.3
3.6.4
* 3.6.6 (set by /Users/asari/workspace/hoge/.python-version)
$ python -V
pyenv: python: command not found
The `python' command exists in these Python versions:
2.7.15
but the python command is not found.
I read .pyenv/shims/python and thought that there was no python in .pyenv/versions/3.6.6/bin/, but I did not know why python was missing.
$ pwd
/Users/asari/.pyenv/versions/3.6.6/bin
$ ls -la
total 12096
drwxr-xr-x 19 asari staff 608 8 16 00:51 .
drwxr-xr-x 6 asari staff 192 8 16 00:51 ..
lrwxr-xr-x 1 asari staff 8 8 16 00:51 2to3 -> 2to3-3.6
-rwxr-xr-x 1 asari staff 135 8 16 00:51 2to3-3.6
-rwxr-xr-x 1 asari staff 276 8 16 00:51 easy_install-3.6
lrwxr-xr-x 1 asari staff 7 8 16 00:51 idle3 -> idle3.6
-rwxr-xr-x 1 asari staff 133 8 16 00:51 idle3.6
-rwxr-xr-x 1 asari staff 258 8 16 00:51 pip3
-rwxr-xr-x 1 asari staff 258 8 16 00:51 pip3.6
lrwxr-xr-x 1 asari staff 8 8 16 00:51 pydoc3 -> pydoc3.6
-rwxr-xr-x 1 asari staff 118 8 16 00:51 pydoc3.6
lrwxr-xr-x 1 asari staff 9 8 16 00:51 python3 -> python3.6
lrwxr-xr-x 1 asari staff 16 8 16 00:51 python3-config -> python3.6-config
-rwxr-xr-x 2 asari staff 3078944 8 16 00:51 python3.6
lrwxr-xr-x 1 asari staff 17 8 16 00:51 python3.6-config -> python3.6m-config
-rwxr-xr-x 2 asari staff 3078944 8 16 00:51 python3.6m
-rwxr-xr-x 1 asari staff 2076 8 16 00:51 python3.6m-config
lrwxr-xr-x 1 asari staff 10 8 16 00:51 pyvenv -> pyvenv-3.6
-rwxr-xr-x 1 asari staff 475 8 16 00:51 pyvenv-3.6
$PATH
$ echo $PATH | perl -p -e 's/:/\n/g'
/Users/asari/.pyenv/shims
/Users/asari/.pyenv/bin
/Users/asari/.rbenv/shims
/Users/asari/.cargo/bin
/usr/local/bin
/usr/bin
/bin
/usr/sbin
/sbin
install log
$ pyenv install 3.6.6
python-build: use openssl from homebrew
python-build: use readline from homebrew
Downloading Python-3.6.6.tar.xz...
-> https://www.python.org/ftp/python/3.6.3/Python-3.6.3.tar.xz
Installing Python-3.6.6...
python-build: use readline from homebrew
Installed Python-3.6.6 to /Users/asari/.pyenv/versions/3.6.6
$ pyenv --version
pyenv 1.2.7
$ brew list | grep py
python
python@2
pyenv was cloned and installed from GitHub (I have not installed pyenv via brew)
.zshrc
# python
export PYENV_ROOT="$HOME/.pyenv"
export PATH="$PYENV_ROOT/bin:$PATH"
eval "$(pyenv init -)"
Thank you for your time.
UPDATE
I created a python symlink, and python worked.
Why was there no python symlink?
(I was wondering if the install failed; I have run install and uninstall many times.)
create symlink
$ pwd
/Users/asari/.pyenv/versions/3.6.6/bin
$ ln -s python3 python
It works:
$ pwd
/Users/asari/workspace/hoge
$ python -V
Python 3.6.6
A:
Added to ~/.bashrc
alias python="$(pyenv which python)"
alias pip="$(pyenv which pip)"
A:
Under mac OS 10.15
We add the following to .bashrc file or .zshrc file
export PYENV_ROOT="$HOME/.pyenv"
export PATH="$PYENV_ROOT/shims:$PATH"
if which pyenv > /dev/null; then eval "$(pyenv init -)"; fi
A:
on MAC OS. I solved it by adding to below lines to the ~/.bash_profile
At the terminal call vi ~/.bash_profile
Insert 2 below lines
alias python="$(pyenv which python)"
alias pip="$(pyenv which pip)"
Call this command after saving above file source ~/.bash_profile
A:
For me the config in my .zshrc.local file needed an update. Using the info on configure your shell's environment for Pyenv page I changed the pyenv init stuff to this:
export PYENV_ROOT="$HOME/.pyenv"
export PATH="$PYENV_ROOT/bin:$PATH"
eval "$(pyenv init --path)"
if [ -n "$PS1" -a -n "$BASH_VERSION" ]; then source ~/.bashrc; fi
eval "$(pyenv init -)"
eval "$(pyenv virtualenv-init -)"
A:
When I used python with pyenv, it showed me:
/Users/zhezhezhu/.pyenv/shims/python: line 21: /usr/local/Cellar/pyenv/1.2.18/libexec/pyenv: No such file or directory
I solved it by : pyenv rehash
A:
I was using Ubuntu 20.04 and it seemed like I had wrongly setup the commands in my ~/.bashrc. I followed configure your shell's environment for Pyenv:
If your ~/.profile sources ~/.bashrc (Debian, Ubuntu, Mint):
# the sed invocation inserts the lines at the start of the file
# after any initial comment lines
sed -Ei -e '/^([^#]|$)/ {a \
export PYENV_ROOT="$HOME/.pyenv"
a \
export PATH="$PYENV_ROOT/bin:$PATH"
a \
' -e ':a' -e '$!{n;ba};}' ~/.profile
echo 'eval "$(pyenv init --path)"' >>~/.profile
echo 'eval "$(pyenv init -)"' >> ~/.bashrc
If you are using MacOs, Fedora, or CentOS, please follow the link above. The commands may also change in the future, depending on pyenv/os/distributions udpates.
A:
On macOS Monterey 12.5 there is no longer a python binary in /usr/local/bin/. A quick fix is to ensure that there is a default ref that points to the system python3.
ln -s python3 /usr/local/bin/python
A:
I was experiencing the same error. I just had to follow the error message pyenv was saying:
Note: See 'pyenv help global' for tips on allowing both...
Which made me run the following command:
pyenv global 2.7.18 3.10.6
which are versions that appear in my pyenv versions output
A:
Using an Ubuntu 18.04 environment with pyenv 2.2.5, here is what you need to add to .bashrc:
export PATH="~/.pyenv/bin:$PATH"
if ! command -v pyenv &> /dev/null
then # If no pyenv is found
echo "pyenv could not be found, cannot initialize env"
else
eval "$(pyenv init --path)"
eval "$(pyenv init -)"
eval "$(pyenv virtualenv-init -)"
fi
If you are using some other OS, then it may be that you need to add this to another dotfile such as .bash_profile. For more detailed info, check out the docs: https://github.com/pyenv/pyenv#basic-github-checkout
A:
Adding the 2 lines below to ~/.zshrc is the answer, given that you already have eval "$(pyenv init -)":
export PYENV_ROOT="$HOME/.pyenv"
export PATH="$PYENV_ROOT/shims:$PATH"
The trick is to let the shims directory play its part.
A:
I solved it.
I had used the following grep option in .zshrc:
export GREP_OPTIONS='--color=auto'

It seems that the ANSI escape codes this option adds to grep's output were not handled properly as plain strings by the grep calls inside pyenv.
As you probably all know, GREP_OPTIONS is deprecated anyway.
A:
In my case shims were initialized correctly:
$ which python
/home/me/.pyenv/shims/python
Yet, similar to richard-domingo's answer, I was seeing:
$ pyenv which python
pyenv: python: command not found
The `python' command exists in these Python versions:
3.8.13
Note: See 'pyenv help global' for tips on allowing both
python2 and python3 to be found.
The problem here was that the active python version was system, and my system python3 did not have a python alias (symlink).
Two options to fix this:
either create a python symlink for the system's python3, similar to cmcginty's answer, for example:
ln -s python3 /usr/bin/python
or explicitly set the python version to one of the versions listed in the error message, using e.g. pyenv shell, pyenv local, or pyenv global, depending on your use-case. This is explained in the documentation and discussed here and here.
In this case
pyenv local 3.8.13
would create/modify a .python-version file in the current directory.
It is really worth reading the following section in the docs: Understanding Python version selection.
A:
I had previously installed pyenv and then installed my required python distribution as follows:
pyenv install 3.7.10
To remove subsequent pyenv: python :command not found errors -- I first had to run:
pyenv global 3.7.10
That solved my problem.
| pyenv: python :command not found | I want to use Python3 with pyenv.
$ pyenv root
/Users/asari/.pyenv
$ pyenv versions
system
2.7.15
3.6.2
3.6.3
3.6.4
* 3.6.6 (set by /Users/asari/workspace/hoge/.python-version)
$ python -V
pyenv: python: command not found
The `python' command exists in these Python versions:
2.7.15
but the python command is not found.
I read .pyenv/shims/python and thought that there was no python in .pyenv/versions/3.6.6/bin/, but I did not know why python was missing.
$ pwd
/Users/asari/.pyenv/versions/3.6.6/bin
$ ls -la
total 12096
drwxr-xr-x 19 asari staff 608 8 16 00:51 .
drwxr-xr-x 6 asari staff 192 8 16 00:51 ..
lrwxr-xr-x 1 asari staff 8 8 16 00:51 2to3 -> 2to3-3.6
-rwxr-xr-x 1 asari staff 135 8 16 00:51 2to3-3.6
-rwxr-xr-x 1 asari staff 276 8 16 00:51 easy_install-3.6
lrwxr-xr-x 1 asari staff 7 8 16 00:51 idle3 -> idle3.6
-rwxr-xr-x 1 asari staff 133 8 16 00:51 idle3.6
-rwxr-xr-x 1 asari staff 258 8 16 00:51 pip3
-rwxr-xr-x 1 asari staff 258 8 16 00:51 pip3.6
lrwxr-xr-x 1 asari staff 8 8 16 00:51 pydoc3 -> pydoc3.6
-rwxr-xr-x 1 asari staff 118 8 16 00:51 pydoc3.6
lrwxr-xr-x 1 asari staff 9 8 16 00:51 python3 -> python3.6
lrwxr-xr-x 1 asari staff 16 8 16 00:51 python3-config -> python3.6-config
-rwxr-xr-x 2 asari staff 3078944 8 16 00:51 python3.6
lrwxr-xr-x 1 asari staff 17 8 16 00:51 python3.6-config -> python3.6m-config
-rwxr-xr-x 2 asari staff 3078944 8 16 00:51 python3.6m
-rwxr-xr-x 1 asari staff 2076 8 16 00:51 python3.6m-config
lrwxr-xr-x 1 asari staff 10 8 16 00:51 pyvenv -> pyvenv-3.6
-rwxr-xr-x 1 asari staff 475 8 16 00:51 pyvenv-3.6
$PATH
$ echo $PATH | perl -p -e 's/:/\n/g'
/Users/asari/.pyenv/shims
/Users/asari/.pyenv/bin
/Users/asari/.rbenv/shims
/Users/asari/.cargo/bin
/usr/local/bin
/usr/bin
/bin
/usr/sbin
/sbin
install log
$ pyenv install 3.6.6
python-build: use openssl from homebrew
python-build: use readline from homebrew
Downloading Python-3.6.6.tar.xz...
-> https://www.python.org/ftp/python/3.6.3/Python-3.6.3.tar.xz
Installing Python-3.6.6...
python-build: use readline from homebrew
Installed Python-3.6.6 to /Users/asari/.pyenv/versions/3.6.6
$ pyenv --version
pyenv 1.2.7
$ brew list | grep py
python
python@2
pyenv was cloned and installed from GitHub (I have not installed pyenv via brew)
.zshrc
# python
export PYENV_ROOT="$HOME/.pyenv"
export PATH="$PYENV_ROOT/bin:$PATH"
eval "$(pyenv init -)"
Thank you for your time.
UPDATE
I created a python symlink, and python worked.
Why was there no python symlink?
(I was wondering if the install failed; I have run install and uninstall many times.)
create symlink
$ pwd
/Users/asari/.pyenv/versions/3.6.6/bin
$ ln -s python3 python
It works:
$ pwd
/Users/asari/workspace/hoge
$ python -V
Python 3.6.6
| [
"Added to ~/.bashrc\nalias python=\"$(pyenv which python)\"\nalias pip=\"$(pyenv which pip)\"\n\n",
"Under mac OS 10.15\nWe add the following to .bashrc file or .zshrc file\nexport PYENV_ROOT=\"$HOME/.pyenv\"\nexport PATH=\"$PYENV_ROOT/shims:$PATH\"\n\nif which pyenv > /dev/null; then eval \"$(pyenv init -)\"; fi\n\n",
"on MAC OS. I solved it by adding to below lines to the ~/.bash_profile\nAt the terminal call vi ~/.bash_profile\nInsert 2 below lines\nalias python=\"$(pyenv which python)\"\nalias pip=\"$(pyenv which pip)\"\n\nCall this command after saving above file source ~/.bash_profile\n",
"For me the config in my .zshrc.local file needed an update. Using the info on configure your shell's environment for Pyenv page I changed the pyenv init stuff to this:\nexport PYENV_ROOT=\"$HOME/.pyenv\"\nexport PATH=\"$PYENV_ROOT/bin:$PATH\"\neval \"$(pyenv init --path)\"\nif [ -n \"$PS1\" -a -n \"$BASH_VERSION\" ]; then source ~/.bashrc; fi\n\neval \"$(pyenv init -)\"\neval \"$(pyenv virtualenv-init -)\"\n\n",
"when I use python with pyenv, show me:\n/Users/zhezhezhu/.pyenv/shims/python: line 21: /usr/local/Cellar/pyenv/1.2.18/libexec/pyenv: No such file or directory\n\nI solved it by : pyenv rehash\n",
"I was using Ubuntu 20.04 and it seemed like I had wrongly setup the commands in my ~/.bashrc. I followed configure your shell's environment for Pyenv:\nIf your ~/.profile sources ~/.bashrc (Debian, Ubuntu, Mint):\n# the sed invocation inserts the lines at the start of the file\n# after any initial comment lines\nsed -Ei -e '/^([^#]|$)/ {a \\\nexport PYENV_ROOT=\"$HOME/.pyenv\"\na \\\nexport PATH=\"$PYENV_ROOT/bin:$PATH\"\na \\\n' -e ':a' -e '$!{n;ba};}' ~/.profile\necho 'eval \"$(pyenv init --path)\"' >>~/.profile\n\necho 'eval \"$(pyenv init -)\"' >> ~/.bashrc\n\nIf you are using MacOs, Fedora, or CentOS, please follow the link above. The commands may also change in the future, depending on pyenv/os/distributions udpates.\n",
"On MacOS Montery 12.5 there is no longer a python binary in /usr/local/bin/. A quick fix is to ensure that is a default ref that points to the system python3.\nln -s python3 /usr/local/bin/python\n\n",
"I was experiencing the same error. I just had to follow the error message pyenv was saying:\nNote: See 'pyenv help global' for tips on allowing both...\n\nWhich made me run the following command:\npyenv global 2.7.18 3.10.6\n\nwhich are versions existing in my pyenv versions\n",
"Using Ubuntu 18.04 environment, using pyenv 2.2.5 here is what you need to update the .bashrc:\nexport PATH=\"~/.pyenv/bin:$PATH\"\nif ! command -v pyenv &> /dev/null\nthen # If no pyenv is found \n echo \"pyenv could not be found, cannot initialize env\"\nelse\n eval \"$(pyenv init --path)\"\n eval \"$(pyenv init -)\"\n eval \"$(pyenv virtualenv-init -)\"\nfi\n\nIf you are using some other OS, then it may be that you need to add this to another dotfile such as .bash_profile. For more detailed info, check out the docs: https://github.com/pyenv/pyenv#basic-github-checkout\n",
"add below 2 lines to ./zshrc IS THE ANSWER, GIVEN THAT you already have eval \"$(pyenv init -)\"\nexport PYENV_ROOT=\"$HOME/.pyenv\"\nexport PATH=\"$PYENV_ROOT/shims:$PATH\"\n\nTrick is to let shims play its deserving part.\n",
"I solved it.\nI used the following grep option in .zshrc\nexport GREP_OPTIONS = '- color = auto'\n\nIt seems that even if ANSI escape code was included in the search result of grep used in pyenv, it was not used properly as a character string.\nI think that you all know, but GREP_OPTIONS is deprecated.\n",
"In my case shims were initialized correctly:\n$ which python\n/home/me/.pyenv/shims/python\n\nYet, similar to richard-domingo's answer, I was seeing:\n$ pyenv which python\npyenv: python: command not found\n\nThe `python' command exists in these Python versions:\n 3.8.13\n\nNote: See 'pyenv help global' for tips on allowing both\n python2 and python3 to be found.\n\nThe problem here was that the active python version was system, and my system python3 did not have a python alias (symlink).\nTwo options to fix this:\n\neither create a python symlink for the system's python3, similar to cmcginty's answer, for example:\nln -s python3 /usr/bin/python\n\n\nor explicitly set the python version to one of the versions listed in the error message, using e.g. pyenv shell, pyenv local, or pyenv global, depending on your use-case. This is explained in the documentation and discussed here and here.\nIn this case\npyenv local 3.8.13\n\nwhould create/modify a .python-version file in the current directory.\n\n\nIt is really worth reading the following section in the docs: Understanding Python version selection.\n",
"I had previously installed pyenv and then installed my required python distribution as follows:\npyenv install 3.7.10\n\nTo remove subsequent pyenv: python :command not found errors -- I first had to run:\npyenv global 3.7.10\n\nThat solved my problem.\n"
] | [
24,
8,
7,
6,
3,
2,
2,
2,
1,
1,
0,
0,
0
] | [] | [] | [
"pyenv",
"python"
] | stackoverflow_0051863225_pyenv_python.txt |
Q:
How to select Object(s) from a JSON message in Python?
I received this data from an API,
But I am unable to select the objects I want.
It shows the candle data from 3 tickers: "ETHUSDT", "BTCUSDT" and "BNBUSDT",
but they are all under the same identifiers...
I need the closing prices (the 'c' value of each ticker). Does anyone know how to get something like:
ETHUSDT('c')='1253.28000000'
BTCUSDT('c')='16912.93000000'
BNBUSDT('c')='289.60000000'
(The data comes in through a websocket, so I can't split it up into 3 messages.)
Thanks in advance!!
{'E': 1670154095936, 'e': 'kline', 'k': {'B': '0', 'L': 1036708466, 'Q': '403679.79500200', 'T': 1670154119999, 'V': '322.25210000', 'c': '1253.28000000', 'f': 1036707674, 'h': '1253.28000000', 'i': '1m', 'l': '1251.64000000', 'n': 793, 'o': '1253.02000000', 'q': '741683.22211300', 's': 'ETHUSDT', 't': 1670154060000, 'v': '592.10530000', 'x': False}, 's': 'ETHUSDT'} {'E': 1670154096509, 'e': 'kline', 'k': {'B': '0', 'L': 2285413599, 'Q': '2955493.90319840', 'T': 1670154119999, 'V': '174.76525000', 'c': '16912.93000000', 'f': 2285408203, 'h': '16922.53000000', 'i': '1m', 'l': '16905.01000000', 'n': 5397, 'o': '16920.08000000', 'q': '5748552.34500920', 's': 'BTCUSDT', 't': 1670154060000, 'v': '339.92573000', 'x': False}, 's': 'BTCUSDT'} {'E': 1670154096937, 'e': 'kline', 'k': {'B': '0', 'L': 608211438, 'Q': '16414.84590000', 'T': 1670154119999, 'V': '56.69100000', 'c': '289.60000000', 'f': 608211306, 'h': '289.70000000', 'i': '1m', 'l': '289.40000000', 'n': 133, 'o': '289.70000000', 'q': '40301.72990000', 's': 'BNBUSDT', 't': 1670154060000, 'v': '139.20100000', 'x': False}, 's': 'BNBUSDT'}
this just gave me all of the data
def on_message(ws, message):
    json_message = json.loads(message)
    candle = json_message['data']['k']
A:
you can use:
outs=[]
for i in your_json:
outs.append("{}(c)={}".format(i['s'],i['k']['c']))
#['ETHUSDT(c)=1253.28000000', 'BTCUSDT(c)=16912.93000000', 'BNBUSDT(c)=289.60000000']
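Since the data arrives one message at a time over the websocket, you can also extract the pair directly inside the handler. This sketch assumes each message is a single kline payload like the ones shown above; if your stream wraps payloads in a 'data' key, as the question's attempt suggests, index into that first:
import json

def on_message(ws, message):
    k = json.loads(message)['k']  # the kline payload holds 's' and 'c'
    print("{}(c)={}".format(k['s'], k['c']))  # e.g. ETHUSDT(c)=1253.28000000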
| How to select Object(s) from a JSON message in Python? | I received this data from an API,
But i am unable to select the Objects i want.
It shows the candle data from 3 tickers;"ETHUSDT","BTCUSDT" and "BNBUSDT",
but they are all under the same identifiers...
I need the closing prices('c' values of each ticker) so it would be something like:
anyone knows how to get something like;
ETHUSDT('c')='1253.28000000'
BTCUSDT('c')='16912.93000000'
BNBUSDT('c')='289.60000000'
(Data coming in through a websocket, so i cant split it up in 3 messages)
##Thanks in advance!!
{'E': 1670154095936, 'e': 'kline', 'k': {'B': '0', 'L': 1036708466, '`Q': '403679.79500200', 'T': 1670154119999, 'V': '322.25210000', 'c': '1253.28000000', 'f': 1036707674, 'h': '1253.28000000', 'i': '1m', 'l': '1251.64000000', 'n': 793, 'o': '1253.02000000', 'q': '741683.22211300', 's': 'ETHUSDT', 't': 1670154060000, 'v': '592.10530000', 'x': False}, 's': 'ETHUSDT'} {'E': 1670154096509, 'e': 'kline', 'k': {'B': '0', 'L': 2285413599, 'Q': '2955493.90319840', 'T': 1670154119999, 'V': '174.76525000', 'c': '16912.93000000', 'f': 2285408203, 'h': '16922.53000000', 'i': '1m', 'l': '16905.01000000', 'n': 5397, 'o': '16920.08000000', 'q': '5748552.34500920', 's': 'BTCUSDT', 't': 1670154060000, 'v': '339.92573000', 'x': False}, 's': 'BTCUSDT'} {'E': 1670154096937, 'e': 'kline', 'k': {'B': '0', 'L': 608211438, 'Q': '16414.84590000', 'T': 1670154119999, 'V': '56.69100000', 'c': '289.60000000', 'f': 608211306, 'h': '289.70000000', 'i': '1m', 'l': '289.40000000', 'n': 133, 'o': '289.70000000', 'q': '40301.72990000', 's': 'BNBUSDT', 't': 1670154060000, 'v': '139.20100000', 'x': False}, 's': 'BNBUSDT'}
this just gave me all of the data
``def on_message(ws,message):
json_message = json.loads(message)
candle = json_message['data']['k']
| [
"you can use:\nouts=[]\nfor i in your_json:\n outs.append(\"{}(c)={}\".format(i['s'],i['k']['c']))\n\n#['ETHUSDT(c)=1253.28000000', 'BTCUSDT(c)=16912.93000000', 'BNBUSDT(c)=289.60000000']\n\n"
] | [
0
] | [] | [] | [
"json",
"python"
] | stackoverflow_0074675545_json_python.txt |
Q:
How to find element with "== $0" after end tag in html using Xpath, css or any other locators in Selenium Python?
The HTML tag:
<div class=""><div>Bengaluru, Karnataka</div></div>

Consider the above example for reference.
I tried the following code, but it doesn't work!!!
driver.find_element(By.XPATH,'//div[@class=""]').text.strip()
A:
You can use this:
driver.find_element(By.XPATH, ".//div[@class='']/div").text
Output:
Bengaluru, Karnataka
A:
You cannot filter by that "== $0"; it is only a DevTools console annotation for the currently selected element, not part of the HTML.
But you can use this XPath, which will return the element matching the following requirements:

It is a "div"
That "div" contains an attribute "class"
That "class" attribute has a length of 0 chars

This is the XPath:
//div[@class[string-length()=0]]
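For reference, a small sketch of how that XPath could be used from Selenium (assuming driver is an initialized WebDriver, as in the question):
from selenium.webdriver.common.by import By

# the div whose class attribute is empty; .text includes the nested div's text
element = driver.find_element(By.XPATH, "//div[@class[string-length()=0]]")
print(element.text)  # e.g. "Bengaluru, Karnataka"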
| How to find element with "== $0" after end tag in html using Xpath, css or any other locators in Selenium Python? | the html tag
<div class=""><div>Bengaluru, Karnataka</div></div>
Consider the above example for reference.
I tried the following code but it doesn't work!!!
driver.find_element(By.XPATH,'//div[@class=""]').text.strip()
| [
"You can use this:\ndriver.find_element(By.XPATH, \".//div[@class='']/div\").text\n\nOutput:\nBengaluru, Karnataka\n\n",
"You can not filter by that \"==$0\"\nBut you can use this xpath, which will return to you the element with following requirements:\n\nIt is a \"div\"\nThat \"div\" contains an attribute \"class\"\nThat \"class\" attribute has a length of 0 chars\n\nThis is the xpath:\n//div[@class[string-length()=0]]\n\n"
] | [
0,
0
] | [] | [] | [
"html",
"python",
"selenium",
"selenium_webdriver",
"web_scraping"
] | stackoverflow_0074675452_html_python_selenium_selenium_webdriver_web_scraping.txt |
Q:
How can I embed a variable that updates every time the command is run?
I am trying to get my discord bot (coded in python) to embed the contents of a string variable that I set up.
I am unable to figure out how to make the bot embed the string, as well as how to make the variable update every time the command is run.
I made a function for the playerlist variable in the hope that it would run every time I called it, but I wanted some guidance on whether this is correct or not, as well as on my problem of displaying it.
def playerlist():
req = Request(url, headers = {'User-Agent': 'Mozilla/5.0'}) #spoopy disguise
webpage = urlopen(req).read()
soup = soup(webpage, "html.parser")
cleansoup = (soup.get_text(strip=True, separator=" "))
x = cleansoup.split("""Play time""")[-1]
x = x.split("""Most Time Played""")[0]
print(x)
@commands.hybrid_command(
name="playerlist",
description="This should pull a player list with playtimes",
)
# This will only allow non-blacklisted members to execute the command
@checks.not_blacklisted()
async def playerlist(self, context: Context) -> None:
"""
playerlist from battlemetrics
"""
playerlist()
embed = discord.Embed(
title="Playerlist",
description=x,
color=0x9C84EF
)
await context.send(embed=embed)
I get an error message saying that "x" is not defined. I can understand why it would say that, but it is still confusing because it should have been defined by the function?
Thank you very much for your time
A:
You can return x from the playerlist function (a new name is used for the parsed page below so the imported soup callable is not shadowed, which would otherwise break a second call):
def playerlist():
    req = Request(url, headers={'User-Agent': 'Mozilla/5.0'}) #spoopy disguise
    webpage = urlopen(req).read()

    parsed = soup(webpage, "html.parser")

    cleansoup = parsed.get_text(strip=True, separator=" ")

    x = cleansoup.split("""Play time""")[-1]
    x = x.split("""Most Time Played""")[0]
    return x

and then call it whenever the command runs:
@commands.hybrid_command(
name="playerlist",
description="This should pull a player list with playtimes",
)
# This will only allow non-blacklisted members to execute the command
@checks.not_blacklisted()
async def playerlist(self, context: Context) -> None:
"""
playerlist from battlemetrics
"""
x = playerlist()
embed = discord.Embed(
title="Playerlist",
description=x,
color=0x9C84EF
)
await context.send(embed=embed)
| How can I embed a variable that updates every time the command is run? | I am trying to get my discord bot (coded in python) to embed the contents of a string variable that I set up.
I am unable to figure out how to make the bot embed the string, as well as how to make the variable update every time the command is run.
I made a function for the playerlist variable in the hopes that it would run every time I called it, but I wanted for some guidance on whether this is correct or not, as well as my problem of displaying it.
def playerlist():
req = Request(url, headers = {'User-Agent': 'Mozilla/5.0'}) #spoopy disguise
webpage = urlopen(req).read()
soup = soup(webpage, "html.parser")
cleansoup = (soup.get_text(strip=True, separator=" "))
x = cleansoup.split("""Play time""")[-1]
x = x.split("""Most Time Played""")[0]
print(x)
@commands.hybrid_command(
name="playerlist",
description="This should pull a player list with playtimes",
)
# This will only allow non-blacklisted members to execute the command
@checks.not_blacklisted()
async def playerlist(self, context: Context) -> None:
"""
playerlist from battlemetrics
"""
playerlist()
embed = discord.Embed(
title="Playerlist",
description=x,
color=0x9C84EF
)
await context.send(embed=embed)
I get an error message saying that "x" is not defined. I can understand why it would say it is not defined but It is still confusing because it should have been defined from the function?
Thank you very much for your time
| [
"you can return x from the playerlist function\ndef playerlist():\n req = Request(url, headers = {'User-Agent': 'Mozilla/5.0'}) #spoopy disguise\n webpage = urlopen(req).read()\n\n soup = soup(webpage, \"html.parser\")\n\n cleansoup = (soup.get_text(strip=True, separator=\" \"))\n\n x = cleansoup.split(\"\"\"Play time\"\"\")[-1]\n x = x.split(\"\"\"Most Time Played\"\"\")[0]\n return x\n\nand recalling it anytime\n @commands.hybrid_command(\n name=\"playerlist\",\n description=\"This should pull a player list with playtimes\",\n )\n # This will only allow non-blacklisted members to execute the command\n @checks.not_blacklisted()\n async def playerlist(self, context: Context) -> None:\n \"\"\"\n playerlist from battlemetrics\n \"\"\"\n x = playerlist()\n embed = discord.Embed(\n title=\"Playerlist\",\n description=x,\n color=0x9C84EF\n )\n await context.send(embed=embed)\n\n"
] | [
0
] | [] | [] | [
"beautifulsoup",
"discord.py",
"python"
] | stackoverflow_0074674934_beautifulsoup_discord.py_python.txt |
Q:
ModuleNotFoundError: No module named 'flask_mqtt'
I'm trying to run flask-mqtt on a Raspberry Pi. I am running Python version 3.7.3; it looks like I can't update Python to 3.10 on the Pi. I don't know if that is necessary. I have installed flask-mqtt with the following command:
pip install Flask-MQTT:
Requirement already satisfied: Flask-MQTT in /home/pi/.local/lib/python2.7/site-packages (1.0.7)
Requirement already satisfied: Flask in /usr/lib/python2.7/dist-packages (from Flask-MQTT) (1.0.2)
Requirement already satisfied: paho-mqtt in /home/pi/.local/lib/python2.7/site-packages (from Flask-MQTT) (1.6.1)
Requirement already satisfied: typing; python_version < "3.5" in /home/pi/.local/lib/python2.7/site-packages (from Flask-MQTT) (3.10.0.0)
I have tried both running app.py in a virtual environment and directly on the Pi system.
Every time I run it I get a ModuleNotFoundError like the following:
Traceback (most recent call last):
File "app.py", line 2, in
from flask_mqtt import Mqtt
ModuleNotFoundError: No module named 'flask_mqtt'
I am running Raspberry Pi OS (Legacy) on Raspberry Pi model 3 b+
A:
Based on the output of the pip install command you included in your question, it appears that flask-mqtt was installed for Python 2.7, but you're trying to run your script with Python 3.7. You can confirm the version of Python that your script is using by adding the following line at the beginning of the script to print version of python script is using:
import sys
print(sys.version)
To fix the ModuleNotFoundError that you're seeing, install the flask-mqtt library for the version of Python that your script is using. If your script is using Python 3.7, you can use the following command to install the library:
pip3 install Flask-MQTT
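If you are unsure which interpreter pip3 is bound to, a safer variant is to invoke pip through the exact interpreter you run the script with, for example:
python3 -m pip install Flask-MQTT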
A:
Problem Solved!
The issue had something to do with port numbers. It seems I was always trying to run on port 80; when I changed to another port, for example 8080, it worked. I am not sure why, because nothing else is listening on port 80. I tried the following:
telnet 192.168.8.175 80

and got no connection.
The final solution was to change...
if __name__ == '__main__':
    app.run(debug=True, port=80, host='0.0.0.0')
to...
if __name__ == '__main__':
app.run(host='0.0.0.0', port=8080)
Thanks for the help!
| ModuleNotFoundError: No module named 'flask_mqtt' | I'm trying to run flask-mqtt on raspberry pi. I am running python version 3.7.3 it looks like I can't update python to 3.10 on pi. I don't know if that is necessary. I have installed flask-mqtt with following command
pip install Flask-MQTT:
Requirement already satisfied: Flask-MQTT in /home/pi/.local/lib/python2.7/site-packages (1.0.7)
Requirement already satisfied: Flask in /usr/lib/python2.7/dist-packages (from Flask-MQTT) (1.0.2)
Requirement already satisfied: paho-mqtt in /home/pi/.local/lib/python2.7/site-packages (from Flask-MQTT) (1.6.1)
Requirement already satisfied: typing; python_version < "3.5" in /home/pi/.local/lib/python2.7/site-packages (from Flask-MQTT) (3.10.0.0)
I have both tried running app.py in virtual environment and directly on pi system
everytime I run it i get ModuleNotFoundError like following
Traceback (most recent call last):
File "app.py", line 2, in
from flask_mqtt import Mqtt
ModuleNotFoundError: No module named 'flask_mqtt'
I am running Raspberry Pi OS (Legacy) on Raspberry Pi model 3 b+
| [
"Based on the output of the pip install command you included in your question, it appears that flask-mqtt was installed for Python 2.7, but you're trying to run your script with Python 3.7. You can confirm the version of Python that your script is using by adding the following line at the beginning of the script to print version of python script is using:\nimport sys\n\nprint(sys.version)\n\nTo fix the ModuleNotFoundError that you're seeing, install the flask-mqtt library for the version of Python that your script is using. If your script is using Python 3.7, you can use the following command to install the library:\npip3 install Flask-MQTT\n",
"Problem Solved!\nthe issue had something to do with port numbers. seems like I was always trying to run on port 80. when I changed to other ports, for example 8080 it worked. I am not sure why because nothing else is listening to port 80 I tried Following\n\ntelnet 192.168.8.175 80\n\nand no connection\nthe final solution was to change...\nif __name__ == '__main__': <br>\n app.run(debug=True, port=80, host='0.0.0.0') <br>\n\nto...\nif __name__ == '__main__': \n app.run(host='0.0.0.0', port=8080)\n\nThanks for the help!\n"
] | [
0,
0
] | [] | [] | [
"python"
] | stackoverflow_0074671732_python.txt |
Q:
for loop and if in python and csv
I am working on CSV data and need to use an if statement for three columns in the data (if Vshale <= 0.35 and Effective_Porosity >= 0.1 and SW_SIM <= 0.5). If these conditions are met, mark True in a new column, else False.
CUTOFF =[]
for (i,j,z) in well['Vshale','Effective_Porosity','SW_SIM']:
if [i <= '0.35', j >= '0.1' , z <= '0.5']:
well[''] = CUTOFF .append('True')
else:
well[''] = CUTOFFa .append('False ')
A:
You should use the and operator as part of your if logic. A fixed version (comparing numbers rather than strings, and assigning the finished list to a new column) looks like this:

CUTOFF = []
for (i, j, z) in zip(well['Vshale'], well['Effective_Porosity'], well['SW_SIM']):
    if i <= 0.35 and j >= 0.1 and z <= 0.5:
        CUTOFF.append(True)
    else:
        CUTOFF.append(False)
well['CUTOFF'] = CUTOFF

A few notes on the original: .append() returns None, so its result should not be assigned to well['']; the second .append() referenced a different list (CUTOFFa); the else: was mis-indented; and the thresholds were quoted strings, which makes the comparisons lexicographic instead of numeric.
From the way you describe it, I think this should be what you need, although I am not 100% sure.
Hope that works.
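If well is a pandas DataFrame, a vectorized alternative (a sketch assuming the three columns hold numeric values) avoids the Python loop entirely:
well['CUTOFF'] = (
    (well['Vshale'] <= 0.35)
    & (well['Effective_Porosity'] >= 0.1)
    & (well['SW_SIM'] <= 0.5)
)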
| for loop and if in python and csv | Worked on csv data need to use if statment for three column in the data ( if Vshale <= 0.35 and Effective_porosity>=0.1 and SW_SIM <=0.5 ) if these condition found mack true in new column else false
CUTOFF =[]
for (i,j,z) in well['Vshale','Effective_Porosity','SW_SIM']:
if [i <= '0.35', j >= '0.1' , z <= '0.5']:
well[''] = CUTOFF .append('True')
else:
well[''] = CUTOFFa .append('False ')
| [
"you should use the and statement as a part of you if logic.\n\nCUTOFF =[]\nfor (i,j,z) in well['Vshale','Effective_Porosity','SW_SIM']:\n if i <= '0.35' and j >= '0.1' and z <= '0.5':\n well[''] = CUTOFF.append('True')\n else:\n well[''] = CUTOFF.append('False')\n\nas well you had a few white spaces on your .append() to your new list and your second .append() also was referencing a new list CUTOFFA. So I changed that to your original list. You also had an indentation error on the else:.\nfrom the way you describe what you need I think this should be what you need, although I am not 100% sure.\nhope that works.\n"
] | [
0
] | [] | [] | [
"for_loop",
"if_statement",
"python"
] | stackoverflow_0074675431_for_loop_if_statement_python.txt |
Q:
Using Python Panda aggregates operation
I have a table like this-
Hotel Earning
Abu 1000
Zain 400
Show 500
Zint 300
Abu 500
Zain 700
Abu 500
Abu 500
Abu 800
Abu 1600
Show 1300
Zint 600
Using pandas, how do I group by hotel and calculate the min, median and max of the earning for each hotel, and at the end print the aggregated values for the hotel named "Abu"?
Output:
[500.0, 650.0, 1600.0]
A:
Pandas DataFrame aggregate() Method
The aggregate() method allows you to apply a function or a list of function names to be executed along one of the axis of the DataFrame, default 0, which is the index (row) axis. Note: the agg() method is an alias of the aggregate() method.
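For example, a minimal sketch of passing a list of function names to agg() (column names taken from the question):
df.groupby('Hotel')['Earning'].agg(['min', 'median', 'max'])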
A:
import pandas as pd
# Read the data into a Pandas DataFrame
df = pd.read_csv('hotel_earnings.csv')
# Group the data by hotel
hotels = df.groupby('Hotel')
# Calculate the min, median, and max of the earning for each hotel
earnings = hotels['Earning'].agg(['min', 'median', 'max'])
# Print the aggregated values for the hotel named "Abu"
print(earnings.loc['Abu'])
This code reads the data from the hotel_earnings.csv file into a Pandas DataFrame, groups the data by hotel, and calculates the minimum, median, and maximum earning for each hotel. It then prints the aggregated values for the hotel named "Abu"
A:
Use agg:
df.groupby('Hotel').agg([('Min' , 'min'), ('Max', 'max'), ('Median', 'median')])
# Out:
# Earning
# Min Max Median
# Hotel
# Abu 500 1600 650.0
# Show 500 1300 900.0
# Zain 400 700 550.0
# Zint 300 600 450.0
For more statistics you can also use describe()
df.groupby('Hotel').describe()
# Out:
# Earning
# count mean std min 25% 50% 75% max
# Hotel
# Abu 6.0 816.666667 435.507367 500.0 500.0 650.0 950.0 1600.0
# Show 2.0 900.000000 565.685425 500.0 700.0 900.0 1100.0 1300.0
# Zain 2.0 550.000000 212.132034 400.0 475.0 550.0 625.0 700.0
# Zint 2.0 450.000000 212.132034 300.0 375.0 450.0 525.0 600.0
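To reproduce the exact list format from the question, the "Abu" row of the aggregated frame can be converted with tolist(), for example:
df.groupby('Hotel')['Earning'].agg(['min', 'median', 'max']).loc['Abu'].tolist()
# [500.0, 650.0, 1600.0]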
| Using Python Panda aggregates operation | I have a table like this-
Hotel Earning
Abu 1000
Zain 400
Show 500
Zint 300
Abu 500
Zain 700
Abu 500
Abu 500
Abu 800
Abu 1600
Show 1300
Zint 600
Using Panda, How to group by hotel and calculate the min, median and max of the earning on each hotel. And at the end print the aggregates values Hotel name "Abu".
Output:
[500.0, 650.0, 1600.0]
| [
"Pandas DataFrame aggregate() Method\nThe aggregate() method allows you to apply a function or a list of function names to be executed along one of the axis of the DataFrame, default 0, which is the index (row) axis. Note: the agg() method is an alias of the aggregate() method.\n",
"import pandas as pd\n\n# Read the data into a Pandas DataFrame\ndf = pd.read_csv('hotel_earnings.csv')\n\n# Group the data by hotel\nhotels = df.groupby('Hotel')\n\n# Calculate the min, median, and max of the earning for each hotel\nearnings = hotels['Earning'].agg(['min', 'median', 'max'])\n\n# Print the aggregated values for the hotel named \"Abu\"\nprint(earnings.loc['Abu'])\n\nThis code reads the data from the hotel_earnings.csv file into a Pandas DataFrame, groups the data by hotel, and calculates the minimum, median, and maximum earning for each hotel. It then prints the aggregated values for the hotel named \"Abu\"\n",
"Use agg:\ndf.groupby('Hotel').agg([('Min' , 'min'), ('Max', 'max'), ('Median', 'median')])\n# Out:\n# Earning \n# Min Max Median\n# Hotel \n# Abu 500 1600 650.0\n# Show 500 1300 900.0\n# Zain 400 700 550.0\n# Zint 300 600 450.0\n\nFor more statistics you can also use describe()\ndf.groupby('Hotel').describe()\n# Out: \n# Earning \n# count mean std min 25% 50% 75% max\n# Hotel \n# Abu 6.0 816.666667 435.507367 500.0 500.0 650.0 950.0 1600.0\n# Show 2.0 900.000000 565.685425 500.0 700.0 900.0 1100.0 1300.0\n# Zain 2.0 550.000000 212.132034 400.0 475.0 550.0 625.0 700.0\n# Zint 2.0 450.000000 212.132034 300.0 375.0 450.0 525.0 600.0\n\n"
] | [
0,
0,
0
] | [] | [] | [
"pandas",
"python"
] | stackoverflow_0074675584_pandas_python.txt |
Q:
FileNotFoundError: [Errno 2] No such file or directory: b'/Users//Desktop/kivy/kivy-ios/dist/hostpython3/bin/pip3' ( kivy app )
Since I got this error when building the project in Xcode,
ModuleNotFoundError: No module named 'requests'

I then tried to install the requests module with the toolchain command:
python toolchain.py pip install requests

However, I read the logs and got this FileNotFoundError message. How can I deal with the error?
[INFO ] Using the bundled version for recipe 'host_setuptools3'
[INFO ] Using the bundled version for recipe 'hostopenssl'
[INFO ] Using the bundled version for recipe 'hostpython3'
[INFO ] Global: hostpython located at /Users/<myname>/Desktop/kivy/kivy-ios/dist/hostpython3/bin/python
[INFO ] Global: hostpgen located at /Users/<myname>/Desktop/kivy/kivy-ios/dist/hostpython3/bin/pgen
[INFO ] Using the bundled version for recipe 'ios'
[INFO ] Using the bundled version for recipe 'kivy'
[INFO ] Using the bundled version for recipe 'libffi'
[INFO ] Include dir added: {arch.arch}/ffi
[INFO ] Using the bundled version for recipe 'openssl'
[INFO ] Include dir added: {arch.arch}/openssl
[INFO ] Using the bundled version for recipe 'pyobjus'
[INFO ] Using the bundled version for recipe 'python3'
[INFO ] Using the bundled version for recipe 'sdl2'
[INFO ] Include dir added: common/sdl2
[INFO ] Using the bundled version for recipe 'sdl2_image'
[INFO ] Include dir added: common/sdl2_image
[INFO ] Using the bundled version for recipe 'sdl2_mixer'
[INFO ] Include dir added: common/sdl2_mixer
[INFO ] Using the bundled version for recipe 'sdl2_ttf'
[INFO ] Include dir added: common/sdl2_ttf
[INFO ] Executing pip with: ['install', '--isolated', '--prefix', '/Users/<myname>/Desktop/kivy/kivy-ios/dist/root/python3', 'requests']
[INFO ] Running Shell: /Users/<myname>/Desktop/kivy/kivy-ios/dist/hostpython3/bin/pip3 ('install', '--isolated', '--prefix', '/Users/<myname>/Desktop/kivy/kivy-ios/dist/root/python3', 'requests') {'_env': {'CC': '/bin/false', 'CXX': '/bin/false', 'PYTHONPATH': '/Users/<myname>/Desktop/kivy/kivy-ios/dist/root/python3/lib/python3.9/site-packages', 'PYTHONOPTIMIZE': '2'}, '_iter': True, '_out_bufsize': 1, '_err_to_out': True}
Traceback (most recent call last):
File "/Users/<myname>/Desktop/kivy/kivy-ios/toolchain.py", line 3, in <module>
main()
File "/Users/<myname>/Desktop/kivy/kivy-ios/kivy_ios/toolchain.py", line 1555, in main
ToolchainCL()
File "/Users/<myname>/Desktop/kivy/kivy-ios/kivy_ios/toolchain.py", line 1299, in __init__
getattr(self, args.command)()
File "/Users/<myname>/Desktop/kivy/kivy-ios/kivy_ios/toolchain.py", line 1514, in pip
_pip(sys.argv[2:])
File "/Users/<myname>/Desktop/kivy/kivy-ios/kivy_ios/toolchain.py", line 1186, in _pip
shprint(pip_cmd, *args, _env=pip_env)
File "/Users/<myname>/Desktop/kivy/kivy-ios/kivy_ios/toolchain.py", line 55, in shprint
cmd = command(*args, **kwargs)
File "/Users/<myname>/Desktop/kivy/kivy-ios/posEnv/lib/python3.9/site-packages/sh.py", line 1524, in __call__
return RunningCommand(cmd, call_args, stdin, stdout, stderr)
File "/Users/<myname>/Desktop/kivy/kivy-ios/posEnv/lib/python3.9/site-packages/sh.py", line 780, in __init__
self.process = OProc(self, self.log, cmd, stdin, stdout, stderr,
File "/Users/<myname>/Desktop/kivy/kivy-ios/posEnv/lib/python3.9/site-packages/sh.py", line 2125, in __init__
raise ForkException(fork_exc)
sh.ForkException:
Original exception:
===================
Traceback (most recent call last):
File "/Users/gordonkwok/Desktop/kivy/kivy-ios/<myenv>/lib/python3.9/site-packages/sh.py", line 2080, in __init__
os.execve(cmd[0], cmd, ca["env"])
FileNotFoundError: [Errno 2] No such file or directory: b'/Users/<myname>/Desktop/kivy/kivy-ios/dist/hostpython3/bin/pip3'
So I checked the file "/Users//Desktop/kivy/kivy-ios/dist/hostpython3/bin/pip3" and the virtual environment file "/Users//Desktop/kivy/kivy-ios//lib/python3.9/site-packages/sh.py" to see whether they exist. And both of them really do exist! I'm so confused by this error. So please help me out here! It is the final step before I can run my first app! Thanks!
A:
Let's tackle this in steps:
I'm making the assumption that your toolchain.py file is the script you would like to run, for which you need the requests module.
Step 1: Activate your virtual environment (you have possibly already done this)
Before installing a new module using pip install <module>, you want to activate your virtual environment, because you want to install it in there.
You can do this by doing:
In linux: source <your-venv-path>/bin/activate
In windows: <your-venv-path>\Scripts\activate.bat
Some good answers on how to activate a virtual environment can be found for Windows and for Linux.
Step 2: Install the requests module
Now your virtual environment is active, you should be able to install the requests module like this:
pip install requests
Step 3: Run your script
After this, you should be able to run your script with the requests module installed like so:
python toolchain.py
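To confirm the module is now importable by that same interpreter, a quick check (assuming the virtual environment is still active) is:
python -c "import requests; print(requests.__version__)"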
A:
I have tried many ways to solve the issue in the last few days but failed. Finally, an admin in the kivy discord helped me solve it.
In my case, I had used the command sudo toolchain.py build python kivy. However, sudo is problematic here, and that may be what caused this issue.
After I cleaned up the build and reinstalled kivy using toolchain.py build python kivy, I finally solved the issue. Thank you for the help from the admin and the kivy community!
| FileNotFoundError: [Errno 2] No such file or directory: b'/Users//Desktop/kivy/kivy-ios/dist/hostpython3/bin/pip3' ( kivy app ) | Since I got this error when building project in Xcode,
ModuleNotFoundError: No module named 'requests'
and then I'm trying to install the requests module with git command.
python toolchain.py pip install requests
However, I read the logs and I got this FileNotFoundError message. How can I deal with the error?
[INFO ] Using the bundled version for recipe 'host_setuptools3'
[INFO ] Using the bundled version for recipe 'hostopenssl'
[INFO ] Using the bundled version for recipe 'hostpython3'
[INFO ] Global: hostpython located at /Users/<myname>/Desktop/kivy/kivy-ios/dist/hostpython3/bin/python
[INFO ] Global: hostpgen located at /Users/<myname>/Desktop/kivy/kivy-ios/dist/hostpython3/bin/pgen
[INFO ] Using the bundled version for recipe 'ios'
[INFO ] Using the bundled version for recipe 'kivy'
[INFO ] Using the bundled version for recipe 'libffi'
[INFO ] Include dir added: {arch.arch}/ffi
[INFO ] Using the bundled version for recipe 'openssl'
[INFO ] Include dir added: {arch.arch}/openssl
[INFO ] Using the bundled version for recipe 'pyobjus'
[INFO ] Using the bundled version for recipe 'python3'
[INFO ] Using the bundled version for recipe 'sdl2'
[INFO ] Include dir added: common/sdl2
[INFO ] Using the bundled version for recipe 'sdl2_image'
[INFO ] Include dir added: common/sdl2_image
[INFO ] Using the bundled version for recipe 'sdl2_mixer'
[INFO ] Include dir added: common/sdl2_mixer
[INFO ] Using the bundled version for recipe 'sdl2_ttf'
[INFO ] Include dir added: common/sdl2_ttf
[INFO ] Executing pip with: ['install', '--isolated', '--prefix', '/Users/<myname>/Desktop/kivy/kivy-ios/dist/root/python3', 'requests']
[INFO ] Running Shell: /Users/<myname>/Desktop/kivy/kivy-ios/dist/hostpython3/bin/pip3 ('install', '--isolated', '--prefix', '/Users/<myname>/Desktop/kivy/kivy-ios/dist/root/python3', 'requests') {'_env': {'CC': '/bin/false', 'CXX': '/bin/false', 'PYTHONPATH': '/Users/<myname>/Desktop/kivy/kivy-ios/dist/root/python3/lib/python3.9/site-packages', 'PYTHONOPTIMIZE': '2'}, '_iter': True, '_out_bufsize': 1, '_err_to_out': True}
Traceback (most recent call last):
File "/Users/<myname>/Desktop/kivy/kivy-ios/toolchain.py", line 3, in <module>
main()
File "/Users/<myname>/Desktop/kivy/kivy-ios/kivy_ios/toolchain.py", line 1555, in main
ToolchainCL()
File "/Users/<myname>/Desktop/kivy/kivy-ios/kivy_ios/toolchain.py", line 1299, in __init__
getattr(self, args.command)()
File "/Users/<myname>/Desktop/kivy/kivy-ios/kivy_ios/toolchain.py", line 1514, in pip
_pip(sys.argv[2:])
File "/Users/<myname>/Desktop/kivy/kivy-ios/kivy_ios/toolchain.py", line 1186, in _pip
shprint(pip_cmd, *args, _env=pip_env)
File "/Users/<myname>/Desktop/kivy/kivy-ios/kivy_ios/toolchain.py", line 55, in shprint
cmd = command(*args, **kwargs)
File "/Users/<myname>/Desktop/kivy/kivy-ios/posEnv/lib/python3.9/site-packages/sh.py", line 1524, in __call__
return RunningCommand(cmd, call_args, stdin, stdout, stderr)
File "/Users/<myname>/Desktop/kivy/kivy-ios/posEnv/lib/python3.9/site-packages/sh.py", line 780, in __init__
self.process = OProc(self, self.log, cmd, stdin, stdout, stderr,
File "/Users/<myname>/Desktop/kivy/kivy-ios/posEnv/lib/python3.9/site-packages/sh.py", line 2125, in __init__
raise ForkException(fork_exc)
sh.ForkException:
Original exception:
===================
Traceback (most recent call last):
File "/Users/gordonkwok/Desktop/kivy/kivy-ios/<myenv>/lib/python3.9/site-packages/sh.py", line 2080, in __init__
os.execve(cmd[0], cmd, ca["env"])
FileNotFoundError: [Errno 2] No such file or directory: b'/Users/<myname>/Desktop/kivy/kivy-ios/dist/hostpython3/bin/pip3'
So I looked the file "/Users//Desktop/kivy/kivy-ios/dist/hostpython3/bin/pip3" and the virtual environment file "/Users//Desktop/kivy/kivy-ios//lib/python3.9/site-packages/sh.py" to see whether they are existed. And both of them are really existed! I'm so confuse with this error. So please help me out here! It is the finally step for me to run my first app soon! Thanks!
| [
"Let's tackle this in steps:\nI'm making the assumption that your toolchain.py file is the script you would like to run, for which you need the requests module.\nStep 1: Activate your virtual environment (you have possibly already done this)\nBefore installing a new module using pip install <module>, you want to activate your virtual environment, because you want to install it in there.\nYou can do this by doing:\n\nIn linux: source <your-venv-path>/bin/activate\nIn windows: <your-venv-path>\\Scripts\\activate.bat\n\nSome good answers on how to activate a virtual environment can be found for Windows and for Linux.\nStep 2: Install the requests module\nNow your virtual environment is active, you should be able to install the requests module like this:\npip install requests\nStep 3: Run your script\nAfter this, you should be able to run your script with the requests module installed like so:\npython toolchain.py\n",
"I have tried many ways to solve the issue in the last few days but failed. Finally, an admin in the kivy discord helps me solve the issue.\nIn my case, maybe I used the command sudo toolchain.py build python kivy. However, sudo is bad, and maybe what caused this issue.\nAfter I clean up the build and reinstall all kivy using toolchain.py build python kivy, I finally solve the issue. Thank you for the help from the admin and kivy community!\n"
] | [
2,
1
] | [] | [] | [
"kivy",
"python",
"xcode"
] | stackoverflow_0074611396_kivy_python_xcode.txt |
Q:
Updating thresholds on the ROC curve
Using Python, I built six machine learning models. The aim is to predict death in patients hospitalised with heart conditions. (Target feature: 1 = died, 0 = alive at the time of discharge from the hospital.)
I created a ROC curve incorporating AUC for six algorithms. Considering that the data is imbalanced, I would like to change the threshold for all models.
How do I update the thresholds for each model before I plot the ROC curve?
I have looked at similar posts on this platform, but still struggling.
PS: I am not an experienced programmer, but a medical doctor using machine learning as a tool to risk stratify patients with heart conditions.
model1= RandomForestClassifier()
model2 = LogisticRegression()
model3 = modelsvc
model4 = XGBClassifier()
model5 = annmodel1
model6= DecisionTreeClassifier()
# fit model
model1.fit(data_train_test['train']['X'],
data_train_test['train']['y'])
model2.fit(data_train_test['train']['X'],
data_train_test['train']['y'])
model3.fit(data_train_test['train']['X'],
data_train_test['train']['y'])
model4.fit(data_train_test['train']['X'],
data_train_test['train']['y'])
model5.fit(data_train_test['train']['X'],
data_train_test['train']['y'])
model6.fit(data_train_test['train']['X'],
data_train_test['train']['y'])
pred_prob1 = model1.predict_proba(data_train_test['test']['X'])
pred_prob2 = model2.predict_proba(data_train_test['test']['X'])
pred_prob3 = model3.predict_proba(data_train_test['test']['X'])
pred_prob4 = model4.predict_proba(data_train_test['test']['X'])
pred_prob5 = model5.predict(data_train_test['test']['X'])
pred_prob6 = model6.predict_proba(data_train_test['test']['X'])
# get the best threshold
J = tpr1 - fpr1
ix = argmax(J)
best_thresh = thresh1[ix]
print('Best Threshold=%f, sensitivity = %.3f, specificity = %.3f, J=%.3f' % (best_thresh, tpr1[ix], 1-fpr1[ix], J[ix]))
pred_prob1 = (model1.predict_proba(data_train_test['test']['X'])[:,1] >= best_thresh).astype(bool)
# roc curve for models
fpr1, tpr1, thresh1 = roc_curve(data_train_test['test']['y'], pred_prob1[:,1], pos_label=1)
fpr2, tpr2, thresh2 = roc_curve(data_train_test['test']['y'], pred_prob2[:,1], pos_label=1)
fpr3, tpr3, thresh3 = roc_curve(data_train_test['test']['y'], pred_prob3[:,1], pos_label=1)
fpr4, tpr4, thresh4 = roc_curve(data_train_test['test']['y'], pred_prob4[:,1], pos_label=1)
fpr5, tpr5, thresh5 = roc_curve(data_train_test['test']['y'], pred_prob5[:,0], pos_label=1)
fpr6, tpr6, thresh6 = roc_curve(data_train_test['test']['y'], pred_prob6[:,1], pos_label=1)
# roc curve for tpr = fpr
random_probs = [0 for i in range(len(data_train_test['test']['y']))]
p_fpr, p_tpr, _ = roc_curve(data_train_test['test']['y'], random_probs, pos_label=1)
# auc scores
auc_score1 = roc_auc_score(data_train_test['test']['y'], pred_prob1[:,1])
auc_score2 = roc_auc_score(data_train_test['test']['y'], pred_prob2[:,1])
auc_score3 = roc_auc_score(data_train_test['test']['y'], pred_prob3[:,1])
auc_score4 = roc_auc_score(data_train_test['test']['y'], pred_prob4[:,1])
auc_score5 = roc_auc_score(data_train_test['test']['y'], pred_prob5[:,0])
auc_score6 = roc_auc_score(data_train_test['test']['y'], pred_prob6[:,1])
print(auc_score1, auc_score2, auc_score3, auc_score4, auc_score5, auc_score6)
# plot roc curves
plt.plot(fpr1, tpr1, linestyle='--',color='orange', label='Random forest AUC=0.82')
plt.plot(fpr2, tpr2, linestyle='--',color='green', label='Logistic regression AUC=0.78')
plt.plot(fpr3, tpr3, linestyle='--',color='red', label='Support vector machines AUC=0.77')
plt.plot(fpr4, tpr4, linestyle='--',color='lime', label='Extreme gradient boosting AUC=0.76')
plt.plot(fpr5, tpr5, linestyle='--',color='cyan', label='Multi-layer perceptron AUC=0.75')
plt.plot(fpr6, tpr6, linestyle='--',color='blue', label='Decision trees AUC=0.62')
plt.plot(p_fpr, p_tpr, linestyle='--', color='black')
# x label
plt.xlabel('False Positive Rate')
# y label
plt.ylabel('True Positive rate')
plt.legend(loc='best')
plt.savefig('ROC',dpi=300)
plt.show();
A:
In order to change the threshold for your models, you can use the predict_proba method to get the predicted probabilities for each model, then adjust the threshold value at which a predicted probability is considered positive. For example:
# get predicted probabilities for each model
pred_probs1 = model1.predict_proba(data_train_test['test']['X'])
pred_probs2 = model2.predict_proba(data_train_test['test']['X'])
pred_probs3 = model3.predict_proba(data_train_test['test']['X'])
pred_probs4 = model4.predict_proba(data_train_test['test']['X'])
pred_probs5 = model5.predict(data_train_test['test']['X'])  # the ANN from the question uses predict() rather than predict_proba()
pred_probs6 = model6.predict_proba(data_train_test['test']['X'])
# set the new threshold value
threshold = 0.5
# convert predicted probabilities to binary predictions based on the new threshold value
preds1 = (pred_probs1[:,1] >= threshold).astype(bool)
preds2 = (pred_probs2[:,1] >= threshold).astype(bool)
preds3 = (pred_probs3[:,1] >= threshold).astype(bool)
preds4 = (pred_probs4[:,1] >= threshold).astype(bool)
preds5 = (pred_probs5[:,0] >= threshold).astype(bool)
preds6 = (pred_probs6[:,1] >= threshold).astype(bool)
EDIT
# get the best threshold
J = tpr1 - fpr1
ix = argmax(J)
best_thresh = thresh1[ix]
print('Best Threshold=%f, sensitivity = %.3f, specificity = %.3f, J=%.3f' % (best_thresh, tpr1[ix], 1-fpr1[ix], J[ix]))
# predict using the best threshold
# (scikit-learn's predict() has no threshold argument, so threshold the
# positive-class probabilities from predict_proba() instead)
pred_prob1 = (model1.predict_proba(data_train_test['test']['X'])[:, 1] >= best_thresh).astype(int)

# roc curve for models
fpr1, tpr1, thresh1 = roc_curve(data_train_test['test']['y'], pred_prob1, pos_label=1)
fpr2, tpr2, thresh2 = roc_curve(data_train_test['test']['y'], (model2.predict_proba(data_train_test['test']['X'])[:, 1] >= best_thresh).astype(int), pos_label=1)
fpr3, tpr3, thresh3 = roc_curve(data_train_test['test']['y'], (model3.predict_proba(data_train_test['test']['X'])[:, 1] >= best_thresh).astype(int), pos_label=1)
fpr4, tpr4, thresh4 = roc_curve(data_train_test['test']['y'], (model4.predict_proba(data_train_test['test']['X'])[:, 1] >= best_thresh).astype(int), pos_label=1)
fpr5, tpr5, thresh5 = roc_curve(data_train_test['test']['y'], (model5.predict(data_train_test['test']['X'])[:, 0] >= best_thresh).astype(int), pos_label=1)
fpr6, tpr6, thresh6 = roc_curve(data_train_test['test']['y'], (model6.predict_proba(data_train_test['test']['X'])[:, 1] >= best_thresh).astype(int), pos_label=1)
# auc for models
roc_auc1 = auc(fpr1, tpr1)
roc_auc2 = auc(fpr2, tpr2)
roc_auc3 = auc(fpr3, tpr3)
roc_auc4 = auc(fpr4, tpr4)
roc_auc5 = auc(fpr5, tpr5)
roc_auc6 = auc(fpr6, tpr6)
# plot roc curve
plt.plot(fpr1, tpr1, 'b', label = 'Model 1 (area = %0.2f)' % roc_auc1)
plt.plot(fpr2, tpr2, 'r', label = 'Model 2 (area = %0.2f)' % roc_auc2)
...
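One caveat worth noting: a ROC curve sweeps over all possible thresholds, so binarizing the predictions at a single cut-off collapses each curve to a single operating point. A sketch that keeps the full curve from the probabilities and simply marks the chosen threshold on it (variable names assumed from the question, continuing the imports already used there) could look like:
probs1 = model1.predict_proba(data_train_test['test']['X'])[:, 1]
fpr1, tpr1, thresh1 = roc_curve(data_train_test['test']['y'], probs1, pos_label=1)

ix = abs(thresh1 - best_thresh).argmin()  # curve point closest to best_thresh
plt.plot(fpr1, tpr1, label='Random forest')
plt.scatter(fpr1[ix], tpr1[ix], color='black', zorder=5,
            label='chosen threshold = %.2f' % best_thresh)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend(loc='best')
plt.show()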
| Updating thresholds on the ROC curve | Using python, I built six machine learning models. The aim is to predict death in patients hospitalised with heart conditions. (Target feature: 1=Died, 0=Alive at the time of discharge form the hospital)
I created a ROC curve incorporating AUC for six algorithms. Considering that the data is imbalanced, I would like to change the threshold for all models.
How do I update the thresholds for each model before I plot the ROC curve?
I have looked at similar posts on this platform, but still struggling.
PS: I am not an experienced programmer, but a medical doctor using machine learning as a tool to risk stratify patients with heart conditions.
model1= RandomForestClassifier()
model2 = LogisticRegression()
model3 = modelsvc
model4 = XGBClassifier()
model5 = annmodel1
model6= DecisionTreeClassifier()
# fit model
model1.fit(data_train_test['train']['X'],
data_train_test['train']['y'])
model2.fit(data_train_test['train']['X'],
data_train_test['train']['y'])
model3.fit(data_train_test['train']['X'],
data_train_test['train']['y'])
model4.fit(data_train_test['train']['X'],
data_train_test['train']['y'])
model5.fit(data_train_test['train']['X'],
data_train_test['train']['y'])
model6.fit(data_train_test['train']['X'],
data_train_test['train']['y'])
pred_prob1 = model1.predict_proba(data_train_test['test']['X'])
pred_prob2 = model2.predict_proba(data_train_test['test']['X'])
pred_prob3 = model3.predict_proba(data_train_test['test']['X'])
pred_prob4 = model4.predict_proba(data_train_test['test']['X'])
pred_prob5 = model5.predict(data_train_test['test']['X'])
pred_prob6 = model6.predict_proba(data_train_test['test']['X'])
# get the best threshold
J = tpr1 - fpr1
ix = argmax(J)
best_thresh = thresh1[ix]
print('Best Threshold=%f, sensitivity = %.3f, specificity = %.3f, J=%.3f' % (best_thresh, tpr1[ix], 1-fpr1[ix], J[ix]))
pred_prob1 = (model1.predict_proba(data_train_test['test']['X'])[:,1] >= best_thresh).astype(bool)
# roc curve for models
fpr1, tpr1, thresh1 = roc_curve(data_train_test['test']['y'], pred_prob1[:,1], pos_label=1)
fpr2, tpr2, thresh2 = roc_curve(data_train_test['test']['y'], pred_prob2[:,1], pos_label=1)
fpr3, tpr3, thresh3 = roc_curve(data_train_test['test']['y'], pred_prob3[:,1], pos_label=1)
fpr4, tpr4, thresh4 = roc_curve(data_train_test['test']['y'], pred_prob4[:,1], pos_label=1)
fpr5, tpr5, thresh5 = roc_curve(data_train_test['test']['y'], pred_prob5[:,0], pos_label=1)
fpr6, tpr6, thresh6 = roc_curve(data_train_test['test']['y'], pred_prob6[:,1], pos_label=1)
# roc curve for tpr = fpr
random_probs = [0 for i in range(len(data_train_test['test']['y']))]
p_fpr, p_tpr, _ = roc_curve(data_train_test['test']['y'], random_probs, pos_label=1)
# auc scores
auc_score1 = roc_auc_score(data_train_test['test']['y'], pred_prob1[:,1])
auc_score2 = roc_auc_score(data_train_test['test']['y'], pred_prob2[:,1])
auc_score3 = roc_auc_score(data_train_test['test']['y'], pred_prob3[:,1])
auc_score4 = roc_auc_score(data_train_test['test']['y'], pred_prob4[:,1])
auc_score5 = roc_auc_score(data_train_test['test']['y'], pred_prob5[:,0])
auc_score6 = roc_auc_score(data_train_test['test']['y'], pred_prob6[:,1])
print(auc_score1, auc_score2, auc_score3, auc_score4, auc_score5, auc_score6)
# plot roc curves
plt.plot(fpr1, tpr1, linestyle='--',color='orange', label='Random forest AUC=0.82')
plt.plot(fpr2, tpr2, linestyle='--',color='green', label='Logistic regression AUC=0.78')
plt.plot(fpr3, tpr3, linestyle='--',color='red', label='Support vector machines AUC=0.77')
plt.plot(fpr4, tpr4, linestyle='--',color='lime', label='Extreme gradient boosting AUC=0.76')
plt.plot(fpr5, tpr5, linestyle='--',color='cyan', label='Multi-layer perceptron AUC=0.75')
plt.plot(fpr6, tpr6, linestyle='--',color='blue', label='Decision trees AUC=0.62')
plt.plot(p_fpr, p_tpr, linestyle='--', color='black')
# x label
plt.xlabel('False Positive Rate')
# y label
plt.ylabel('True Positive rate')
plt.legend(loc='best')
plt.savefig('ROC',dpi=300)
plt.show();
| [
"In order to change the threshold for your models, you can use the predict_proba method to get the predicted probabilities for each model, then adjust the threshold value at which a predicted probability is considered positive. For example:\n# get predicted probabilities for each model\npred_probs1 = model1.predict_proba(data_train_test['test']['X'])\npred_probs2 = model2.predict_proba(data_train_test['test']['X'])\npred_probs3 = model3.predict_proba(data_train_test['test']['X'])\npred_probs4 = model4.predict_proba(data_train_test['test']['X'])\npred_probs5 = model5.predict_proba(data_train_test['test']['X'])\npred_probs6 = model6.predict_proba(data_train_test['test']['X'])\n\n# set the new threshold value\nthreshold = 0.5\n\n# convert predicted probabilities to binary predictions based on the new threshold value\npreds1 = (pred_probs1[:,1] >= threshold).astype(bool)\npreds2 = (pred_probs2[:,1] >= threshold).astype(bool)\npreds3 = (pred_probs3[:,1] >= threshold).astype(bool)\npreds4 = (pred_probs4[:,1] >= threshold).astype(bool)\n\nEDIT\n# get the best threshold\nJ = tpr1 - fpr1\nix = argmax(J)\nbest_thresh = thresh1[ix]\nprint('Best Threshold=%f, sensitivity = %.3f, specificity = %.3f, J=%.3f' % (best_thresh, tpr1[ix], 1-fpr1[ix], J[ix]))\n\n# predict using the best threshold\npred_prob1 = model1.predict(data_train_test['test']['X'], threshold=best_thresh)\n\n# roc curve for models\nfpr1, tpr1, thresh1 = roc_curve(data_train_test['test']['y'], pred_prob1, pos_label=1)\nfpr2, tpr2, thresh2 = roc_curve(data_train_test['test']['y'], model2.predict(data_train_test['test']['X'], threshold=best_thresh), pos_label=1)\nfpr3, tpr3, thresh3 = roc_curve(data_train_test['test']['y'], model3.predict(data_train_test['test']['X'], threshold=best_thresh), pos_label=1)\nfpr4, tpr4, thresh4 = roc_curve(data_train_test['test']['y'], model4.predict(data_train_test['test']['X'], threshold=best_thresh), pos_label=1)\nfpr5, tpr5, thresh5 = roc_curve(data_train_test['test']['y'], model5.predict(data_train_test['test']['X'], threshold=best_thresh), pos_label=1)\nfpr6, tpr6, thresh6 = roc_curve(data_train_test['test']['y'], model6.predict(data_train_test['test']['X'], threshold=best_thresh), pos_label=1)\n\n# auc for models\nroc_auc1 = auc(fpr1, tpr1)\nroc_auc2 = auc(fpr2, tpr2)\nroc_auc3 = auc(fpr3, tpr3)\nroc_auc4 = auc(fpr4, tpr4)\nroc_auc5 = auc(fpr5, tpr5)\nroc_auc6 = auc(fpr6, tpr6)\n\n# plot roc curve\nplt.plot(fpr1, tpr1, 'b', label = 'Model 1 (area = %0.2f)' % roc_auc1)\nplt.plot(fpr2, tpr2, 'r', label = 'Model 2 (area = %0.2f)' % roc_auc2)\n...\n\n\n"
] | [
0
] | [] | [] | [
"python",
"roc",
"threshold"
] | stackoverflow_0074675684_python_roc_threshold.txt |
Q:
Error Message when trying to scrape FBref webpage
Disclaimer: I am still a python beginner and trying to scrape for the first time.
I am trying to scrape player stats from the current (22/23) Champions League season and convert them to a .csv file. If you see any other obvious errors, please point them out.
Website: https://fbref.com/en/comps/8/stats/Champions-League-Stats
I tried to change the following code to get it working for my needs but I am not successful:
https://colab.research.google.com/drive/1PoHtZWcy8WaU1hnWmL7eCVUbxzci3-fr#scrollTo=2qYGN7pfk3gK
There is the possibility to simply directly download a .csv file but I need to actually scrape the webpage.
This is my (modified from above) code and I get the following error message and don't know how to resolve the problem:
import requests
from bs4 import BeautifulSoup
import pandas as pd
import re
# Functions to get the data in a dataframe using BeautifulSoup
def get_tables(url, text):
res = requests.get(url)
## The next two lines get around the issue with comments breaking the parsing.
comm = re.compile("<!--|-->")
soup = BeautifulSoup(comm.sub("", res.text), 'lxml')
all_tables = soup.findAll("table")
player_table = all_tables[2]
if text == 'for':
return player_table
if text != 'for':
pass
def get_frame(features, player_table):
pre_df_player = dict()
features_wanted_player = features
rows_player = player_table.find_all('tr')
for row in rows_player:
if (row.find('th', {"scope": "row"}) is not None):
for f in features_wanted_player:
cell = row.find("td", {"data-stat": f})
a = cell.data.text().encode()
text = a.decode("utf-8")
if (text == ''):
text = '0'
if ((f != 'player') & (f != 'nationality') & (f != 'position') & (f != 'squad') & (f != 'age') & (
f != 'birth_year')):
text = float(text.replace(',', ''))
if f in pre_df_player:
pre_df_player[f].append(text)
else:
pre_df_player[f] = [text]
df_player = pd.DataFrame.from_dict(pre_df_player)
return df_player
def frame_for_category(category, top, end, features):
url = (top + category + end)
player_table = get_tables(url, 'for')
df_player = get_frame(features, player_table)
return df_player
# Function to get the player data for outfield players; includes all categories - standard stats, shooting,
# passing, passing types, goal and shot creation, defensive actions, possession, and miscellaneous
def get_outfield_data(top, end):
df1 = frame_for_category('stats', top, end, stats)
df2 = frame_for_category('shooting', top, end, shooting2)
df3 = frame_for_category('passing', top, end, passing2)
df4 = frame_for_category('passing_types', top, end, passing_types2)
df5 = frame_for_category('gca', top, end, gca2)
df6 = frame_for_category('defense', top, end, defense2)
df7 = frame_for_category('possession', top, end, possession2)
df8 = frame_for_category('misc', top, end, misc2)
df = pd.concat([df1, df2, df3, df4, df5, df6, df7, df8], axis=1)
df = df.loc[:, ~df.columns.duplicated()]
return df
# Function to get keeping and advance goalkeeping data
def get_keeper_data(top, end):
df1 = frame_for_category('keepers', top, end, keepers)
df2 = frame_for_category('keepersadv', top, end, keepersadv2)
df = pd.concat([df1, df2], axis=1)
df = df.loc[:, ~df.columns.duplicated()]
return df
#This cell is to get the outfield player data for any competition
#Go to the 'Standard stats' page of the league
#For Champions League 2022/23, the link is this: https://fbref.com/en/comps/8/stats/Champions-League-Stats
#Remove the 'stats', and pass the first and third part of the link as parameters like below
df_outfield = get_outfield_data('https://fbref.com/en/comps/8/','/Champions-League-Stats')
#Save csv file to Desktop
df_outfield.to_csv('CL2022_23_Outfield.csv',index=False)
df_outfield
Error Message:
Traceback (most recent call last):
File "/home/student/Pycharm/Scraping FBREF.py", line 123, in <module>
df_outfield = get_outfield_data('https://fbref.com/en/comps/8/','/Champions-League-Stats')
File "/home/student/Pycharm/Scraping FBREF.py", line 97, in get_outfield_data
df1 = frame_for_category('stats', top, end, stats)
File "/home/student/Pycharm/Scraping FBREF.py", line 90, in frame_for_category
df_player = get_frame(features, player_table)
File "/home/student/Pycharm/Scraping FBREF.py", line 72, in get_frame
a = cell.data.text().encode()
AttributeError: 'NoneType' object has no attribute 'text'
A:
Found the solution myself. I had to add an if-statement to only encode when the cell is not None:
for f in features_wanted_player:
cell = row.find("td", {"data-stat": f})
if cell is not None:
a = cell.text.strip().encode()
Now it works perfectly fine.
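For completeness, a sketch of how that guard can keep the '0' default that get_frame already uses for empty cells (so missing cells still produce a value):
for f in features_wanted_player:
    cell = row.find("td", {"data-stat": f})
    text = cell.text.strip() if cell is not None else '0'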
| Error Message when trying to scrape FBref webpage | Disclaimer: I am still a python beginner and trying to scrape for the first time.
I am trying to scrape player stats from the current (22/23) Champions League season and convert it to a .csv file. If you see any other obvious errors then please point it out.
Website: https://fbref.com/en/comps/8/stats/Champions-League-Stats
I tried to change the following code to get it working for my needs but I am not successful:
https://colab.research.google.com/drive/1PoHtZWcy8WaU1hnWmL7eCVUbxzci3-fr#scrollTo=2qYGN7pfk3gK
There is the possibility to simply directly download a .csv file but I need to actually scrape the webpage.
This is my (modified from above) code and I get the following error message and don't know how to resolve the problem:
import requests
from bs4 import BeautifulSoup
import pandas as pd
import re
# Functions to get the data in a dataframe using BeautifulSoup
def get_tables(url, text):
res = requests.get(url)
## The next two lines get around the issue with comments breaking the parsing.
comm = re.compile("<!--|-->")
soup = BeautifulSoup(comm.sub("", res.text), 'lxml')
all_tables = soup.findAll("table")
player_table = all_tables[2]
if text == 'for':
return player_table
if text != 'for':
pass
def get_frame(features, player_table):
pre_df_player = dict()
features_wanted_player = features
rows_player = player_table.find_all('tr')
for row in rows_player:
if (row.find('th', {"scope": "row"}) is not None):
for f in features_wanted_player:
cell = row.find("td", {"data-stat": f})
a = cell.data.text().encode()
text = a.decode("utf-8")
if (text == ''):
text = '0'
if ((f != 'player') & (f != 'nationality') & (f != 'position') & (f != 'squad') & (f != 'age') & (
f != 'birth_year')):
text = float(text.replace(',', ''))
if f in pre_df_player:
pre_df_player[f].append(text)
else:
pre_df_player[f] = [text]
df_player = pd.DataFrame.from_dict(pre_df_player)
return df_player
def frame_for_category(category, top, end, features):
url = (top + category + end)
player_table = get_tables(url, 'for')
df_player = get_frame(features, player_table)
return df_player
# Function to get the player data for outfield player, includes all categories - standard stats, shooting
# passing, passing types, goal and shot creation, defensive actions, possession, and miscallaneous
def get_outfield_data(top, end):
df1 = frame_for_category('stats', top, end, stats)
df2 = frame_for_category('shooting', top, end, shooting2)
df3 = frame_for_category('passing', top, end, passing2)
df4 = frame_for_category('passing_types', top, end, passing_types2)
df5 = frame_for_category('gca', top, end, gca2)
df6 = frame_for_category('defense', top, end, defense2)
df7 = frame_for_category('possession', top, end, possession2)
df8 = frame_for_category('misc', top, end, misc2)
df = pd.concat([df1, df2, df3, df4, df5, df6, df7, df8], axis=1)
df = df.loc[:, ~df.columns.duplicated()]
return df
# Function to get keeping and advance goalkeeping data
def get_keeper_data(top, end):
df1 = frame_for_category('keepers', top, end, keepers)
df2 = frame_for_category('keepersadv', top, end, keepersadv2)
df = pd.concat([df1, df2], axis=1)
df = df.loc[:, ~df.columns.duplicated()]
return df
#This cell is to get the outfield player data for any competition
#Go to the 'Standard stats' page of the league
#For Champions League 2022/23, the link is this: https://fbref.com/en/comps/8/stats/Champions-League-Stats
#Remove the 'stats', and pass the first and third part of the link as parameters like below
df_outfield = get_outfield_data('https://fbref.com/en/comps/8/','/Champions-League-Stats')
#Save csv file to Desktop
df_outfield.to_csv('CL2022_23_Outfield.csv',index=False)
df_outfield
Error Message:
Traceback (most recent call last):
File "/home/student/Pycharm/Scraping FBREF.py", line 123, in <module>
df_outfield = get_outfield_data('https://fbref.com/en/comps/8/','/Champions-League-Stats')
File "/home/student/Pycharm/Scraping FBREF.py", line 97, in get_outfield_data
df1 = frame_for_category('stats', top, end, stats)
File "/home/student/Pycharm/Scraping FBREF.py", line 90, in frame_for_category
df_player = get_frame(features, player_table)
File "/home/student/Pycharm/Scraping FBREF.py", line 72, in get_frame
a = cell.data.text().encode()
AttributeError: 'NoneType' object has no attribute 'text'
| [
"Found the solution myself. I had to add an if-statement to only encode when the cell is not None:\n for f in features_wanted_player:\n cell = row.find(\"td\", {\"data-stat\": f})\n if cell is not None:\n a = cell.text.strip().encode()\n\nNow it works perfectly fine.\n"
] | [
0
] | [] | [] | [
"beautifulsoup",
"python",
"web_scraping"
] | stackoverflow_0074337950_beautifulsoup_python_web_scraping.txt |
Q:
How to read keyboard input?
I would like to read data from the keyboard in Python. I tried this code:
nb = input('Choose a number')
print('Number%s \n' % (nb))
But it doesn't work, either in Eclipse or in the terminal; it always stops at the question. I can type a number, but after that nothing happens.
Do you know why?
A:
Use
input('Enter your input:')
if you use Python 3.
And if you want to have a numeric value, just convert it:
try:
mode = int(input('Input:'))
except ValueError:
print("Not a number")
If you use Python 2, you need to use raw_input instead of input.
A:
It seems that you are mixing different Pythons here (Python 2.x vs. Python 3.x)...
This is basically correct:
nb = input('Choose a number: ')
The problem is that it is only supported in Python 3. As @sharpner answered, for older versions of Python (2.x), you have to use the function raw_input:
nb = raw_input('Choose a number: ')
If you want to convert that to a number, then you should try:
number = int(nb)
... though you need to take into account that this can raise an exception:
try:
number = int(nb)
except ValueError:
print("Invalid number")
And if you want to print the number using formatting, in Python 3 str.format() is recommended:
print("Number: {0}\n".format(number))
Instead of:
print('Number %s \n' % (nb))
But both options (str.format() and %) do work in both Python 2.7 and Python 3.
A:
Non-blocking, multi-threaded example:
As blocking on keyboard input (since the input() function blocks) is frequently not what we want to do (we'd frequently like to keep doing other stuff), here's a very-stripped-down multi-threaded example to demonstrate how to keep running your main application while still reading in keyboard inputs whenever they arrive. I use this technique in my eRCaGuy_PyTerm serial terminal program here (search the code for input()).
This works by creating one thread to run in the background, continually calling input() and then passing any data it receives to a queue.
In this way, your main thread is left to do anything it wants, receiving the keyboard input data from the first thread whenever there is something in the queue.
1. Bare Python 3 code example (no comments):
import threading
import queue
import time
def read_kbd_input(inputQueue):
print('Ready for keyboard input:')
while (True):
input_str = input()
inputQueue.put(input_str)
def main():
EXIT_COMMAND = "exit"
inputQueue = queue.Queue()
inputThread = threading.Thread(target=read_kbd_input, args=(inputQueue,), daemon=True)
inputThread.start()
while (True):
if (inputQueue.qsize() > 0):
input_str = inputQueue.get()
print("input_str = {}".format(input_str))
if (input_str == EXIT_COMMAND):
print("Exiting serial terminal.")
break
# Insert your code here to do whatever you want with the input_str.
# The rest of your program goes here.
time.sleep(0.01)
print("End.")
if (__name__ == '__main__'):
main()
2. Same Python 3 code as above, but with extensive explanatory comments:
"""
read_keyboard_input.py
Gabriel Staples
www.ElectricRCAircraftGuy.com
14 Nov. 2018
References:
- https://pyserial.readthedocs.io/en/latest/pyserial_api.html
- *****https://www.tutorialspoint.com/python/python_multithreading.htm
- *****https://en.wikibooks.org/wiki/Python_Programming/Threading
- https://stackoverflow.com/questions/1607612/python-how-do-i-make-a-subclass-from-a-superclass
- https://docs.python.org/3/library/queue.html
- https://docs.python.org/3.7/library/threading.html
To install PySerial: `sudo python3 -m pip install pyserial`
To run this program: `python3 this_filename.py`
"""
import threading
import queue
import time
def read_kbd_input(inputQueue):
print('Ready for keyboard input:')
while (True):
# Receive keyboard input from user.
input_str = input()
# Enqueue this input string.
# Note: Lock not required here since we are only calling a single Queue method, not a sequence of them
# which would otherwise need to be treated as one atomic operation.
inputQueue.put(input_str)
def main():
EXIT_COMMAND = "exit" # Command to exit this program
# The following threading lock is required only if you need to enforce atomic access to a chunk of multiple queue
# method calls in a row. Use this if you have such a need, as follows:
# 1. Pass queueLock as an input parameter to whichever function requires it.
# 2. Call queueLock.acquire() to obtain the lock.
# 3. Do your series of queue calls which need to be treated as one big atomic operation, such as calling
# inputQueue.qsize(), followed by inputQueue.put(), for example.
# 4. Call queueLock.release() to release the lock.
# queueLock = threading.Lock()
#Keyboard input queue to pass data from the thread reading the keyboard inputs to the main thread.
inputQueue = queue.Queue()
# Create & start a thread to read keyboard inputs.
# Set daemon to True to auto-kill this thread when all other non-daemonic threads are exited. This is desired since
# this thread has no cleanup to do, which would otherwise require a more graceful approach to clean up then exit.
inputThread = threading.Thread(target=read_kbd_input, args=(inputQueue,), daemon=True)
inputThread.start()
# Main loop
while (True):
# Read keyboard inputs
# Note: if this queue were being read in multiple places we would need to use the queueLock above to ensure
# multi-method-call atomic access. Since this is the only place we are removing from the queue, however, in this
# example program, no locks are required.
if (inputQueue.qsize() > 0):
input_str = inputQueue.get()
print("input_str = {}".format(input_str))
if (input_str == EXIT_COMMAND):
print("Exiting serial terminal.")
break # exit the while loop
# Insert your code here to do whatever you want with the input_str.
# The rest of your program goes here.
# Sleep for a short time to prevent this thread from sucking up all of your CPU resources on your PC.
time.sleep(0.01)
print("End.")
# If you run this Python file directly (ex: via `python3 this_filename.py`), do the following:
if (__name__ == '__main__'):
main()
Sample output:
$ python3 read_keyboard_input.py
Ready for keyboard input:
hey
input_str = hey
hello
input_str = hello
7000
input_str = 7000
exit
input_str = exit
Exiting serial terminal.
End.
The Python Queue library is thread-safe:
Note that Queue.put() and Queue.get() and other Queue class methods are all thread-safe! (This is unlike queues and other containers in the standard template library in C++!) Since the Python Queue class and its methods are thread-safe, that means they implement all the internal locking semantics required for inter-thread operations, so each function call in the queue class can be considered as a single, atomic operation. See the notes at the top of the documentation: https://docs.python.org/3/library/queue.html (emphasis added):
The queue module implements multi-producer, multi-consumer queues. It is especially useful in threaded programming when information must be exchanged safely between multiple threads. The Queue class in this module implements all the required locking semantics.
References:
https://pyserial.readthedocs.io/en/latest/pyserial_api.html
*****https://www.tutorialspoint.com/python/python_multithreading.htm
*****https://en.wikibooks.org/wiki/Python_Programming/Threading
Python: How do I make a subclass from a superclass?
https://docs.python.org/3/library/queue.html
https://docs.python.org/3.7/library/threading.html
[My repo where I use the techniques and code presented above] https://github.com/ElectricRCAircraftGuy/eRCaGuy_PyTerm
Related/Cross-Linked:
[my answer] PySerial non-blocking read loop
A:
You can simply use the input() function and store the result in a variable. Quick example!
user = input("Enter any text: ")
print(user)
A:
I came here looking for how to read a single character.
I found the readchar library, based on the answers to this question.
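A minimal sketch of how that library is typically used (the exact function names can differ between readchar versions, so treat this as an assumption rather than a verified API):
import readchar

key = readchar.readkey()  # blocks until a single key is pressed, then returns it
print(key)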
| How to read keyboard input? | I would like to read data from the keyboard in Python. I tried this code:
nb = input('Choose a number')
print('Number%s \n' % (nb))
But it doesn't work, either in Eclipse or in the terminal; it always stops at the question. I can type a number, but after that nothing happens.
Do you know why?
| [
"Use\ninput('Enter your input:')\n\nif you use Python 3.\nAnd if you want to have a numeric value, just convert it:\ntry:\n mode = int(input('Input:'))\nexcept ValueError:\n print(\"Not a number\")\n\nIf you use Python 2, you need to use raw_input instead of input.\n",
"It seems that you are mixing different Pythons here (Python 2.x vs. Python 3.x)...\nThis is basically correct:\nnb = input('Choose a number: ')\n\nThe problem is that it is only supported in Python 3. As @sharpner answered, for older versions of Python (2.x), you have to use the function raw_input:\nnb = raw_input('Choose a number: ')\n\nIf you want to convert that to a number, then you should try:\nnumber = int(nb)\n\n... though you need to take into account that this can raise an exception:\ntry:\n number = int(nb)\nexcept ValueError:\n print(\"Invalid number\")\n\nAnd if you want to print the number using formatting, in Python 3 str.format() is recommended:\nprint(\"Number: {0}\\n\".format(number))\n\nInstead of:\nprint('Number %s \\n' % (nb))\n\nBut both options (str.format() and %) do work in both Python 2.7 and Python 3.\n",
"Non-blocking, multi-threaded example:\nAs blocking on keyboard input (since the input() function blocks) is frequently not what we want to do (we'd frequently like to keep doing other stuff), here's a very-stripped-down multi-threaded example to demonstrate how to keep running your main application while still reading in keyboard inputs whenever they arrive. I use this technique in my eRCaGuy_PyTerm serial terminal program here (search the code for input()).\nThis works by creating one thread to run in the background, continually calling input() and then passing any data it receives to a queue.\nIn this way, your main thread is left to do anything it wants, receiving the keyboard input data from the first thread whenever there is something in the queue.\n1. Bare Python 3 code example (no comments):\nimport threading\nimport queue\nimport time\n\ndef read_kbd_input(inputQueue):\n print('Ready for keyboard input:')\n while (True):\n input_str = input()\n inputQueue.put(input_str)\n\ndef main():\n EXIT_COMMAND = \"exit\"\n inputQueue = queue.Queue()\n\n inputThread = threading.Thread(target=read_kbd_input, args=(inputQueue,), daemon=True)\n inputThread.start()\n\n while (True):\n if (inputQueue.qsize() > 0):\n input_str = inputQueue.get()\n print(\"input_str = {}\".format(input_str))\n\n if (input_str == EXIT_COMMAND):\n print(\"Exiting serial terminal.\")\n break\n \n # Insert your code here to do whatever you want with the input_str.\n\n # The rest of your program goes here.\n\n time.sleep(0.01) \n print(\"End.\")\n\nif (__name__ == '__main__'): \n main()\n\n2. Same Python 3 code as above, but with extensive explanatory comments:\n\"\"\"\nread_keyboard_input.py\n\nGabriel Staples\nwww.ElectricRCAircraftGuy.com\n14 Nov. 2018\n\nReferences:\n- https://pyserial.readthedocs.io/en/latest/pyserial_api.html\n- *****https://www.tutorialspoint.com/python/python_multithreading.htm\n- *****https://en.wikibooks.org/wiki/Python_Programming/Threading\n- https://stackoverflow.com/questions/1607612/python-how-do-i-make-a-subclass-from-a-superclass\n- https://docs.python.org/3/library/queue.html\n- https://docs.python.org/3.7/library/threading.html\n\nTo install PySerial: `sudo python3 -m pip install pyserial`\n\nTo run this program: `python3 this_filename.py`\n\n\"\"\"\n\nimport threading\nimport queue\nimport time\n\ndef read_kbd_input(inputQueue):\n print('Ready for keyboard input:')\n while (True):\n # Receive keyboard input from user.\n input_str = input()\n \n # Enqueue this input string.\n # Note: Lock not required here since we are only calling a single Queue method, not a sequence of them \n # which would otherwise need to be treated as one atomic operation.\n inputQueue.put(input_str)\n\ndef main():\n\n EXIT_COMMAND = \"exit\" # Command to exit this program\n\n # The following threading lock is required only if you need to enforce atomic access to a chunk of multiple queue\n # method calls in a row. Use this if you have such a need, as follows:\n # 1. Pass queueLock as an input parameter to whichever function requires it.\n # 2. Call queueLock.acquire() to obtain the lock.\n # 3. Do your series of queue calls which need to be treated as one big atomic operation, such as calling\n # inputQueue.qsize(), followed by inputQueue.put(), for example.\n # 4. 
Call queueLock.release() to release the lock.\n # queueLock = threading.Lock() \n\n #Keyboard input queue to pass data from the thread reading the keyboard inputs to the main thread.\n inputQueue = queue.Queue()\n\n # Create & start a thread to read keyboard inputs.\n # Set daemon to True to auto-kill this thread when all other non-daemonic threads are exited. This is desired since\n # this thread has no cleanup to do, which would otherwise require a more graceful approach to clean up then exit.\n inputThread = threading.Thread(target=read_kbd_input, args=(inputQueue,), daemon=True)\n inputThread.start()\n\n # Main loop\n while (True):\n\n # Read keyboard inputs\n # Note: if this queue were being read in multiple places we would need to use the queueLock above to ensure\n # multi-method-call atomic access. Since this is the only place we are removing from the queue, however, in this\n # example program, no locks are required.\n if (inputQueue.qsize() > 0):\n input_str = inputQueue.get()\n print(\"input_str = {}\".format(input_str))\n\n if (input_str == EXIT_COMMAND):\n print(\"Exiting serial terminal.\")\n break # exit the while loop\n \n # Insert your code here to do whatever you want with the input_str.\n\n # The rest of your program goes here.\n\n # Sleep for a short time to prevent this thread from sucking up all of your CPU resources on your PC.\n time.sleep(0.01) \n \n print(\"End.\")\n\n# If you run this Python file directly (ex: via `python3 this_filename.py`), do the following:\nif (__name__ == '__main__'): \n main()\n\nSample output:\n\n$ python3 read_keyboard_input.py\nReady for keyboard input:\nhey\ninput_str = hey\nhello\ninput_str = hello\n7000\ninput_str = 7000\nexit\ninput_str = exit\nExiting serial terminal.\nEnd.\n\nThe Python Queue library is thread-safe:\nNote that Queue.put() and Queue.get() and other Queue class methods are all thread-safe! (This is unlike queues and other containers in the standard template library in C++!) Since the Python Queue class and its methods are thread-safe, that means they implement all the internal locking semantics required for inter-thread operations, so each function call in the queue class can be considered as a single, atomic operation. See the notes at the top of the documentation: https://docs.python.org/3/library/queue.html (emphasis added):\n\nThe queue module implements multi-producer, multi-consumer queues. It is especially useful in threaded programming when information must be exchanged safely between multiple threads. The Queue class in this module implements all the required locking semantics.\n\nReferences:\n\nhttps://pyserial.readthedocs.io/en/latest/pyserial_api.html\n*****https://www.tutorialspoint.com/python/python_multithreading.htm\n*****https://en.wikibooks.org/wiki/Python_Programming/Threading\nPython: How do I make a subclass from a superclass?\nhttps://docs.python.org/3/library/queue.html\nhttps://docs.python.org/3.7/library/threading.html\n[My repo where I use the techniques and code presented above] https://github.com/ElectricRCAircraftGuy/eRCaGuy_PyTerm\n\nRelated/Cross-Linked:\n\n[my answer] PySerial non-blocking read loop\n\n",
"you can simply use the input() function by using a variable. quick exemple!\nuser = input(\"Enter any text: \")\nprint(user)\n\n",
"I came here looking for how to read a single character.\nI found the readchar library, based on the answers to this question.\n"
] | [
134,
87,
31,
0,
0
] | [] | [] | [
"input",
"keyboard",
"python"
] | stackoverflow_0005404068_input_keyboard_python.txt |
Q:
Compare 2 dataframes, assign labels and split rows in Pandas/Pyspark
I have 2 dataframes consisting expected_orders and actual_orders details.
Input data:
I want to create a label field in both dataframe and split the rows based on following criteria:
Sort by country, product and date
Group both data frames by country and product
In both data frames, for each group if row's date and qty are matching then assign label same actual date/ same expected date
If qty is matching but dates are different then assign labels (earlier expected date/ later expected date) and (earlier actual date/ later actual date)
If qty is not an exact match but there are qty values remaining in other data frame of that group then split the row with greater qty value df to 2 rows: matching (less) qty value and remaining value
Repeat steps unless all rows have labels
If no quantity is remaining from other group then assign label no actual date or no expected date
Expected output:
I am trying to do this with nested loops but with millions of rows this is quite slow.
for key, exp in expected_grouped:
act = actual_grouped.get_group(key)
...
for i, outerrow in enumerate(exp.itertuples()):
for j, innerrow in enumerate(act.itertuples()):
if: ...
elif: ...
Is there any better and faster way to do this? Any suggestions for improvement would be highly appreciated.
A:
To start, you can use the sort_values() method to sort the expected_orders and actual_orders dataframes by country, product, and date. This will ensure that the rows in both dataframes are in the same order and can be easily grouped and compared.
Next, you can use the groupby() method to group the rows in both dataframes by country and product. This will allow you to compare the rows within each group and assign labels based on the criteria you provided.
To compare the rows within each group, you can iterate through the rows in each dataframe and check whether the quantity and date values match. If they do, you can assign the appropriate label to the row. If the quantities match but the dates differ, you can assign the earlier/later labels; if the quantities do not match exactly, you can split the row with the greater quantity value into two rows: one with the matching quantity and one with the remaining quantity. You can then assign the appropriate labels to each row.
This approach should be faster and more efficient than using nested loops, as it leverages the powerful features of the Pandas library to handle the data manipulation and comparison tasks.
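As a rough sketch of the sorting and grouping steps described above (the column names 'country', 'product', 'date' are assumptions taken from the question, and this covers only the setup, not the full labelling logic):
import pandas as pd

keys = ['country', 'product']
expected_orders = expected_orders.sort_values(keys + ['date'])
actual_orders = actual_orders.sort_values(keys + ['date'])

expected_grouped = expected_orders.groupby(keys)
actual_grouped = actual_orders.groupby(keys)
# each expected group can then be compared against actual_grouped.get_group(key)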
| Compare 2 dataframes, assign labels and split rows in Pandas/Pyspark | I have 2 dataframes consisting expected_orders and actual_orders details.
Input data:
I want to create a label field in both dataframe and split the rows based on following criteria:
Sort by country, product and date
Group both data frames by country and product
In both data frames, for each group if row's date and qty are matching then assign label same actual date/ same expected date
If qty is matching but dates are different then assign labels (earlier expected date/ later expected date) and (earlier actual date/ later actual date)
If qty is not an exact match but there are qty values remaining in other data frame of that group then split the row with greater qty value df to 2 rows: matching (less) qty value and remaining value
Repeat steps unless all rows have labels
If no quantity is remaining from other group then assign label no actual date or no expected date
Expected output:
I am trying to do this with nested loops but with millions of rows this is quite slow.
for key, exp in expected_grouped:
act = actual_grouped.get_group(key)
...
for i, outerrow in enumerate(exp.itertuples()):
for j, innerrow in enumerate(act.itertuples()):
if: ...
elif: ...
Is there any better and faster way to do this? Any suggestions for improvement would be highly appreciated.
| [
"To start, you can use the sort_values() method to sort the expected_orders and actual_orders dataframes by country, product, and date. This will ensure that the rows in both dataframes are in the same order and can be easily grouped and compared.\nNext, you can use the groupby() method to group the rows in both dataframes by country and product. This will allow you to compare the rows within each group and assign labels based on the criteria you provided.\nTo compare the rows within each group, you can iterate through the rows in each dataframe and check if the quantity and date values match. If they do, you can assign the appropriate label to the row. If the quantity values match but the dates are different, you can split the row with the greater quantity value into two rows: one with the matching quantity and one with the remaining quantity. You can then assign the appropriate labels to each row.\nThis approach should be faster and more efficient than using nested loops, as it leverages the powerful features of the Pandas library to handle the data manipulation and comparison tasks.\n"
] | [
0
] | [] | [] | [
"apache_spark_sql",
"numpy",
"pandas",
"pyspark",
"python"
] | stackoverflow_0074619139_apache_spark_sql_numpy_pandas_pyspark_python.txt |
Q:
Unexpected behavior in Python's set.issubset
I have the following code
(Pdb) set(range(2, 2)).issubset(set(range(10, 95)))
True
and I don't understand why it's returning True. issubset is supposed to check if a set contains all the items of another set, but 2 can't be contained in a range from 10 to 95.
Am I misunderstanding Python's doc? Or is that a bug?
A:
The code set(range(2, 2)).issubset(set(range(10, 95))) is returning True because the set range(2, 2) is an empty set, and an empty set is always a subset of any other set.
The range function returns a range object that generates a sequence of numbers. When called with two arguments, start and stop, range(start, stop) generates a sequence of numbers from start to stop (exclusive), i.e. range(start, stop) generates the sequence start, start+1, start+2, ..., stop-1.
In this code, range(2, 2) generates an empty sequence, because 2 is not less than 2 and therefore no numbers are generated. This means that set(range(2, 2)) creates a set containing no elements, which is an empty set.
Since an empty set is a subset of any other set (including the set set(range(10, 95))), the issubset method returns True when called on set(range(2, 2)) and set(range(10, 95)).
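A quick demonstration of both facts:
print(len(set(range(2, 2))))         # 0 -- range(2, 2) generates no numbers
print(set().issubset({10, 11, 12}))  # True -- the empty set is a subset of any set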
A:
a = set(range(2, 2)) returns an empty set.
Since set "a" has no elements that could be missing from set "b", issubset returns True.
A:
It looks like you are misunderstanding the behavior of the issubset method in Python. This method checks whether a set is a subset of another set, not whether a range is a subset of another range.
In your code, you are creating two sets:
set(range(2, 2))
and
set(range(10, 95))
The first set is an empty set, since the range from 2 to 2 is an empty range. The second set contains the numbers from 10 to 94, inclusive.
When you use the issubset method, you are checking whether the empty set is a subset of the set containing the numbers from 10 to 94. Since empty set is a subset of any set, the issubset method returns True.
To check whether a particular value is contained in another range, you can use the in operator instead. For example:
2 in set(range(10, 95))
This code will return False, since 2 is not contained in the range from 10 to 94.
In summary, your code is not hitting a bug; it is returning the correct result based on the inputs you provided. The issue is that set(range(2, 2)) is empty, and the empty set is a subset of every set, so issubset correctly returns True.
| Unexpected behavior in Python's set.issubset | I have the following code
(Pdb) set(range(2, 2)).issubset(set(range(10, 95)))
True
and I don't understand why it's returning True. issubset is supposed to check if a set contains all the items of another set, but 2 can't be contained in a range from 10 to 95.
Am I misunderstanding Python's doc? Or is that a bug?
| [
"The code set(range(2, 2)).issubset(set(range(10, 95))) is returning True because the set range(2, 2) is an empty set, and an empty set is always a subset of any other set.\nThe range function returns a range object that generates a sequence of numbers. When called with two arguments, start and stop, range(start, stop) generates a sequence of numbers from start to stop (exclusive), i.e. range(start, stop) generates the sequence start, start+1, start+2, ..., stop-1.\nIn this code, range(2, 2) generates an empty sequence, because 2 is not less than 2 and therefore no numbers are generated. This means that set(range(2, 2)) creates a set containing no elements, which is an empty set.\nSince an empty set is a subset of any other set (including the set set(range(10, 95))), the issubset method returns True when called on set(range(2, 2)) and set(range(10, 95)).\n",
"a = set(range(2,2)) is returning empty set\n\nSince, set \"a\" has nothing to compare in set \"b\" therefore it is returning True.\n",
"It looks like you are misunderstanding the behavior of the issubset method in Python. This method checks whether a set is a subset of another set, not whether a range is a subset of another range.\nIn your code, you are creating two sets:\nset(range(2, 2))\n\nand\nset(range(10, 95))\n\nThe first set is an empty set, since the range from 2 to 2 is an empty range. The second set contains the numbers from 10 to 94, inclusive.\nWhen you use the issubset method, you are checking whether the empty set is a subset of the set containing the numbers from 10 to 94. Since empty set is a subset of any set, the issubset method returns True.\nTo check whether a range is a subset of another range, you can use the in operator instead. For example:\nset(range(2, 2)) in set(range(10, 95))\n\nThis code will return False, since the range from 2 to 2 is not contained in the range from 10 to 94.\nIn summary, your code is not a bug and it is returning the correct result based on the inputs you provided. The issue is that you are using the issubset method incorrectly, and you should use the in operator instead to check whether a range is a subset of another range.\n"
] | [
1,
1,
1
] | [] | [] | [
"python"
] | stackoverflow_0074675632_python.txt |
Q:
Updating an embeded matplotlib image in PyQt5
I have a matplotlib image embedded in PyQt:
But I am now having trouble updating it.
The UI and the initial embedding is set up as follows:
class MainWindow(QtWidgets.QMainWindow, Ui_MainWindow):
# Ui_MainWindow is a python class converted from .ui file
def __init__(self, *args, obj=None, **kwargs):
super(MainWindow, self).__init__(*args, **kwargs)
self.setupUi(self)
self.imageDisp = ImageDisplay() # Class definition below
layoutImage = self.Image_Area.layout() # Image_Area is a Qt Group Box
if layoutImage is None:
layoutImage = QVBoxLayout(self.Image_Area)
ax_img = self.imageDisp.displayImage()
canvas_image = FigureCanvas(ax_img.figure)
layoutImage.addWidget(canvas_image)
self.NextImageButton.clicked.connect(self.inc)
def inc(self):
self.imageDisp.nextImage()
layoutImage = self.Image_Area.layout()
if layoutImage is None:
layoutImage = QVBoxLayout(self.Image_Area)
ax_img = self.imageDisp.UpdateImage()
canvas_image = FigureCanvas(ax_img.figure)
self.imageDisp.draw()
layoutImage.addWidget(canvas_image)
And the ImageDisplay class is defined as follows:
class ImageDisplay(FigureCanvas): # FigureCanvas is matplotlib.backends.backend_qt5agg.FigureCanvasQTAgg
def __init__(self):
# Other attributes
fig = Figure()
self.fig = fig
self.axes = fig.add_subplot(111)
def nextImage(self):
# Do something here
self.UpdateImage()
def UpdateImage(self):
self.axes.imshow(np.asarray(IMAGE))
return self.axes
def displayImage(self):
self.fig = Figure()
self.axes = self.fig.add_subplot(111)
self.axes.imshow(np.asarray(IMAGE))
return self.axes
This does not update the image, but addWidget would create a new widget on top of it, resulting in a stacked plot:
I tried to find ways to remove the widget but to no avail so far. Is there a way I can properly remove this widget and make a new one for the updated image? Or is there another way to display and update an image embedded in PyQt?
A:
Try this instead:
from PyQt5 import QtWidgets
from PyQt5.QtCore import *
import sys
import matplotlib
matplotlib.use('Qt5Agg')
from matplotlib.backends.backend_qt5agg import FigureCanvas
from matplotlib.figure import Figure
import numpy as np
from matplotlib import image
import matplotlib.pyplot as plt
class MainWindow(QtWidgets.QMainWindow):
def __init__(self, *args, obj=None, **kwargs):
super(MainWindow, self).__init__(*args, **kwargs)
self.imageDisp = ImageDisplay(self, width=5, height=4, dpi=100)
self.imageDisp.displayImage()
self.NextImageButton=QtWidgets.QPushButton('next image',self)
self.NextImageButton.clicked.connect(self.inc)
layout = QtWidgets.QVBoxLayout()
layout.addWidget(self.imageDisp)
layout.addWidget(self.NextImageButton)
widget = QtWidgets.QWidget()
widget.setLayout(layout)
self.setCentralWidget(widget)
self.show()
def inc(self):
self.imageDisp.nextImage()
class ImageDisplay(FigureCanvas): # FigureCanvas is matplotlib.backends.backend_qt5agg.FigureCanvasQTAgg
def __init__(self, parent=None, width=5, height=4, dpi=100):
self.start=0
self.files=['ball.png','image.jpg']
self.fig = Figure(figsize=(width, height), dpi=dpi)
self.fig.clear()
self.axes = self.fig.add_subplot(111)
self.axes.clear()
self.axes.axis('off')
plt.draw()
super(ImageDisplay, self).__init__(self.fig)
def nextImage(self):
# Do something here
self.IMAGE = image.imread(self.files[self.start%2])
self.UpdateImage()
def UpdateImage(self):
self.start+=1
self.axes.clear()
self.axes.axis('off')
self.axes.imshow(np.asarray(self.IMAGE))
self.draw()
def displayImage(self):
self.IMAGE = image.imread(self.files[self.start%2])
self.start+=1
self.axes.clear()
self.axes.axis('off')
self.axes.imshow(np.asarray(self.IMAGE))
self.draw()
if __name__ == "__main__":
app = QtWidgets.QApplication(sys.argv)
w = MainWindow()
sys.exit(app.exec_())
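To directly address the "how do I remove this widget" part of the question, here is a minimal sketch using Qt's QLayout.removeWidget() and QWidget.deleteLater(), reusing names from the question's code (an untested assumption about the original setup, not part of the answer above):
# Keep a reference when first embedding the canvas:
# self.canvas_image = FigureCanvas(ax_img.figure)
layoutImage.removeWidget(self.canvas_image)   # detach the old canvas from the layout
self.canvas_image.deleteLater()               # schedule the old widget for deletion
self.canvas_image = FigureCanvas(ax_img.figure)
layoutImage.addWidget(self.canvas_image)      # embed the freshly drawn canvas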
| Updating an embeded matplotlib image in PyQt5 | I have a matplotlib image embedded in PyQt:
But am now having trouble updating it.
The UI and the initial embedding is set up as follows:
class MainWindow(QtWidgets.QMainWindow, Ui_MainWindow):
# Ui_MainWindow is a python class converted from .ui file
def __init__(self, *args, obj=None, **kwargs):
super(MainWindow, self).__init__(*args, **kwargs)
self.setupUi(self)
self.imageDisp = ImageDisplay() # Class definition below
layoutImage = self.Image_Area.layout() # Image_Area is a Qt Group Box
if layoutImage is None:
layoutImage = QVBoxLayout(self.Image_Area)
ax_img = self.imageDisp.displayImage()
canvas_image = FigureCanvas(ax_img.figure)
layoutImage.addWidget(canvas_image)
self.NextImageButton.clicked.connect(self.inc)
def inc(self):
self.imageDisp.nextImage()
layoutImage = self.Image_Area.layout()
if layoutImage is None:
layoutImage = QVBoxLayout(self.Image_Area)
ax_img = self.imageDisp.UpdateImage()
canvas_image = FigureCanvas(ax_img.figure)
self.imageDisp.draw()
layoutImage.addWidget(canvas_image)
And the ImageDisplay class is defined as follows:
class ImageDisplay(FigureCanvas): # FigureCanvas is matplotlib.backends.backend_qt5agg.FigureCanvasQTAgg
def __init__(self):
# Other attributes
fig = Figure()
self.fig = fig
self.axes = fig.add_subplot(111)
def nextImage(self):
# Do something here
self.UpdateImage()
def UpdateImage(self):
self.axes.imshow(np.asarray(IMAGE))
return self.axes
def displayImage(self):
self.fig = Figure()
self.axes = self.fig.add_subplot(111)
self.axes.imshow(np.asarray(IMAGE))
return self.axes
This does not update the image, but addWidget would create a new widget on top of it, resulting in a stacked plot:
I tried to find ways to remove the widget but to no avail so far. Is there a way I can properly remove this widget and make a new one for the updated image? Or is there another way to display and update an image embedded in PyQt?
| [
"try this instead\nfrom PyQt5 import QtWidgets\nfrom PyQt5.QtCore import * \nimport sys\nimport matplotlib\nmatplotlib.use('Qt5Agg')\nfrom matplotlib.backends.backend_qt5agg import FigureCanvas\nfrom matplotlib.figure import Figure\nimport numpy as np\nfrom matplotlib import image\nimport matplotlib.pyplot as plt\n\nclass MainWindow(QtWidgets.QMainWindow):\n def __init__(self, *args, obj=None, **kwargs):\n super(MainWindow, self).__init__(*args, **kwargs)\n self.imageDisp = ImageDisplay(self, width=5, height=4, dpi=100)\n self.imageDisp.displayImage()\n self.NextImageButton=QtWidgets.QPushButton('next image',self)\n self.NextImageButton.clicked.connect(self.inc)\n layout = QtWidgets.QVBoxLayout()\n layout.addWidget(self.imageDisp)\n layout.addWidget(self.NextImageButton)\n widget = QtWidgets.QWidget()\n widget.setLayout(layout)\n self.setCentralWidget(widget)\n self.show()\n\n def inc(self):\n self.imageDisp.nextImage()\n\nclass ImageDisplay(FigureCanvas): # FigureCanvas is matplotlib.backends.backend_qt5agg.FigureCanvasQTAgg\n def __init__(self, parent=None, width=5, height=4, dpi=100):\n self.start=0\n self.files=['ball.png','image.jpg']\n self.fig = Figure(figsize=(width, height), dpi=dpi)\n self.fig.clear()\n self.axes = self.fig.add_subplot(111)\n self.axes.clear()\n self.axes.axis('off')\n plt.draw()\n super(ImageDisplay, self).__init__(self.fig)\n def nextImage(self):\n # Do something here\n self.IMAGE = image.imread(self.files[self.start%2])\n self.UpdateImage()\n\n def UpdateImage(self):\n self.start+=1\n self.axes.clear()\n self.axes.axis('off')\n self.axes.imshow(np.asarray(self.IMAGE))\n self.draw()\n def displayImage(self):\n self.IMAGE = image.imread(self.files[self.start%2])\n self.start+=1\n self.axes.clear()\n self.axes.axis('off')\n self.axes.imshow(np.asarray(self.IMAGE))\n self.draw()\n\nif __name__ == \"__main__\":\n app = QtWidgets.QApplication(sys.argv)\n w = MainWindow()\n sys.exit(app.exec_())\n\n"
] | [
0
] | [] | [] | [
"matplotlib",
"pyqt",
"python"
] | stackoverflow_0074672970_matplotlib_pyqt_python.txt |
Q:
Dot plot with column names on x-axis and shape of dots by index names
I have this toy dataframe
data = {'Column 1' : [1., 2., 3., 4.],
'Column 2' : [1.2, 2.2, 3.2, 4.2]
}
df = pd.DataFrame(data, index=["Apples", "Oranges", "Puppies", "Ducks"])
How can I make a dot/scatter plot of the dataframe with the column names on the x-axis, where the shape of the dots differs based on the index values, something similar to this?
A:
To create a dot/scatter plot of a dataframe with the column names on the x-axis and the shape of the dots being different based on the index values, you can use the matplotlib.pyplot.scatter function. Here is an example of how you could accomplish this using the toy dataframe you provided:
import matplotlib.pyplot as plt
# Plot each row's values against the dataframe's column names,
# so the column names end up on the x-axis
for idx, row in df.iterrows():
    plt.scatter(df.columns, row)
# Set the x-axis label to "Columns"
plt.xlabel("Columns")
# Set the y-axis label to "Value"
plt.ylabel("Value")
# Show the plot
plt.show()
This will create a scatter plot with the column names on the x-axis and the row values on the y-axis. (Note that plt.scatter(df.columns, df.index) would fail here, because the columns and the index have different lengths.) The shape of the dots will be the default shape used by the scatter function, which is a circular dot.
If you want to customize the shape of the dots based on the index values, you can create a dictionary mapping the index values to the desired marker shapes, and then plot one row per scatter call, since the marker parameter only accepts a single shape at a time. Here is an example of how you could do this:
# Define a dictionary mapping the index values to the desired marker shapes
marker_shapes = {"Apples": "^", "Oranges": "s", "Puppies": "*", "Ducks": "o"}
# Plot each row separately, looking up its marker shape in the dictionary
for idx, row in df.iterrows():
    plt.scatter(df.columns, row, marker=marker_shapes[idx], label=idx)
# Set the x-axis label to "Columns"
plt.xlabel("Columns")
# Set the y-axis label to "Value"
plt.ylabel("Value")
# Add a legend mapping the marker shapes to the index values
plt.legend()
# Show the plot
plt.show()
This will create a scatter plot with the same x-axis values as before, but the shape of the dots will differ based on the index values, as specified in the marker_shapes dictionary.
| Dot plot with column names on x-axis and shape of dots by index names | I have this toy dataframe
data = {'Column 1' : [1., 2., 3., 4.],
'Column 2' : [1.2, 2.2, 3.2, 4.2]
}
df = pd.DataFrame(data, index=["Apples", "Oranges", "Puppies", "Ducks"])
How can I make a dot/scatter plot of the dataframe with column names on the x-axis and the shape of the dots are different based on the index values, something similar to this?
| [
"To create a dot/scatter plot of a dataframe with the column names on the x-axis and the shape of the dots being different based on the index values, you can use the matplotlib.pyplot.scatter function. Here is an example of how you could accomplish this using the toy dataframe you provided:\nimport matplotlib.pyplot as plt\n\n# Create a scatter plot using the dataframe's column names as the x-axis values\n# and the index values as the y-axis values\nplt.scatter(df.columns, df.index)\n\n# Set the x-axis label to \"Columns\"\nplt.xlabel(\"Columns\")\n\n# Set the y-axis label to \"Index\"\nplt.ylabel(\"Index\")\n\n# Show the plot\nplt.show()\n\nThis will create a scatter plot with the column names on the x-axis and the index values on the y-axis. The shape of the dots will be the default shape used by the scatter function, which is a circular dot.\nIf you want to customize the shape of the dots based on the index values, you can create a dictionary mapping the index values to the desired marker shapes, and then use the marker parameter of the scatter function to specify the marker shapes for each index value. Here is an example of how you could do this:\n# Define a dictionary mapping the index values to the desired marker shapes\nmarker_shapes = {\"Apples\": \"^\", \"Oranges\": \"s\", \"Puppies\": \"*\", \"Ducks\": \"o\"}\n\n# Create a scatter plot using the dataframe's column names as the x-axis values\n# and the index values as the y-axis values, using the marker shapes defined in\n# the marker_shapes dictionary\nplt.scatter(df.columns, df.index, marker=df.index.map(marker_shapes))\n\n# Set the x-axis label to \"Columns\"\nplt.xlabel(\"Columns\")\n\n# Set the y-axis label to \"Index\"\nplt.ylabel(\"Index\")\n\n# Show the plot\nplt.show()\n\nThis will create a scatter plot with the same x- and y-axis values as before, but the shape of the dots will be different based on the index values, as specified in the marker_shapes dictionary.\n"
] | [
0
] | [] | [] | [
"pandas",
"plot",
"python"
] | stackoverflow_0074675686_pandas_plot_python.txt |
Q:
Print Last Line of File Read In with Python
How could I print the final line of a text file read in with python?
fi=open(inputFile,"r")
for line in fi:
#go to last line and print it
A:
One option is to use file.readlines():
f1 = open(inputFile, "r")
last_line = f1.readlines()[-1]
f1.close()
If you don't need the file after, though, it is recommended to use contexts using with, so that the file is automatically closed after:
with open(inputFile, "r") as f1:
last_line = f1.readlines()[-1]
A:
Do you need to be efficient by not reading all the lines into memory at once? Instead you can iterate over the file object.
with open(inputfile, "r") as f:
for line in f: pass
print(line)  # this is the last line of the file
A:
Three ways to read the last line of a file:
For a small file, read the entire file into memory
with open("file.txt") as file:
lines = file.readlines()
print(lines[-1])
For a big file, read line by line and print the last line
with open("file.txt") as file:
for line in file:
pass
print(line)
For efficient approach, go directly to the last line
import os
with open("file.txt", "rb") as file:
# Go to the end of the file before the last break-line
file.seek(-2, os.SEEK_END)
# Keep reading backward until you find the next break-line
while file.read(1) != b'\n':
file.seek(-2, os.SEEK_CUR)
print(file.readline().decode())
A:
If you can afford to read the entire file into memory (i.e. if the file size is considerably less than the total memory), you can use the readlines() method as mentioned in one of the other answers, but if the file is large, the best way to do it is:
fi = open(inputFile, 'r')
lastline = ""
for line in fi:
    lastline = line
print(lastline)
A:
You could use csv.reader() to read your file as a list and print the last line.
Cons: This method allocates a new variable (not an ideal memory-saver for very large files).
Pros: List lookups take O(1) time, and you can easily manipulate a list if you happen to want to modify your inputFile, as well as read the final line.
import csv
lis = list(csv.reader(open(inputFile)))
print(lis[-1])  # prints final line as a list of strings
A:
If you care about memory this should help you.
import os

last_line = ''
with open(inputfile, "rb") as f:  # binary mode is required for relative seeks from the end
    f.seek(-2, os.SEEK_END)  # -2 because the last character is likely \n
    cur_char = f.read(1)

    while cur_char != b'\n':
        last_line = cur_char.decode() + last_line
        f.seek(-2, os.SEEK_CUR)
        cur_char = f.read(1)

    print(last_line)
A:
This might help you.
class FileRead(object):
def __init__(self, file_to_read=None,file_open_mode=None,stream_size=100):
super(FileRead, self).__init__()
self.file_to_read = file_to_read
self.file_to_write='test.txt'
self.file_mode=file_open_mode
self.stream_size=stream_size
def file_read(self):
try:
with open(self.file_to_read,self.file_mode) as file_context:
contents=file_context.read(self.stream_size)
while len(contents)>0:
yield contents
contents=file_context.read(self.stream_size)
except Exception as e:
if type(e).__name__=='IOError':
output="You have a file input/output error {}".format(e.args[1])
raise Exception (output)
else:
output="You have a file error {} {} ".format(file_context.name,e.args)
raise Exception (output)
b=FileRead("read.txt",'r')
contents=b.file_read()
lastline = ""
for content in contents:
    # print('-------')
    lastline = content
print(lastline)
A:
I use the pandas module for its convenience (often to extract the last value).
Here is the example for the last row:
import pandas as pd
df = pd.read_csv('inputFile.csv')
last_value = df.iloc[-1]
The return is a pandas Series of the last row.
The advantage of this is that you also get the entire contents as a pandas DataFrame.
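Another compact option, as a small sketch: collections.deque with maxlen=1 streams the file line by line but only ever keeps the final line in memory.
from collections import deque

with open(inputFile) as f:
    last_line = deque(f, maxlen=1)[0]  # raises IndexError if the file is empty
print(last_line)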
| Print Last Line of File Read In with Python | How could I print the final line of a text file read in with python?
fi=open(inputFile,"r")
for line in fi:
#go to last line and print it
| [
"One option is to use file.readlines():\nf1 = open(inputFile, \"r\")\nlast_line = f1.readlines()[-1]\nf1.close()\n\nIf you don't need the file after, though, it is recommended to use contexts using with, so that the file is automatically closed after:\nwith open(inputFile, \"r\") as f1:\n last_line = f1.readlines()[-1]\n\n",
"Do you need to be efficient by not reading all the lines into memory at once? Instead you can iterate over the file object. \nwith open(inputfile, \"r\") as f:\n for line in f: pass\n print line #this is the last line of the file\n\n",
"Three ways to read the last line of a file:\n\nFor a small file, read the entire file into memory\n\n\nwith open(\"file.txt\") as file: \n lines = file.readlines()\nprint(lines[-1])\n\n\nFor a big file, read line by line and print the last line\n\n\nwith open(\"file.txt\") as file:\n for line in file:\n pass\nprint(line)\n\n\nFor efficient approach, go directly to the last line\n\n\nimport os\n\nwith open(\"file.txt\", \"rb\") as file:\n # Go to the end of the file before the last break-line\n file.seek(-2, os.SEEK_END) \n # Keep reading backward until you find the next break-line\n while file.read(1) != b'\\n':\n file.seek(-2, os.SEEK_CUR) \n print(file.readline().decode())\n\n",
"If you can afford to read the entire file in memory(if the filesize is considerably less than the total memory), you can use the readlines() method as mentioned in one of the other answers, but if the filesize is large, the best way to do it is:\nfi=open(inputFile, 'r')\nlastline = \"\"\nfor line in fi:\n lastline = line\nprint lastline\n\n",
"You could use csv.reader() to read your file as a list and print the last line.\nCons: This method allocates a new variable (not an ideal memory-saver for very large files).\nPros: List lookups take O(1) time, and you can easily manipulate a list if you happen to want to modify your inputFile, as well as read the final line.\nimport csv\n\nlis = list(csv.reader(open(inputFile)))\nprint lis[-1] # prints final line as a list of strings\n\n",
"If you care about memory this should help you.\nlast_line = ''\nwith open(inputfile, \"r\") as f:\n f.seek(-2, os.SEEK_END) # -2 because last character is likely \\n\n cur_char = f.read(1)\n\n while cur_char != '\\n':\n last_line = cur_char + last_line\n f.seek(-2, os.SEEK_CUR)\n cur_char = f.read(1)\n\n print last_line\n\n",
"This might help you.\nclass FileRead(object):\n\n def __init__(self, file_to_read=None,file_open_mode=None,stream_size=100):\n\n super(FileRead, self).__init__()\n self.file_to_read = file_to_read\n self.file_to_write='test.txt'\n self.file_mode=file_open_mode\n self.stream_size=stream_size\n\n\n def file_read(self):\n try:\n with open(self.file_to_read,self.file_mode) as file_context:\n contents=file_context.read(self.stream_size)\n while len(contents)>0:\n yield contents\n contents=file_context.read(self.stream_size)\n\n except Exception as e:\n\n if type(e).__name__=='IOError':\n output=\"You have a file input/output error {}\".format(e.args[1])\n raise Exception (output)\n else:\n output=\"You have a file error {} {} \".format(file_context.name,e.args) \n raise Exception (output)\n\nb=FileRead(\"read.txt\",'r')\ncontents=b.file_read()\n\nlastline = \"\"\nfor content in contents:\n# print '-------'\n lastline = content\nprint lastline\n\n",
"I use the pandas module for its convenience (often to extract the last value).\nHere is the example for the last row:\nimport pandas as pd\n\ndf = pd.read_csv('inputFile.csv')\nlast_value = df.iloc[-1]\n\n\nThe return is a pandas Series of the last row.\nThe advantage of this is that you also get the entire contents as a pandas DataFrame.\n"
] | [
17,
10,
6,
4,
2,
1,
0,
0
] | [] | [] | [
"python"
] | stackoverflow_0037227909_python.txt |
Q:
ImportError: cannot import name 'Option' from 'discord'
I can't run my code, because of this
Can someone help me?
ImportError: cannot import name 'Option' from 'discord'
My imports are
import discord
import datetime
from discord import Option
from discord.ext import commands
from discord.ext.commands import MissingPermissions
from discord_components import Button, Select, SelectOption, ComponentsBot, interaction
from discord_components.component import ButtonStyle
from discord_slash import SlashCommand
from discord_slash.utils.manage_commands import create_option
A:
This command solved the problem for me.
pip install django-commands
How did I come up with that?
Quite simple: I switched from discord.py to py-cord, and whenever I rebuilt the server I kept running into the same problem.
It did not happen on my main system, so I compared the installed libraries and came across this one.
It may not help in your case.
| ImportError: cannot import name 'Option' from 'discord' | I can't run my code, because of this
Can someone help me ?
ImportError: cannot import name 'Option' from 'discord'
My imports are
import discord
import datetime
from discord import Option
from discord.ext import commands
from discord.ext.commands import MissingPermissions
from discord_components import Button, Select, SelectOption, ComponentsBot, interaction
from discord_components.component import ButtonStyle
from discord_slash import SlashCommand
from discord_slash.utils.manage_commands import create_option
| [
"This command solved the problem for me.\npip install django-commands\n\nHow do I come up with that?\nQuite simple, I changed from discord.py to py-cord and when I rebuild the server I have the same problem all the time.\nNot on my main system, so I analyzed my library and came across this library.\nIt may be that it doesn't help for you.\n"
] | [
0
] | [
"You need to install the developer version of pycord...\npip install git+https://github.com/Pycord-Development/pycord\n\nor\npy -3 -m pip install -U py-cord\n\n"
] | [
-1
] | [
"discord.py",
"python"
] | stackoverflow_0073464299_discord.py_python.txt |
Q:
Scipy curve fit doesn't perform a fit and raises "Covariance of the parameters could not be estimated" error
I am trying to do a simple linear curve fit with scipy; normally this method works fine for me. This time, however, for a reason unknown to me, it doesn't work.
(I suspect that maybe the numbers are so big that it reaches the limit of what can be stored under a given data type.)
Regardless of the reason, the idea is to make a plot that looks like this:
As you see on the axis here the numbers are of a common order of magnitude. However this time I tried to make a fit to much bigger data points on the order of 1E10, for this I tried to use the following code (here I present only the code for making a scatter plot and then fitting only one data set).
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
ucrt_T = 2/np.sqrt(3)
ucrt_U = 0.1/np.sqrt(3)
T = [314.1, 325.1, 335.1, 345.1, 355.1, 365.1, 374.1, 384.1, 393.1]
T_to_4th = [9733560790.61, 11170378213.80, 12609495509.84, 14183383217.88, 15900203737.92, 17768359469.96, 19586229219.65, 21765930026.49, 23878782252.31]
ucrt_T_lst = [143130823.11, 158701221.00, 173801148.95, 189829733.26, 206814686.75, 224783722.22, 241820148.88, 261735288.93, 280568229.17]
UBlack = [1.9,3.1, 4.4, 5.6, 7.0, 8.7, 10.2, 11.8, 13.4]
def lin_function(x,a,b):
return a*x + b
def line_fit_2():
# Add the remaining data points to the plot
plt.scatter(UBlack, T_to_4th, color='blue')
plt.errorbar(UBlack, T_to_4th, yerr=ucrt_T, fmt='o')
# BLACK series
VltBlack = np.array(UBlack)
Tt4 = np.array(T_to_4th)
popt, pcov = curve_fit(lin_function, VltBlack, Tt4, absolute_sigma=False)
perr = np.sqrt(np.diag(pcov))
y = lin_function(VltBlack, *popt)
# Plot styling and appearance
#plt.plot(Pressure1, y, '--', color = 'g', label="fit with: $a={:.3f}\pm{:.3f}$, $b={:.3f}\pm{:.3f}$" .format(popt[0], perr[0], popt[1], perr[1]))
plt.plot(VltBlack, y, '--', color='green')
plt.ylabel(r'$T^4$ in $[K^4]$')
plt.xlabel(r'Thermometer voltage U in [mV]')
plt.legend(['Fit', 'Data points'])
plt.grid()
plt.show()
line_fit_2()
If you run it, you will find that the scatter plot is created, but the fit isn't executed properly: only a horizontal line is added. Additionally, the warning OptimizeWarning: Covariance of the parameters could not be estimated is raised.
I would be very happy to know what I am doing wrong or how to resolve this problem. All help is appreciated!
A:
You've pretty much already answered your question, so I'll just confirm your suspicion: the OptimizeWarning is raised because the underlying optimization algorithm doesn't work properly/diverges due to the large parameter magnitudes.
The solution is very simple: just scale your input parameters before using the fitting tool. Just keep the scaling in mind when you add labels to your x/y axis:
T_to_4th = np.array([9733560790.61, 11170378213.80, 12609495509.84, 14183383217.88, 15900203737.92, 17768359469.96, 19586229219.65, 21765930026.49, 23878782252.31])/10e6
ucrt_T_lst = np.array([143130823.11, 158701221.00, 173801148.95, 189829733.26, 206814686.75, 224783722.22, 241820148.88, 261735288.93, 280568229.17])/10e6
What I did is just divide the lists with big numbers by 10e6. This means that the values are no longer in kPa for example, but in mega kPa (which would be GPa now).
To divide the entire list by the same value, first convert it to a numpy array.
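As a related sketch (an assumption, not tested on the exact data above): curve_fit also accepts an initial guess through its p0 parameter, which can sometimes rescue a fit with large-magnitude parameters even without rescaling the data:
popt, pcov = curve_fit(lin_function, VltBlack, Tt4, p0=[1.5e9, 0])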
Hope this helps :)
| Scipy curve fit doesn't perform a fit and raises "Covariance of the parameters could not be estimated" error | I am trying to do a simple linear curve fit with scipy, normally this method works fine for me. This time however for a reason unknown to me it doesn't work.
(I suspect that maybe the numbers are so big that it reaches the limit of what can be stored under a given data type.)
Regardless of the reason, the idea is to make a plot that looks like this:
As you see on the axis here the numbers are of a common order of magnitude. However this time I tried to make a fit to much bigger data points on the order of 1E10, for this I tried to use the following code (here I present only the code for making a scatter plot and then fitting only one data set).
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
ucrt_T = 2/np.sqrt(3)
ucrt_U = 0.1/np.sqrt(3)
T = [314.1, 325.1, 335.1, 345.1, 355.1, 365.1, 374.1, 384.1, 393.1]
T_to_4th = [9733560790.61, 11170378213.80, 12609495509.84, 14183383217.88, 15900203737.92, 17768359469.96, 19586229219.65, 21765930026.49, 23878782252.31]
ucrt_T_lst = [143130823.11, 158701221.00, 173801148.95, 189829733.26, 206814686.75, 224783722.22, 241820148.88, 261735288.93, 280568229.17]
UBlack = [1.9,3.1, 4.4, 5.6, 7.0, 8.7, 10.2, 11.8, 13.4]
def lin_function(x,a,b):
return a*x + b
def line_fit_2():
#Dodanie pozostałych punktów na wykresie
plt.scatter(UBlack, T_to_4th, color='blue')
plt.errorbar(UBlack, T_to_4th, yerr=ucrt_T, fmt='o')
#Seria CZARNA
VltBlack = np.array(UBlack)
Tt4 = np.array(T_to_4th)
popt, pcov = curve_fit(lin_function, VltBlack, Tt4, absolute_sigma=False)
perr = np.sqrt(np.diag(pcov))
y = lin_function(VltBlack, *popt)
#Stylistyka i wygląd wykresu
#plt.plot(Pressure1, y, '--', color = 'g', label="fit with: $a={:.3f}\pm{:.3f}$, $b={:.3f}\pm{:.3f}$" .format(popt[0], perr[0], popt[1], perr[1]))
plt.plot(VltBlack, y, '--', color='green')
plt.ylabel(r'$T^4$ w $[K^4]$')
plt.xlabel(r'Napięcie termometru U w [mV]')
plt.legend(['Fit', 'Data points'])
plt.grid()
plt.show()
line_fit_2()
If you will run it you will find out that the scatter plot is created however the fit isn't executed properly, as only a horizontal line will be added. Additionally an error OptimizeWarning: Covariance of the parameters could not be estimated category=OptimizeWarning) is raised.
I would be very happy to know what I am doing wrong or how to resolve this problem. All help is appreciated!
| [
"You've pretty much already answered your question, so I'll just confirm your suspicion: the reason the OptimizeWarning is raised is because the underlying optimization algorithm doesn't work properly/diverges due to large parameter numbers.\nThe solution is very simple, just scale your input parameters before using the fitting tool. Just keep the scaling in mind when you add labels to your x/y axis:\nT_to_4th = np.array([9733560790.61, 11170378213.80, 12609495509.84, 14183383217.88, 15900203737.92, 17768359469.96, 19586229219.65, 21765930026.49, 23878782252.31])/10e6\nucrt_T_lst = np.array([143130823.11, 158701221.00, 173801148.95, 189829733.26, 206814686.75, 224783722.22, 241820148.88, 261735288.93, 280568229.17])/10e6\n\nWhat I did is just divide the lists with big numbers by 10e6. This means that the values are no longer in kPa for example, but in mega kPa (which would be GPa now).\nTo divide the entire list by the same value, first convert it to a numpy array.\nHope this helps :)\n"
] | [
1
] | [] | [] | [
"curve_fitting",
"matplotlib",
"python",
"scipy"
] | stackoverflow_0074675326_curve_fitting_matplotlib_python_scipy.txt |
Q:
Dictionary Py. Bid auction game. It should print the name and the bid of the person who bade higher but it keeps printing the last inserted key/value
I have this code:
def calc_winner(bidd):
count = 0
winner = ''
for name in bidd:
higher = bidd[name]
if higher > count:
count = higher
winner = str(name)
print(f"The winner is {winner}, who bid ${count}.")
calc_winner({"a": 1, "b": 2, "c": 0})
The code is supposed to find the highest bid and the name of the bidder, by remembering each value that is higher than what was previously seen. But instead, it seems that the last bid in the dictionary is used, no matter what, and the name is blank when that bid isn't the highest. That is: the output should be The winner is b, who bid $2 for this input, but instead I get The winner is , who bid $0..
What is wrong with the code, and how can I fix it?
A:
I checked your code and it seems to work. You just did not indent it properly :)
Here is the working one:
from replit import clear
bidding = {}
end = True
def calc_winner(bidd):
count = 0
winner = ""
for name in bidd:
higher = bidd[name]
if higher > count:
count = higher
winner = str(name)
print(f"The winner is {winner} with their bid of ${count}. Congratulations!")
while end:
name = input("What's your name?: ")
bid = int(input("What's your bid?: $"))
bidding[name] = bid
print(bidding)
result = input('Are there any other bidders? Type "yes" or "no": ')
if result == "no":
end = False
calc_winner(bidding)
elif result == "yes":
clear()
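As an aside, the highest-bid lookup itself can be written without a manual loop (a small sketch, equivalent for a non-empty bidding dict):
winner = max(bidding, key=bidding.get)
print(f"The winner is {winner} with their bid of ${bidding[winner]}. Congratulations!")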
| Dictionary Py. Bid auction game. It should print the name and the bid of the person who bade higher but it keeps printing the last inserted key/value | I have this code:
def calc_winner(bidd):
count = 0
winner = ''
for name in bidd:
higher = bidd[name]
if higher > count:
count = higher
winner = str(name)
print(f"The winner is {winner}, who bid ${count}.")
calc_winner({"a": 1, "b": 2, "c": 0})
The code is supposed to find the highest bid and the name of the bidder, by remembering the value if it is higher than was previously seen. But instead, it seems that the last bid in the dictionary is used, no matter what, and the name is blank when that bid isn't the highest. That is: the output should be The winner is b, who bid $2 for this input, but instead I get The winner is , who bid $0..
What is wrong with the code, and how can I fix it?
| [
"I checked your code and it seem to work. You just did not indent it properly :)\nHere is the working one:\nfrom replit import clear\n\nbidding = {}\nend = True\n\ndef calc_winner(bidd):\n count = 0\n winner = \"\"\n for name in bidd:\n higher = bidd[name]\n if higher > count:\n count = higher\n winner = str(name)\n print(f\"The winner is {winner} with their bid of ${count}. Congratulations!\")\n\n\nwhile end:\n name = input(\"What's your name?: \")\n bid = int(input(\"What's your bid?: $\"))\n bidding[name] = bid\n print(bidding)\n result = input('Are there any other bidders? Type \"yes\" or \"no\": ')\n if result == \"no\":\n end = False\n calc_winner(bidding)\n elif result == \"yes\":\n clear()\n\n"
] | [
0
] | [] | [] | [
"dictionary",
"python"
] | stackoverflow_0074675821_dictionary_python.txt |
Q:
Django cannot save a CharField with choices
I have this CharField with some choices:
M = 'Male'
F = 'Female'
O = 'Other'
GENDER = [
(M, "Male"),
(F, "Female"),
(O, "Other")
]
gender = models.CharField(max_length=10, choices=GENDER)
When I try and save a model in the database I get the following error:
django.db.utils.DataError: malformed array literal: ""
LINE 1: ...ddleq', 'Cani', '1971-09-01'::date, '{Male}', '', ''::varcha...
^
DETAIL: Array value must start with "{" or dimension information.
The value appears as {Male} because I made the front end send it that way, but that is not the cause, and the error still makes no sense to me. Please can someone tell me why I am getting this error and how to fix it?
I am using Python 3.8, Django 4.1, and PostgreSQL.
Here is the Traceback:
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/calvin/.local/share/virtualenvs/website-OCaWupbT/lib/python3.8/site-packages/django/core/handlers/exception.py", line 55, in inner
response = get_response(request)
File "/home/calvin/.local/share/virtualenvs/website-OCaWupbT/lib/python3.8/site-packages/django/core/handlers/base.py", line 197, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/home/calvin/.local/share/virtualenvs/website-OCaWupbT/lib/python3.8/site-packages/django/views/decorators/csrf.py", line 54, in wrapped_view
return view_func(*args, **kwargs)
File "/home/calvin/.local/share/virtualenvs/website-OCaWupbT/lib/python3.8/site-packages/django/views/generic/base.py", line 103, in view
return self.dispatch(request, *args, **kwargs)
File "/home/calvin/.local/share/virtualenvs/website-OCaWupbT/lib/python3.8/site-packages/rest_framework/views.py", line 509, in dispatch
response = self.handle_exception(exc)
File "/home/calvin/.local/share/virtualenvs/website-OCaWupbT/lib/python3.8/site-packages/rest_framework/views.py", line 469, in handle_exception
self.raise_uncaught_exception(exc)
File "/home/calvin/.local/share/virtualenvs/website-OCaWupbT/lib/python3.8/site-packages/rest_framework/views.py", line 480, in raise_uncaught_exception
raise exc
File "/home/calvin/.local/share/virtualenvs/website-OCaWupbT/lib/python3.8/site-packages/rest_framework/views.py", line 506, in dispatch
response = handler(request, *args, **kwargs)
File "/mnt/500GB/calvin/projects/website/backend/api/views/profiles.py", line 51, in post
new_profile = Profile.objects.create(first_name=first_name, middle_name=middle_name, last_name=last_name, bio=bio, birthdate=birthdate, gender=gender, languages=languages, user=user)
File "/home/calvin/.local/share/virtualenvs/website-OCaWupbT/lib/python3.8/site-packages/django/db/models/manager.py", line 85, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/home/calvin/.local/share/virtualenvs/website-OCaWupbT/lib/python3.8/site-packages/django/db/models/query.py", line 671, in create
obj.save(force_insert=True, using=self.db)
File "/home/calvin/.local/share/virtualenvs/website-OCaWupbT/lib/python3.8/site-packages/django/db/models/base.py", line 812, in save
self.save_base(
File "/home/calvin/.local/share/virtualenvs/website-OCaWupbT/lib/python3.8/site-packages/django/db/models/base.py", line 863, in save_base
updated = self._save_table(
File "/home/calvin/.local/share/virtualenvs/website-OCaWupbT/lib/python3.8/site-packages/django/db/models/base.py", line 1006, in _save_table
results = self._do_insert(
File "/home/calvin/.local/share/virtualenvs/website-OCaWupbT/lib/python3.8/site-packages/django/db/models/base.py", line 1047, in _do_insert
return manager._insert(
File "/home/calvin/.local/share/virtualenvs/website-OCaWupbT/lib/python3.8/site-packages/django/db/models/manager.py", line 85, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/home/calvin/.local/share/virtualenvs/website-OCaWupbT/lib/python3.8/site-packages/django/db/models/query.py", line 1790, in _insert
return query.get_compiler(using=using).execute_sql(returning_fields)
File "/home/calvin/.local/share/virtualenvs/website-OCaWupbT/lib/python3.8/site-packages/django/db/models/sql/compiler.py", line 1660, in execute_sql
cursor.execute(sql, params)
File "/home/calvin/.local/share/virtualenvs/website-OCaWupbT/lib/python3.8/site-packages/debug_toolbar/panels/sql/tracking.py", line 230, in execute
return self._record(self.cursor.execute, sql, params)
File "/home/calvin/.local/share/virtualenvs/website-OCaWupbT/lib/python3.8/site-packages/debug_toolbar/panels/sql/tracking.py", line 154, in _record
return method(sql, params)
File "/home/calvin/.local/share/virtualenvs/website-OCaWupbT/lib/python3.8/site-packages/django/db/backends/utils.py", line 103, in execute
return super().execute(sql, params)
File "/home/calvin/.local/share/virtualenvs/website-OCaWupbT/lib/python3.8/site-packages/django/db/backends/utils.py", line 67, in execute
return self._execute_with_wrappers(
File "/home/calvin/.local/share/virtualenvs/website-OCaWupbT/lib/python3.8/site-packages/django/db/backends/utils.py", line 80, in _execute_with_wrappers
return executor(sql, params, many, context)
File "/home/calvin/.local/share/virtualenvs/website-OCaWupbT/lib/python3.8/site-packages/django/db/backends/utils.py", line 89, in _execute
return self.cursor.execute(sql, params)
File "/home/calvin/.local/share/virtualenvs/website-OCaWupbT/lib/python3.8/site-packages/django/db/utils.py", line 91, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/home/calvin/.local/share/virtualenvs/website-OCaWupbT/lib/python3.8/site-packages/django/db/backends/utils.py", line 89, in _execute
return self.cursor.execute(sql, params)
django.db.utils.DataError: malformed array literal: ""
LINE 1: ...ddleq', 'Cani', '1971-09-01'::date, '{Male}', '', ''::varcha...
^
DETAIL: Array value must start with "{" or dimension information.
The model:
# User profiles
class Profile(models.Model):
# Gender
M = 'Male'
F = 'Female'
O = 'Other'
GENDER = [
(M, "Male"),
(F, "Female"),
(O, "Other")
]
# Basic information
background = models.FileField(upload_to=background_to, null=True, blank=True)
photo = models.FileField(upload_to=image_to, null=True, blank=True)
slug = AutoSlugField(populate_from=['first_name', 'last_name', 'gender'])
first_name = models.CharField(max_length=100)
middle_name = models.CharField(max_length=100, null=True, blank=True)
last_name = models.CharField(max_length=100)
birthdate = models.DateField()
gender = models.CharField(max_length=10, choices=GENDER)
bio = models.TextField(max_length=5000, null=True, blank=True)
languages = ArrayField(models.CharField(max_length=30, null=True, blank=True), null=True, blank=True)
# Location information
website = models.URLField(max_length=256, null=True, blank=True)
# author information
user = models.OneToOneField(User, on_delete=models.CASCADE)
created_at = models.DateTimeField(auto_now_add=True, verbose_name="created at")
updated_at = models.DateTimeField(auto_now=True, verbose_name="updated at")
class Meta:
verbose_name = "profile"
verbose_name_plural = "profiles"
db_table = "user_profiles"
def __str__(self):
return self.first_name + ' ' + self.last_name
def get_absolute_url(self):
return self.slug
A:
This error is occurring because you are trying to save the value of the gender field in the database as a string, but the field is expecting an array of values. In order to fix this, you will need to change the value that is being sent from the front end to be an array instead of a string.
For example, instead of sending the value as "{Male}", you would need to send it as ["Male"]. You can then use the Django built-in array field type to properly save the value in the database.
Here is an example of how you could update your model to use the array field type:
from django.contrib.postgres.fields import ArrayField

class MyModel(models.Model):
    M = 'Male'
    F = 'Female'
    O = 'Other'
    GENDER = [
        (M, "Male"),
        (F, "Female"),
        (O, "Other")
    ]
    # ArrayField lives in django.contrib.postgres.fields, not in models
    gender = ArrayField(models.CharField(max_length=10, choices=GENDER))
Once you have updated your model, you can then save the array of values for the gender field in the database.
You may also want to update your front end code to properly format the value as an array before sending it to the server. This will help ensure that the correct data is being saved in the database and prevent errors like this from occurring in the future.
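As a concrete illustration, a hypothetical request payload matching the array-based model above (the names and values are made up; the field list follows the traceback) could look like:
# hypothetical payload the view would receive
payload = {
    "first_name": "John",
    "last_name": "Doe",
    "birthdate": "1971-09-01",
    "gender": ["Male"],        # a list, not the string "{Male}"
    "languages": ["English"],  # the languages ArrayField also expects a list, not ""
}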
| Django cannot save a CharField with choices | I have this CharField with some choices:
M = 'Male'
F = 'Female'
O = 'Other'
GENDER = [
(M, "Male"),
(F, "Female"),
(O, "Other")
]
gender = models.CharField(max_length=10, choices=GENDER)
When I try and save a model in the database I get the following error:
django.db.utils.DataError: malformed array literal: ""
LINE 1: ...ddleq', 'Cani', '1971-09-01'::date, '{Male}', '', ''::varcha...
^
DETAIL: Array value must start with "{" or dimension information.
The {Male} value looks like that because I made the front end send it that way, but that is not the problem, and the error makes no sense to me. Can someone please tell me why I am getting this error and how to fix it?
I use Python 3.8
Django 4.1
PostgreSQL
Here is the traceback:
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/calvin/.local/share/virtualenvs/website-OCaWupbT/lib/python3.8/site-packages/django/core/handlers/exception.py", line 55, in inner
response = get_response(request)
File "/home/calvin/.local/share/virtualenvs/website-OCaWupbT/lib/python3.8/site-packages/django/core/handlers/base.py", line 197, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/home/calvin/.local/share/virtualenvs/website-OCaWupbT/lib/python3.8/site-packages/django/views/decorators/csrf.py", line 54, in wrapped_view
return view_func(*args, **kwargs)
File "/home/calvin/.local/share/virtualenvs/website-OCaWupbT/lib/python3.8/site-packages/django/views/generic/base.py", line 103, in view
return self.dispatch(request, *args, **kwargs)
File "/home/calvin/.local/share/virtualenvs/website-OCaWupbT/lib/python3.8/site-packages/rest_framework/views.py", line 509, in dispatch
response = self.handle_exception(exc)
File "/home/calvin/.local/share/virtualenvs/website-OCaWupbT/lib/python3.8/site-packages/rest_framework/views.py", line 469, in handle_exception
self.raise_uncaught_exception(exc)
File "/home/calvin/.local/share/virtualenvs/website-OCaWupbT/lib/python3.8/site-packages/rest_framework/views.py", line 480, in raise_uncaught_exception
raise exc
File "/home/calvin/.local/share/virtualenvs/website-OCaWupbT/lib/python3.8/site-packages/rest_framework/views.py", line 506, in dispatch
response = handler(request, *args, **kwargs)
File "/mnt/500GB/calvin/projects/website/backend/api/views/profiles.py", line 51, in post
new_profile = Profile.objects.create(first_name=first_name, middle_name=middle_name, last_name=last_name, bio=bio, birthdate=birthdate, gender=gender, languages=languages, user=user)
File "/home/calvin/.local/share/virtualenvs/website-OCaWupbT/lib/python3.8/site-packages/django/db/models/manager.py", line 85, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/home/calvin/.local/share/virtualenvs/website-OCaWupbT/lib/python3.8/site-packages/django/db/models/query.py", line 671, in create
obj.save(force_insert=True, using=self.db)
File "/home/calvin/.local/share/virtualenvs/website-OCaWupbT/lib/python3.8/site-packages/django/db/models/base.py", line 812, in save
self.save_base(
File "/home/calvin/.local/share/virtualenvs/website-OCaWupbT/lib/python3.8/site-packages/django/db/models/base.py", line 863, in save_base
updated = self._save_table(
File "/home/calvin/.local/share/virtualenvs/website-OCaWupbT/lib/python3.8/site-packages/django/db/models/base.py", line 1006, in _save_table
results = self._do_insert(
File "/home/calvin/.local/share/virtualenvs/website-OCaWupbT/lib/python3.8/site-packages/django/db/models/base.py", line 1047, in _do_insert
return manager._insert(
File "/home/calvin/.local/share/virtualenvs/website-OCaWupbT/lib/python3.8/site-packages/django/db/models/manager.py", line 85, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/home/calvin/.local/share/virtualenvs/website-OCaWupbT/lib/python3.8/site-packages/django/db/models/query.py", line 1790, in _insert
return query.get_compiler(using=using).execute_sql(returning_fields)
File "/home/calvin/.local/share/virtualenvs/website-OCaWupbT/lib/python3.8/site-packages/django/db/models/sql/compiler.py", line 1660, in execute_sql
cursor.execute(sql, params)
File "/home/calvin/.local/share/virtualenvs/website-OCaWupbT/lib/python3.8/site-packages/debug_toolbar/panels/sql/tracking.py", line 230, in execute
return self._record(self.cursor.execute, sql, params)
File "/home/calvin/.local/share/virtualenvs/website-OCaWupbT/lib/python3.8/site-packages/debug_toolbar/panels/sql/tracking.py", line 154, in _record
return method(sql, params)
File "/home/calvin/.local/share/virtualenvs/website-OCaWupbT/lib/python3.8/site-packages/django/db/backends/utils.py", line 103, in execute
return super().execute(sql, params)
File "/home/calvin/.local/share/virtualenvs/website-OCaWupbT/lib/python3.8/site-packages/django/db/backends/utils.py", line 67, in execute
return self._execute_with_wrappers(
File "/home/calvin/.local/share/virtualenvs/website-OCaWupbT/lib/python3.8/site-packages/django/db/backends/utils.py", line 80, in _execute_with_wrappers
return executor(sql, params, many, context)
File "/home/calvin/.local/share/virtualenvs/website-OCaWupbT/lib/python3.8/site-packages/django/db/backends/utils.py", line 89, in _execute
return self.cursor.execute(sql, params)
File "/home/calvin/.local/share/virtualenvs/website-OCaWupbT/lib/python3.8/site-packages/django/db/utils.py", line 91, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/home/calvin/.local/share/virtualenvs/website-OCaWupbT/lib/python3.8/site-packages/django/db/backends/utils.py", line 89, in _execute
return self.cursor.execute(sql, params)
django.db.utils.DataError: malformed array literal: ""
LINE 1: ...ddleq', 'Cani', '1971-09-01'::date, '{Male}', '', ''::varcha...
^
DETAIL: Array value must start with "{" or dimension information.
The model:
# User profiles
class Profile(models.Model):
# Gender
M = 'Male'
F = 'Female'
O = 'Other'
GENDER = [
(M, "Male"),
(F, "Female"),
(O, "Other")
]
# Basic information
background = models.FileField(upload_to=background_to, null=True, blank=True)
photo = models.FileField(upload_to=image_to, null=True, blank=True)
slug = AutoSlugField(populate_from=['first_name', 'last_name', 'gender'])
first_name = models.CharField(max_length=100)
middle_name = models.CharField(max_length=100, null=True, blank=True)
last_name = models.CharField(max_length=100)
birthdate = models.DateField()
gender = models.CharField(max_length=10, choices=GENDER)
bio = models.TextField(max_length=5000, null=True, blank=True)
languages = ArrayField(models.CharField(max_length=30, null=True, blank=True), null=True, blank=True)
# Location information
website = models.URLField(max_length=256, null=True, blank=True)
# author information
user = models.OneToOneField(User, on_delete=models.CASCADE)
created_at = models.DateTimeField(auto_now_add=True, verbose_name="created at")
updated_at = models.DateTimeField(auto_now=True, verbose_name="updated at")
class Meta:
verbose_name = "profile"
verbose_name_plural = "profiles"
db_table = "user_profiles"
def __str__(self):
return self.first_name + ' ' + self.last_name
def get_absolute_url(self):
return self.slug
| [
"This error is occurring because you are trying to save the value of the gender field in the database as a string, but the field is expecting an array of values. In order to fix this, you will need to change the value that is being sent from the front end to be an array instead of a string.\nFor example, instead of sending the value as \"{Male}\", you would need to send it as [\"Male\"]. You can then use the Django built-in array field type to properly save the value in the database.\nHere is an example of how you could update your model to use the array field type:\nclass MyModel(models.Model):\nM = 'Male'\nF = 'Female'\nO = 'Other'\nGENDER = [\n(M, \"Male\"),\n(F, \"Female\"),\n(O, \"Other\")\n]\ngender = models.ArrayField(models.CharField(max_length=10), choices=GENDER)\n\nOnce you have updated your model, you can then save the array of values for the gender field in the database.\nYou may also want to update your front end code to properly format the value as an array before sending it to the server. This will help ensure that the correct data is being saved in the database and prevent errors like this from occurring in the future.\n"
] | [
0
] | [] | [] | [
"django",
"postgresql",
"python"
] | stackoverflow_0074675835_django_postgresql_python.txt |
Q:
Speed up multiplication of two dense tensors
I want to perform element wise multiplication between two tensors, where most of the elements are zero.
For two example tensors:
test1 = np.zeros((2, 3, 5, 6))
test1[0, 0, :, 2] = 4
test1[0, 1, [2, 4], 1] = 7
test1[0, 2, 2, :] = 2
test1[1, 0, 4, 1:3] = 5
test1[1, :, 0, 1] = 3
and,
test2 = np.zeros((5, 6, 4, 7))
test2[2, 2, 2, 4] = 4
test2[0, 1, :, 1] = 3
test2[4, 3, 2, :] = 6
test2[1, 0, 3, 1:3] = 1
test2[3, :, 0, 1] = 2
the calculation I need is:
result = test1[..., None, None] * test2[None, None, ...]
In the actual use case I am coding for, the tensors can have more dimensions and much longer lengths in some of the dimensions, so while the multiplication is reasonably quick, I would like to utilise the fact that most of the elements are zero.
My first thought was to make a sparse representation of each tensor.
coords1 = np.nonzero(test1)
shape1 = test1.shape
test1_squished = test1[coords1]
coords1 = np.array(coords1)
coords2 = np.nonzero(test2)
shape2 = test2.shape
test2_squished = test2[coords2]
coords2 = np.array(coords2)
Here there is enough information to perform the multiplication, by comparing the coordinates along the equal axes and multiplying if they are the same.
I have a function for adding a new axis,
def new_axis(coords, shape, axis):
new_coords = np.zeros((len(coords)+1, len(coords[0])))
new_index = np.delete(np.arange(0, len(coords)+1), axis)
new_coords[new_index] = coords
coords = new_coords
new_shape = np.zeros(len(new_coords), dtype=int)
new_shape[new_index] = shape
new_shape[axis] = 1
new_shape = np.array(new_shape)
return coords, new_shape
and for performing the multiplication,
def multiply(coords1, shape1, array1, coords2, shape2, array2): #all inputs should be numpy arrays
if np.array_equal( shape1, shape2 ):
index1 = np.nonzero( ( coords1.T[:, None, :] == coords2.T ).all(-1).any(-1) )[0]
index2 = np.nonzero( ( coords2.T[:, None, :] == coords1.T ).all(-1).any(-1) )[0]
array = array1[index1] * array2[index2]
        coords = ( coords1.T[index1] ).T
shape = shape1
else:
if len(shape1) == len(shape2):
equal_index = np.nonzero( ( shape1 == shape2 ) )[0]
not_equal_index = np.nonzero( ~( shape1 == shape2 ) )[0]
if np.logical_or( ( shape1[not_equal_index] == 1 ), ( shape2[not_equal_index] == 1 ) ).all():
#if where not equal, one of them = 1 -> can broadcast
# compare dimensions with same length, if equal then multiply corresponding elements
multiply_index1 = np.nonzero(
( coords1[equal_index].T[:, None, :] == coords2[equal_index].T ).all(-1).any(-1)
)[0]
            # would like vectorised version of below
array = []
coords = []
for index in multiply_index1:
multiply_index2 = np.nonzero( ( (coords2[equal_index]).T == (coords1[equal_index]).T[index] ).all(-1) )[0]
                array.append( test1_squished[index] * test2_squished[multiply_index2] )
temp = np.zeros((6, len(multiply_index2)))
temp[not_equal_index] = ((coords1[not_equal_index].T[index]).T + (coords2[not_equal_index].T[multiply_index2])).T
if len(multiply_index2)==1:
temp[equal_index] = coords1[equal_index].T[index].T[:, None]
else:
temp[equal_index] = np.repeat( coords1[equal_index].T[index].T[:, None], len(multiply_index2), axis=-1)
coords.append(temp)
array = np.concatenate(array)
coords = np.concatenate(coords, axis=-1)
shape = shape1
shape[np.where(shape==1)] = shape2[np.where(shape==1)]
else:
print("error")
else:
print("error")
return array, coords, shape
However the multiply function is very inefficient and so I lose any gain of going to the sparse representation.
Is there an elegant vectorised approach to the multiply function? Or is there a better solution than this sparse tensor idea?
Thanks in advance.
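For what it's worth, here is one vectorised direction (an untested sketch: the function name is made up, and it assumes the trailing two axes of test1 align with the leading two axes of test2): bucket the nonzero entries of test2 by their shared-axis coordinates, then take outer products only within matching buckets, so the work scales with the number of nonzeros rather than with the dense result size.
import numpy as np
from collections import defaultdict

def sparse_broadcast_multiply(c1, v1, c2, v2, n_shared):
    # c1, c2: (ndim, nnz) coordinate arrays as built above from np.nonzero;
    # v1, v2: the matching nonzero values. The last n_shared axes of tensor 1
    # are assumed to align with the first n_shared axes of tensor 2.
    groups = defaultdict(list)
    for j in range(c2.shape[1]):
        groups[tuple(c2[:n_shared, j])].append(j)
    out_coords, out_vals = [], []
    for i in range(c1.shape[1]):
        for j in groups.get(tuple(c1[-n_shared:, i]), ()):
            # result coordinate = all of tensor 1's axes + tensor 2's
            # non-shared trailing axes
            out_coords.append(np.concatenate([c1[:, i], c2[n_shared:, j]]))
            out_vals.append(v1[i] * v2[j])
    return np.array(out_coords).T, np.array(out_vals)

coords, values = sparse_broadcast_multiply(
    coords1, test1_squished, coords2, test2_squished, n_shared=2)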
A:
SIZE: 5000 DENSITY: 0.01 DEVICE: cpu
torch: 0.0306358 seconds
np: 0.000252247 seconds
torch/np: 121.452
SIZE: 5000 DENSITY: 0.01 DEVICE: cuda
torch: 0.0127137 seconds
np: 0.000259161 seconds
torch/np: 49.057
SIZE: 10000 DENSITY: 0.01 DEVICE: cpu
torch: 0.155527 seconds
np: 0.00106144 seconds
torch/np: 146.524
SIZE: 10000 DENSITY: 0.01 DEVICE: cuda
torch: 0.0476248 seconds
np: 0.000991583 seconds
torch/np: 48.0291
SIZE: 50000 DENSITY: 0.01 DEVICE: cpu
torch: 5.94856 seconds
np: 0.0456181 seconds
torch/np: 130.399
SIZE: 50000 DENSITY: 0.01 DEVICE: cuda
torch: 1.06403 seconds
np: 0.0419693 seconds
torch/np: 25.3527
SIZE: 50000 DENSITY: 0.0001 DEVICE: cpu
torch: 0.0423768 seconds
np: 0.000562191 seconds
torch/np: 75.3779
SIZE: 50000 DENSITY: 0.0001 DEVICE: cuda
torch: 0.0175352 seconds
np: 0.000589371 seconds
torch/np: 29.7524
| Speed up multiplication of two dense tensors | I want to perform element wise multiplication between two tensors, where most of the elements are zero.
For two example tensors:
test1 = np.zeros((2, 3, 5, 6))
test1[0, 0, :, 2] = 4
test1[0, 1, [2, 4], 1] = 7
test1[0, 2, 2, :] = 2
test1[1, 0, 4, 1:3] = 5
test1[1, :, 0, 1] = 3
and,
test2 = np.zeros((5, 6, 4, 7))
test2[2, 2, 2, 4] = 4
test2[0, 1, :, 1] = 3
test2[4, 3, 2, :] = 6
test2[1, 0, 3, 1:3] = 1
test2[3, :, 0, 1] = 2
the calculation I need is:
result = test1[..., None, None] * test2[None, None, ...]
In the actual use case I am coding for, the tensors can have more dimensions and much longer lengths in some of the dimensions, so while the multiplication is reasonably quick, I would like to utilise the fact that most of the elements are zero.
My first thought was to make a sparse representation of each tensor.
coords1 = np.nonzero(test1)
shape1 = test1.shape
test1_squished = test1[coords1]
coords1 = np.array(coords1)
coords2 = np.nonzero(test2)
shape2 = test2.shape
test2_squished = test2[coords2]
coords2 = np.array(coords2)
Here there is enough information to perform the multiplication, by comparing the coordinates along the equal axes and multiplying if they are the same.
I have a function for adding a new axis,
def new_axis(coords, shape, axis):
new_coords = np.zeros((len(coords)+1, len(coords[0])))
new_index = np.delete(np.arange(0, len(coords)+1), axis)
new_coords[new_index] = coords
coords = new_coords
new_shape = np.zeros(len(new_coords), dtype=int)
new_shape[new_index] = shape
new_shape[axis] = 1
new_shape = np.array(new_shape)
return coords, new_shape
and for performing the multiplication,
def multiply(coords1, shape1, array1, coords2, shape2, array2): #all inputs should be numpy arrays
if np.array_equal( shape1, shape2 ):
index1 = np.nonzero( ( coords1.T[:, None, :] == coords2.T ).all(-1).any(-1) )[0]
index2 = np.nonzero( ( coords2.T[:, None, :] == coords1.T ).all(-1).any(-1) )[0]
array = array1[index1] * array2[index2]
        coords = ( coords1.T[index1] ).T
shape = shape1
else:
if len(shape1) == len(shape2):
equal_index = np.nonzero( ( shape1 == shape2 ) )[0]
not_equal_index = np.nonzero( ~( shape1 == shape2 ) )[0]
if np.logical_or( ( shape1[not_equal_index] == 1 ), ( shape2[not_equal_index] == 1 ) ).all():
#if where not equal, one of them = 1 -> can broadcast
# compare dimensions with same length, if equal then multiply corresponding elements
multiply_index1 = np.nonzero(
( coords1[equal_index].T[:, None, :] == coords2[equal_index].T ).all(-1).any(-1)
)[0]
            # would like vectorised version of below
array = []
coords = []
for index in multiply_index1:
multiply_index2 = np.nonzero( ( (coords2[equal_index]).T == (coords1[equal_index]).T[index] ).all(-1) )[0]
                array.append( test1_squished[index] * test2_squished[multiply_index2] )
temp = np.zeros((6, len(multiply_index2)))
temp[not_equal_index] = ((coords1[not_equal_index].T[index]).T + (coords2[not_equal_index].T[multiply_index2])).T
if len(multiply_index2)==1:
temp[equal_index] = coords1[equal_index].T[index].T[:, None]
else:
temp[equal_index] = np.repeat( coords1[equal_index].T[index].T[:, None], len(multiply_index2), axis=-1)
coords.append(temp)
array = np.concatenate(array)
coords = np.concatenate(coords, axis=-1)
shape = shape1
shape[np.where(shape==1)] = shape2[np.where(shape==1)]
else:
print("error")
else:
print("error")
return array, coords, shape
However the multiply function is very inefficient and so I lose any gain of going to the sparse representation.
Is there an elegant vectorised approach to the multiply function? Or is there a better solution than this sparse tensor idea?
Thanks in advance.
| [
"SIZE: 5000 DENSITY: 0.01 DEVICE: cpu\ntorch: 0.0306358 seconds\nnp: 0.000252247 seconds\ntorch/np: 121.452\nSIZE: 5000 DENSITY: 0.01 DEVICE: cuda\ntorch: 0.0127137 seconds\nnp: 0.000259161 seconds\ntorch/np: 49.057\nSIZE: 10000 DENSITY: 0.01 DEVICE: cpu\ntorch: 0.155527 seconds\nnp: 0.00106144 seconds\ntorch/np: 146.524\nSIZE: 10000 DENSITY: 0.01 DEVICE: cuda\ntorch: 0.0476248 seconds\nnp: 0.000991583 seconds\ntorch/np: 48.0291\nSIZE: 50000 DENSITY: 0.01 DEVICE: cpu\ntorch: 5.94856 seconds\nnp: 0.0456181 seconds\ntorch/np: 130.399\nSIZE: 50000 DENSITY: 0.01 DEVICE: cuda\ntorch: 1.06403 seconds\nnp: 0.0419693 seconds\ntorch/np: 25.3527\nSIZE: 50000 DENSITY: 0.0001 DEVICE: cpu\ntorch: 0.0423768 seconds\nnp: 0.000562191 seconds\ntorch/np: 75.3779\nSIZE: 50000 DENSITY: 0.0001 DEVICE: cuda\ntorch: 0.0175352 seconds\nnp: 0.000589371 seconds\ntorch/np: 29.7524\n"
] | [
0
] | [] | [] | [
"numpy",
"python",
"tensor",
"vectorization"
] | stackoverflow_0074675872_numpy_python_tensor_vectorization.txt |
Q:
Seaborn Stripplot data visualization not applying colors to markers/bars/strips
When creating a stripplot with seaborn, the code creates the stripplot perfectly and applies colors to the legend and all, except the color is not applied to the various strips within the stripplot. I'd appreciate the seaborn/matplotlib experts here, because I am at a loss. Code is below. A picture of my results is attached below.
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
#set seaborn plotting aesthetics
sns.set_style("whitegrid")
data = [[2016.0, 0.4862, 0.4115, 0.3905, 0.3483, 0.1196],
[2017.0, 0.4471, 0.4096, 0.3725, 0.2866, 0.1387],
[2018.0, 0.4748, 0.4016, 0.3381, 0.2905, 0.2012],
[2019.0, 0.4705, 0.4247, 0.3857, 0.3333, 0.2457],
[2020.0, 0.4755, 0.4196, 0.3971, 0.3825, 0.2965]]
# cols = ['attribute_time', '100-81%', '80-61%', '60-41%', '40-21%', '20-0%']
cols = ['attribute_time', '100-81 percentile', '80-61 percentile', '60-41 percentile', '40-21 percentile', '20-0 percentile']
df = pd.DataFrame(data, columns=cols)
# Just to get rid of the decimals.
df['attribute_time'] = df['attribute_time'].astype('int')
print(df)
df.columns = ['attribute_time', '100-81 percentile', '80-61 percentile', '60-41 percentile', '40-21 percentile', '20-0 percentile']
df = df.melt(id_vars = ['attribute_time'],
value_name = 'pct_value',
var_name = 'pct_range')
print(df.head(20))
print(df['pct_range'].unique())
# # Create a dictionary mapping subgroup values to colors
palette_colors = dict(zip(list(df['pct_range'].unique()), ['blue', 'orange', 'green', 'red', 'purple']))
print(palette_colors)
fig, ax = plt.subplots()
for year, value in zip(df['attribute_time'],df['pct_value']):
ax.text(year - 2016, value, str(value), ha = 'center', va = 'bottom', fontsize = 'small',)
sns.stripplot(
data = df,
x = 'attribute_time',
y = 'pct_value',
hue = 'pct_range',
palette=palette_colors,
jitter = False,
marker = '_',
size = 25,
linewidth = 2,
ax = ax
).legend(fontsize=7)
plt.show()
A:
Matplotlib has two types of markers. Most are filled (e.g. as circle with a border). Some are unfilled (e.g. a horizontal line).
Seaborn uses the hue color only for the interior, and uses a fixed color (default black) for the borders (edges). If I try to run your code, I get a warning from matplotlib complaining that a face color is given for unfilled markers. You can simply put edgecolor='face' to give the face color to the edges. (Matplotlib still gives the same warning, but does color the markers via the hue color.)
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
# set seaborn plotting aesthetics
sns.set_style("whitegrid")
data = [[2016.0, 0.4862, 0.4115, 0.3905, 0.3483, 0.1196],
[2017.0, 0.4471, 0.4096, 0.3725, 0.2866, 0.1387],
[2018.0, 0.4748, 0.4016, 0.3381, 0.2905, 0.2012],
[2019.0, 0.4705, 0.4247, 0.3857, 0.3333, 0.2457],
[2020.0, 0.4755, 0.4196, 0.3971, 0.3825, 0.2965]]
cols = ['attribute_time', '100-81 percentile', '80-61 percentile', '60-41 percentile', '40-21 percentile', '20-0 percentile']
df = pd.DataFrame(data, columns=cols)
# Just to get rid of the decimals.
df['attribute_time'] = df['attribute_time'].astype('int')
df_long = df.melt(id_vars=['attribute_time'],
value_name='pct_value',
var_name='pct_range')
# Create a dictionary mapping subgroup values to colors
palette_colors = dict(zip(list(df_long['pct_range'].unique()), plt.cm.get_cmap('Set1').colors))
fig, ax = plt.subplots()
for year, value in zip(df_long['attribute_time'], df_long['pct_value']):
ax.text(year - 2016, value, str(value), ha='center', va='bottom', fontsize='small', )
sns.stripplot(data=df_long,
x='attribute_time',
y='pct_value',
hue='pct_range',
palette=palette_colors,
jitter=False,
marker='_',
size=25,
linewidth=2,
edgecolor='face',
ax=ax)
ax.legend(fontsize=7)
plt.show()
 | Seaborn Stripplot data visualization not applying colors to markers/bars/strips | When creating a stripplot with seaborn, the code creates the stripplot perfectly and applies colors to the legend and all, except the color is not applied to the various strips within the stripplot. I'd appreciate the seaborn/matplotlib experts here, because I am at a loss. Code is below. A picture of my results is attached below.
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
#set seaborn plotting aesthetics
sns.set_style("whitegrid")
data = [[2016.0, 0.4862, 0.4115, 0.3905, 0.3483, 0.1196],
[2017.0, 0.4471, 0.4096, 0.3725, 0.2866, 0.1387],
[2018.0, 0.4748, 0.4016, 0.3381, 0.2905, 0.2012],
[2019.0, 0.4705, 0.4247, 0.3857, 0.3333, 0.2457],
[2020.0, 0.4755, 0.4196, 0.3971, 0.3825, 0.2965]]
# cols = ['attribute_time', '100-81%', '80-61%', '60-41%', '40-21%', '20-0%']
cols = ['attribute_time', '100-81 percentile', '80-61 percentile', '60-41 percentile', '40-21 percentile', '20-0 percentile']
df = pd.DataFrame(data, columns=cols)
# Just to get rid of the decimals.
df['attribute_time'] = df['attribute_time'].astype('int')
print(df)
df.columns = ['attribute_time', '100-81 percentile', '80-61 percentile', '60-41 percentile', '40-21 percentile', '20-0 percentile']
df = df.melt(id_vars = ['attribute_time'],
value_name = 'pct_value',
var_name = 'pct_range')
print(df.head(20))
print(df['pct_range'].unique())
# # Create a dictionary mapping subgroup values to colors
palette_colors = dict(zip(list(df['pct_range'].unique()), ['blue', 'orange', 'green', 'red', 'purple']))
print(palette_colors)
fig, ax = plt.subplots()
for year, value in zip(df['attribute_time'],df['pct_value']):
ax.text(year - 2016, value, str(value), ha = 'center', va = 'bottom', fontsize = 'small',)
sns.stripplot(
data = df,
x = 'attribute_time',
y = 'pct_value',
hue = 'pct_range',
palette=palette_colors,
jitter = False,
marker = '_',
size = 25,
linewidth = 2,
ax = ax
).legend(fontsize=7)
plt.show()
| [
"Matplotlib has two types of markers. Most are filled (e.g. as circle with a border). Some are unfilled (e.g. a horizontal line).\nSeaborn uses the hue color only for the interior, and uses a fixed color (default black) for the borders (edges). If I try to run your code, I get a warning from matplotlib complaining that a face color is given for unfilled markers. You can simply put edgecolor='face' to give the face color to the edges. (Matplotlib still gives the same warning, but does color the markers via the hue color.)\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport pandas as pd\n\n# set seaborn plotting aesthetics\nsns.set_style(\"whitegrid\")\n\ndata = [[2016.0, 0.4862, 0.4115, 0.3905, 0.3483, 0.1196],\n [2017.0, 0.4471, 0.4096, 0.3725, 0.2866, 0.1387],\n [2018.0, 0.4748, 0.4016, 0.3381, 0.2905, 0.2012],\n [2019.0, 0.4705, 0.4247, 0.3857, 0.3333, 0.2457],\n [2020.0, 0.4755, 0.4196, 0.3971, 0.3825, 0.2965]]\ncols = ['attribute_time', '100-81 percentile', '80-61 percentile', '60-41 percentile', '40-21 percentile', '20-0 percentile']\n\ndf = pd.DataFrame(data, columns=cols)\n# Just to get rid of the decimals.\ndf['attribute_time'] = df['attribute_time'].astype('int')\ndf_long = df.melt(id_vars=['attribute_time'],\n value_name='pct_value',\n var_name='pct_range')\n# Create a dictionary mapping subgroup values to colors\npalette_colors = dict(zip(list(df_long['pct_range'].unique()), plt.cm.get_cmap('Set1').colors))\n\nfig, ax = plt.subplots()\n\nfor year, value in zip(df_long['attribute_time'], df_long['pct_value']):\n ax.text(year - 2016, value, str(value), ha='center', va='bottom', fontsize='small', )\n\nsns.stripplot(data=df_long,\n x='attribute_time',\n y='pct_value',\n hue='pct_range',\n palette=palette_colors,\n jitter=False,\n marker='_',\n size=25,\n linewidth=2,\n edgecolor='face',\n ax=ax)\nax.legend(fontsize=7)\n\nplt.show()\n\n\n"
] | [
1
] | [] | [] | [
"matplotlib",
"python",
"seaborn",
"visualization"
] | stackoverflow_0074672393_matplotlib_python_seaborn_visualization.txt |
Q:
Speed Up Keras Model Prediction
Trying to detect emotion using Keras, grabbing the desktop with mss, and then displaying it back in the OpenCV window.
The Keras model size is 360 MB.
import time
import cv2
import mss
import numpy as np
face_cascade = cv2.CascadeClassifier('face.xml')
label = ["angry", "happy", "sad", "stress"]
monitor = {"top": 0, "left": 0, "width": 1000, "height": 1000}
with mss.mss() as sct:
# Part of the screen to capture
while "Screen capturing":
# Get raw pixels from the screen, save it to a Numpy array
img = np.array(sct.grab(monitor))
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = face_cascade.detectMultiScale(gray)
for (x, y, w, h) in faces:
cv2.rectangle(img, (x, y), (x+w, y+h), (0, 255, 255), 2)
roi_gray = gray[y:y+h,x:x+w]
roi_gray = cv2.resize(roi_gray,(48,48),interpolation=cv2.INTER_AREA)
roi = roi_gray.reshape(1, 48, 48, 1)
prediction = model.predict(roi)
t = label[prediction.argmax()]
label_position = (x,y)
cv2.putText(img,t,label_position,cv2.FONT_HERSHEY_SIMPLEX,1,(0,255,0),2)
# Display the picture
cv2.imshow("OpenCV/Numpy normal", img)
#print("fps: {}".format(1 / (time.time() - last_time)))
# Press "q" to quit
if cv2.waitKey(25) & 0xFF == ord("q"):
cv2.destroyAllWindows()
break
Is there any way to speed up this process or is it hardware bound?
A:
There are a few ways to potentially speed up this process:
Use a smaller model: 360 MB is quite large for a Keras model, so using a smaller model with fewer layers and parameters may improve performance.
Use faster hardware: The speed of this process is likely hardware-bound, so using a faster CPU or GPU may improve performance.
Optimize the code: There may be ways to optimize the code, such as using optimized functions for image processing, using threading or multiprocessing, or using a more efficient data structure for storing and processing the data.
Reduce the number of frames processed: Instead of processing every frame from the screen capture, you could skip some frames to reduce the workload. This may result in a lower frame rate, but may improve performance.
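As a rough sketch of the third and fourth points (the model path and the interval below are hypothetical, not from the question): calling the model directly instead of model.predict() avoids per-call overhead for single small inputs, and skipping frames reuses the last prediction.
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("emotion_model.h5")  # hypothetical path
label = ["angry", "happy", "sad", "stress"]

PREDICT_EVERY = 5   # only run the network on every 5th frame
frame_idx, last_text = 0, ""

def classify(roi):
    # model(x) skips the extra bookkeeping model.predict() does per call,
    # which matters when predicting one small input per frame
    probs = model(roi.astype("float32"), training=False).numpy()
    return label[int(probs.argmax())]

def label_for_frame(roi):
    # reuse the previous label between predictions
    global frame_idx, last_text
    if frame_idx % PREDICT_EVERY == 0:
        last_text = classify(roi)
    frame_idx += 1
    return last_text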
 | Speed Up Keras Model Prediction | Trying to detect emotion using Keras, grabbing the desktop with mss, and then displaying it back in the OpenCV window.
The Keras model size is 360 MB.
import time
import cv2
import mss
import numpy as np
face_cascade = cv2.CascadeClassifier('face.xml')
label = ["angry", "happy", "sad", "stress"]
monitor = {"top": 0, "left": 0, "width": 1000, "height": 1000}
with mss.mss() as sct:
# Part of the screen to capture
while "Screen capturing":
# Get raw pixels from the screen, save it to a Numpy array
img = np.array(sct.grab(monitor))
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = face_cascade.detectMultiScale(gray)
for (x, y, w, h) in faces:
cv2.rectangle(img, (x, y), (x+w, y+h), (0, 255, 255), 2)
roi_gray = gray[y:y+h,x:x+w]
roi_gray = cv2.resize(roi_gray,(48,48),interpolation=cv2.INTER_AREA)
roi = roi_gray.reshape(1, 48, 48, 1)
prediction = model.predict(roi)
t = label[prediction.argmax()]
label_position = (x,y)
cv2.putText(img,t,label_position,cv2.FONT_HERSHEY_SIMPLEX,1,(0,255,0),2)
# Display the picture
cv2.imshow("OpenCV/Numpy normal", img)
#print("fps: {}".format(1 / (time.time() - last_time)))
# Press "q" to quit
if cv2.waitKey(25) & 0xFF == ord("q"):
cv2.destroyAllWindows()
break
Is there any way to speed up this process or is it hardware bound?
| [
"There are a few ways to potentially speed up this process:\nUse a smaller model: 360mb is quite large for a Keras model, so using a smaller model with fewer layers and parameters may improve performance.\nUse a faster hardware: The speed of this process is likely hardware-bound, so using a faster CPU or GPU may improve performance.\nOptimize the code: There may be ways to optimize the code, such as using optimized functions for image processing, using threading or multiprocessing, or using a more efficient data structure for storing and processing the data.\nReduce the number of frames processed: Instead of processing every frame from the screen capture, you could skip some frames to reduce the workload. This may result in a lower frame rate, but may improve performance.\n"
] | [
1
] | [] | [] | [
"keras",
"opencv",
"python"
] | stackoverflow_0074675873_keras_opencv_python.txt |
Q:
Request Mock in Python
'''
api = API(url, timeout=2)
response = api.get("")
if response.success:
c = response.body['content']
return c
'''
For the above function, I've mocked it in the following way:
'''
mock_response = MagicMock(success = True)
mock_response.body.return_value = {'content':923,'a':1256}
mock_api.get.return_value = mock_response'''
The above approach always returns response.body['content'] as 1, though I initialized mock_response.body.return_value to {'content':923,'a':1256}, expecting response.body['content'] to equal 923. How can I make response.body['content'] access the initialized values from the mock?
A:
To make the mock return the correct value for response.body['content'], you need to set the body attribute of the mock response object to a dictionary containing the content key. Here is an example:
from unittest.mock import MagicMock
# Set up the mock response object
mock_response = MagicMock(success=True)
# Set the body attribute to a dictionary containing the 'content' key
mock_response.body = {'content': 923, 'a': 1256}
# Set the return value of the mock API's get method to the mock response
mock_api.get.return_value = mock_response
Now, when you call the get method on the mock API and access the response.body['content'] attribute, it should return the correct value (i.e. 923).
Note that in the original code, the body attribute is accessed using square bracket notation (response.body['content']), so it should be set to a dictionary rather than a mock object. If you set it to a mock object and try to access the content key using square bracket notation, it will always return 1 (or whatever the default return value of the mock object is).
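If you would rather keep body as a mock object (for example, to assert on other accesses later), MagicMock also lets you configure subscription explicitly; a small sketch:
from unittest.mock import MagicMock

data = {'content': 923, 'a': 1256}
mock_response = MagicMock(success=True)
# route mock_response.body[...] lookups through the real dict
mock_response.body.__getitem__.side_effect = data.__getitem__

assert mock_response.body['content'] == 923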
| Request Mock in Python | '''
api = API(url, timeout=2)
response = api.get("")
if response.success:
c = response.body['content']
return c
'''
For the above function, I've mocked it in the following way:
'''
mock_response = MagicMock(success = True)
mock_response.body.return_value = {'content':923,'a':1256}
mock_api.get.return_value = mock_response'''
The above approach always returns response.body['content'] as 1, though I initialized mock_response.body.return_value to {'content':923,'a':1256}, expecting response.body['content'] to equal 923. How can I make response.body['content'] access the initialized values from the mock?
| [
"To make the mock return the correct value for response.body['content'], you need to set the body attribute of the mock response object to a dictionary containing the content key. Here is an example:\nfrom unittest.mock import MagicMock\n\n# Set up the mock response object\nmock_response = MagicMock(success=True)\n\n# Set the body attribute to a dictionary containing the 'content' key\nmock_response.body = {'content': 923, 'a': 1256}\n\n# Set the return value of the mock API's get method to the mock response\nmock_api.get.return_value = mock_response\n\nNow, when you call the get method on the mock API and access the response.body['content'] attribute, it should return the correct value (i.e. 923).\nNote that in the original code, the body attribute is accessed using square bracket notation (response.body['content']), so it should be set to a dictionary rather than a mock object. If you set it to a mock object and try to access the content key using square bracket notation, it will always return 1 (or whatever the default return value of the mock object is).\n"
] | [
0
] | [] | [] | [
"mocking",
"python"
] | stackoverflow_0074675893_mocking_python.txt |
Q:
partial cumulative sum in python
Suppose I have a numpy array (or pandas Series if it makes it any easier), which looks like this:
foo = np.array([1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0])
I want to transform into an array
bar = np.array([0, 1, 2, 3, 4,0, 1, 2, 0, 1, 2, 3])
where the entry is how many steps you need to walk to the left to find a 1 in foo.
Now, obviously one can write a loop to compute bar from foo, but this will be bog slow. Is there anything more clever one can do?
UPDATE The pd.Series solution is around 7 times slower than the pure numpy solution. The stupid loop solution is very slow (no surprise), but when jit-compiled with numba it is as fast as the numpy solution.
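For reference, the jitted loop mentioned above might look roughly like this (a sketch; not necessarily the code that was actually benchmarked):
import numba
import numpy as np

@numba.njit
def steps_since_one(foo):
    out = np.empty(foo.size, dtype=np.int64)
    steps = 0
    for i in range(foo.size):
        if foo[i] == 1:
            steps = 0       # reset whenever a 1 is seen
        out[i] = steps
        steps += 1
    return out

foo = np.array([1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0])
print(steps_since_one(foo))  # [0 1 2 3 4 0 1 2 0 1 2 3]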
A:
You could do cumcount with pandas
s = pd.Series(foo)
bar = s.groupby(s.cumsum()).cumcount().to_numpy()
Out[13]: array([0, 1, 2, 3, 4, 0, 1, 2, 0, 1, 2, 3], dtype=int64)
A:
One option, specifically for the shared example, with numpy:
# get positions where value is 1
pos = foo.nonzero()[0]
# need this when computing the cumsum
values = np.diff(pos) - 1
arr = np.ones(foo.size, dtype=int)
arr[0] = 0
arr[pos[1:]] = -values
arr.cumsum()
array([0, 1, 2, 3, 4, 0, 1, 2, 0, 1, 2, 3])
A:
Yes, you can compute bar from foo very efficiently with vectorized accumulation. The trick is to record, for every position, the index of the most recent 1 (using np.maximum.accumulate) and subtract it from the running index. Here's how you can do it:
import numpy as np

# Define the input array
foo = np.array([1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0])

# Index of each position
idx = np.arange(foo.size)

# For every position, the index of the most recent 1 seen so far
last_one = np.maximum.accumulate(np.where(foo == 1, idx, 0))

# Steps walked to the left to reach that 1
bar = idx - last_one

# Print the result
print(bar)
This should give you the desired output: array([0, 1, 2, 3, 4, 0, 1, 2, 0, 1, 2, 3]).
Here's a brief explanation of how this works: np.where(foo == 1, idx, 0) keeps the index wherever foo contains a 1 and puts 0 elsewhere, giving [0, 0, 0, 0, 0, 5, 0, 0, 8, 0, 0, 0]. np.maximum.accumulate() then carries the largest value seen so far along the array, so every position holds the index of the most recent 1: [0, 0, 0, 0, 0, 5, 5, 5, 8, 8, 8, 8]. Subtracting this from idx gives the number of steps back to that 1, which is exactly the result that we want. (This assumes the array starts with a 1, as in the example; otherwise the leading positions would count from index 0.)
I hope this helps! Let me know if you have any other questions.
| partial cumulative sum in python | Suppose I have a numpy array (or pandas Series if it makes it any easier), which looks like this:
foo = np.array([1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0])
I want to transform into an array
bar = np.array([0, 1, 2, 3, 4,0, 1, 2, 0, 1, 2, 3])
where the entry is how many steps you need to walk to the left to find a 1 in foo.
Now, obviously one can write a loop to compute bar from foo, but this will be bog slow. Is there anything more clever one can do?
UPDATE The pd.Series solution is around 7 times slower than the pure numpy solution. The stupid loop solution is very slow (no surprise), but when jit-compiled with numba it is as fast as the numpy solution.
| [
"You could do cumcount with pandas\ns = pd.Series(foo)\nbar = s.groupby(s.cumsum()).cumcount().to_numpy()\nOut[13]: array([0, 1, 2, 3, 4, 0, 1, 2, 0, 1, 2, 3], dtype=int64)\n\n",
"One option, specifically for the shared example, with numpy:\n# get positions where value is 1\npos = foo.nonzero()[0]\n# need this when computing the cumsum\nvalues = np.diff(pos) - 1\narr = np.ones(foo.size, dtype=int)\narr[0] = 0\narr[pos[1:]] = -values\narr.cumsum()\n\narray([0, 1, 2, 3, 4, 0, 1, 2, 0, 1, 2, 3])\n\n",
"Yes, you can use the numpy.cumsum() function to compute bar from foo very efficiently. Here's how you can do it:\nimport numpy as np\n\n# Define the input array\nfoo = np.array([1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0])\n\n# Compute the cumulative sum of foo, but shift the values\n# to the right by one position and insert a 0 at the beginning\ncs = np.insert(np.cumsum(foo), 0, 0)\n\n# Subtract the shifted cumulative sum from the original cumulative sum\nbar = np.cumsum(foo) - cs\n\n# Print the result\nprint(bar)\n\nThis should give you the desired output: array([0, 1, 2, 3, 4, 0, 1, 2, 0, 1, 2, 3]).\nHere's a brief explanation of how this works: The numpy.cumsum() function computes the cumulative sum of an array, which means that it sums up all the elements of the array up to and including each element. So, for example, if you apply numpy.cumsum() to the array [1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0], you get the array [1, 1, 1, 1, 1, 2, 2, 2, 3, 3, 3, 3].\nNext, we use the numpy.insert() function to shift the values of the cumulative sum to the right by one position and insert a 0 at the beginning. This gives us the array [0, 1, 1, 1, 1, 1, 2, 2, 2, 3, 3, 3].\nFinally, we subtract the shifted cumulative sum from the original cumulative sum, which gives us the array [0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0]. This is exactly the same as the original array foo, except that the 1s have been replaced by the number of steps needed to walk to the left to find the next 1. This is exactly the result that we want.\nI hope this helps! Let me know if you have any other questions.\n"
] | [
9,
2,
1
] | [] | [] | [
"numpy",
"pandas",
"python"
] | stackoverflow_0074620655_numpy_pandas_python.txt |
Q:
Scrapy use private proxy
I am using a custom-configured VM to act as a proxy server (via squid) and now I am trying to use it for my scraper. I am using scrapy-rotating-proxies to rotate through my IP list definition, but the problem is that my proxy is treated as DEAD right on the first attempt, even though I have verified that the proxy address is alive and working just fine (I tested it by setting the proxy in Firefox and browsing both http and https web pages). The proxy server is passwordless for testing purposes.
scrapy settings
DOWNLOADER_MIDDLEWARES = {
"scrapy.downloadermiddlewares.useragent.UserAgentMiddleware": None,
"scrapy.downloadermiddlewares.retry.RetryMiddleware": None,
"scrapy_fake_useragent.middleware.RandomUserAgentMiddleware": 400,
"scrapy_fake_useragent.middleware.RetryUserAgentMiddleware": 401,
"rotating_proxies.middlewares.RotatingProxyMiddleware": 610,
"rotating_proxies.middlewares.BanDetectionMiddleware": 620,
}
ROTATING_PROXY_LIST = ["X.X.X.X:3128"]
scrapy logs
2022-12-02 13:31:22 [scrapy.core.engine] INFO: Spider opened
2022-12-02 13:31:22 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2022-12-02 13:31:22 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2022-12-02 13:31:22 [rotating_proxies.middlewares] INFO: Proxies(good: 0, dead: 0, unchecked: 1, reanimated: 0, mean backoff time: 0s)
2022-12-02 13:31:32 [rotating_proxies.expire] DEBUG: Proxy <http://X.X.X.X:3128> is DEAD
2022-12-02 13:31:32 [rotating_proxies.middlewares] DEBUG: Retrying <GET https://www.johnlewis.com/header/api/config> with another proxy (failed 1 times, max retries: 5)
2022-12-02 13:31:32 [rotating_proxies.middlewares] WARNING: No proxies available; marking all proxies as unchecked
Settings I have changed for squid
http_access allow all
via off
forwarded_for delete
Please advise what the issue could be.
A:
"scrapy.downloadermiddlewares.useragent.UserAgentMiddleware": None,
"scrapy.downloadermiddlewares.retry.RetryMiddleware": None,
These middlewares were the issue. I cannot explain why scrapy was able to process my requests without proxies while these middlewares were enabled, but after disabling them I was able to run scrapy using my proxies.
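For clarity, the resulting settings (the dictionary from the question with those two overrides removed) would be:
DOWNLOADER_MIDDLEWARES = {
    "scrapy_fake_useragent.middleware.RandomUserAgentMiddleware": 400,
    "scrapy_fake_useragent.middleware.RetryUserAgentMiddleware": 401,
    "rotating_proxies.middlewares.RotatingProxyMiddleware": 610,
    "rotating_proxies.middlewares.BanDetectionMiddleware": 620,
}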
 | Scrapy use private proxy | I am using a custom-configured VM to act as a proxy server (via squid) and now I am trying to use it for my scraper. I am using scrapy-rotating-proxies to rotate through my IP list definition, but the problem is that my proxy is treated as DEAD right on the first attempt, even though I have verified that the proxy address is alive and working just fine (I tested it by setting the proxy in Firefox and browsing both http and https web pages). The proxy server is passwordless for testing purposes.
scrapy settings
DOWNLOADER_MIDDLEWARES = {
"scrapy.downloadermiddlewares.useragent.UserAgentMiddleware": None,
"scrapy.downloadermiddlewares.retry.RetryMiddleware": None,
"scrapy_fake_useragent.middleware.RandomUserAgentMiddleware": 400,
"scrapy_fake_useragent.middleware.RetryUserAgentMiddleware": 401,
"rotating_proxies.middlewares.RotatingProxyMiddleware": 610,
"rotating_proxies.middlewares.BanDetectionMiddleware": 620,
}
ROTATING_PROXY_LIST = ["X.X.X.X:3128"]
scrapy logs
2022-12-02 13:31:22 [scrapy.core.engine] INFO: Spider opened
2022-12-02 13:31:22 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2022-12-02 13:31:22 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2022-12-02 13:31:22 [rotating_proxies.middlewares] INFO: Proxies(good: 0, dead: 0, unchecked: 1, reanimated: 0, mean backoff time: 0s)
2022-12-02 13:31:32 [rotating_proxies.expire] DEBUG: Proxy <http://X.X.X.X:3128> is DEAD
2022-12-02 13:31:32 [rotating_proxies.middlewares] DEBUG: Retrying <GET https://www.johnlewis.com/header/api/config> with another proxy (failed 1 times, max retries: 5)
2022-12-02 13:31:32 [rotating_proxies.middlewares] WARNING: No proxies available; marking all proxies as unchecked
Settings I have changed for squid
http_access allow all
via off
forwarded_for delete
Please advise what the issue could be.
| [
" \"scrapy.downloadermiddlewares.useragent.UserAgentMiddleware\": None,\n \"scrapy.downloadermiddlewares.retry.RetryMiddleware\": None,\n\nThese middlewares was the issue, I cannot explain why scrapy was able to process my requests without proxies while having these middlewares enabled but after disabling them I was able to run scrapy using my proxies.\n"
] | [
0
] | [] | [] | [
"proxy",
"python",
"scrapy",
"squid"
] | stackoverflow_0074656742_proxy_python_scrapy_squid.txt |
Q:
Is there a way to extract all styles from an existing word document with python-docx and apply them to a newly generated one?
I'm new to python-docx and I'm trying to build a document generator. What I'd like to do is to use an existing word document to extract its styles and then apply these to a newly generated one. So then whenever I use something like
document.add_heading('Test Heading', level=2)
the level 2 heading is the same as in the document from which I extracted the styles.
Many thanks in advance for any tips!
A:
python-docx doesn't let you look up a style ID directly on the Document object, but you can reuse all of the styles from an existing document by opening that document and using it as the template for the new one. Every style defined in the original file then comes along, so headings added afterwards pick up the original formatting.
Here is an example:
from docx import Document

# Open the existing document; all of its styles are loaded with it
document = Document('existing_document.docx')

# Remove the existing body content, keeping the styles
for paragraph in list(document.paragraphs):
    paragraph._element.getparent().remove(paragraph._element)

# Add a level 2 heading; it uses the 'Heading 2' style from the original file
document.add_heading('Test Heading', level=2)

document.save('new_document.docx')
You can also use the styles property of the Document object to inspect the full list of styles in the existing document and apply them by name when adding paragraphs to a document built from that template:
from docx import Document

document = Document('existing_document.docx')

# Iterate through the styles in the existing document
for style in document.styles:
    print(style.name, style.type)

# Apply a style by name
document.add_paragraph('Test paragraph', style='Heading 2')
 | Is there a way to extract all styles from an existing word document with python-docx and apply them to a newly generated one? | I'm new to python-docx and I'm trying to build a document generator. What I'd like to do is to use an existing word document to extract its styles and then apply these to a newly generated one. So then whenever I use something like
document.add_heading('Test Heading', level=2)
the level 2 heading is the same as in the document from which I extracted the styles.
Many thanks in advance for any tips!
| [
"You can extract the styles from an existing Word document by using the get_style_id method of the Document object. This method takes the name of the style you want to extract as an argument and returns the style ID. You can then use this style ID when adding a heading to the new document to ensure that it has the same style as the original heading.\nHere is an example:\nfrom docx import Document\n\n# Open the existing document\nexisting_document = Document('existing_document.docx')\n\n# Extract the style ID of the level 2 heading\nheading2_style_id = existing_document.get_style_id('Heading 2')\n\n# Create a new document\nnew_document = Document()\n\n# Add a level 2 heading with the extracted style ID\nnew_document.add_heading('Test Heading', style=heading2_style_id)\n\nYou can also use the styles property of the Document object to access the full list of styles in the existing document and iterate through them to extract the style IDs for each one. This can be useful if you want to extract multiple styles from the existing document and use them in the new document.\nfrom docx import Document\n\n# Open the existing document\nexisting_document = Document('existing_document.docx')\n\n# Create a new document\nnew_document = Document()\n\n# Iterate through the styles in the existing document\nfor style in existing_document.styles:\n # Extract the style ID\n style_id = existing_document.get_style_id(style.name)\n\n # Use the style ID to add a paragraph with the style to the new document\n new_document.add_paragraph('Test paragraph', style=style_id)\n\n"
] | [
1
] | [] | [] | [
"python",
"python_docx"
] | stackoverflow_0074675909_python_python_docx.txt |
Q:
Not getting a fitted curve
I am not getting a fitted curve when I run this code. Instead, I am getting a random curve. Please help. Thanks in advance.
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit

def cauchy(x, l, k, x1, a):
    return l / (1+np.exp(-k*(x-x1))) + a

def deriv(x, l, k, x1):
    # derivative of cauchy() with respect to x; the original helper was not
    # shown, so the analytic derivative of the logistic term is assumed here
    e = np.exp(-k*(x-x1))
    return l * k * e / (1 + e)**2
amplitude = [11, 9, 15, 18, 23, 62, 225, 537, 534, 251, 341, 8, 716, 653, 673]
distance = np.arange(0,15)
popt, pcov = curve_fit(cauchy, distance[5:], amplitude[5:], maxfev=1000000, bounds=((-10, -10, -10, 0), (30000, 3000, 30000, 100)),p0=[25000, 1, 0.3, 0])
ran = np.linspace(0, 15,1000) # for smoother plots more points
derivative = deriv(ran, *popt[:-1])
derivative_normalized = derivative/np.max(derivative)
fig, ax = plt.subplots(figsize = (8,5))
ax2 = ax.twinx()
ax.plot(distance,amplitude, 'o')
# ax.set_title('pinhole 300')
ax.set_xlabel('steps')
ax.set_ylabel('Amplitude of PS in counts', color = 'tab:blue')
ax.tick_params(axis="y", labelcolor='tab:blue')
ax.plot(ran, cauchy(ran, *popt), 'tab:blue', label='cauchy fit')
ax.set_xlim(0,18)
#second axis for better visibility
ax2.set_ylabel('Derivative of Cauchy step function, normalized', color = 'tab:green')
ax2.plot(ran, derivative_normalized, color ='tab:green')
ax2.tick_params(axis="y", labelcolor='tab:green')
plt.tight_layout()
plt.show()
A:
There could be a few reasons why you are not getting a fitted curve in this code. Some potential issues are:
The data provided for fitting the curve is not sufficient or is not appropriate for the Cauchy function. The data should be continuous and have a clear pattern for the curve fitting to work properly.
The initial values provided for the curve fitting parameters are not appropriate. The initial values should be chosen carefully based on the data and the expected shape of the curve.
The bounds provided for the curve fitting parameters are not appropriate. The bounds should be chosen carefully based on the data and the expected shape of the curve.
To fix these issues, you can try the following steps:
Check the data provided for fitting the curve and make sure it is continuous and has a clear pattern. If not, try using different data or transforming the data to make it more appropriate for the Cauchy function.
Choose the initial values for the curve fitting parameters more carefully. You can try using different initial values or using a method to estimate the initial values based on the data.
Choose the bounds for the curve fitting parameters more carefully. You can try using different bounds or using a method to estimate the bounds based on the data.
Overall, it is important to carefully choose the data, initial values, and bounds for the curve fitting to get a good fit.
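As an illustration of the second point, the initial values can often be estimated directly from the data rather than guessed; a sketch using the arrays from the question (the heuristics below are assumptions, not a general recipe):
import numpy as np

amplitude = np.array([11, 9, 15, 18, 23, 62, 225, 537, 534, 251, 341, 8, 716, 653, 673])
distance = np.arange(0, 15)

a0 = float(amplitude.min())                    # baseline offset
l0 = float(amplitude.max() - amplitude.min())  # step height
x10 = float(distance[np.argmax(np.diff(amplitude))])  # steepest rise ~ midpoint
k0 = 1.0                                       # moderate slope to start from
p0 = [l0, k0, x10, a0]
# then: popt, pcov = curve_fit(cauchy, distance, amplitude, p0=p0)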
| Not getting a fitted curve | I am not getting a fitted curve when I run this code. Instead, I am getting a random curve. Please help. Thanks in advance.
def cauchy(x, l, k, x1, a):
return l / (1+np.exp(-k*(x-x1))) + a
amplitude = [11, 9, 15, 18, 23, 62, 225, 537, 534, 251, 341, 8, 716, 653, 673]
distance = np.arange(0,15)
popt, pcov = curve_fit(cauchy, distance[5:], amplitude[5:], maxfev=1000000, bounds=((-10, -10, -10, 0), (30000, 3000, 30000, 100)),p0=[25000, 1, 0.3, 0])
ran = np.linspace(0, 15,1000) # for smoother plots more points
derivative = deriv(ran, *popt[:-1])
derivative_normalized = derivative/np.max(derivative)
fig, ax = plt.subplots(figsize = (8,5))
ax2 = ax.twinx()
ax.plot(distance,amplitude, 'o')
# ax.set_title('pinhole 300')
ax.set_xlabel('steps')
ax.set_ylabel('Amplitude of PS in counts', color = 'tab:blue')
ax.tick_params(axis="y", labelcolor='tab:blue')
ax.plot(ran, cauchy(ran, *popt), 'tab:blue', label='cauchy fit')
ax.set_xlim(0,18)
#second axis for better visibility
ax2.set_ylabel('Derivative of Cauchy step function, normalized', color = 'tab:green')
ax2.plot(ran, derivative_normalized, color ='tab:green')
ax2.tick_params(axis="y", labelcolor='tab:green')
plt.tight_layout()
plt.show()
| [
"There could be a few reasons why you are not getting a fitted curve in this code. Some potential issues are:\nThe data provided for fitting the curve is not sufficient or is not appropriate for the Cauchy function. The data should be continuous and have a clear pattern for the curve fitting to work properly.\nThe initial values provided for the curve fitting parameters are not appropriate. The initial values should be chosen carefully based on the data and the expected shape of the curve.\nThe bounds provided for the curve fitting parameters are not appropriate. The bounds should be chosen carefully based on the data and the expected shape of the curve.\nTo fix these issues, you can try the following steps:\nCheck the data provided for fitting the curve and make sure it is continuous and has a clear pattern. If not, try using different data or transforming the data to make it more appropriate for the Cauchy function.\nChoose the initial values for the curve fitting parameters more carefully. You can try using different initial values or using a method to estimate the initial values based on the data.\nChoose the bounds for the curve fitting parameters more carefully. You can try using different bounds or using a method to estimate the bounds based on the data.\nOverall, it is important to carefully choose the data, initial values, and bounds for the curve fitting to get a good fit.\n"
] | [
1
] | [] | [] | [
"curve_fitting",
"python"
] | stackoverflow_0074675927_curve_fitting_python.txt |
Q:
Cannot import name 'win32api' from 'PyInstaller.compat'
I am trying to run pyinstaller in msys2 in Windows7. However, I am getting following error:
ImportError: cannot import name 'win32api' from 'PyInstaller.compat' (/usr/lib/python3.10/site-packages/PyInstaller/compat.py)
I checked on the internet and found possible solution: pip install pypiwin32. However, it is giving following error:
$ pip install pypiwin32
Collecting pypiwin32
Using cached pypiwin32-223-py3-none-any.whl (1.7 kB)
Using cached pypiwin32-219.zip (4.8 MB)
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [7 lines of output]
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/tmp/pip-install-1sy9gva9/pypiwin32_8571824ef2674f0a8bcf87a3647a5381/setup.py", line 121
print "Building pywin32", pywin32_version
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
SyntaxError: Missing parentheses in call to 'print'. Did you mean print(...)?
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
Where is the problem and how can it be solved?
A:
It looks like the error is coming from the package itself, specifically with the syntax used in the code. The error message indicates that there are missing parentheses in a print statement, and suggests that you should use print() instead of just print.
To solve the issue, you could try installing an older version of the package that doesn't have this syntax error. You can do this by using the following command:
$ pip install pypiwin32==<older version number>
For example:
$ pip install pypiwin32==219
If this doesn't work, you could try downloading the package manually from PyPI (https://pypi.org/project/pypiwin32/) and installing it manually with the following command:
$ python setup.py install
Alternatively, you could try using a different package that provides the same functionality, such as pywin32 (https://pypi.org/project/pywin32/). You can install this package using the following command:
$ pip install pywin32
Once you have installed the package, you should be able to import win32api from PyInstaller.compat without any issues.
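As a quick sanity check after installation (assuming pywin32 installed into the same environment), the underlying module that originally failed to import should now load:
import win32api
print(win32api.GetVersionEx())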
| Cannot import name 'win32api' from 'PyInstaller.compat' | I am trying to run pyinstaller in msys2 in Windows7. However, I am getting following error:
ImportError: cannot import name 'win32api' from 'PyInstaller.compat' (/usr/lib/python3.10/site-packages/PyInstaller/compat.py)
I checked on the internet and found possible solution: pip install pypiwin32. However, it is giving following error:
$ pip install pypiwin32
Collecting pypiwin32
Using cached pypiwin32-223-py3-none-any.whl (1.7 kB)
Using cached pypiwin32-219.zip (4.8 MB)
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [7 lines of output]
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/tmp/pip-install-1sy9gva9/pypiwin32_8571824ef2674f0a8bcf87a3647a5381/setup.py", line 121
print "Building pywin32", pywin32_version
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
SyntaxError: Missing parentheses in call to 'print'. Did you mean print(...)?
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
Where is the problem and how can it be solved?
| [
"It looks like the error is coming from the package itself, specifically with the syntax used in the code. The error message indicates that there is a missing parenthesis in a print statement, and suggests that you should use print() instead of just print.\nTo solve the issue, you could try installing an older version of the package that doesn't have this syntax error. You can do this by using the following command:\n$ pip install pypiwin32==<older version number>\n\nFor example:\n$ pip install pypiwin32==219\n\nIf this doesn't work, you could try downloading the package manually from PyPI (https://pypi.org/project/pypiwin32/) and installing it manually with the following command:\n$ python setup.py install\n\nAlternatively, you could try using a different package that provides the same functionality, such as pywin32 (https://pypi.org/project/pywin32/). You can install this package using the following command:\n$ pip install pywin32\n\nOnce you have installed the package, you should be able to import win32api from PyInstaller.compat without any issues.\n"
] | [
1
] | [] | [] | [
"pyinstaller",
"python"
] | stackoverflow_0074675800_pyinstaller_python.txt |
Q:
How can I bind FocusOut and Button-2 to a button?
There are many questions about binding 2 functions to an event or binding Ctrl+Key, space+Key to a button,
but I need to know how I can bind FocusOut + Button-2 to a button.
It may seem weird that I need it, but I do.
So my scenario is that after I open the widget, I will click somewhere else outside the widget to read the source from the existing browser, blah blah...
So how can I achieve this?
I did the below.
import tkinter as tk
def test(event):
print('test')
root = tk.Tk()
root.bind('<FocusOut><Button-2>', test) # root.bind('<space>w', test) this works by the way.
root.mainloop()
Of course I did many other combinations but to no avail.
A:
Does this help? You can't put root.bind in one line.
Try this:
import tkinter as tk
def test(event):
print('test')
root = tk.Tk()
root.bind("<FocusIn>", test)
root.bind("<Button-2>", test)
root.mainloop()
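If you really need the two events to act as a sequence (FocusOut first, then Button-2), one possible sketch is to remember the FocusOut in a flag and check it in the Button-2 handler (this flag-based approach is my own assumption, not a built-in binding):
import tkinter as tk

root = tk.Tk()
lost_focus = False  # set once the window has lost focus

def on_focus_out(event):
    global lost_focus
    lost_focus = True

def on_button2(event):
    global lost_focus
    if lost_focus:  # only react after a preceding <FocusOut>
        print('test')
        lost_focus = False

root.bind('<FocusOut>', on_focus_out)
root.bind('<Button-2>', on_button2)
root.mainloop()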
| How can I bind FocusOut and Button-2 to a button? | There are many questions about binding 2 functions to an event or binding Ctrl+Key, space+Key to a button,
but I need to know how I can bind FocusOut + Button-2 to a button.
It may seem weird that I need it, but I do.
So my scenario is that after I open the widget, I will click somewhere else outside the widget to read the source from the existing browser, blah blah...
So how can I achieve this?
I did the below.
import tkinter as tk
def test(event):
print('test')
root = tk.Tk()
root.bind('<FocusOut><Button-2>', test) # root.bind('<space>w', test) this works by the way.
root.mainloop()
Of course I did many other combinations but to no avail.
| [
"Does this help? You can't put root.bind in one line.\nTry this:\nimport tkinter as tk\n\ndef test(event):\n print('test')\n\nroot = tk.Tk()\n\nroot.bind(\"<FocusIn>\", test)\nroot.bind(\"<Button-2>\", test)\nroot.mainloop()\n\n"
] | [
0
] | [] | [] | [
"mouse",
"mouseevent",
"python",
"tkinter"
] | stackoverflow_0074674338_mouse_mouseevent_python_tkinter.txt |
Q:
Computing KL-divergence over 2 estimated gaussian KDEs
I have two datasets with the same features and would like to estimate the "distance of distributions" between the two datasets. I had the idea to estimate a gaussian KDE in each of the datasets and computing the KL-divergence between the estimated KDEs. However, I am struggling to compute the "distance" between the distributions. This is what I have so far:
import numpy as np
from scipy import stats
from scipy.stats import entropy
dataset1 = np.random.rand(50)
dataset2 = np.random.rand(49)
kernel1 = stats.gaussian_kde(dataset1)
kernel2 = stats.gaussian_kde(dataset2)
I know I can use entropy(pk, qk) to calculate the kl-divergence but I don't understand how to do that starting from the kernels. I thought about generating some random points and using entropy(kernel1.pdf(points),kernel2.pdf(points)) but the pdf function outputs some weird number (higher than 1 sometimes, does it mean it assigns more than 100% of prob??), and I am not sure the output I get is correct.
If anyone knows how to calculate the distance between the 2 gaussian kde kernels I would be very thankful.
A:
There is no closed form solution for KL between two mixtures of gaussians.
KL(p, q) := E_p log [p(x)/q(x)] = -E_p log [q(x)/p(x)]
so you can use MC estimator:
def KL_mc(p, q, n=100):
points = p.resample(n)
p_pdf = p.pdf(points)
q_pdf = q.pdf(points)
return np.log(p_pdf / q_pdf).mean()
Note:
you might need to add some clipping to avoid 0s and infinities (see the sketch right after these notes)
depending on the dimensionality of the space this can require quite large n
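A minimal sketch of such clipping applied to the estimator above (the eps value is an arbitrary choice of mine):
import numpy as np

def KL_mc_clipped(p, q, n=10000, eps=1e-300):
    points = p.resample(n)
    p_pdf = np.clip(p.pdf(points), eps, None)  # avoid log(0)
    q_pdf = np.clip(q.pdf(points), eps, None)  # avoid division by zero
    return np.log(p_pdf / q_pdf).mean()

With the kernels from the question this would be called as KL_mc_clipped(kernel1, kernel2).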
(higher than 1 sometimes, does it mean it assigns more than 100% of prob??)
PDF is not a probability. Not for continuous distributions. It is a probability density. It is a relative measure. Probability assigned to any single value is always 0, but probability of sampling an element in a given set/interval equals integral of pdf over this set/integral (and thus pointwise it can have a weight >1, but over a "small enough" set)
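A quick illustration of this point (the printed number is only indicative): a KDE fitted to a very narrow distribution has a peak density well above 1, and that is perfectly valid:
import numpy as np
from scipy import stats

narrow = stats.gaussian_kde(np.random.normal(0, 0.01, size=1000))
print(narrow.pdf(0))  # typically ~30-40; a density, not a probability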
More general solution
Overall, unless you really need KL for theoretical reasons, there are divergences that are better suited to deal with gaussian mixtures (e.g. such that have closed form solutions), for example Cauchy-Schwarz Divergence.
In particular you can look at Maximum Entropy Linear Manifold which is based exactly on computing CS divergences between KDEs of points. You can see python implementation in melm/dcsk.py in value(v) function on github. In your case you do not want a projection so just put v = identity matrix.
def value(self, v):
# We need matrix, not vector
v = v.reshape(-1, self.k)
ipx0 = self._ipx(self.x0, self.x0, v)
ipx1 = self._ipx(self.x1, self.x1, v)
ipx2 = self._ipx(self.x0, self.x1, v)
return np.log(ipx0) + np.log(ipx1) - 2 * np.log(ipx2)
def _f1(self, X0, X1, v):
Hxy = self.gamma * self.gamma * self._H(X0, X1)
vHv = v.T.dot(Hxy).dot(v)
return 1.0 / (X0.shape[0] * X1.shape[0] * np.sqrt(la.det(vHv)) * (2 * np.pi) ** (self.k / 2))
def _f2(self, X0, X1, v):
Hxy = self.gamma * self.gamma * self._H(X0, X1)
vHv = v.T.dot(Hxy).dot(v)
vHv_inv = la.inv(vHv)
vx0 = X0.dot(v)
vx1 = X1.dot(v)
vx0c = vx0.dot(vHv_inv)
vx1c = vx1.dot(vHv_inv)
ret = 0.0
for i in range(X0.shape[0]):
ret += np.exp(-0.5 * ((vx0c[i] - vx1c) * (vx0[i] - vx1)).sum(axis=1)).sum()
return ret
def _ipx(self, X0, X1, v):
return self._f1(X0, X1, v) * self._f2(X0, X1, v)
Main difference between CS and KL is that KL requires your to compute integral of a logarithm of a pdf and CS computes logarithm of the integral. It happens, that with gaussian mixtures it is this integration of the logarithm that is a problem, without the logarithm everything is easy, and thus DCS is preferable.
| Computing KL-divergence over 2 estimated gaussian KDEs | I have two datasets with the same features and would like to estimate the "distance of distributions" between the two datasets. I had the idea to estimate a gaussian KDE in each of the datasets and computing the KL-divergence between the estimated KDEs. However, I am struggling to compute the "distance" between the distributions. This is what I have so far:
import numpy as np
from scipy import stats
from scipy.stats import entropy
dataset1 = np.random.rand(50)
dataset2 = np.random.rand(49)
kernel1 = stats.gaussian_kde(dataset1)
kernel2 = stats.gaussian_kde(dataset2)
I know I can use entropy(pk, qk) to calculate the kl-divergence but I don't understand how to do that starting from the kernels. I thought about generating some random points and using entropy(kernel1.pdf(points),kernel2.pdf(points)) but the pdf function outputs some weird number (higher than 1 sometimes, does it mean it assigns more than 100% of prob??), and I am not sure the output I get is correct.
If anyone knows how to calculate the distance between the 2 gaussian kde kernels I would be very thankful.
| [
"There is no closed form solution for KL between two mixtures of gaussians.\nKL(p, q) := -E_p log [p(x)/q(x)]\n\nso you can use MC estimator:\ndef KL_mc(p, q, n=100):\n points = p.resample(n)\n p_pdf = p.pdf(points)\n q_pdf = q.pdf(points)\n return np.log(p_pdf / q_pdf).mean()\n\nNote:\n\nyou might need to add some clipping to avoid 0s and infinities\ndepending on the dimensionality of the space this can require quite large n\n\n\n(higher than 1 sometimes, does it mean it assigns more than 100% of prob??)\n\nPDF is not a probability. Not for continuous distributions. It is a probability density. It is a relative measure. Probability assigned to any single value is always 0, but probability of sampling an element in a given set/interval equals integral of pdf over this set/integral (and thus pointwise it can have a weight >1, but over a \"small enough\" set)\nMore general solution\nOverall, unless you really need KL for theoretical reasons, there are divergences that are better suited to deal with gaussian mixtures (e.g. such that have closed form solutions), for example Cauchy-Schwarz Divergence.\nIn particular you can look at Maximum Entropy Linear Manifold which is based exactly on computing CS divergences between KDEs of points. You can see python implementation in melm/dcsk.py in value(v) function on github. In your case you do not want a projection so just put v = identity matrix.\n def value(self, v):\n # We need matrix, not vector\n v = v.reshape(-1, self.k)\n\n ipx0 = self._ipx(self.x0, self.x0, v)\n ipx1 = self._ipx(self.x1, self.x1, v)\n ipx2 = self._ipx(self.x0, self.x1, v)\n\n return np.log(ipx0) + np.log(ipx1) - 2 * np.log(ipx2)\n\n def _f1(self, X0, X1, v):\n Hxy = self.gamma * self.gamma * self._H(X0, X1)\n vHv = v.T.dot(Hxy).dot(v)\n return 1.0 / (X0.shape[0] * X1.shape[0] * np.sqrt(la.det(vHv)) * (2 * np.pi) ** (self.k / 2))\n\n def _f2(self, X0, X1, v):\n Hxy = self.gamma * self.gamma * self._H(X0, X1)\n vHv = v.T.dot(Hxy).dot(v)\n vHv_inv = la.inv(vHv)\n\n vx0 = X0.dot(v)\n vx1 = X1.dot(v)\n vx0c = vx0.dot(vHv_inv)\n vx1c = vx1.dot(vHv_inv)\n\n ret = 0.0\n for i in range(X0.shape[0]):\n ret += np.exp(-0.5 * ((vx0c[i] - vx1c) * (vx0[i] - vx1)).sum(axis=1)).sum()\n return ret\n\n def _ipx(self, X0, X1, v):\n return self._f1(X0, X1, v) * self._f2(X0, X1, v)\n\nMain difference between CS and KL is that KL requires your to compute integral of a logarithm of a pdf and CS computes logarithm of the integral. It happens, that with gaussian mixtures it is this integration of the logarithm that is a problem, without the logarithm everything is easy, and thus DCS is preferable.\n"
] | [
1
] | [] | [] | [
"machine_learning",
"python",
"scikit_learn",
"statistics"
] | stackoverflow_0074675438_machine_learning_python_scikit_learn_statistics.txt |
Q:
Rearranging with pandas melt
I am trying to rearrange a DataFrame. Currently, I have 1035 rows and 24 columns, one for each hour of the day. I want to make this a array with 1035*24 rows. If you want to see the data it can be extracted from the following JSON file:
url = "https://www.svk.se/services/controlroom/v2/situation?date={}&biddingArea=SE1"
svk = []
for i in parsing_range_svk:
data_json_svk = json.loads(urlopen(url.format(i)).read())
svk.append([v["y"] for v in data_json_svk["Data"][0]["data"]])
This is the code I am using to rearrange this data, but it is not doing the job. The first observation is in the right place, then it starts getting messy. I have not been able to figure out where each observation goes.
svk = pd.DataFrame(svk)
date_start1 = datetime(2020, 1, 1)
date_range1 = [date_start1 + timedelta(days=x) for x in range(1035)]
date_svk = pd.DataFrame(date_range1, columns=['date'])
svk['date'] = date_svk['date']
svk.drop(24, axis=1, inplace=True)
consumption_svk_1 = (svk.melt('date', value_name='SE1_C')
.assign(date = lambda x: x['date'] +
pd.to_timedelta(x.pop('variable').astype(float), unit='h'))
.sort_values('date', ignore_index=True))
A:
To rearrange the DataFrame in the desired way, you can use the pandas.DataFrame.stack method to reshape the DataFrame from wide to long format. Set the date column as the index first so that only the 24 hour columns get stacked, then reset the index and rename the columns.
consumption_svk_1 = (svk.set_index('date')
                     .stack()
                     .reset_index()
                     .rename(columns={'level_1': 'hour', 'date': 'timestamp', 0: 'value'})
                     .sort_values('timestamp', ignore_index=True))
This should give you a DataFrame with 1035*24 rows and three columns: timestamp, hour, and value. Note that the timestamp column is not in the correct format, so you will need to convert it to a datetime format using the pandas.to_datetime method. Here is an example:
consumption_svk_1['timestamp'] = pd.to_datetime(consumption_svk_1['timestamp'])
| Rearranging with pandas melt | I am trying to rearrange a DataFrame. Currently, I have 1035 rows and 24 columns, one for each hour of the day. I want to make this a array with 1035*24 rows. If you want to see the data it can be extracted from the following JSON file:
url = "https://www.svk.se/services/controlroom/v2/situation?date={}&biddingArea=SE1"
svk = []
for i in parsing_range_svk:
data_json_svk = json.loads(urlopen(url.format(i)).read())
svk.append([v["y"] for v in data_json_svk["Data"][0]["data"]])
This is the code I am using to rearrange this data, but it is not doing the job. The first observation is in the right place, then it starts getting messy. I have not been able to figure out where each observation goes.
svk = pd.DataFrame(svk)
date_start1 = datetime(2020, 1, 1)
date_range1 = [date_start1 + timedelta(days=x) for x in range(1035)]
date_svk = pd.DataFrame(date_range1, columns=['date'])
svk['date'] = date_svk['date']
svk.drop(24, axis=1, inplace=True)
consumption_svk_1 = (svk.melt('date', value_name='SE1_C')
.assign(date = lambda x: x['date'] +
pd.to_timedelta(x.pop('variable').astype(float), unit='h'))
.sort_values('date', ignore_index=True))
| [
"To rearrange the DataFrame in the desired way, you can use the pandas.DataFrame.stack method to reshape the DataFrame from wide to long format. Then, you can drop the variable column and rename the date column to the desired name.\nconsumption_svk_1 = (svk.stack()\n .reset_index()\n .rename(columns={'level_1': 'hour', 'date': 'timestamp'})\n .sort_values('timestamp', ignore_index=True))\n\nThis should give you a DataFrame with 1035*24 rows and three columns: timestamp, hour, and value. Note that the timestamp column is not in the correct format, so you will need to convert it to a datetime format using the pandas.to_datetime method. Here is an example:\nconsumption_svk_1['timestamp'] = pd.to_datetime(consumption_svk_1['timestamp'])\n\n"
] | [
0
] | [] | [] | [
"json",
"pandas_melt",
"python"
] | stackoverflow_0074675971_json_pandas_melt_python.txt |
Q:
I'm trying to split and remove unnecessary characters from a column using pandas
I'm trying to remove all the unnecessary words and characters from the values in this column. I want the rows to contain 'Entry level', 'Mid-Senior level' etc. Also is there anyway to translate the arabic to english or shall I use replace function?
df_africa.seniority_level.value_counts()
{'Seniority level': 'Entry level'} 1073
{'Seniority level': 'Mid-Senior level'} 695
{'Seniority level': 'Associate'} 481
{'Seniority level': 'Not Applicable'} 150
{'مستوى الأقدمية': 'مستوى متوسط الأقدمية'} 115
{'مستوى الأقدمية': 'مستوى المبتدئين'} 82
{'نوع التوظيف': 'دوام كامل'} 73
{'مستوى الأقدمية': 'مساعد'} 48
{'مستوى الأقدمية': 'غير مطبق'} 42
{'Seniority level': 'Internship'} 39
{'Employment type': 'Contract'} 21
{'Employment type': 'Full-time'} 1
I've tried the split function but i couldn't get it to work properly.
A:
IIUC, use this :
import ast
#Is there any non-latin letters?
m = ~df_africa["seniority_level"].str.contains("[A-Z]")
s = df_africa["seniority_level"].apply(lambda x: ast.literal_eval(x))
df_africa["new_col"] = s.str["مستوى الأقدمية"].where(m, s.str["Seniority level"])
If you need to translate the words extracted, use deep-translator :
#pip install -U deep-translator
from deep_translator import GoogleTranslator
df_africa["new_col (TRA)"] = (
df_africa["new_col"]
.fillna("")
.apply(lambda x: GoogleTranslator(source="arabic")
.translate(x)
.title())
.replace("", None)
)
Even so, I suggest you use a custom dict with map to get the appropriate translation; see the sketch after the output below.
# Output :
display(df_africa)
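A minimal sketch of that custom mapping (the English equivalents below are my assumptions, adjust them to your data):
translation_map = {
    "مستوى متوسط الأقدمية": "Mid-Senior level",
    "مستوى المبتدئين": "Entry level",
    "مساعد": "Associate",
    "غير مطبق": "Not Applicable",
}
df_africa["new_col (TRA)"] = (
    df_africa["new_col"]
    .map(translation_map)
    .fillna(df_africa["new_col"])  # keep values that are already English
)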
A:
Would be useful to know the type of the 'seniority_level' column, but I'm just gonna assume the column is made up of literal strings (e.g. "{'Seniority level': 'Entry level'}")
Can translate all the text with this googletrans package, it piggybacks off google translate so use it while it lasts. Make sure to install version 4.0.0rc1.
$ pip install googletrans==4.0.0rc1
translate:
from googletrans import Translator
translator = Translator()
def translate_to_english(words):
for character in words:
if ord(character) > 127:
return translator.translate(words, dest="en").text
return words
df_africa["new_seniority_level"] = df_africa["seniority_level"].map(lambda row: translate_to_english(row))
print(df_africa)
seniority_level new_seniority_level
0 {'Seniority level': 'Entry level'} {'Seniority level': 'Entry level'}
1 {'Seniority level': 'Mid-Senior level'} {'Seniority level': 'Mid-Senior level'}
2 {'Seniority level': 'Associate'} {'Seniority level': 'Associate'}
3 {'Seniority level': 'Not Applicable'} {'Seniority level': 'Not Applicable'}
4 {'مستوى الأقدمية': 'مستوى متوسط الأقدمية'} {'Seniority level': 'average level of seniority'}
5 {'مستوى الأقدمية': 'مستوى المبتدئين'} {'Seniority level': 'beginners' level'}
6 {'نوع التوظيف': 'دوام كامل'} {'Recruitment type': 'full time'}
7 {'مستوى الأقدمية': 'مساعد'} {'Seniority level': 'assistant'}
8 {'مستوى الأقدمية': 'غير مطبق'} {'Senior level': 'unprecedented'}
9 {'Seniority level': 'Internship'} {'Seniority level': 'Internship'}
10 {'Employment type': 'Contract'} {'Employment type': 'Contract'}
11 {'Employment type': 'Full-time'} {'Employment type': 'Full-time'}
then get the text you want:
import re
df_africa["new_seniority_level"] = df_africa["new_seniority_level"].map(lambda row: re.match(r".+: '(.*)'", row).group(1))
print(df_africa)
seniority_level new_seniority_level
0 {'Seniority level': 'Entry level'} Entry level
1 {'Seniority level': 'Mid-Senior level'} Mid-Senior level
2 {'Seniority level': 'Associate'} Associate
3 {'Seniority level': 'Not Applicable'} Not Applicable
4 {'مستوى الأقدمية': 'مستوى متوسط الأقدمية'} average level of seniority
5 {'مستوى الأقدمية': 'مستوى المبتدئين'} beginners' level
6 {'نوع التوظيف': 'دوام كامل'} full time
7 {'مستوى الأقدمية': 'مساعد'} assistant
8 {'مستوى الأقدمية': 'غير مطبق'} unprecedented
9 {'Seniority level': 'Internship'} Internship
10 {'Employment type': 'Contract'} Contract
11 {'Employment type': 'Full-time'} Full-time
Look into official google translate api if googletrans eventually breaks.
| I'm trying to split and remove unnecessary characters from a column using pandas | I'm trying to remove all the unnecessary words and characters from the values in this column. I want the rows to contain 'Entry level', 'Mid-Senior level' etc. Also is there anyway to translate the arabic to english or shall I use replace function?
df_africa.seniority_level.value_counts()
{'Seniority level': 'Entry level'} 1073
{'Seniority level': 'Mid-Senior level'} 695
{'Seniority level': 'Associate'} 481
{'Seniority level': 'Not Applicable'} 150
{'مستوى الأقدمية': 'مستوى متوسط الأقدمية'} 115
{'مستوى الأقدمية': 'مستوى المبتدئين'} 82
{'نوع التوظيف': 'دوام كامل'} 73
{'مستوى الأقدمية': 'مساعد'} 48
{'مستوى الأقدمية': 'غير مطبق'} 42
{'Seniority level': 'Internship'} 39
{'Employment type': 'Contract'} 21
{'Employment type': 'Full-time'} 1
I've tried the split function but i couldn't get it to work properly.
| [
"IIUC, use this :\nimport ast\n\n#Is there any non-latin letters?\nm = ~df_africa[\"seniority_level\"].str.contains(\"[A-Z]\")\n\ns = df_africa[\"seniority_level\"].apply(lambda x: ast.literal_eval(x))\ndf_africa[\"new_col\"] = s.str[\"مستوى الأقدمية\"].where(m, s.str[\"Seniority level\"])\n\nIf you need to translate the words extracted, use deep-translator :\n#pip install -U deep-translator\nfrom deep_translator import GoogleTranslator\n\ndf_africa[\"new_col (TRA)\"] = (\n df_africa[\"new_col\"]\n .fillna(\"\")\n .apply(lambda x: GoogleTranslator(source=\"arabic\")\n .translate(x)\n .title())\n .replace(\"\", None)\n )\n\nEven though I suggest you to use a custom dict using map to get the appropriate translation.\n# Output :\ndisplay(df_africa)\n\n",
"Would be useful to know the type of the 'seniority_level' column, but I'm just gonna assume the column is made up of literal strings (e.g. \"{'Seniority level': 'Entry level'}\")\nCan translate all the text with this googletrans package, it piggybacks off google translate so use it while it lasts. Make sure to install version 4.0.0rc1.\n$ pip install googletrans==4.0.0rc1\n\ntranslate:\nfrom googletrans import Translator\n\ntranslator = Translator()\n\ndef translate_to_english(words):\n for character in words:\n if ord(character) > 127:\n return translator.translate(words, dest=\"en\").text\n return words\n\ndf_africa[\"new_seniority_level\"] = df_africa[\"seniority_level\"].map(lambda row: translate_to_english(row))\nprint(df_africa)\n\n\n seniority_level new_seniority_level\n0 {'Seniority level': 'Entry level'} {'Seniority level': 'Entry level'}\n1 {'Seniority level': 'Mid-Senior level'} {'Seniority level': 'Mid-Senior level'}\n2 {'Seniority level': 'Associate'} {'Seniority level': 'Associate'}\n3 {'Seniority level': 'Not Applicable'} {'Seniority level': 'Not Applicable'}\n4 {'مستوى الأقدمية': 'مستوى متوسط الأقدمية'} {'Seniority level': 'average level of seniority'}\n5 {'مستوى الأقدمية': 'مستوى المبتدئين'} {'Seniority level': 'beginners' level'}\n6 {'نوع التوظيف': 'دوام كامل'} {'Recruitment type': 'full time'}\n7 {'مستوى الأقدمية': 'مساعد'} {'Seniority level': 'assistant'}\n8 {'مستوى الأقدمية': 'غير مطبق'} {'Senior level': 'unprecedented'}\n9 {'Seniority level': 'Internship'} {'Seniority level': 'Internship'}\n10 {'Employment type': 'Contract'} {'Employment type': 'Contract'}\n11 {'Employment type': 'Full-time'} {'Employment type': 'Full-time'}\n\n\nthen get the text you want:\nimport re\n\ndf_africa[\"new_seniority_level\"] = df_africa[\"new_seniority_level\"].map(lambda row: re.match(r\".+: '(.*)'\", row).group(1))\nprint(df_africa)\n\n seniority_level new_seniority_level\n0 {'Seniority level': 'Entry level'} Entry level\n1 {'Seniority level': 'Mid-Senior level'} Mid-Senior level\n2 {'Seniority level': 'Associate'} Associate\n3 {'Seniority level': 'Not Applicable'} Not Applicable\n4 {'مستوى الأقدمية': 'مستوى متوسط الأقدمية'} average level of seniority\n5 {'مستوى الأقدمية': 'مستوى المبتدئين'} beginners' level\n6 {'نوع التوظيف': 'دوام كامل'} full time\n7 {'مستوى الأقدمية': 'مساعد'} assistant\n8 {'مستوى الأقدمية': 'غير مطبق'} unprecedented\n9 {'Seniority level': 'Internship'} Internship\n10 {'Employment type': 'Contract'} Contract\n11 {'Employment type': 'Full-time'} Full-time\n\nLook into official google translate api if googletrans eventually breaks.\n"
] | [
0,
0
] | [] | [] | [
"pandas",
"python",
"python_3.x",
"split"
] | stackoverflow_0074674980_pandas_python_python_3.x_split.txt |
Q:
Selenium driver hanging on OS alert
I'm using Selenium in Python (3.11) with a Firefox (107) driver.
With the driver I navigate to a page which, after several actions, triggers an OS alert (prompting me to launch a program). When this alert pops up, the driver hangs, and only once it is closed manually does my script continue to run.
I have tried driver.quit(), as well as using
os.system("taskkill /F /pid " + str(process.ProcessId))
with the driver's PID, with no luck.
I have managed to prevent the pop-up from popping up with
options.set_preference("security.external_protocol_requires_permission", False)
but the code still hangs the same way at the point where the popup would have popped up.
I don't care whether the program launches or not, I just need my code to not require human intervention at this key point.
here is a minimal example of what I currently have:
from selenium.webdriver import ActionChains, Keys
from selenium.webdriver.firefox.options import Options
from seleniumwire import webdriver
options = Options()
options.binary_location = r'C:\Program Files\Mozilla Firefox\firefox.exe'
options.set_preference("security.external_protocol_requires_permission", False)
driver = webdriver.Firefox(options=options)
# Go to the page
driver.get(url)
user_field = driver.find_element("id", "UserName")
user_field.send_keys(username)
pass_field = driver.find_element("id", "Password")
pass_field.send_keys(password)
pass_field.send_keys(Keys.ENTER)
#this is the point where the pop up appears
reqs = driver.requests
print("Success!")
driver.quit()
A:
There are some prefs you can try
profile = webdriver.FirefoxProfile()
profile.set_preference('dom.push.enabled', False)
# or
profile = webdriver.FirefoxProfile()
profile.set_preference('dom.webnotifications.enabled', False)
profile.set_preference('dom.webnotifications.serviceworker.enabled', False)
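If you build the driver with Options as in the question (FirefoxProfile is deprecated in recent Selenium versions), the same preferences should also be settable there directly:
options.set_preference('dom.push.enabled', False)
options.set_preference('dom.webnotifications.enabled', False)
options.set_preference('dom.webnotifications.serviceworker.enabled', False)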
A:
Have you tried setting this preference to prevent the particular popup:
profile.set_preference('browser.helperApps.neverAsk.openFile', 'typeOfFile')
# e.g. profile.set_preference('browser.helperApps.neverAsk.openFile', 'application/xml,application/octet-stream')
Or have you tried just dismissing the popup:
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
....
pass_field.send_keys(Keys.ENTER)
#this is the point where the pop up appears
WebDriverWait(driver, 5).until(EC.alert_is_present()).dismiss()
reqs = driver.requests
...
A:
Manually tick the checkbox in the dialog and open the app once for every app associated with the links you use; after that it will work normally.
A:
It sounds like you're having trouble with an alert that pops up when using Selenium with Firefox. One way to handle this is to use the Alert class in Selenium. This class provides methods for accepting, dismissing, or sending keys to the alert.
Here's an example of how you could use the Alert class to handle the alert in your code:
# After navigating to the page that triggers the alert
alert = driver.switch_to.alert
# Use the alert methods to handle the alert as needed
alert.accept() # This will accept the alert, launching the program
# or
alert.dismiss() # This will dismiss the alert, not launching the program
# or
alert.send_keys("some text") # This will enter text in the alert and accept it
Alternatively, you could use the unhandled_prompt_behavior option in the FirefoxOptions class to specify how Selenium should handle unhandled alerts. This option takes one of three values:
accept: Accept the alert (launch the program in your case)
dismiss: Dismiss the alert (not launch the program in your case)
ignore: Ignore the alert, allowing the script to continue running
Here's an example of how you could use the unhandled_prompt_behavior option to handle the alert:
from selenium.webdriver.firefox.options import Options
# Set the unhandled_prompt_behavior option to dismiss
options = Options()
options.binary_location = r'C:\Program Files\Mozilla Firefox\firefox.exe'
options.set_preference("security.external_protocol_requires_permission", False)
options.unhandled_prompt_behavior = "dismiss"
driver = webdriver.Firefox(options=options)
# Navigate to the page that triggers the alert
driver.get(url)
# The alert should be automatically dismissed by the unhandled_prompt_behavior setting
A:
I believe you should be able to handle the OS alert by calling driver.switch_to.alert.accept() or driver.switch_to.alert.dismiss() after navigating to the page where the alert pops up. This will automatically accept or dismiss the alert and allow your script to continue running. Here is an example of how you could use this in your code:
# Go to the page
driver.get(url)
user_field = driver.find_element("id", "UserName")
user_field.send_keys(username)
pass_field = driver.find_element("id", "Password")
pass_field.send_keys(password)
pass_field.send_keys(Keys.ENTER)
# Handle the alert by dismissing it
driver.switch_to.alert.dismiss()
# Your script can continue running now
reqs = driver.requests
print("Success!")
driver.quit()
Alternatively, if you don't want to handle the alert and just want to avoid it entirely, you can use the expected_conditions helpers from the selenium.webdriver.support module to wait for the alert to be dismissed before continuing. Here is an example of how you could use this:
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
# Go to the page
driver.get(url)
user_field = driver.find_element("id", "UserName")
user_field.send_keys(username)
pass_field = driver.find_element("id", "Password")
pass_field.send_keys(password)
pass_field.send_keys(Keys.ENTER)
# Wait for the alert to be dismissed before continuing
WebDriverWait(driver, 10).until_not(EC.alert_is_present())
# Your script can continue running now
reqs = driver.requests
print("Success!")
driver.quit()
I hope this helps! Let me know if you have any other questions.
A:
To handle the OS alert, you can use the WebDriverWait class from the selenium.webdriver.support.ui module and the Alert class from the selenium.webdriver.common.alert module. You can use these classes to wait for the alert to be present and then either accept or dismiss the alert depending on what you want to do.
Here is an example:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.alert import Alert
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
# Create an instance of Firefox WebDriver
driver = webdriver.Firefox()
# Go to the page
driver.get(url)
user_field = driver.find_element(By.ID, "UserName")
user_field.send_keys(username)
pass_field = driver.find_element(By.ID, "Password")
pass_field.send_keys(password)
pass_field.send_keys(Keys.ENTER)
# Wait for the alert to be present and handle it
WebDriverWait(driver, 10).until(EC.alert_is_present())
alert = Alert(driver)
alert.accept() # or alert.dismiss() to dismiss the alert
reqs = driver.requests
print("Success!")
driver.quit()
A:
One way to handle the alert that pops up is to use the Alert class in Selenium. You can use the switch_to.alert property to switch to the alert, and then use the accept() or dismiss() method to accept or dismiss the alert.
Here is an example:
from selenium.webdriver import ActionChains, Keys
from selenium.webdriver.firefox.options import Options
from selenium.webdriver.common.alert import Alert
from seleniumwire import webdriver
options = Options()
options.binary_location = r'C:\Program Files\Mozilla Firefox\firefox.exe'
options.set_preference("security.external_protocol_requires_permission", False)
driver = webdriver.Firefox(options=options)
# Go to the page
driver.get(url)
user_field = driver.find_element("id", "UserName")
user_field.send_keys(username)
pass_field = driver.find_element("id", "Password")
pass_field.send_keys(password)
pass_field.send_keys(Keys.ENTER)
# Handle the alert if it appears
try:
alert = Alert(driver)
alert.dismiss()
except:
pass
reqs = driver.requests
print("Success!")
driver.quit()
Alternatively, you can use the execute_script() method to handle the alert. This method allows you to execute JavaScript code in the context of the current page. You can use this method to call the alert(), confirm(), or prompt() function in the browser to handle the alert.
Here is an example:
from selenium.webdriver import ActionChains, Keys
from selenium.webdriver.firefox.options import Options
from seleniumwire import webdriver
options = Options()
options.binary_location = r'C:\Program Files\Mozilla Firefox\firefox.exe'
driver = webdriver.Firefox(options=options)
# Go to the page
driver.get(url)
user_field = driver.find_element("id", "UserName")
user_field.send_keys(username)
pass_field = driver.find_element("id", "Password")
pass_field.send_keys(password)
pass_field.send_keys(Keys.ENTER)
# Use the execute_script() method to handle the alert
driver.execute_script("alert('This is an alert')")
# Continue with the script
reqs = driver.requests
print("Success!")
driver.quit()
| Selenium driver hanging on OS alert | I'm using Selenium in Python (3.11) with a Firefox (107) driver.
With the driver I navigate to a page which, after several actions, triggers an OS alert (prompting me to launch a program). When this alert pops up, the driver hangs, and only once it is closed manually does my script continue to run.
I have tried driver.quit(), as well as using
os.system("taskkill /F /pid " + str(process.ProcessId))
with the driver's PID, with no luck.
I have managed to prevent the pop-up from popping up with
options.set_preference("security.external_protocol_requires_permission", False)
but the code still hangs the same way at the point where the popup would have popped up.
I don't care whether the program launches or not, I just need my code to not require human intervention at this key point.
here is a minimal example of what I currently have:
from selenium.webdriver import ActionChains, Keys
from selenium.webdriver.firefox.options import Options
from seleniumwire import webdriver
options = Options()
options.binary_location = r'C:\Program Files\Mozilla Firefox\firefox.exe'
options.set_preference("security.external_protocol_requires_permission", False)
driver = webdriver.Firefox(options=options)
# Go to the page
driver.get(url)
user_field = driver.find_element("id", "UserName")
user_field.send_keys(username)
pass_field = driver.find_element("id", "Password")
pass_field.send_keys(password)
pass_field.send_keys(Keys.ENTER)
#this is the point where the pop up appears
reqs = driver.requests
print("Success!")
driver.quit()
| [
"There are some prefs you can try\nprofile = webdriver.FirefoxProfile()\nprofile.set_preference('dom.push.enabled', False)\n\n# or\n\nprofile = webdriver.FirefoxProfile()\nprofile.set_preference('dom.webnotifications.enabled', False)\nprofile.set_preference('dom.webnotifications.serviceworker.enabled', False)\n\n",
"Have you tried setting this preference to prevent the particular popup:\nprofile.set_preference('browser.helperApps.neverAsk.openFile', 'typeOfFile') \n# e.g. profile.set_preference('browser.helperApps.neverAsk.openFile', 'application/xml,application/octet-stream')\n\nOr have you tried just dismissing the popup:\nfrom selenium.webdriver.support.ui import WebDriverWait\nfrom selenium.webdriver.support import expected_conditions as EC\n\n....\npass_field.send_keys(Keys.ENTER)\n\n#this is the point where the pop up appears\nWebDriverWait(driver, 5).until(EC.alert_is_present).dismiss()\nreqs = driver.requests\n...\n\n",
"check this checkbox manually then open the app for every app associated to the links you use, then it will work normally.\n\n",
"It sounds like you're having trouble with an alert that pops up when using Selenium with Firefox. One way to handle this is to use the Alert class in Selenium. This class provides methods for accepting, dismissing, or sending keys to the alert.\nHere's an example of how you could use the Alert class to handle the alert in your code:\n# After navigating to the page that triggers the alert\nalert = driver.switch_to.alert\n\n# Use the alert methods to handle the alert as needed\nalert.accept() # This will accept the alert, launching the program\n# or\nalert.dismiss() # This will dismiss the alert, not launching the program\n# or\nalert.send_keys(\"some text\") # This will enter text in the alert and accept it\n\nAlternatively, you could use the unhandled_prompt_behavior option in the FirefoxOptions class to specify how Selenium should handle unhandled alerts. This option takes one of three values:\naccept: Accept the alert (launch the program in your case)\ndismiss: Dismiss the alert (not launch the program in your case)\nignore: Ignore the alert, allowing the script to continue running\nHere's an example of how you could use the unhandled_prompt_behavior option to handle the alert:\nfrom selenium.webdriver.firefox.options import Options\n\n# Set the unhandled_prompt_behavior option to dismiss\noptions = Options()\noptions.binary_location = r'C:\\Program Files\\Mozilla Firefox\\firefox.exe'\noptions.set_preference(\"security.external_protocol_requires_permission\", False)\noptions.unhandled_prompt_behavior = \"dismiss\"\ndriver = webdriver.Firefox(options=options)\n\n# Navigate to the page that triggers the alert\ndriver.get(url)\n\n# The alert should be automatically dismissed by the unhandled_prompt_behavior setting\n\n",
"I believe you should be able to handle the OS alert by calling driver.switch_to.alert.accept() or driver.switch_to.alert.dismiss() after navigating to the page where the alert pops up. This will automatically accept or dismiss the alert and allow your script to continue running. Here is an example of how you could use this in your code:\n# Go to the page\ndriver.get(url)\n\nuser_field = driver.find_element(\"id\", \"UserName\")\nuser_field.send_keys(username)\npass_field = driver.find_element(\"id\", \"Password\")\npass_field.send_keys(password)\npass_field.send_keys(Keys.ENTER)\n\n# Handle the alert by dismissing it\ndriver.switch_to.alert.dismiss()\n\n# Your script can continue running now\nreqs = driver.requests\n\nprint(\"Success!\")\ndriver.quit()\n\n\nAlternatively, if you don't want to handle the alert and just want to avoid it entirely, you can use the ExpectedConditions class from the selenium.webdriver.support.ui module to wait for the alert to be dismissed before continuing. Here is an example of how you could use this:\nfrom selenium.webdriver.support.ui import WebDriverWait\nfrom selenium.webdriver.support import expected_conditions as EC\n\n# Go to the page\ndriver.get(url)\n\nuser_field = driver.find_element(\"id\", \"UserName\")\nuser_field.send_keys(username)\npass_field = driver.find_element(\"id\", \"Password\")\npass_field.send_keys(password)\npass_field.send_keys(Keys.ENTER)\n\n# Wait for the alert to be dismissed before continuing\nWebDriverWait(driver, 10).until(EC.alert_is_not_present())\n\n# Your script can continue running now\nreqs = driver.requests\n\nprint(\"Success!\")\ndriver.quit()\n\n\nI hope this helps! Let me know if you have any other questions.\n",
"To handle the OS alert, you can use the WebDriverWait class from the selenium.webdriver.common.by module and the Alert class from the selenium.webdriver.common.alerts module. You can use these classes to wait for the alert to be present and then either accept or dismiss the alert depending on what you want to do.\nHere is an example:\nfrom selenium import webdriver\nfrom selenium.webdriver.common.by import By\nfrom selenium.webdriver.common.alerts import Alert\nfrom selenium.webdriver.support.ui import WebDriverWait\nfrom selenium.webdriver.support import expected_conditions as EC\n\n# Create an instance of Firefox WebDriver\ndriver = webdriver.Firefox()\n\n# Go to the page\ndriver.get(url)\n\nuser_field = driver.find_element(By.ID, \"UserName\")\nuser_field.send_keys(username)\npass_field = driver.find_element(By.ID, \"Password\")\npass_field.send_keys(password)\npass_field.send_keys(Keys.ENTER)\n\n# Wait for the alert to be present and handle it\nWebDriverWait(driver, 10).until(EC.alert_is_present())\nalert = Alert(driver)\nalert.accept() # or alert.dismiss() to dismiss the alert\n\nreqs = driver.requests\n\nprint(\"Success!\")\ndriver.quit()\n\n",
"One way to handle the alert that pops up is to use the Alert class in Selenium. You can use the switch_to_alert() method to switch to the alert, and then use the accept() or dismiss() method to accept or dismiss the alert.\nHere is an example:\nfrom selenium.webdriver import ActionChains, Keys\nfrom selenium.webdriver.firefox.options import Options\nfrom selenium.webdriver.common.alert import Alert\nfrom seleniumwire import webdriver\n\noptions = Options()\noptions.binary_location = r'C:\\Program Files\\Mozilla Firefox\\firefox.exe'\noptions.set_preference(\"security.external_protocol_requires_permission\", False)\ndriver = webdriver.Firefox(options=options)\n\n# Go to the page\ndriver.get(url)\n\nuser_field = driver.find_element(\"id\", \"UserName\")\nuser_field.send_keys(username)\npass_field = driver.find_element(\"id\", \"Password\")\npass_field.send_keys(password)\npass_field.send_keys(Keys.ENTER)\n\n# Handle the alert if it appears\ntry:\n alert = Alert(driver)\n alert.dismiss()\nexcept:\n pass\n\nreqs = driver.requests\n\nprint(\"Success!\")\ndriver.quit()\n\nAlternatively, you can use the execute_script() method to handle the alert. This method allows you to execute JavaScript code in the context of the current page. You can use this method to call the alert(), confirm(), or prompt() function in the browser to handle the alert.\nHere is an example:\nfrom selenium.webdriver import ActionChains, Keys\nfrom selenium.webdriver.firefox.options import Options\nfrom seleniumwire import webdriver\n\noptions = Options()\noptions.binary_location = r'C:\\Program Files\\Mozilla Firefox\\firefox.exe'\ndriver = webdriver.Firefox(options=options)\n\n//Go to the page\ndriver.get(url)\n\nuser_field = driver.find_element(\"id\", \"UserName\")\nuser_field.send_keys(username)\npass_field = driver.find_element(\"id\", \"Password\")\npass_field.send_keys(password)\npass_field.send_keys(Keys.ENTER)\n\n//Use the execute_script() method to handle the alert\ndriver.execute_script(\"alert('This is an alert')\")\n\n//Continue with the script\nreqs = driver.requests\n\nprint(\"Success!\")\ndriver.quit()\n\n"
] | [
3,
3,
1,
0,
0,
0,
0
] | [] | [] | [
"python",
"python_3.x",
"selenium",
"selenium_chromedriver",
"selenium_webdriver"
] | stackoverflow_0074563548_python_python_3.x_selenium_selenium_chromedriver_selenium_webdriver.txt |
Q:
python: AttributeError: 'list' object has no attribute 'groupby'
I am following a Youtube tutorial on a streamlit application, however the error
"AttributeError: 'list' object has no attribute 'groupby'"
occured when I was trying to group my list that I scraped from wikipedia, the instructor had the exact code as me but didn't face a problem, where am I missing out exactly?
import streamlit as st
import pandas as pd
@st.cache
def load_data():
url = 'https://en.wikipedia.org/wiki/List_of_S%26P_500_companies'
html = pd.read_html(url, header = 0)
df = html[0]
return df
df = load_data()
df = df.groupby('GICS Sector')
A:
I fixed it, I just had to reassign the df variable to its first index
import streamlit as st
import pandas as pd
@st.cache
def load_data():
url = "https://en.wikipedia.org/wiki/List_of_S%26P_500_companies"
html = pd.read_html(url, header=0)
df = html[0]
return df
df = load_data()
df = df[0]
df = df.groupby("GICS Sector")
| python: AttributeError: 'list' object has no attribute 'groupby' | I am following a Youtube tutorial on a streamlit application, however the error
"AttributeError: 'list' object has no attribute 'groupby'"
occured when I was trying to group my list that I scraped from wikipedia, the instructor had the exact code as me but didn't face a problem, where am I missing out exactly?
import streamlit as st
import pandas as pd
@st.cache
def load_data():
url = 'https://en.wikipedia.org/wiki/List_of_S%26P_500_companies'
html = pd.read_html(url, header = 0)
df = html[0]
return df
df = load_data()
df = df.groupby('GICS Sector')
| [
"I fixed it, I just had to reassign the df variable to it's first index\nimport streamlit as st\nimport pandas as pd\n\[email protected]\ndef load_data():\n url = \"https://en.wikipedia.org/wiki/List_of_S%26P_500_companies\"\n html = pd.read_html(url, header=0)\n df = html[0]\n return df\n\ndf = load_data()\ndf = df[0]\ndf = df.groupby(\"GICS Sector\")\n\n"
] | [
1
] | [] | [] | [
"pandas",
"python",
"streamlit"
] | stackoverflow_0074675820_pandas_python_streamlit.txt |
Q:
Parsing an XML file that contains HTML snippets, renaming HTML class names, and then write back the XML file
I've got XML files that contain HTML snippets. I'm trying to write a Python script that opens such an XML file, searches for the elements containing the HTML, renames the classes, and then writes the new XML back to file.
Here's an XML example:
<?xml version="1.0" encoding="UTF-8"?>
<question_categories>
<question_category id="18883">
<name>templates</name>
<questions>
<question id="1419226">
<parent>0</parent>
<name>_template_master</name>
<questiontext>
<div class="wrapper">
<div class="wrapper element">
<span>Exercise 1</span>
</div>
</div>
</questiontext>
</question>
<question id="1419238">
<parent>0</parent>
<name>_template_singleDropDown</name>
<questiontext>
<div class="wrapper">
<div class="element wrapper">
<span>Exercise 2</span>
</div>
</div>
</questiontext>
</question>
</questions>
</question_category>
</question_categories>
The element containing the HTML is <questiontext>, the HTML class to be renamed is wrapper, and the new class name should be prefixed-wrapper.
I succeeded in looping through the XML, extracting the HTML and also renaming the class, but I don't know how to put everything together so that at the end I get an XML file with the renamed class names.
This is my code so far:
from bs4 import BeautifulSoup
with open('dummy_short.xml', 'r') as f:
file = f.read()
soup_xml = BeautifulSoup(file, 'xml')
for questiontext in soup_xml.find_all('questiontext'):
for singleclass in BeautifulSoup(questiontext.text, 'html.parser').find_all(class_='wrapper'):
pos = singleclass.attrs['class'].index('wrapper')
singleclass.attrs['class'][pos] = 'prefixed-wrapper'
print(soup_xml)
Unfortunately, when printing soup_xml at the end, the contents are unchanged, i.e. the class names aren't renamed.
EDIT: Since one and the same class name can occur in very different and complex contexts (for example along with other classes, i.e. class="xxx yyy wrapper zzz"), a static match isn't working. And instead of using complicated and non-comprehensible regexes, I have to use a parser like beautifulsoup (because they are made exactly for this purpose!).
A:
After your comment I have changed my code a little bit.
Now the html part is correctly escaped, but the empty tags are gone. Anyway the XML is valid. It seems tree.write() has some trouble with mixed XML and inserted html sequences.
import xml.etree.ElementTree as ET
from html import escape, unescape
tree = ET.parse('source.xml')
root = tree.getroot()
def replace_html(elem):
dummyXML = ET.fromstring(elem)
for htm in dummyXML.iter('div'):
if htm.tag == "div" and htm.get('class') =="wrapper":
htm.set('class', "prefixed-wrapper")
return ET.tostring(dummyXML, method='html').decode('utf-8')
for elem in root.iter("questiontext"):
html = replace_html(unescape(elem.text))
elem.text = escape(html)
with open('new.xml', 'w') as f:
f.write(f'<?xml version="1.0" encoding="UTF-8"?>')
with open('new.xml', 'a') as f:
    f.write(ET.tostring(root).decode('utf-8').replace('&amp;','&'))
The source XML file is "source.xml" and the updated XML file name is "new.xml".
Output (changed part only):
<questiontext>
<div class="prefixed-wrapper">
<div class="wrapper element">
<span>Exercise 1</span>
</div>
</div>
</questiontext>
A:
Option 2: Your preferred BeautifulSoup solution
from bs4 import BeautifulSoup
#from xml.sax.saxutils import quoteattr, escape, unescape
import re
# Get the XML soup
with open('source.xml', 'r') as f:
file = f.read()
soup_xml = BeautifulSoup(file, 'xml')
def soup_htm(elm):
"""Modify attributes according request """
# Get the html soup
soup = BeautifulSoup(elm.string, 'html.parser')
for elem in soup.find_all('div'):
if elem.attrs== {'class': ['wrapper']}:
elem['class'] = ['prefixed-wrapper']
if elem.attrs== {'class': ['wrapper', 'element']}:
elem['class'] = ['prefixed-wrapper', 'element']
if elem.attrs== {'class': ['element', 'wrapper']}:
elem['class'] = ['element', 'prefixed-wrapper']
    return re.sub('"', '&quot;', str(soup))
# Find element and replace it
for questiontext in soup_xml.find_all('questiontext'):
htm_changed = soup_htm(questiontext)
questiontext = questiontext.string.wrap(soup_xml.new_tag("questiontext")).replace_with(htm_changed)
# Print result
print(soup_xml.prettify())
I prefer the built-in Python solution, but this one is also nice and maybe easier with such mixed XML/HTML documents. Anyway, the single/double quotes make trouble. Maybe another user can help.
| Parsing an XML file that contains HTML snippets, renaming HTML class names, and then write back the XML file | I've got XML files that contain HTML snippets. I'm trying to write a Python script that opens such an XML file, searches for the elements containing the HTML, renames the classes, and then writes the new XML back to file.
Here's an XML example:
<?xml version="1.0" encoding="UTF-8"?>
<question_categories>
<question_category id="18883">
<name>templates</name>
<questions>
<question id="1419226">
<parent>0</parent>
<name>_template_master</name>
<questiontext>
<div class="wrapper">
<div class="wrapper element">
<span>Exercise 1</span>
</div>
</div>
</questiontext>
</question>
<question id="1419238">
<parent>0</parent>
<name>_template_singleDropDown</name>
<questiontext>
<div class="wrapper">
<div class="element wrapper">
<span>Exercise 2</span>
</div>
</div>
</questiontext>
</question>
</questions>
</question_category>
</question_categories>
The element containing the HTML is <questiontext>, the HTML class to be renamed is wrapper, and the new class name should be prefixed-wrapper.
I succeeded in looping through the XML, extracting the HTML and also renaming the class, but I don't know how to put everything together so that at the end I get an XML file with the renamed class names.
This is my code so far:
from bs4 import BeautifulSoup
with open('dummy_short.xml', 'r') as f:
file = f.read()
soup_xml = BeautifulSoup(file, 'xml')
for questiontext in soup_xml.find_all('questiontext'):
for singleclass in BeautifulSoup(questiontext.text, 'html.parser').find_all(class_='wrapper'):
pos = singleclass.attrs['class'].index('wrapper')
singleclass.attrs['class'][pos] = 'prefixed-wrapper'
print(soup_xml)
Unfortunately, when printing soup_xml at the end, the contents are unchanged, i.e. the class names aren't renamed.
EDIT: Since one and the same class name can occur in very different and complex contexts (for example along with other classes, i.e. class="xxx yyy wrapper zzz"), a static match isn't working. And instead of using complicated and non-comprehensible regexes, I have to use a parser like beautifulsoup (because they are made exactly for this purpose!).
| [
"After your comment I have changed my code a little bit.\nNow the html part is correct escaped, but the empty tags are gone. Anyway the XML is valid. It seems tree.write() have some trouble with mixed XML and inserted html sequences.\nimport xml.etree.ElementTree as ET\nfrom html import escape, unescape\n\ntree = ET.parse('source.xml')\nroot = tree.getroot()\n\ndef replace_html(elem):\n dummyXML = ET.fromstring(elem)\n for htm in dummyXML.iter('div'):\n if htm.tag == \"div\" and htm.get('class') ==\"wrapper\":\n htm.set('class', \"prefixed-wrapper\") \n return ET.tostring(dummyXML, method='html').decode('utf-8')\n \nfor elem in root.iter(\"questiontext\"):\n html = replace_html(unescape(elem.text))\n elem.text = escape(html)\n \nwith open('new.xml', 'w') as f:\n f.write(f'<?xml version=\"1.0\" encoding=\"UTF-8\"?>')\n\nwith open('new.xml', 'a') as f:\n f.write(ET.tostring(root).decode('utf-8').replace('&','&'))\n\nThe source XML file is \"source.xml\" and the updated XML file name is \"new.xml\".\nOutput (changed part only):\n<questiontext>\n <div class="prefixed-wrapper">\n <div class="wrapper element">\n <span>Exercise 1</span>\n </div>\n </div>\n</questiontext>\n\n",
"Option2: Your prefered BeautifulSoup Solution\nfrom bs4 import BeautifulSoup\n#from xml.sax.saxutils import quoteattr, escape, unescape\nimport re\n\n# Get the XML soup\nwith open('source.xml', 'r') as f:\n file = f.read() \nsoup_xml = BeautifulSoup(file, 'xml')\n\ndef soup_htm(elm):\n \"\"\"Modify attributes according request \"\"\"\n # Get the html soup\n soup = BeautifulSoup(elm.string, 'html.parser')\n \n \n for elem in soup.find_all('div'):\n if elem.attrs== {'class': ['wrapper']}:\n elem['class'] = ['prefixed-wrapper']\n if elem.attrs== {'class': ['wrapper', 'element']}:\n elem['class'] = ['prefixed-wrapper', 'element']\n if elem.attrs== {'class': ['element', 'wrapper']}:\n elem['class'] = ['element', 'prefixed-wrapper'] \n return re.sub('\"','"', str(soup))\n\n# Find element and replace it \nfor questiontext in soup_xml.find_all('questiontext'):\n htm_changed = soup_htm(questiontext)\n questiontext = questiontext.string.wrap(soup_xml.new_tag(\"questiontext\")).replace_with(htm_changed)\n \n# Print result\nprint(soup_xml.prettify())\n\nI prefere the inbuild python, but this is also nice and maybe easier with such mixed XML/HTML documents. Anyway the single/ double quotes makes trouble. Maybe another user can help.\n"
] | [
1,
0
] | [] | [] | [
"beautifulsoup",
"html",
"python",
"xml"
] | stackoverflow_0074669395_beautifulsoup_html_python_xml.txt |
Q:
Understanding of degree calculation in quadrants
I found something in my search, which I don't understand. The goal is to read out the angle of a pointer in a pressure gauge. In my research, I found this example:
https://circuits-ninja.pl/reading-an-indication-from-an-analog-pressure-gauge-using-the-esp32-cam-module-with-an-ov2640-and-opencv-camera/
He's calculating the degree as follows:
# Finding angle using the arc tan of y/x
res = np.arctan(np.divide(float(y_angle), float(x_angle)))
#Converting to degrees
res = np.rad2deg(res)
if x_angle > 0 and y_angle > 0: #in quadrant I
final_angle = 270 - res
if x_angle < 0 and y_angle > 0: #in quadrant II
final_angle = 90 - res
if x_angle < 0 and y_angle < 0: #in quadrant III
final_angle = 90 - res
if x_angle > 0 and y_angle < 0: #in quadrant IV
final_angle = 270 - res
I understand the reason for using quadrants in this case, but what I don't understand is why he calculates 270 - res
when x_angle > 0 and y_angle > 0 and also 270 - res when x_angle > 0 and y_angle < 0.
He's using the same formula for two different quadrants?
Thanks in advance
A:
I think this is because 0 degrees is located, if placed on a two-dimensional surface, on (0,1)=(cos(90),sin(90)) instead of (1,0)=(cos(0),sin(0)). This means it has an offset of 90 degrees.
A:
Much simpler way:
res = np.arctan2(float(y_angle), float(x_angle))
#Converting to degrees
res = np.rad2deg(res)
if (res < 0):
res += 360
That's all, arctan2 will account for all cases including zero x
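For illustration, a quick numeric check (not part of the original answer) that arctan2 resolves the quadrant from the signs of y and x, so no manual case analysis is needed:
import numpy as np

for x, y in [(1, 1), (-1, 1), (-1, -1), (1, -1), (0, 1)]:
    deg = np.rad2deg(np.arctan2(y, x))
    if deg < 0:
        deg += 360
    print(x, y, deg)  # prints 45.0, 135.0, 225.0, 315.0, 90.0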
| Understanding of degree calculation in quadrants | I found something in my search, which I don't understand. The goal is to read out the angle of a pointer in a pressure gauge. In my research, I found this example:
https://circuits-ninja.pl/reading-an-indication-from-an-analog-pressure-gauge-using-the-esp32-cam-module-with-an-ov2640-and-opencv-camera/
He's calculating the degree as follows:
# Finding angle using the arc tan of y/x
res = np.arctan(np.divide(float(y_angle), float(x_angle)))
#Converting to degrees
res = np.rad2deg(res)
if x_angle > 0 and y_angle > 0: #in quadrant I
final_angle = 270 - res
if x_angle < 0 and y_angle > 0: #in quadrant II
final_angle = 90 - res
if x_angle < 0 and y_angle < 0: #in quadrant III
final_angle = 90 - res
if x_angle > 0 and y_angle < 0: #in quadrant IV
final_angle = 270 - res
I understand the reason for using quadrants in this case, but what I don't understand is why he calculates 270 - res
when x_angle > 0 and y_angle > 0 and also 270 - res when x_angle > 0 and y_angle < 0.
He's using the same formula for two different quadrants?
Thanks in advance
| [
"I think this is because 0 degrees is located, if placed on a two-dimensional surface, on (0,1)=(cos(90),sin(90)) instead of (1,0)=(cos(0),sin(0)). This means it has an offset of 90 degrees.\n",
"Much simpler way:\n res = np.arctan2(float(y_angle), float(x_angle))\n #Converting to degrees\n res = np.rad2deg(res)\n if (res < 0):\n res += 360 \n\nThat's all, arctan2 will account for all cases including zero x\n"
] | [
0,
0
] | [] | [] | [
"math",
"numpy",
"python"
] | stackoverflow_0074674308_math_numpy_python.txt |
Q:
Losing cell formats when accessing rows
In some circumstances the format (int, float, etc) of a cell is lost when accessing via its row.
In that example the first column has integers and the second floats. But the 111 is converted into 111.0.
dfA = pandas.DataFrame({
'A': [111, 222, 333],
'B': [1.3, 2.4, 3.5],
})
# A 111.0
# B 1.3
# Name: 0, dtype: float64
print(dfA.loc[0])
# <class 'numpy.float64'>
print(type(dfA.loc[0].A))
The output I would expect is like this
A 111
B 1.3
<class 'numpy.int64'>
I have an idea why this happens. But IMHO this isn't user friendly. Can I solve this somehow? The goal is to access (e.g. read) each cell's value without losing its format.
In the full code below you can also see that it works when one of the columns is of type string. Weird.
Minimal Working Example
#!/usr/bin/env python3
import pandas
dfA = pandas.DataFrame({
'A': [111, 222, 333],
'B': [1.3, 2.4, 3.5],
})
print(dfA)
dfB = pandas.DataFrame({
'A': [111, 222, 333],
'B': [1.3, 2.4, 3.5],
'C': ['one', 'two', 'three']
})
print(dfB)
print(dfA.loc[0])
print(type(dfA.loc[0].A))
print(dfB.loc[0])
print(type(dfB.loc[0].A))
Output
A B
0 111 1.3
1 222 2.4
2 333 3.5
A B C
0 111 1.3 one
1 222 2.4 two
2 333 3.5 three
A 111.0
B 1.3
Name: 0, dtype: float64
<class 'numpy.float64'>
A 111
B 1.3
C one
Name: 0, dtype: object
<class 'numpy.int64'>
A:
If you want to access a specific value in the DataFrame without losing its data type, you can use the at method instead of the loc method. The at method accesses a scalar value in the DataFrame, so it will preserve the data type of the value. See: print(type(dfA.at[0, 'A']))
In this example, the at method is used to access the value in the first row and first column of the DataFrame. This value is an integer, so the at method returns it as an integer, preserving its data type.
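For illustration, a minimal sketch based on the question's dfA:
import pandas as pd

dfA = pd.DataFrame({'A': [111, 222, 333], 'B': [1.3, 2.4, 3.5]})
print(type(dfA.at[0, 'A']))  # <class 'numpy.int64'> -- scalar access keeps the column dtype
print(type(dfA.loc[0].A))    # <class 'numpy.float64'> -- the row Series was upcast to float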
| Losing cell formats when accessing rows | In some circumstances the format (int, float, etc) of a cell is lost when accessing via its row.
In that example the first column has integers and the second floats. But the 111 is converted into 111.0.
dfA = pandas.DataFrame({
'A': [111, 222, 333],
'B': [1.3, 2.4, 3.5],
})
# A 111.0
# B 1.3
# Name: 0, dtype: float64
print(dfA.loc[0])
# <class 'numpy.float64'>
print(type(dfA.loc[0].A))
The output I would expect is like this
A 111
B 1.3
<class 'numpy.int64'>
I have an idea why this happens. But IMHO this isn't user friendly. Can I solve this somehow? The goal is to access (e.g. read) each cell's value without losing its format.
In the full code below you can also see that it works when one of the columns is of type string. Weird.
Minimal Working Example
#!/usr/bin/env python3
import pandas
dfA = pandas.DataFrame({
'A': [111, 222, 333],
'B': [1.3, 2.4, 3.5],
})
print(dfA)
dfB = pandas.DataFrame({
'A': [111, 222, 333],
'B': [1.3, 2.4, 3.5],
'C': ['one', 'two', 'three']
})
print(dfB)
print(dfA.loc[0])
print(type(dfA.loc[0].A))
print(dfB.loc[0])
print(type(dfB.loc[0].A))
Output
A B
0 111 1.3
1 222 2.4
2 333 3.5
A B C
0 111 1.3 one
1 222 2.4 two
2 333 3.5 three
A 111.0
B 1.3
Name: 0, dtype: float64
<class 'numpy.float64'>
A 111
B 1.3
C one
Name: 0, dtype: object
<class 'numpy.int64'>
| [
"If you want to access a specific value in the DataFrame without losing its data type, you can use the at method instead of the loc method. The at method accesses a scalar value in the DataFrame, so it will preserve the data type of the value. See: print(type(dfA.at[0, 'A']))\nIn this example, the at method is used to access the value in the first row and first column of the DataFrame. This value is an integer, so the at method returns it as an integer, preserving its data type.\n"
] | [
1
] | [] | [] | [
"numpy",
"pandas",
"python"
] | stackoverflow_0074677068_numpy_pandas_python.txt |
Q:
How to arrange all the letters in my name in sorted order?
How can I arrange the letters in my name in sorted order?
I have used the sort function but that didn't work. How can I solve it?
A:
Like this?
import re
my_name = "Mohammed Sardar Saajit"
my_name_with_the_letters_sorted = sorted([character for character in re.sub(r"[^\w]", "", my_name.lower())], key=ord)
print(my_name_with_the_letters_sorted)
['a', 'a', 'a', 'a', 'a', 'd', 'd', 'e', 'h', 'i', 'j', 'm', 'm', 'm', 'o', 'r', 'r', 's', 's', 't']
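If spaces just need dropping, a simpler variant (not from the original answer) is to sort the lowercased string directly, since sorted() accepts any iterable of characters:
my_name = "Mohammed Sardar Saajit"
print(sorted(my_name.replace(" ", "").lower()))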
| How to arrange all the letters in my name in sorted order? | How can I arrange the letters in my name in sorted order?
I have used the sort function but that didn't work. How can I solve it?
| [
"Like this?\nimport re\nmy_name = \"Mohammed Sardar Saajit\"\nmy_name_with_the_letters_sorted = sorted([character for character in re.sub(r\"[^\\w]\", \"\", my_name.lower())], key=ord)\nprint(my_name_with_the_letters_sorted)\n\n['a', 'a', 'a', 'a', 'a', 'd', 'd', 'e', 'h', 'i', 'j', 'm', 'm', 'm', 'o', 'r', 'r', 's', 's', 't']\n\n"
] | [
0
] | [] | [] | [
"python",
"sorting",
"word"
] | stackoverflow_0074676020_python_sorting_word.txt |
Q:
Error (task exception was never retrieved) when running discord bot commands
I am making a discord bot using python, and I have run into an unexplainable error which I am unable to fix. I thought I fixed it by deleting the checks but I'm completely stumped by the massive block of errors I'm getting.
If anyone could please decode even some of this, I would be greatly appreciative.
Task exception was never retrieved
future: <Task finished name='CommandTree-invoker' coro=<CommandTree._from_interaction.<locals>.wrapper() done, defined at C:\Users\mitsuk\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\discord\app_commands\tree.py:1089> exception=OperationalError('no such table: blacklist')>
Traceback (most recent call last):
File "C:\Users\mitsuk\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\discord\app_commands\tree.py", line 1091, in wrapper
await self._call(interaction)
File "C:\Users\mitsuk\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\discord\app_commands\tree.py", line 1242, in _call
await command._invoke_with_namespace(interaction, namespace)
File "C:\Users\mitsuk\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\discord\ext\commands\hybrid.py", line 436, in _invoke_with_namespace
await command.prepare(ctx)
File "C:\Users\mitsuk\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\discord\ext\commands\core.py", line 919, in prepare
if not await self.can_run(ctx):
File "C:\Users\mitsuk\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\discord\ext\commands\hybrid.py", line 524, in can_run
return await self.app_command._check_can_run(ctx.interaction)
File "C:\Users\mitsuk\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\discord\ext\commands\hybrid.py", line 418, in _check_can_run
if self.wrapped.checks and not await async_all(f(ctx) for f in self.wrapped.checks):
File "C:\Users\mitsuk\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\discord\utils.py", line 674, in async_all
elem = await elem
File "C:\Users\mitsuk\Documents\rirakkumabot\main\helpers\checks.py", line 31, in predicate
if await db_manager.is_blacklisted(context.author.id):
File "C:\Users\mitsuk\Documents\rirakkumabot\main\helpers\db_manager.py", line 8, in is_blacklisted
async with db.execute("SELECT * FROM blacklist WHERE user_id=?", (user_id,)) as cursor:
File "C:\Users\mitsuk\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\aiosqlite\context.py", line 41, in aenter
self._obj = await self._coro
File "C:\Users\mitsuk\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\aiosqlite\core.py", line 184, in execute
cursor = await self._execute(self._conn.execute, sql, parameters)
File "C:\Users\mitsuk\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\aiosqlite\core.py", line 129, in _execute
return await future
File "C:\Users\mitsuk\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\aiosqlite\core.py", line 102, in run
result = function()
sqlite3.OperationalError: no such table: blacklist
A:
In the file C:\Users\mitsuk\Documents\rirakkumabot\main\helpers\db_manager.py, line 8
async with db.execute("SELECT * FROM blacklist WHERE user_id=?", (user_id,)) as cursor:
Error message
sqlite3.OperationalError: no such table: blacklist
This error means that there's no table named blacklist in the database; you may want to create it before accessing it.
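For example, a minimal sketch that creates the table once at startup (the single user_id column is an assumption matching the failing query; adjust it to your schema):
import aiosqlite

async def init_db(path: str = "database.db"):
    async with aiosqlite.connect(path) as db:
        # create the missing table if it does not exist yet
        await db.execute(
            "CREATE TABLE IF NOT EXISTS blacklist (user_id INTEGER PRIMARY KEY)"
        )
        await db.commit()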
| Error (task exception was never retrieved) when running discord bot commands | I am making a discord bot using python, and I have run into an unexplainable error which I am unable to fix. I thought I fixed it by deleting the checks but I'm completely stumped by the massive block of errors I'm getting.
If anyone could please decode even some of this, I would be greatly appreciative.
Task exception was never retrieved
future: <Task finished name='CommandTree-invoker' coro=<CommandTree._from_interaction.<locals>.wrapper() done, defined at C:\Users\mitsuk\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\discord\app_commands\tree.py:1089> exception=OperationalError('no such table: blacklist')>
Traceback (most recent call last):
File "C:\Users\mitsuk\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\discord\app_commands\tree.py", line 1091, in wrapper
await self._call(interaction)
File "C:\Users\mitsuk\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\discord\app_commands\tree.py", line 1242, in _call
await command._invoke_with_namespace(interaction, namespace)
File "C:\Users\mitsuk\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\discord\ext\commands\hybrid.py", line 436, in _invoke_with_namespace
await command.prepare(ctx)
File "C:\Users\mitsuk\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\discord\ext\commands\core.py", line 919, in prepare
if not await self.can_run(ctx):
File "C:\Users\mitsuk\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\discord\ext\commands\hybrid.py", line 524, in can_run
return await self.app_command._check_can_run(ctx.interaction)
File "C:\Users\mitsuk\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\discord\ext\commands\hybrid.py", line 418, in _check_can_run
if self.wrapped.checks and not await async_all(f(ctx) for f in self.wrapped.checks):
File "C:\Users\mitsuk\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\discord\utils.py", line 674, in async_all
elem = await elem
File "C:\Users\mitsuk\Documents\rirakkumabot\main\helpers\checks.py", line 31, in predicate
if await db_manager.is_blacklisted(context.author.id):
File "C:\Users\mitsuk\Documents\rirakkumabot\main\helpers\db_manager.py", line 8, in is_blacklisted
async with db.execute("SELECT * FROM blacklist WHERE user_id=?", (user_id,)) as cursor:
File "C:\Users\mitsuk\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\aiosqlite\context.py", line 41, in aenter
self._obj = await self._coro
File "C:\Users\mitsuk\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\aiosqlite\core.py", line 184, in execute
cursor = await self._execute(self._conn.execute, sql, parameters)
File "C:\Users\mitsuk\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\aiosqlite\core.py", line 129, in _execute
return await future
File "C:\Users\mitsuk\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\aiosqlite\core.py", line 102, in run
result = function()
sqlite3.OperationalError: no such table: blacklist
| [
"In the file C:\\Users\\mitsuk\\Documents\\rirakkumabot\\main\\helpers\\db_manager.py, line 8\nasync with db.execute(\"SELECT * FROM blacklist WHERE user_id=?\", (user_id,)) as cursor:\n\nError message\nsqlite3.OperationalError: no such table: blacklist\n\nThis error means that there's no table named blacklist in the database; you may want to create it before accessing it.\n"
] | [
0
] | [] | [] | [
"discord.py",
"python"
] | stackoverflow_0074675786_discord.py_python.txt |
Q:
How to improve performance - Merge two dataframes by closest geodetic distance
I have two dataframes, one radar which represents data on an equispaced grid with columns for longitude, latitude and height value, and one ice that has some information related to satellite observations, including the latitude and longitude of the observation. I want to merge the two so I can get ice with the 'height' column from radar, based on the geodetic distance point from each ice row to the closest radar point.
I'm currently doing it like this:
from geopy.distance import geodesic
import pandas as pd
def get_distance(out):
global radar
dists = radar['latlon'].apply(lambda x: geodesic(out['latlon'], x).km)
out['dist to radar']=min(dists)
out['rate_yr_radar']=radar.loc[dists.idxmin()]['rate_yr_radar']
return out
ICEvsRadar=ice.apply(get_distance, axis=1)
But it's very slow: I have around 200 points in my ice dataframe and around 50000 in the radar one. Is such slow performance to be expected given the computational cost of calculating each distance, or could I improve something in how I apply the function?
edit: uploaded the example data on https://wetransfer.com/downloads/284036652e682a3e665994d360a3068920221203230651/5842f2
The code takes around 25 minutes to run, ice has lon, lat and latlon fields and is 180 rows long, and radar has 50000 rows with lon, lat, latlon and rate_yr_radar fields
A:
The code below takes less than a second on my machine. It probably won't work correctly around the equator/Greenwich.
import pandas as pd
import numpy as np
from scipy.spatial import KDTree
#reading data
radar = pd.read_csv("radar.csv")
ice = pd.read_csv("ice.csv")
# extracting point data
pts = np.array(radar.loc[:, ["lon", "lat"]])
#building tree
Tree = KDTree(pts)
#quering the nearest neighbour
distance, index = Tree.query(ice.loc[:, ["lon", "lat"]])
#getting relevant data from ice
reduced_radar = radar.loc[index, ["rate_yr_radar"]]
reduced_radar = reduced_radar.reset_index().rename({"index": "index_from_radar"}, axis=1)
#joining data
ice = ice.join(reduced_radar)
alternatively one could look at https://geopandas.org/en/stable/docs/reference/api/geopandas.GeoDataFrame.sjoin_nearest.html
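A sketch of that alternative (assuming the same radar.csv/ice.csv columns as above; note that with unprojected lon/lat, sjoin_nearest also measures planar rather than geodesic distance):
import geopandas as gpd
import pandas as pd

radar = pd.read_csv("radar.csv")
ice = pd.read_csv("ice.csv")

# build point geometries from the lon/lat columns
radar_gdf = gpd.GeoDataFrame(
    radar, geometry=gpd.points_from_xy(radar.lon, radar.lat), crs="EPSG:4326")
ice_gdf = gpd.GeoDataFrame(
    ice, geometry=gpd.points_from_xy(ice.lon, ice.lat), crs="EPSG:4326")

# attach the nearest radar row (including rate_yr_radar) to each ice point
joined = ice_gdf.sjoin_nearest(radar_gdf[["rate_yr_radar", "geometry"]])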
| How to improve performance - Merge two dataframes by closest geodetic distance | I have two dataframes, one radar which represents data on an equispaced grid with columns for longitude, latitude and height value, and one ice that has some information related to satellite observations, including the latitude and longitude of the observation. I want to merge the two so I can get ice with the 'height' column from radar, based on the geodetic distance point from each ice row to the closest radar point.
I'm currently doing it like this:
from geopy.distance import geodesic
import pandas as pd
def get_distance(out):
global radar
dists = radar['latlon'].apply(lambda x: geodesic(out['latlon'], x).km)
out['dist to radar']=min(dists)
out['rate_yr_radar']=radar.loc[dists.idxmin()]['rate_yr_radar']
return out
ICEvsRadar=ice.apply(get_distance, axis=1)
But it's very slow: I have around 200 points in my ice dataframe and around 50000 in the radar one. Is such slow performance to be expected given the computational cost of calculating each distance, or could I improve something in how I apply the function?
edit: uploaded the example data on https://wetransfer.com/downloads/284036652e682a3e665994d360a3068920221203230651/5842f2
The code takes around 25 minutes to run, ice has lon, lat and latlon fields and is 180 rows long, and radar has 50000 rows with lon, lat, latlon and rate_yr_radar fields
| [
"below code takes less than a second on my machine. Probably not working around equator/greenwich\nimport pandas as pd\nimport numpy as np\nfrom scipy.spatial import KDTree\n\n#reading data\nradar = pd.read_csv(\"radar.csv\")\nice = pd.read_csv(\"ice.csv\")\n\n#extrating points data\npts = np.array(radar.loc[:, [\"lon\", \"lat\"]])\n\n#building tree\nTree = KDTree(pts)\n\n#quering the nearest neighbour\ndistance, index = Tree.query(ice.loc[:, [\"lon\", \"lat\"]])\n\n#getting relevant data from ice\nreduced_radar = radar.loc[index, [\"rate_yr_radar\"]]\nreduced_radar = reduced_radar.reset_index().rename({\"index\": \"index_from_radar\"}, axis=1)\n\n#joining data\nice = ice.join(reduced_radar)\n\nalternatively one could look at https://geopandas.org/en/stable/docs/reference/api/geopandas.GeoDataFrame.sjoin_nearest.html\n"
] | [
1
] | [] | [] | [
"dataframe",
"distance",
"merge",
"pandas",
"python"
] | stackoverflow_0074669645_dataframe_distance_merge_pandas_python.txt |
Q:
error using pyinstaller exe when python ttp module is in place
I am trying to convert my .py file to an exe file using pyinstaller. The .py file works perfectly fine; however, I am facing an issue after the program is converted to an .exe file. The problem, shown right below, is ttp.lazy_import_functions failing to save its cache, with a 'file not found' indication.
![error screenshot](https://i.stack.imgur.com/n0H3j.png)
I did a search on Google for any similar error; it looks like there is only one similar discussion on GitHub, which is not 100% the same problem, because I am facing the issue when using the .exe file. See https://github.com/dmulyalin/ttp/issues/54
However, I have checked ttp/ttp.py file, I can see following lazy_import_functions with the path_to_cache.
log.info("ttp.lazy_import_functions: starting functions lazy import")
# try to load previously pickled/cached _ttp_ dictionary
path_to_cache = os.getenv("TTPCACHEFOLDER", os.path.dirname(__file__))
cache_file = os.path.join(path_to_cache, "ttp_dict_cache.pickle")
As is also shown in the picture above, it looks like the .exe file is trying to find the ttp/ttp.py file under a _MEIXXXX cache folder.
I have actually created the following patch, with changes in my ttp.py file, to make the .exe file work; however, I have a few concerns here, and if someone can explain them, I'd appreciate it.
Changes in my ttp.py:
print(path_to_python_3x)
if path_to_python_3x:
os.startfile(f"{path_to_python_3x}\\patch.py")
def lazy_import_functions():
"""function to collect a list of all files/directories within ttp module,
parse .py files using ast and extract information about all functions
to cache them within _ttp_ dictionary
"""
_ttp_ = {
"macro": {},
"python_major_version": version_info.major,
"global_vars": {},
"template_obj": {},
"vars": {},
}
log.info("ttp.lazy_import_functions: starting functions lazy import")
# try to load previously pickled/cached _ttp_ dictionary
path_to_temp_file = tempfile.gettempdir()
_MEI_regex = "_MEI.*"
for temp_file in os.listdir(path_to_temp_file):
if re.search(_MEI_regex, temp_file):
path_to_temp_mei = path_to_temp_file +f"\\{temp_file}"
path_to_temp_ttp = f"{path_to_temp_mei}" + "\\ttp"
path_to_cache = os.getenv("TTPCACHEFOLDER", path_to_temp_ttp)
cache_file = os.path.join(path_to_cache, "ttp_dict_cache.pickle")
else:
path_to_cache = os.getenv("TTPCACHEFOLDER", os.path.dirname(__file__))
#print(path_to_cache)
cache_file = os.path.join(path_to_cache, "ttp_dict_cache.pickle")
With this patch file I am copying the ttp/ folder (which includes ttp.py) into the _MEIXXXX cache folder, so that the .exe file finds the path; it worked fine, thankfully.
import os
import sys
import tempfile
import shutil
import re
path_to_python_3x = os.path.dirname(sys.executable)
# print(path_to_python_3x)
# print(os.getcwd())
path_to_site_packages = path_to_python_3x + "\\Lib\\site-packages"
#print(path_to_site_packages)
path_to_site_ttp = path_to_site_packages +"\\ttp"
#print(path_to_site_ttp)
_MEI_regex = "_MEI.*"
_MEI_regex_a_list = []
while True:
path_to_temp_file = tempfile.gettempdir()
for temp_file in os.listdir(path_to_temp_file):
if re.search(_MEI_regex, temp_file):
path_to_temp_mei = path_to_temp_file +f"\\{temp_file}"
_MEI_regex_a_list.append(path_to_temp_mei)
path_to_temp_ttp = os.path.join(path_to_temp_mei, "ttp")
try:
if "ttp" not in os.listdir(path_to_temp_mei):
shutil.copytree(path_to_site_ttp, path_to_temp_ttp)
except Exception as e:
print(e)
My queries here are:
1. Why does the program not work when installed with pyinstaller?
2. Why does it check the /ttp/ttp.py file under Temp?
3. Is there any way to change the cache directory when converting with pyinstaller?
4. As you can see, I have a workaround for now. However, it won't work if the cache file starts being kept somewhere other than Temp/_MEIXXXX, because my regex only matches files starting with _MEI. Not sure if there is any possibility here as well.
Thanks in advance!
A:
From what I see, the ttp module tries to access its files via references to its installation path, which it cannot resolve using the os module after it's bundled by pyinstaller.
One simpler workaround than changing the module files and applying the patch file that you did would be to just copy the installation folder of the ttp module to the bundle output folder, using pyinstaller itself or manually.
This way it finds all the ttp module files, which it could not after the bundling process.
I was facing the same issue and this fixed it for me.
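For reference, a hedged sketch of doing that with PyInstaller's own options (the paths are placeholders; on Windows the --add-data separator is ';', on Linux/macOS it is ':'):
pyinstaller --onefile --add-data "C:\path\to\site-packages\ttp;ttp" your_script.py
Newer PyInstaller versions also offer --collect-all ttp, which gathers the module's data files and submodules without spelling out the path.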
| error using pyinstaller exe when python ttp module is in place | I am trying to convert my .py file to an exe file using pyinstaller. The .py file works perfectly fine; however, I am facing an issue after the program is converted to an .exe file. The problem, shown right below, is ttp.lazy_import_functions failing to save its cache, with a 'file not found' indication.
![error screenshot](https://i.stack.imgur.com/n0H3j.png)
I did a search on Google for any similar error; it looks like there is only one similar discussion on GitHub, which is not 100% the same problem, because I am facing the issue when using the .exe file. See https://github.com/dmulyalin/ttp/issues/54
However, I have checked ttp/ttp.py file, I can see following lazy_import_functions with the path_to_cache.
log.info("ttp.lazy_import_functions: starting functions lazy import")
# try to load previously pickled/cached _ttp_ dictionary
path_to_cache = os.getenv("TTPCACHEFOLDER", os.path.dirname(__file__))
cache_file = os.path.join(path_to_cache, "ttp_dict_cache.pickle")
As is also shown in the picture above, it looks like the .exe file is trying to find the ttp/ttp.py file under a _MEIXXXX cache folder.
I have actually created the following patch, with changes in my ttp.py file, to make the .exe file work; however, I have a few concerns here, and if someone can explain them, I'd appreciate it.
Changes in my ttp.py:
print(path_to_python_3x)
if path_to_python_3x:
os.startfile(f"{path_to_python_3x}\\patch.py")
def lazy_import_functions():
"""function to collect a list of all files/directories within ttp module,
parse .py files using ast and extract information about all functions
to cache them within _ttp_ dictionary
"""
_ttp_ = {
"macro": {},
"python_major_version": version_info.major,
"global_vars": {},
"template_obj": {},
"vars": {},
}
log.info("ttp.lazy_import_functions: starting functions lazy import")
# try to load previously pickled/cached _ttp_ dictionary
path_to_temp_file = tempfile.gettempdir()
_MEI_regex = "_MEI.*"
for temp_file in os.listdir(path_to_temp_file):
if re.search(_MEI_regex, temp_file):
path_to_temp_mei = path_to_temp_file +f"\\{temp_file}"
path_to_temp_ttp = f"{path_to_temp_mei}" + "\\ttp"
path_to_cache = os.getenv("TTPCACHEFOLDER", path_to_temp_ttp)
cache_file = os.path.join(path_to_cache, "ttp_dict_cache.pickle")
else:
path_to_cache = os.getenv("TTPCACHEFOLDER", os.path.dirname(__file__))
#print(path_to_cache)
cache_file = os.path.join(path_to_cache, "ttp_dict_cache.pickle")
With this patch file I am copying the ttp/ folder (which includes ttp.py) into the _MEIXXXX cache folder, so that the .exe file finds the path; it worked fine, thankfully.
import os
import sys
import tempfile
import shutil
import re
path_to_python_3x = os.path.dirname(sys.executable)
# print(path_to_python_3x)
# print(os.getcwd())
path_to_site_packages = path_to_python_3x + "\\Lib\\site-packages"
#print(path_to_site_packages)
path_to_site_ttp = path_to_site_packages +"\\ttp"
#print(path_to_site_ttp)
_MEI_regex = "_MEI.*"
_MEI_regex_a_list = []
while True:
path_to_temp_file = tempfile.gettempdir()
for temp_file in os.listdir(path_to_temp_file):
if re.search(_MEI_regex, temp_file):
path_to_temp_mei = path_to_temp_file +f"\\{temp_file}"
_MEI_regex_a_list.append(path_to_temp_mei)
path_to_temp_ttp = os.path.join(path_to_temp_mei, "ttp")
try:
if "ttp" not in os.listdir(path_to_temp_mei):
shutil.copytree(path_to_site_ttp, path_to_temp_ttp)
except Exception as e:
print(e)
My queries here are:
1. Why does the program not work when installed with pyinstaller?
2. Why does it check the /ttp/ttp.py file under Temp?
3. Is there any way to change the cache directory when converting with pyinstaller?
4. As you can see, I have a workaround for now. However, it won't work if the cache file starts being kept somewhere other than Temp/_MEIXXXX, because my regex only matches files starting with _MEI. Not sure if there is any possibility here as well.
Thanks in advance!
| [
"From what i see, the ttp module tries to access its files and has references to the installation path for ttp which it cannot get to using the os module after its bundled by pyinstaller.\nOne simpler workaround than changing the module files and applying the patch file that you did, would be to just copy the installation folders of the ttp module to the bundle output folder using pyinstaller itself or do it manually.\nThis way it would find all the ttp module files which it was not after the bundle process using pyinstaller.\nI was facing the same issue and it fixed it for me.\n"
] | [
0
] | [] | [] | [
"exe",
"pyinstaller",
"python",
"python_3.x"
] | stackoverflow_0074173221_exe_pyinstaller_python_python_3.x.txt |
Q:
Pandas resample drops (static) datetime column, how do I keep it?
I'm working with a pandas Multiindex that is given by the three keys:
[Verbundzuordnung, ProjektIndex, Datum],
I would like to resample the dataframe on Datum hourly, which drops the rightmost column TagDesAbdichtens; I would like to keep it, as it's static.
Verbundzuordnung ProjektIndex Datum TagDesAbdichtens
1 81679 2021-11-10 00:00:00+00:00 2021-12-08
2021-11-10 00:00:00+00:00 2021-12-08
2021-11-10 00:00:00+00:00 2021-12-08
2021-11-10 00:00:00+00:00 2021-12-08
2021-11-10 00:00:00+00:00 2021-12-08
... ... ... ...
2 94574 2022-02-28 23:00:00+00:00 2022-01-31
2022-02-28 23:00:00+00:00 2022-01-31
2022-02-28 23:00:00+00:00 2022-01-31
2022-02-28 23:00:00+00:00 2022-01-31
2022-02-28 23:00:00+00:00 2022-01-31
285192 rows × 1 columns
There are additional columns that I left out here for easier comprehension.
I am currently applying this to resample the dataframe
all_merged = all_merged.groupby([
pd.Grouper(level='Verbundzuordnung'),
pd.Grouper(level='ProjektIndex'),
pd.Grouper(level='Datum', freq='H')]
)
all_merged.mean() gives me the wanted output with TagDesAbdichtens missing.
This value is unique and static for each Verbundzuordnung and ProjektIndex, and I would like to have it back in the resampled version.
Is there a way to do it with native pandas functions?
A:
I've had success resampling using the native resample function. For example,
resample_dict = {
'Verbundzuordnung': 'mean',
'ProjektIndex': 'mean',
'TagDesAbdichtens': 'first'
}
data = data.resample("60T", closed='left', label='left').apply(resample_dict)
You can apply whichever aggregation function you need (in place of mean) to each column (e.g. first, min, max, etc.).
See https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.resample.html for more.
A:
Instead of mean() you can do the following
agg({'TagDesAbdichtens': 'first', 'another_col': 'mean', 'another_col2': 'mean', ... })
That is, you can specify a different aggregate function for each column.
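Applied to the grouper from the question, that might look like this (a sketch; 'another_col' stands in for your remaining numeric columns):
all_merged = all_merged.groupby([
    pd.Grouper(level='Verbundzuordnung'),
    pd.Grouper(level='ProjektIndex'),
    pd.Grouper(level='Datum', freq='H')]
).agg({'TagDesAbdichtens': 'first', 'another_col': 'mean'})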
| Pandas resample drops (static) datetime column, how do I keep it? | I'm working with a pandas Multiindex that is given by the three keys:
[Verbundzuordnung, ProjektIndex, Datum],
I would like to resample the dataframe on Datum hourly, which drops the rightmost column TagDesAbdichtens; I would like to keep it, as it's static.
Verbundzuordnung ProjektIndex Datum TagDesAbdichtens
1 81679 2021-11-10 00:00:00+00:00 2021-12-08
2021-11-10 00:00:00+00:00 2021-12-08
2021-11-10 00:00:00+00:00 2021-12-08
2021-11-10 00:00:00+00:00 2021-12-08
2021-11-10 00:00:00+00:00 2021-12-08
... ... ... ...
2 94574 2022-02-28 23:00:00+00:00 2022-01-31
2022-02-28 23:00:00+00:00 2022-01-31
2022-02-28 23:00:00+00:00 2022-01-31
2022-02-28 23:00:00+00:00 2022-01-31
2022-02-28 23:00:00+00:00 2022-01-31
285192 rows × 1 columns
There are additional columns that I left out here for easier comprehension.
I am currently applying this to resample the dataframe
all_merged = all_merged.groupby([
pd.Grouper(level='Verbundzuordnung'),
pd.Grouper(level='ProjektIndex'),
pd.Grouper(level='Datum', freq='H')]
)
all_merged.mean() gives me the wanted output with TagDesAbdichtens missing.
This value is unique and static for each Verbundzuordnung and ProjektIndex, and I would like to have it back in the resampled version.
Is there a way to do it with native pandas functions?
| [
"I've had success resampling using the native resample function. For example,\n resample_dict = { \n 'Verbundzuordnung': 'mean', \n 'ProjektIndex': 'mean',\n 'TagDesAbdichtens': 'first'\n }\n\n data = data.resample(\"60T\", closed='left', label='left').apply(resample_dict)\n\nYou can apply whichever grouping keys (in place of mean) to your columns (e.g. first, min, max, etc).\nSee https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.resample.html for more.\n",
"Instead of mean() you can do the following\nagg({'TagDesAbdichtens': 'first', 'another_col': 'mean', 'another_col2': 'mean', ... })\n\nThat is, you can specify a different aggregate function for each column.\n"
] | [
0,
0
] | [] | [] | [
"datetime",
"group_by",
"pandas",
"pandas_resample",
"python"
] | stackoverflow_0074675902_datetime_group_by_pandas_pandas_resample_python.txt |
Q:
How can I display the new value made by input for the next screen in Kivy
I have been trying to make this code work. I'm using ScreenManager to manage my screens.
I want the input I entered on the first screen to be displayed on the next screen. But instead, it just shows the initial value; it doesn't change to the inputted value.
Here is the Code i have done
from kivy.app import App
from kivy.lang import Builder
from kivy.uix.screenmanager import ScreenManager, Screen
from kivy.properties import ObjectProperty
from kivy.clock import Clock
Builder.load_string("""
<MenuScreen>:
promptObject: prompts
BoxLayout:
orientation: 'horizontal'
TextInput:
id: prompts
pos: 20,20
Button:
text: "Enter Prompt"
pos: 30,30
size: 100, 30
on_press: root.submit()
<Newscreen>
BoxLayout:
orientation: 'vertical'
TextInput:
id: display_output
text: root.output
readonly: True
""")
class MenuScreen(Screen):
promptObject = ObjectProperty()
prompt = ''
def submit(self):
prompt = self.promptObject.text
global result
result = prompt
sm.add_widget(NewScreen(name="Settings"))
sm.switch_to(sm.get_screen("Settings"))
NewScreen.display(self)
class NewScreen(Screen):
output = "testing testing"
def display(self):
self.output = result
print(result) #To test if it works
class TestApp(App):
def build(self):
global sm
sm = ScreenManager()
sm.add_widget(MenuScreen(name='menu'))
return sm
if __name__ == '__main__':
TestApp().run()
I'm also wondering if I can instead declare the layout for the second screen later, right before I call for the next screen. Maybe that could work, but if you have another method, it would be nice to see it.
A:
Thank you for the concise single-file example. This is a very helpful way to submit a kivy question. I have modified and tested the below app with various changes.
I changed the root.submit to app.submit. This is not strictly required, it is just a choice in this example to put the logic in the main app. it is also possible to use root.submit and put the logic in the widget but one would have to pass a reference to the screen manager into that widget in that case.
imported TextInput object instead of using ObjectProperty. when using an IDE it is helpful to declare objects with the specific type because it enables auto-complete
assigned the ScreenManager to self.sm so this object is available throughout the app.
finally, got rid of any reference to global. I think it is better to avoid use of this keyword and explicitly create the variable at the highest level where you need it and pass the value into the objects requiring it.
from kivy.app import App
from kivy.lang import Builder
from kivy.uix.screenmanager import ScreenManager, Screen
# from kivy.properties import ObjectProperty
from kivy.uix.textinput import TextInput
from kivy.properties import StringProperty
from kivy.clock import Clock
Builder.load_string("""
<MenuScreen>:
promptObject: prompts
BoxLayout:
orientation: 'horizontal'
TextInput:
id: prompts
pos: 20,20
Button:
text: "Enter Prompt"
pos: 30,30
size: 100, 30
on_press: app.submit()
<Newscreen>
BoxLayout:
orientation: 'vertical'
TextInput:
id: display_output
text: root.output
readonly: True
""")
class MenuScreen(Screen):
promptObject = TextInput()
class NewScreen(Screen):
output = StringProperty()
def __init__(self, **kw):
super().__init__(**kw)
def display(self, result):
# set the string property equal to the value you sent
self.output = result
print(result) # To test if it works
class TestApp(App):
def __init__(self, **kwargs):
super().__init__(**kwargs)
# create screen manager with self so that you have access
# anywhere inside the App
self.sm = ScreenManager()
# create the main screen
self.menu_screen = MenuScreen(name='menu')
# this could be deferred, or created at initialization
self.settings_screen = NewScreen(name='Settings')
#
def submit(self):
prompt = self.menu_screen.promptObject.text
result = prompt
# optional, deferred creation
# self.settings_screen = NewScreen(name='Settings')
# add to the screen manager
self.sm.add_widget(self.settings_screen)
# enter the value into your other screen
self.settings_screen.display(result)
# switch to this screen
self.sm.current="Settings"
def build(self) -> ScreenManager:
# could create this screen right away, depending...
# self.sm.add_widget(self.settings_screen)
# of course you need the main screen
self.sm.add_widget(self.menu_screen)
# redundant, unless you create all screens at the beginning
self.sm.current = 'menu'
return self.sm
if __name__ == '__main__':
TestApp().run()
| How can I display the new value made by input for the next screen in Kivy | I have been trying to make this code work. I'm using ScreenManager to manage my screens.
I want the input I entered on the first screen to be displayed on the next screen. But instead, it just shows the initial value; it doesn't change to the inputted value.
Here is the Code i have done
from kivy.app import App
from kivy.lang import Builder
from kivy.uix.screenmanager import ScreenManager, Screen
from kivy.properties import ObjectProperty
from kivy.clock import Clock
Builder.load_string("""
<MenuScreen>:
promptObject: prompts
BoxLayout:
orientation: 'horizontal'
TextInput:
id: prompts
pos: 20,20
Button:
text: "Enter Prompt"
pos: 30,30
size: 100, 30
on_press: root.submit()
<Newscreen>
BoxLayout:
orientation: 'vertical'
TextInput:
id: display_output
text: root.output
readonly: True
""")
class MenuScreen(Screen):
promptObject = ObjectProperty()
prompt = ''
def submit(self):
prompt = self.promptObject.text
global result
result = prompt
sm.add_widget(NewScreen(name="Settings"))
sm.switch_to(sm.get_screen("Settings"))
NewScreen.display(self)
class NewScreen(Screen):
output = "testing testing"
def display(self):
self.output = result
print(result) #To test if it works
class TestApp(App):
def build(self):
global sm
sm = ScreenManager()
sm.add_widget(MenuScreen(name='menu'))
return sm
if __name__ == '__main__':
TestApp().run()
I'm also wondering if I can instead declare the layout for the second screen later, right before I call for the next screen. Maybe that could work, but if you have another method, it would be nice to see it.
| [
"thank you for the concise single-file example. this is a very helpful way to submit a kivy question. I have modified and tested the below app with various changes.\nI changed the root.submit to app.submit. This is not strictly required, it is just a choice in this example to put the logic in the main app. it is also possible to use root.submit and put the logic in the widget but one would have to pass a reference to the screen manager into that widget in that case.\nimported TextInput object instead of using ObjectProperty. when using an IDE it is helpful to declare objects with the specific type because it enables auto-complete\nassigned the ScreenManager to self.sm so this object is available throughout the app.\nfinally, got rid of any reference to global. I think it is better to avoid use of this keyword and explicitly create the variable at the highest level where you need it and pass the value into the objects requiring it.\nfrom kivy.app import App\nfrom kivy.lang import Builder\nfrom kivy.uix.screenmanager import ScreenManager, Screen\n# from kivy.properties import ObjectProperty\nfrom kivy.uix.textinput import TextInput\nfrom kivy.properties import StringProperty\nfrom kivy.clock import Clock\n\nBuilder.load_string(\"\"\"\n<MenuScreen>:\n promptObject: prompts\n\n BoxLayout:\n orientation: 'horizontal'\n TextInput:\n id: prompts\n pos: 20,20\n Button:\n text: \"Enter Prompt\"\n pos: 30,30\n size: 100, 30\n on_press: app.submit()\n\n\n<Newscreen>\n BoxLayout:\n orientation: 'vertical'\n TextInput:\n id: display_output\n text: root.output\n readonly: True\n\n\"\"\")\n\n\nclass MenuScreen(Screen):\n promptObject = TextInput()\n\n\nclass NewScreen(Screen):\n output = StringProperty()\n\n def __init__(self, **kw):\n super().__init__(**kw)\n\n def display(self, result):\n # set the string property equal to the value you sent\n self.output = result\n print(result) # To test if it works\n\n\nclass TestApp(App):\n\n def __init__(self, **kwargs):\n super().__init__(**kwargs)\n # create screen manager with self so that you have access\n # anywhere inside the App\n self.sm = ScreenManager()\n # create the main screen\n self.menu_screen = MenuScreen(name='menu')\n\n # this could be deferred, or created at initialization\n self.settings_screen = NewScreen(name='Settings')\n\n #\n def submit(self):\n prompt = self.menu_screen.promptObject.text\n result = prompt\n\n # optional, deferred creation\n # self.settings_screen = NewScreen(name='Settings')\n\n # add to the screen manager\n self.sm.add_widget(self.settings_screen)\n # enter the value into your other screen\n self.settings_screen.display(result)\n\n # switch to this screen\n self.sm.current=\"Settings\"\n\n def build(self) -> ScreenManager:\n # could create this screen right away, depending...\n # self.sm.add_widget(self.settings_screen)\n # of course you need the main screen\n self.sm.add_widget(self.menu_screen)\n # redundant, unless you create all screens at the beginning\n self.sm.current = 'menu'\n return self.sm\n\n\nif __name__ == '__main__':\n TestApp().run()\n\n"
] | [
0
] | [] | [] | [
"kivy",
"kivy_language",
"python"
] | stackoverflow_0074675350_kivy_kivy_language_python.txt |
Q:
How to get the most recent message of a channel in discord.py?
Is there a way to get the most recent message of a specific channel using discord.py? I looked at the official docs and didn't find a way to.
A:
I've now figured it out by myself:
For a discord.Client class you just need these lines of code for the last message:
(await self.get_channel(CHANNEL_ID).history(limit=1).flatten())[0]
If you use a discord.ext.commands.Bot @thegamecracks' answer is correct.
A:
(Answer uses discord.ext.commands.Bot instead of discord.Client; I haven't worked with the lower level parts of the API, so this may not apply to discord.Client)
In this case, you can use Bot.get_channel(ID) to acquire the channel you want to inspect.
channel = self.bot.get_channel(int(ID))
Then, you can use channel.last_message_id to get the ID of the last message, and acquire the message with channel.fetch_message(ID).
message = await channel.fetch_message(
channel.last_message_id)
Combined, a command to get the last message of a channel may look like this:
@commands.command(
name='getlastmessage')
async def client_getlastmessage(self, ctx, ID):
"""Get the last message of a text channel."""
channel = self.bot.get_channel(int(ID))
if channel is None:
await ctx.send('Could not find that channel.')
return
# NOTE: get_channel can return a TextChannel, VoiceChannel,
# or CategoryChannel. You may want to add a check to make sure
# the ID is for text channels only
message = await channel.fetch_message(
channel.last_message_id)
# NOTE: channel.last_message_id could return None; needs a check
await ctx.send(
f'Last message in {channel.name} sent by {message.author.name}:\n'
+ message.content
)
# NOTE: message may need to be trimmed to fit within 2000 chars
A:
Another way, for newer versions of discord.py:
messages = [message async for message in interaction.guild.get_channel(0).history(limit=12)]
messages[0].content
| How to get the most recent message of a channel in discord.py? | Is there a way to get the most recent message of a specific channel using discord.py? I looked at the official docs and didn't find a way to.
| [
"I've now figured it out by myself:\nFor a discord.Client class you just need these lines of code for the last message:\n\n(await self.get_channel(CHANNEL_ID).history(limit=1).flatten())[0]\n\n\nIf you use a discord.ext.commands.Bot @thegamecracks' answer is correct.\n",
"(Answer uses discord.ext.commands.Bot instead of discord.Client; I haven't worked with the lower level parts of the API, so this may not apply to discord.Client)\nIn this case, you can use Bot.get_channel(ID) to acquire the channel you want to inspect.\nchannel = self.bot.get_channel(int(ID))\n\nThen, you can use channel.last_message_id to get the ID of the last message, and acquire the message with channel.fetch_message(ID).\nmessage = await channel.fetch_message(\n channel.last_message_id)\n\nCombined, a command to get the last message of a channel may look like this:\[email protected](\n name='getlastmessage')\nasync def client_getlastmessage(self, ctx, ID):\n \"\"\"Get the last message of a text channel.\"\"\"\n channel = self.bot.get_channel(int(ID))\n if channel is None:\n await ctx.send('Could not find that channel.')\n return\n # NOTE: get_channel can return a TextChannel, VoiceChannel,\n # or CategoryChannel. You may want to add a check to make sure\n # the ID is for text channels only\n\n message = await channel.fetch_message(\n channel.last_message_id)\n # NOTE: channel.last_message_id could return None; needs a check\n\n await ctx.send(\n f'Last message in {channel.name} sent by {message.author.name}:\\n'\n + message.content\n )\n # NOTE: message may need to be trimmed to fit within 2000 chars\n\n",
"Another way\nfor the newer versions\nmessages = [message async for message in interaction.guild.get_channel(0).history(limit=12)]\nmessages[0].content\n\n"
] | [
9,
6,
0
] | [] | [] | [
"discord",
"discord.py",
"python"
] | stackoverflow_0064080277_discord_discord.py_python.txt |
Q:
Is there any way to install and unpack a github repository through code without using git bash and the like?
Currently I have a problem where I need to install all contents of a github repository (https://github.com/reversinglabs/reversinglabs-yara-rules) through code without using git bash or the like.
In this case I need to fully install the yara repository from said github.
Does anyone know a way to do it in C, C++, C#, or Python?
Unfortunately, so far I have yet to succeed.
A:
It's not clear what part of bash, etc, you do not want to use. A simple way otherwise is to just call git through std::system()
#include <cstdlib>
int main(int argc, char**argv) {
std::system("git clone ...");
}
I have used it in many cases where I need to integrate git commands in a c++ program.
A:
GitHub offers a zip download of all the code it hosts.
Use whatever language and library you like to do the equivalent of:
curl -o yara-rules.zip https://github.com/reversinglabs/reversinglabs-yara-rules/archive/refs/heads/develop.zip
unzip yara-rules.zip
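In Python, for instance, the equivalent without external tools might look like this (a sketch; the develop branch name is taken from the URL above):
import io
import urllib.request
import zipfile

url = ("https://github.com/reversinglabs/reversinglabs-yara-rules"
       "/archive/refs/heads/develop.zip")
with urllib.request.urlopen(url) as resp:
    # read the zip into memory and extract it to a local folder
    zipfile.ZipFile(io.BytesIO(resp.read())).extractall("yara-rules")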
A:
Only if you are on Linux can you use:
#Python
import os
url = input("Url: ")
os.system("git clone " + url)
or in c++
#include <cstdlib>  // for system()
#include <iostream>
using namespace std;
int main()
{
string inputUrl;
cin >> inputUrl;
inputUrl = "git clone " + inputUrl;
system (inputUrl.c_str()); //You need to convert it with .c_str()
}
Hope that this can be useful!
| Is there any way to install and unpack a github repository through code without using git bash and the like? | Currently I have a problem where I need to install all contents of a github repository (https://github.com/reversinglabs/reversinglabs-yara-rules) through code without using git bash or the like.
In this case I need to fully install the yara repository from said github.
Does anyone know a way to do it in C, C++, C#, or Python?
Unfortunately, so far I have yet to succeed.
| [
"It's not clear what part of bash, etc, you do not want to use. A simple way otherwise is to just call git through std::system()\n#include <cstdlib>\n\nint main(int argc, char**argv) {\n std::system(\"git clone ...\");\n}\n\nI have used it in many cases where I need to integrate git commands in a c++ program.\n",
"GitHub offers a zip download of all the code it hosts.\nUse whatever language and library you like to do the equivalent of:\ncurl -o yara-rules.zip https://github.com/reversinglabs/reversinglabs-yara-rules/archive/refs/heads/develop.zip \nunzip yara-rules.zip\n\n",
"only if you are on linux you can use:\n#Python \nimport os \nurl = input(\"Url: \")\nos.system(\"git clone \" + url)\n\nor in c++\n#include <iostream>\nusing namespace std;\nint main()\n{\n string inputUrl;\n cin >> inputUrl;\n inputUrl = \"git clone \" + inputUrl;\n system (inputUrl.c_str()); //You need to convert it with .c_str()\n}\n\nHope that this can be useful!\n"
] | [
1,
1,
0
] | [] | [] | [
"c#",
"c++",
"git",
"python"
] | stackoverflow_0074360270_c#_c++_git_python.txt |
Q:
Plot duration of processes along with date, start and end timestamps
I am trying to plot the duration of processes, starting from the following data frame:
|    | variable                | year | start               | end                 | seconds | hours     | start_time | start_date |
|----|-------------------------|------|---------------------|---------------------|---------|-----------|------------|------------|
| 0  | 10m_u_component_of_wind | 2005 | 2022-04-25 13:14:45 | 2022-04-26 02:13:56 | 46751   | 12.986389 | 13:14      |            |
| 1  | 10m_u_component_of_wind | 2006 | 2022-04-26 04:56:26 | 2022-04-26 14:56:35 | 36009   | 10.002500 | 04:56      |            |
| 2  | 10m_u_component_of_wind | 2007 | 2022-04-26 05:01:04 | 2022-04-26 20:45:38 | 56674   | 15.742778 | 05:01      |            |
| .. | ..                      | ..   | ..                  | ..                  | ..      | ..        | ..         | ..         |
in the following, for example, way:
cm = 1 / 2.54
figure, ax = plt.subplots(figsize=(50*cm, 20*cm))
plot = sns.stripplot(
data=timings,
x=timings.start_date,
y=timings.start_time.sort_values(ascending=False),
hue='variable',
)
# ax.set_ylim(["00:00", "23:59"])
plt.suptitle('Duration of processes')
plt.title('using patching script (commit fb14897574f55915bfc65d8be23e6242c7bdbb99)')
plt.legend()
plt.show()
If I enable the ax.set_ylim(["00:00", "23:59"]) line, no data are plotted. Without it, the y-scale obviously does not start at 00:00 and end at 23:59.
Question
How can I plot the duration of processes and show, along with the start date, the start time and end time, putting the hours scale on one axis? The point is (also) to emphasize how many processes needed to start very early or late (like before 08:30 AM or after 17:30 PM).
Draft hand-drawn plot
The following is surely not in proper scale. Just an idea to show in a plot:
start date
duration of a process (horizontal for easier visual estimation?)
start time (y axis)
Alternative ideas much appreciated.
A:
I came up with something close and am convinced it is easier than one might think to create a plot as per the question, using the awesomeness of Bokeh:
Source code posted at: https://discourse.bokeh.org/t/plotting-timestamps-values-and-highlighting-time-ranges/9804?u=nikosalexandris
| Plot duration of processes along with date, start and end timestamps | I am trying to plot the duration of processes, starting from the following data frame:
|    | variable                | year | start               | end                 | seconds | hours     | start_time | start_date |
|----|-------------------------|------|---------------------|---------------------|---------|-----------|------------|------------|
| 0  | 10m_u_component_of_wind | 2005 | 2022-04-25 13:14:45 | 2022-04-26 02:13:56 | 46751   | 12.986389 | 13:14      |            |
| 1  | 10m_u_component_of_wind | 2006 | 2022-04-26 04:56:26 | 2022-04-26 14:56:35 | 36009   | 10.002500 | 04:56      |            |
| 2  | 10m_u_component_of_wind | 2007 | 2022-04-26 05:01:04 | 2022-04-26 20:45:38 | 56674   | 15.742778 | 05:01      |            |
| .. | ..                      | ..   | ..                  | ..                  | ..      | ..        | ..         | ..         |
in the following, for example, way:
cm = 1 / 2.54
figure, ax = plt.subplots(figsize=(50*cm, 20*cm))
plot = sns.stripplot(
data=timings,
x=timings.start_date,
y=timings.start_time.sort_values(ascending=False),
hue='variable',
)
# ax.set_ylim(["00:00", "23:59"])
plt.suptitle('Duration of processes')
plt.title('using patching script (commit fb14897574f55915bfc65d8be23e6242c7bdbb99)')
plt.legend()
plt.show()
If I enable the ax.set_ylim(["00:00", "23:59"]) line, no data are plotted. Without it, the y-scale obviously does not start at 00:00 and end at 23:59.
Question
How can I plot the duration of processes and show, along with the start date, the start time and end time, putting the hours scale on one axis? The point is (also) to emphasize how many processes needed to start very early or late (like before 08:30 AM or after 17:30 PM).
Draft hand-drawn plot
The following is surely not in proper scale. Just an idea to show in a plot:
start date
duration of a process (horizontal for easier visual estimation?)
start time (y axis)
Alternative ideas much appreciated.
| [
"I came up with something close and convinced it is easier than thought to create a plot, as per the question, using the awesomeness of Bokeh:\n\nSource code posted at: https://discourse.bokeh.org/t/plotting-timestamps-values-and-highlighting-time-ranges/9804?u=nikosalexandris\n"
] | [
0
] | [] | [] | [
"datetime",
"matplotlib",
"python",
"seaborn"
] | stackoverflow_0073442993_datetime_matplotlib_python_seaborn.txt |
Q:
How to do Delete confirmation for a table data with bootstrap Modal in Django?
I have a table that shows a list of actions in my app. I can delete any action in that table, so I have added a delete button in every row. This delete button triggers a 'delete confirmation' Bootstrap modal.
<table class="table table-hover">
<thead>
<tr>
<th scope="col">#</th>
<th scope="col" class="th-lg">Name</th>
</tr>
</thead>
{% for action in actions_list %}
<tbody>
<tr class="test">
<th scope="row" class="align-middle">{{ forloop.counter }}</th>
<td class="align-middle">
{{action.action_name}}
</td>
<td class="align-middle">
{{action.id}}
</td>
<td>
<div class="row justify-content-end">
<button
id="edit"
type="button"
class="btn btn-sm btn-dark col col-lg-2"
style="color: rgb(255,0,0,0)"
>
<i class="lni-pencil"></i>
</button>
<button
id="trash"
type="button"
class="btn btn-sm btn-dark col col-lg-2"
style="color: rgb(255,0,0,0)"
data-toggle="modal"
data-target="#modalConfirmDelete"
>
<i class="lni-trash"></i>
</button>
</div>
</td>
</tr>
</tbody>
{% endfor %}
</table>
Below is the code for the 'Delete Confirmation' Bootstrap modal. It has 'Yes' and 'No' buttons.
If I click 'Yes', that particular action id is passed to the URL and the action with that id gets deleted.
{% block modalcontent %}
<!--Modal: modalConfirmDelete-->
<div
class="modal fade"
id="modalConfirmDelete"
tabindex="-1"
role="dialog"
aria-labelledby="exampleModalLabel"
aria-hidden="true"
>
<div class="modal-dialog modal-sm modal-notify modal-danger" role="document">
<!--Content-->
<div class="modal-content text-center">
<!--Header-->
<div class="modal-header d-flex justify-content-center">
<p class="heading">Are you sure?</p>
</div>
<!--Body-->
<div class="modal-body">
<i class="fas fa-times fa-4x animated rotateIn"></i>
</div>
<!--Footer-->
<div class="modal-footer flex-center">
<form action="{% url 'delete_action' aid=action.id %}">
{% csrf_token %}
<button class="btn btn-outline-danger">Yes</button>
</form>
<a
type="button"
class="btn btn-danger waves-effect"
data-dismiss="modal"
>No</a
>
</div>
</div>
<!--/.Content-->
</div>
</div>
{% endblock %}
In the above code, I'm using a form tag for the delete action, so that the delete URL for that action id is triggered.
Below is the URL to delete an action,
re_path(r'^delete_action/(?P<aid>\d+)/',
views.delete_action, name='delete_action')
Problem I'm facing:
I need the action.id value in the modal, which I'm not getting!
Please help me to solve this. Thanks in advance :)
A:
If any of you are going through this scenario, I have a quick fix.
The main idea is to change the form's action URL using JavaScript.
views.py
class DeleteAddressView(DeleteView):
    success_url = reverse_lazy("home")
I will try to provide the minimum solution here:
my link in the list for delete item will be:
<a
href="{% url 'item-delete' item.id %}"
class="dropdown-item text-danger"
data-toggle="modal"
data-target="#delete-item-modal"
id="delete-item"
>
Remove
</a>
modal that popup will be:
<div class="modal fade" id="delete-item-modal">
<div class="modal-dialog">
<div class="modal-content">
<div class="modal-body">
<p>Are you sure, You want to remove this item?</p>
</div>
<div class="justify-content-between mb-2 mr-2 text-right">
<form method="post"
id="item-delete-form"
>
<button type="button" class="btn btn-secondary mr-1" data-dismiss="modal">Cancel</button>
{% csrf_token %}
<button type="submit" class="btn btn-danger" id="confirm-delete-item-button">Delete</button>
</form>
</div>
</div>
</div>
</div>
Now we have to change the form action URL with the item's a href value
<script>
$(document).on('click', '#delete-item', () => {
document.getElementById("item-delete-form").action = document.querySelector('#delete-item').href
});
</script>
I know this is too late for your question but can be helpful for others. This is the easiest way to remove an item from the list without redirecting the page to the confirmation page.
NOTE: frontend framework bootstrap is used to display the modal, so you must check if bootstrap is working or not before continuing with this solution.
A:
For more explanation of Gorkali's answer, you can check here: https://elennion.wordpress.com/2018/10/08/bootstrap-4-delete-confirmation-modal-for-list-of-items/
This is how I solved it, based on the above answer, using plain JavaScript and adding some more functionality:
in my_template.html:
<a href="{% url 'report_generate' %}"
class="btn btn-primary" id="generate_{{report.id}}"
data-toggle="modal" data-target="#confirmModal"
data-message="If you proceed, the existing report will be overwritten."
data-buttontext="Proceed">
Regenerate
</a>
<a href="{% url 'report_generate' %}"
class="btn btn-primary" id="finalize_{{report.id}}"
data-toggle="modal" data-target="#confirmModal"
data-message="If you proceed, the existing report will be finalized. After that, it can no longer be edited."
data-buttontext="Finalize Report">
Finalize
</a>
{% include "includes/confirm_modal.html" %}
with the include file confirm_modal.html:
<div class="modal fade" id="confirmModal" tabindex="-1" caller-id="" role="dialog" aria-labelledby="confirmModalLabel" aria-hidden="true">
<div class="modal-dialog modal-dialog-centered" role="document">
<div class="modal-content">
<div class="modal-body" id="modal-message">
Do you wish to proceed?
</div>
<div class="modal-footer">
<button type="button" class="btn btn-secondary" data-dismiss="modal">Cancel</button>
<button type="button" class="btn btn-primary" data-dismiss="modal" id="confirmButtonModal">Confirm</button>
</div>
</div>
</div>
</div>
<script type="text/javascript">
  document.addEventListener('DOMContentLoaded', () => {
    var buttons = document.querySelectorAll("[data-target='#confirmModal']");
    for (const button of buttons) {
      button.addEventListener("click", function(event) {
        // find the modal and add the caller-id as an attribute
        var modal = document.getElementById("confirmModal");
        modal.setAttribute("caller-id", this.getAttribute("id"));
        // extract texts from calling element and replace the modal's texts with them
        if ("message" in this.dataset) {
          document.getElementById("modal-message").innerHTML = this.dataset.message;
        };
        if ("buttontext" in this.dataset) {
          document.getElementById("confirmButtonModal").innerHTML = this.dataset.buttontext;
        };
      })
    }
    document.getElementById("confirmButtonModal").onclick = (event) => {
      // when the Confirm button in the modal is clicked
      var button_clicked = event.target;
      var caller_id = button_clicked.closest("#confirmModal").getAttribute("caller-id");
      var caller = document.getElementById(caller_id);
      // open the url that was specified for the caller
      window.location = caller.getAttribute("href");
    };
  });
</script>
A:
Delete link:
<a href="javascript:void(0)" data-toggle="modal"
class="confirm-delete"
data-url="{% url 'account:delete_address' pk=address.id %}"
data-target="#deleteItemModal"
data-message="Êtes-vous sûr de supprimer l'article ?"
>
<i class="far fa-trash-alt"></i>
<span>Supprimer</span>
</a>
Modal:
<!-- Modal -->
<div id="container_delete">
  <div class="modal fade" id="deleteItemModal" tabindex="-1" role="dialog"
       aria-labelledby="deleteItemModalLabel" aria-hidden="true">
    <div class="modal-dialog" role="document">
      <div class="modal-content">
        <div class="modal-header">
          <button type="button" class="close" data-dismiss="modal" aria-label="Close">
            <span aria-hidden="true">×</span>
          </button>
        </div>
        <div class="modal-body confirm-delete text-center">
          <div class="alert" id="delete_item_alert"></div>
          <div id="modal-message"></div>
          <hr>
          <form action="" method="post" id="form_confirm_modal">
            {% csrf_token %}
            <button type="button" class="btn btn-danger" data-dismiss="modal" id="confirmButtonModal">Oui</button>
            <button type="button" class="btn btn-primary" data-dismiss="modal">Non</button>
          </form>
          <input type="hidden" id="address_suppress"/>
        </div>
      </div>
    </div>
  </div>
</div>
<script type="text/javascript">
  document.addEventListener('DOMContentLoaded', () => {
    let form_confirm = document.querySelector('#form_confirm_modal')
    let buttons = document.querySelectorAll("[data-target='#deleteItemModal']");
    buttons.forEach(button => {
      button.addEventListener("click", () => {
        // extract texts from calling element and replace the modal's texts with them
        if (button.dataset.message) {
          document.getElementById("modal-message").innerHTML = button.dataset.message;
        }
        // extract url from calling element and set it as the form action
        if (button.dataset.url) {
          form_confirm.action = button.dataset.url;
        }
      })
    });
    let confirmModal = document.getElementById("confirmButtonModal")
    confirmModal.addEventListener('click', () => {
      form_confirm.submit();
    });
  });
</script>
Views:
class DeleteAddressView(DeleteView, SuccessMessageMixin):
    template_name = 'account/address.html'
    success_message = 'Adresse supprimée'
    # model = Address

    def get_object(self, queryset=None):
        _id = int(self.kwargs.get('pk'))
        address = get_object_or_404(Address, pk=_id)
        return address

    def get_success_url(self):
        pk = self.request.user.id
        return reverse_lazy('account:address', args=(pk,))
A:
Try this
In your delete link
<a href="{% url 'your-delete-url' pk=your.id %}" class="confirm-delete" title="Delete" data-toggle="modal" data-target="#confirmDeleteModal" id="deleteButton{{your.id}}">
Your modal
<div class="modal fade" id="confirmDeleteModal" tabindex="-1" caller-id="" role="dialog" aria-labelledby="confirmDeleteModalLabel" aria-hidden="true">
<div class="modal-dialog" role="document">
<div class="modal-content">
<div class="modal-body confirm-delete">
This action is permanent!
</div>
<div class="modal-footer">
<button type="button" class="btn btn-secondary" data-dismiss="modal">Cancel</button>
<button type="button" class="btn btn-danger" data-dismiss="modal" id="confirmDeleteButtonModal">Delete</button>
</div>
</div>
</div>
</div>
<script type="text/javascript">
$(document).on('click', '.confirm-delete', function () {
$("#confirmDeleteModal").attr("caller-id", $(this).attr("id"));
});
$(document).on('click', '#confirmDeleteButtonModal', function () {
var caller = $("#confirmDeleteButtonModal").closest(".modal").attr("caller-id");
window.location = $("#".concat(caller)).attr("href");
});
</script>
A:
I got the example from @dipesh, but to make it work for me I needed to change some things (the 'a' tag and the JavaScript) to get the correct element.
my script
function delete_user(selected_user){
    document.getElementById("item-delete-form").action = selected_user.href
}
my link in the list for delete item will be:
<a
href="{% url 'item-delete' item.id %}"
class="dropdown-item text-danger"
data-toggle="modal"
data-target="#delete-item-modal"
onclick="delete_user(this)"
>
Remove
</a>
my modal
<div class="modal fade" id="delete-item-modal">
<div class="modal-dialog">
<div class="modal-content">
<div class="modal-body">
<p>Are you sure, You want to remove this item?</p>
</div>
<div class="justify-content-between mb-2 mr-2 text-right">
<form method="post"
id="item-delete-form"
>
<button type="button" class="btn btn-secondary mr-1" data-dismiss="modal">Cancel</button>
{% csrf_token %}
<button type="submit" class="btn btn-danger" id="confirm-delete-item-button">Delete</button>
</form>
</div>
</div>
</div>
</div>
A:
You can also do it without JavaScript (see also option 1 in the answer by @lemayzeur in this question). Give the modal a variable name and call it using item.id:
# in your link to the modal
data-target="#delete-item-modal-{{item.id}}"
# in your modal
id="delete-item-modal-{{item.id}}"
| How to do Delete confirmation for a table data with bootstrap Modal in Django? | I have a table that shows a list of actions in my app. I can delete any action in that table, so I have added a delete button in every row. This delete button triggers a 'delete confirmation' Bootstrap modal.
<table class="table table-hover">
<thead>
<tr>
<th scope="col">#</th>
<th scope="col" class="th-lg">Name</th>
</tr>
</thead>
{% for action in actions_list %}
<tbody>
<tr class="test">
<th scope="row" class="align-middle">{{ forloop.counter }}</th>
<td class="align-middle">
{{action.action_name}}
</td>
<td class="align-middle">
{{action.id}}
</td>
<td>
<div class="row justify-content-end">
<button
id="edit"
type="button"
class="btn btn-sm btn-dark col col-lg-2"
style="color: rgb(255,0,0,0)"
>
<i class="lni-pencil"></i>
</button>
<button
id="trash"
type="button"
class="btn btn-sm btn-dark col col-lg-2"
style="color: rgb(255,0,0,0)"
data-toggle="modal"
data-target="#modalConfirmDelete"
>
<i class="lni-trash"></i>
</button>
</div>
</td>
</tr>
</tbody>
{% endfor %}
</table>
Below is the code for the 'Delete Confirmation' Bootstrap modal. It has 'Yes' and 'No' buttons.
If I click 'Yes', that particular action id is passed to the URL and the action with that id gets deleted.
{% block modalcontent %}
<!--Modal: modalConfirmDelete-->
<div
class="modal fade"
id="modalConfirmDelete"
tabindex="-1"
role="dialog"
aria-labelledby="exampleModalLabel"
aria-hidden="true"
>
<div class="modal-dialog modal-sm modal-notify modal-danger" role="document">
<!--Content-->
<div class="modal-content text-center">
<!--Header-->
<div class="modal-header d-flex justify-content-center">
<p class="heading">Are you sure?</p>
</div>
<!--Body-->
<div class="modal-body">
<i class="fas fa-times fa-4x animated rotateIn"></i>
</div>
<!--Footer-->
<div class="modal-footer flex-center">
<form action="{% url 'delete_action' aid=action.id %}">
{% csrf_token %}
<button class="btn btn-outline-danger">Yes</button>
</form>
<a
type="button"
class="btn btn-danger waves-effect"
data-dismiss="modal"
>No</a
>
</div>
</div>
<!--/.Content-->
</div>
</div>
{% endblock %}
In the above code, I'm using a form tag for the delete action, so that the delete URL for that action id is triggered.
Below is the URL to delete an action,
re_path(r'^delete_action/(?P<aid>\d+)/',
views.delete_action, name='delete_action')
Problem I'm facing:
I need the action.id value in the modal, which I'm not getting!
Please help me to solve this. Thanks in advance :)
| [
"If any of you are going through this scenario, I have a quick fix.\n\nThe main idea is to change the form's action URL using Javascript\n\nviews.py\nclass DeleteAddressView(DeleteView):\n success_url = reverse_lazy(\"home\")\n\nI will try to provide the minimum solution here:\nmy link in the list for delete item will be:\n<a\n href=\"{% url 'item-delete' item.id %}\"\n class=\"dropdown-item text-danger\"\n data-toggle=\"modal\"\n data-target=\"#delete-item-modal\"\n id=\"delete-item\"\n>\n Remove\n</a>\n\nmodal that popup will be:\n<div class=\"modal fade\" id=\"delete-item-modal\">\n <div class=\"modal-dialog\">\n <div class=\"modal-content\">\n <div class=\"modal-body\">\n <p>Are you sure, You want to remove this item?</p>\n </div>\n <div class=\"justify-content-between mb-2 mr-2 text-right\">\n <form method=\"post\"\n id=\"item-delete-form\"\n >\n <button type=\"button\" class=\"btn btn-secondary mr-1\" data-dismiss=\"modal\">Cancel</button>\n {% csrf_token %}\n <button type=\"submit\" class=\"btn btn-danger\" id=\"confirm-delete-item-button\">Delete</button>\n </form>\n </div>\n </div>\n </div>\n</div>\n\n\nNow we have to change the form action URL with the item's a href value\n\n<script>\n $(document).on('click', '#delete-item', () => {\n document.getElementById(\"item-delete-form\").action = document.querySelector('#delete-item').href\n });\n</script>\n\nI know this is too late for your question but can be helpful for others. This is the easiest way to remove an item from the list without redirecting the page to the confirmation page.\n\nNOTE: frontend framework bootstrap is used to display the modal, so you must check if bootstrap is working or not before continuing with this solution.\n\n",
"For more explanation on Gorkali's answer, you can check here: https://elennion.wordpress.com/2018/10/08/bootstrap-4-delete-confirmation-modal-for-list-of-items/\nThis is how is how I solved it, based on the above answer, using plain JavaScript, and adding some more functionality:\nin my_template.html:\n<a href=\"{% url 'report_generate' %}\" \n class=\"btn btn-primary\" id=\"generate_{{report.id}}\"\n data-toggle=\"modal\" data-target=\"#confirmModal\" \n data-message=\"If you proceed, the existing report will be overwritten.\"\n data-buttontext=\"Proceed\">\n Regenerate\n</a>\n<a href=\"{% url 'report_generate'\" \n class=\"btn btn-primary\" id=\"finalize_{{report.id}}\"\n data-toggle=\"modal\" data-target=\"#confirmModal\" \n data-message=\"If you proceed, the existing report will be finalized. After that, it can no longer be edited.\"\n data-buttontext=\"Finalize Report\">\n Finalize\n</a>\n\n{% include \"includes/confirm_modal.html\" %}\n\nwith the include file confirm_modal.html:\n<div class=\"modal fade\" id=\"confirmModal\" tabindex=\"-1\" caller-id=\"\" role=\"dialog\" aria-labelledby=\"confirmModalLabel\" aria-hidden=\"true\">\n <div class=\"modal-dialog modal-dialog-centered\" role=\"document\">\n <div class=\"modal-content\">\n <div class=\"modal-body\" id=\"modal-message\">\n Do you wish to proceed?\n </div>\n <div class=\"modal-footer\">\n <button type=\"button\" class=\"btn btn-secondary\" data-dismiss=\"modal\">Cancel</button>\n <button type=\"button\" class=\"btn btn-primary\" data-dismiss=\"modal\" id=\"confirmButtonModal\">Confirm</button>\n </div>\n </div>\n </div>\n</div>\n\n<script type=\"text/javascript\">\n document.addEventListener('DOMContentLoaded', () => {\n var buttons = document.querySelectorAll(\"[data-target='#confirmModal']\");\n for (const button of buttons) {\n button.addEventListener(\"click\", function(event) {\n // find the modal and add the caller-id as an attribute\n var modal = document.getElementById(\"confirmModal\");\n modal.setAttribute(\"caller-id\", this.getAttribute(\"id\"));\n\n // extract texts from calling element and replace the modals texts with it\n if (\"message\" in this.dataset) {\n document.getElementById(\"modal-message\").innerHTML = this.dataset.message;\n };\n if (\"buttontext\" in this.dataset) {\n document.getElementById(\"confirmButtonModal\").innerHTML = this.dataset.buttontext;\n };\n })\n }\n\n document.getElementById(\"confirmButtonModal\").onclick = () => {\n // when the Confirm Button in the modal is clicked\n var button_clicked = event.target\n var caller_id = button_clicked.closest(\"#confirmModal\").getAttribute(\"caller-id\");\n var caller = document.getElementById(caller_id);\n // open the url that was specified for the caller\n window.location = caller.getAttribute(\"href\");\n };\n });\n</script>\n\n",
"Delete link:\n<a href=\"javascript:void(0)\" data-toggle=\"modal\"\n class=\"confirm-delete\"\n data-url=\"{% url 'account:delete_address' pk=address.id %}\"\n data-target=\"#deleteItemModal\"\n data-message=\"Êtes-vous sûr de supprimer l'article ?\"\n >\n <i class=\"far fa-trash-alt\"></i>\n <span>Supprimer</span>\n </a>\n\nModal:\n <!-- Modal -->\n<div id=\"container_delete\">\n<div class=\"modal fade\" id=\"deleteItemModal\" tabindex=\"-1\" role=\"dialog\"\n aria-labelledby=\"deleteItemModalLabel\" aria-hidden=\"true\">\n <div class=\"modal-dialog\" role=\"document\"> </div>\n <div class=\"modal-content\">\n\n <div class=\"modal-header\">\n <button type=\"button\" class=\"close\" data-dismiss=\"modal\" aria-label=\"Close\">\n <span aria-hidden=\"true\">×</span>\n </button>\n </div>\n\n <div class=\"modal-body confirm-delete text-center\" >\n <div class=\"alert\" id=\"delete_item_alert\"></div>\n <div id=\"modal-message\"></div>\n <hr>\n <form action=\"\" method=\"post\" id=\"form_confirm_modal\">\n {% csrf_token %}\n <button type=\"button\" class=\"btn btn-danger\" data-dismiss=\"modal\" id=\"confirmButtonModal\">Oui</button>\n <button type=\"button\" class=\"btn btn-primary\" data-dismiss=\"modal\">Non</button>\n </form>\n <input type=\"hidden\" id=\"address_suppress\"/>\n </div>\n\n </div>\n\n </div>\n</div>\n\n<script type=\"text/javascript\">\n\n document.addEventListener('DOMContentLoaded', () => {\n let form_confirm = document.querySelector('#form_confirm_modal')\n let buttons = document.querySelectorAll(\"[data-target='#deleteItemModal']\");\n buttons.forEach(button => {\n button.addEventListener(\"click\", () => {\n\n // extract texts from calling element and replace the modals texts with it\n if (button.dataset.message) {\n document.getElementById(\"modal-message\").innerHTML = button.dataset.message;\n }\n // extract url from calling element and replace the modals texts with it\n if (button.dataset.url) {\n form_confirm.action = button.dataset.url;\n }\n\n })\n });\n let confirmModal = document.getElementById(\"confirmButtonModal\")\n confirmModal.addEventListener('click', () => {\n form_confirm.submit();\n\n });\n });\n</script>\n\nViews:\nclass DeleteAddressView(DeleteView, SuccessMessageMixin):\n template_name = 'account/address.html'\n success_message = 'Adresse supprimée'\n # model = Address\n\n def get_object(self, queryset=None):\n _id = int(self.kwargs.get('pk'))\n address = get_object_or_404(Address, pk=_id)\n return address\n\n def get_success_url(self):\n pk = self.request.user.id\n return reverse_lazy('account:address', args=(pk,))\n\n",
"Try this \n In your delete link\n\n <a href=\"{% url 'your-delete-url' pk=your.id %}\" class=\"confirm-delete\" title=\"Delete\" data-toggle=\"modal\" data-target=\"#confirmDeleteModal\" id=\"deleteButton{{your.id}}\">\n\nYour modal\n<div class=\"modal fade\" id=\"confirmDeleteModal\" tabindex=\"-1\" caller-id=\"\" role=\"dialog\" aria-labelledby=\"confirmDeleteModalLabel\" aria-hidden=\"true\">\n <div class=\"modal-dialog\" role=\"document\">\n <div class=\"modal-content\">\n <div class=\"modal-body confirm-delete\">\n This action is permanent!\n </div>\n <div class=\"modal-footer\">\n <button type=\"button\" class=\"btn btn-secondary\" data-dismiss=\"modal\">Cancel</button>\n <button type=\"button\" class=\"btn btn-danger\" data-dismiss=\"modal\" id=\"confirmDeleteButtonModal\">Delete</button>\n </div>\n </div>\n </div>\n</div>\n\n\n<script type=\"text/javascript\">\n $(document).on('click', '.confirm-delete', function () {\n $(\"#confirmDeleteModal\").attr(\"caller-id\", $(this).attr(\"id\"));\n });\n\n $(document).on('click', '#confirmDeleteButtonModal', function () {\n var caller = $(\"#confirmDeleteButtonModal\").closest(\".modal\").attr(\"caller-id\");\n window.location = $(\"#\".concat(caller)).attr(\"href\");\n });\n</script>\n\n",
"I got the example from @dipesh, but to works for me I needed to change somethings(tag 'a' and javascript) to get the correct element.\nmy script\nfunction delete_user(selected_user){\n document.getElementById(\"item-delete-form\").action = selected_user.href\n}\n\nmy link in the list for delete item will be:\n<a\n href=\"{% url 'item-delete' item.id %}\"\n class=\"dropdown-item text-danger\"\n data-toggle=\"modal\"\n data-target=\"#delete-item-modal\"\n onclick=\"delete_user(this)\"\"\n>\n Remove\n</a>\n\nmy modal\n<div class=\"modal fade\" id=\"delete-item-modal\">\n <div class=\"modal-dialog\">\n <div class=\"modal-content\">\n <div class=\"modal-body\">\n <p>Are you sure, You want to remove this item?</p>\n </div>\n <div class=\"justify-content-between mb-2 mr-2 text-right\">\n <form method=\"post\"\n id=\"item-delete-form\"\n >\n <button type=\"button\" class=\"btn btn-secondary mr-1\" data-dismiss=\"modal\">Cancel</button>\n {% csrf_token %}\n <button type=\"submit\" class=\"btn btn-danger\" id=\"confirm-delete-item-button\">Delete</button>\n </form>\n </div>\n </div>\n </div>\n</div>\n\n",
"You can also do it without JavaScript (see also option 1 in the answer by @lemayzeur in this question). Give the modal a variable name and call it using item.id:\n# in your link to the modal\ndata-target=\"#delete-item-modal-{{item.id}}\"\n\n# in your modal\nid=\"delete-item-modal-{{item.id}}\"\n\n"
] | [
5,
1,
1,
0,
0,
0
] | [] | [] | [
"bootstrap_modal",
"django",
"django_templates",
"jinja2",
"python"
] | stackoverflow_0059566549_bootstrap_modal_django_django_templates_jinja2_python.txt |
Q:
Searching for one even and one odd number in string
So I'm going through old Advent of Code puzzles and came across this one; it asks me to check each string to make sure it has at least one even and one odd number in it. However, my function doesn't correctly filter the list. It runs without errors, but it never filters anything and just prints out everything. I don't really know where I'm going wrong, so any pointers to fix it would be gladly appreciated.
def one_even_one_odd(pass_str: str) -> bool:
    for i in range(5):
        if pass_str[i] == pass_str % 2 == 0 and pass_str[i] == pass_str % 2 == 1:
            return True
    return False

def result(range_from: int, range_to: int) -> int:
    amount_passwords = 0
    each_password = []
    for password in range(range_from, range_to + 1):
        pass_str = str(password)
        if not pass_str == ''.join(sorted(pass_str)):
            continue
        if not one_even_one_odd(pass_str):
            continue
        each_password.append(pass_str)
        amount_passwords += 1
    return amount_passwords, each_password

def main():
    range_from = 138345
    range_to = 836215
    print(f'Amount of passwords followed by list of passwords: {result(range_from, range_to)}')
In this case, the list would print every number in the range, for example "111, 112, 222", but I want it to only print 112, as that is the only number that contains at least one even and one odd number in it.
A:
I would use set operations:
odds = set('13579')
evens = set('02468')

def one_even_one_odd(string):
    S = set(string)
    return bool(odds & S) and bool(evens & S)

one_even_one_odd('ABCD125')
# True

one_even_one_odd('ABCD135')
# False
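If you prefer to keep a loop-based shape closer to the original, a sketch of the fix; the original condition applies % 2 to the whole string pass_str rather than to each digit converted to int, which is why it never behaves as intended:
def one_even_one_odd(pass_str: str) -> bool:
    digits = [int(ch) for ch in pass_str if ch.isdigit()]
    # need at least one even AND at least one odd digit
    return any(d % 2 == 0 for d in digits) and any(d % 2 == 1 for d in digits)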
| Searching for one even and one odd number in string | So I'm going through old Advent of Code puzzles and came across this one; it asks me to check each string to make sure it has at least one even and one odd number in it. However, my function doesn't correctly filter the list. It runs without errors, but it never filters anything and just prints out everything. I don't really know where I'm going wrong, so any pointers to fix it would be gladly appreciated.
def one_even_one_odd(pass_str: str) -> bool:
    for i in range(5):
        if pass_str[i] == pass_str % 2 == 0 and pass_str[i] == pass_str % 2 == 1:
            return True
    return False

def result(range_from: int, range_to: int) -> int:
    amount_passwords = 0
    each_password = []
    for password in range(range_from, range_to + 1):
        pass_str = str(password)
        if not pass_str == ''.join(sorted(pass_str)):
            continue
        if not one_even_one_odd(pass_str):
            continue
        each_password.append(pass_str)
        amount_passwords += 1
    return amount_passwords, each_password

def main():
    range_from = 138345
    range_to = 836215
    print(f'Amount of passwords followed by list of passwords: {result(range_from, range_to)}')
In this case, the list would print every number in the range, for example "111, 112, 222", but I want it to only print 112, as that is the only number that contains at least one even and one odd number in it.
| [
"I would use set operations:\nodds = set('13579')\nevens = set('02468')\n\ndef one_even_one_odd(string):\n S = set(string)\n return bool(odds & S) and bool(evens & S)\n \n \none_even_one_odd('ABCD125')\n# True\n\none_even_one_odd('ABCD135')\n# False\n\n"
] | [
2
] | [] | [] | [
"python"
] | stackoverflow_0074677212_python.txt |
Q:
Detect if Multiple HDDs are RAID and its RAID mode (Windows)
In order to monitor system storage, I need to programmatically find:
if multiple HDDs are used (done)
if multiple HDDs are in RAID (not done)
if they are in RAID, what the RAID mode is (RAID0, RAID1, ...) (not even close)
What I know/can do:
There are 2 types of RAID, hardware and software.
If it's software, then it can be RAID0 (striped), RAID1 (mirror) or RAID5.
I can detect software RAID by checking that the volume name is the same on every disk.
What I need to find:
If it's software RAID, how do I determine the RAID type? (In Storage Management I can see whether the volume is striped or mirrored, but I cannot get this information programmatically.)
If it's hardware RAID, how can I detect that, and how can I find its RAID type?
The programming language is not important; it could be a CMD command, a PowerShell command, WMI, ...
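One direction worth exploring (a sketch, and only a partial one: Get-VirtualDisk covers Storage Spaces on Windows 8/Server 2012 and later, where ResiliencySettingName is Simple, Mirror or Parity; dynamic-disk software RAID and hardware RAID are not reported this way and usually need diskpart or the vendor's tooling):
import json
import subprocess

# query Storage Spaces virtual disks via PowerShell and parse the JSON
cmd = [
    "powershell", "-NoProfile", "-Command",
    "Get-VirtualDisk | Select-Object FriendlyName, ResiliencySettingName | ConvertTo-Json",
]
out = subprocess.run(cmd, capture_output=True, text=True).stdout.strip()
print(json.loads(out) if out else "no Storage Spaces virtual disks found")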
| Detect if Multiple HDDs are RAID and its RAID mode (Windows) | In order to monitor system storage, I need to programmatically find:
if multiple HDDs are used (done)
if multiple HDDs are in RAID (not done)
if they are in RAID, what the RAID mode is (RAID0, RAID1, ...) (not even close)
What I know/can do:
There are 2 types of RAID, hardware and software.
If it's software, then it can be RAID0 (striped), RAID1 (mirror) or RAID5.
I can detect software RAID by checking that the volume name is the same on every disk.
What I need to find:
If it's software RAID, how do I determine the RAID type? (In Storage Management I can see whether the volume is striped or mirrored, but I cannot get this information programmatically.)
If it's hardware RAID, how can I detect that, and how can I find its RAID type?
The programming language is not important; it could be a CMD command, a PowerShell command, WMI, ...
"idk maybe you should try to search online for a reg key that contains this info\nbut i guess that the os doesn't know this info, but anyway you should try maybe you'll find something about it\n"
] | [
-1
] | [
"c#",
"c++",
"cmd",
"powershell",
"python"
] | stackoverflow_0049852709_c#_c++_cmd_powershell_python.txt |
Q:
Python - compound interest calculation issue - cs1301 edx extra practice 5
I have the following problem I can't manage to solve:
Find "How much do I need to invest to have a certain amount by a certain year?" For example, "How much do I need to invest to have $50,000 in 5 years at 5% (0.05) interest?"
Mathematically, the formula for this is:
goal / e ^ (rate * number of years) = principal
Add some code below that will print the amount of principal
needed to reach the given savings goal within the number of years and interest rate specified.
my solution is:
import math
goal = float(goal)
years = float(rate)
rate = rate
principal = goal / (math.e ** (rate * years))
rounded_principal = round(principal, 2)
print(rounded_principal)
It should print 38940.04 but instead it prints 49875.16. If I use goal = 200, rate = 0.1 and years = 1, it returns 198.01 when it should return 180.97.
I tried turning the rate back into a percentage by multiplying by 100, adding and deleting parentheses, using a formula found online, not rounding the result, and writing e out as a plain number (to about 15 decimals).
A:
You are using rate instead of years for the year.
goal = float(goal)
years = float(rate) <-- Here
rate = rate
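For completeness, a corrected sketch (assuming goal, rate and years are defined earlier, as in the exercise):
import math

goal = float(goal)
years = float(years)  # this was the bug: float(rate) was assigned to years
rate = float(rate)

principal = goal / math.e ** (rate * years)
print(round(principal, 2))  # goal=50000, rate=0.05, years=5 -> 38940.04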
| Python - compound interest calculation issue - cs1301 edx extra practice 5 | I have the following problem I can't manage to solve:
Find "How much do I need to invest to have a certain amount by a certain year?" For example, "How much do I need to invest to have $50,000 in 5 years at 5% (0.05) interest?"
Mathematically, the formula for this is:
goal / e ^ (rate * number of years) = principal
Add some code below that will print the amount of principal
needed to reach the given savings goal within the number of years and interest rate specified.
my solution is:
import math
goal = float(goal)
years = float(rate)
rate = rate
principal = goal / (math.e ** (rate * years))
rounded_principal = round(principal, 2)
print(rounded_principal)
It should print 38940.04 but instead it prints 49875.16. If I use goal = 200, rate = 0.1 and years = 1, it returns 198.01 when it should return 180.97.
I tried turning the rate back into a percentage by multiplying by 100, adding and deleting parentheses, using a formula found online, not rounding the result, and writing e out as a plain number (to about 15 decimals).
| [
"You are using rate instead of years for the year.\ngoal = float(goal)\nyears = float(rate) <-- Here\nrate = rate\n\n"
] | [
0
] | [] | [] | [
"python",
"python_3.x"
] | stackoverflow_0074677246_python_python_3.x.txt |
Q:
Offsetting a timestamp on a Cassandra query
Probably a dumb question, but I'm using toTimestamp(now()) to retrieve the timestamp. Is there any way to offset the now() by a specified timeframe?
What I have now:
> print(session.execute('SELECT toTimestamp(now()) FROM system.local').one())
2022-12-04 12:12:47.011000
My goal:
> print(session.execute('SELECT toTimestamp(now() - 1h) FROM system.local').one())
2022-12-04 11:12:47.011000
A:
To offset the timestamp returned by the toTimestamp(now()) function in Apache Cassandra, you can use the dateOf function to subtract a specified amount of time from the current timestamp.
Here is an example of how you can use this query in your code:
result = session.execute('SELECT toTimestamp(dateOf(now()) - 1h) FROM system.local').one()
print(result)
You can use the same syntax to offset the timestamp by any amount of time (like 1d for 1 day).
A:
You're on the right track.
But with the current example, it looks like you're trying to subtract an hour from now(). Now is a type-1 UUID (timeUUID in Cassandra). The date arithmetic operators will only work with dates and timestamps, so just pull that - 1h out one level of parens:
> SELECT toTimestamp(now()) - 1h FROM system.local;
system.totimestamp(system.now()) - 1h
---------------------------------------
2022-12-04 12:38:35.747000+0000
(1 rows)
And then this works:
row = session.execute("SELECT toTimestamp(now()) - 1h FROM system.local;").one()
if row:
print(row[0])
2022-12-04 12:52:19.187000
NOTE: The parser is a little strict on this one. Make sure that the operator and duration are appropriately spaced.
This works:
SELECT toTimestamp(now()) - 1h
This fails:
SELECT toTimestamp(now())-1h
| Offsetting a timestamp on a Cassandra query | Probably a dumb question, but I'm using toTimestamp(now()) to retrieve the timestamp. Is there any way to offset the now() by a specified timeframe?
What I have now:
> print(session.execute('SELECT toTimestamp(now()) FROM system.local').one())
2022-12-04 12:12:47.011000
My goal:
> print(session.execute('SELECT toTimestamp(now() - 1h) FROM system.local').one())
2022-12-04 11:12:47.011000
| [
"To offset the timestamp returned by the toTimestamp(now()) function in Apache Cassandra, you can use the dateOf function to subtract a specified amount of time from the current timestamp.\nHere is an example of how you can use this query in your code:\nresult = session.execute('SELECT toTimestamp(dateOf(now()) - 1h) FROM system.local').one()\n\nprint(result)\n\nYou can use the same syntax to offset the timestamp by any amount of time (like 1d for 1 day).\n",
"You're on the right track.\nBut with the current example, it looks like you're trying to subtract an hour from now(). Now is a type-1 UUID (timeUUID in Cassandra). The date arithmetic operators will only work with dates and timestamps, so just pull that - 1h out one level of parens:\n> SELECT toTimestamp(now()) - 1h FROM system.local;\n\n system.totimestamp(system.now()) - 1h\n---------------------------------------\n 2022-12-04 12:38:35.747000+0000\n\n(1 rows)\n\nAnd then this works:\nrow = session.execute(\"SELECT toTimestamp(now()) - 1h FROM system.local;\").one()\nif row:\n print(row[0])\n\n2022-12-04 12:52:19.187000\n\nNOTE: The parser is a little strict on this one. Make sure that the operator and duration are appropriately spaced.\nThis works:\nSELECT toTimestamp(now()) - 1h\n\nThis fails:\nSELECT toTimestamp(now())-1h\n\n"
] | [
0,
0
] | [] | [] | [
"cassandra",
"cql",
"python",
"python_3.x"
] | stackoverflow_0074675507_cassandra_cql_python_python_3.x.txt |
Q:
Assigning functions to dynamically created buttons in kivy?
I am working on a program in which a list of buttons gets created dynamically, based on the items in a list. The code I am using for this is:
self.list_of_btns = []

def create(self, list=items): #Creates Categorie Buttons
    self.h = 1
    for i in list:
        self.h = self.h - 0.2
        self.btn = Button(text= f"{i}", size_hint=(.2,.22), pos_hint={"center_y":self.h, "center_x":.5})
        self.list_of_btns.append(self.btn)
        self.add_widget(self.btn)
Now I want to add a function to each button. The function is supposed to write the button name (i) to a .txt file. This part of the code gets handled in an external module.
The problem I am having right now is that I can't bind the buttons to their individual functions. I always get the error: File "kivy/_event.pyx", line 238, in kivy._event.EventDispatcher.__init__
TypeError: Properties ['command'] passed to __init__ may not be existing property names. Valid properties are ['always_release', 'anchors', 'background_color', 'background_disabled_down', 'background_disabled_normal', 'background_down', 'background_normal', 'base_direction', 'bold', 'border', 'center', 'center_x', 'center_y', 'children', 'cls', 'color', 'disabled', 'disabled_color', 'disabled_image', 'disabled_outline_color', 'ellipsis_options', 'font_blended', 'font_context', 'font_family', 'font_features', 'font_hinting', 'font_kerning', 'font_name', 'font_size', 'halign', 'height', 'ids', 'is_shortened', 'italic', 'last_touch', 'line_height', 'markup', 'max_lines', 'min_state_time', 'mipmap', 'motion_filter', 'opacity', 'outline_color', 'outline_width', 'padding', 'padding_x', 'padding_y', 'parent', 'pos', 'pos_hint', 'refs', 'right', 'shorten', 'shorten_from', 'size', 'size_hint', 'size_hint_max', 'size_hint_max_x', 'size_hint_max_y', 'size_hint_min', 'size_hint_min_x', 'size_hint_min_y', 'size_hint_x', 'size_hint_y', 'split_str', 'state', 'state_image', 'strikethrough', 'strip', 'text', 'text_language', 'text_size', 'texture', 'texture_size', 'top', 'underline', 'unicode_errors', 'valign', 'width', 'x', 'y']
A:
.bind() method can be used. In this example, partial is used in order to preset an argument so that each button does something unique. self.btn was changed to _btn because the references are being added to a list and self.btn was repeatedly assigned a new object. I didn't think this was intended, but it is not a critical part of this answer.
The self.h was changed to self.height in the dictionary describing the button because I think that may be the source of the error printed in the question.
from functools import partial


self.list_of_btns = []

def your_function(self, instance, your_argument):
    # bind() passes the pressed button instance as the first positional argument
    print(your_argument)

def create(self, list=items): #Creates Categorie Buttons
    self.h = 1
    for i in list:
        self.h = self.h - 0.2
        _btn = Button(text= f"{i}", size_hint=(.2,.22), pos_hint={"center_y":self.height, "center_x":.5})
        myfun = partial(self.your_function, your_argument=i)
        # this should be a function, not function()
        _btn.bind(on_press=myfun)
        self.list_of_btns.append(_btn)
        self.add_widget(_btn)
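As a note on the design choice: partial pins the value of i at loop time. A lambda with a default argument achieves the same and avoids the classic late-binding closure trap (a sketch, assuming the your_function signature above):
_btn.bind(on_press=lambda instance, name=i: self.your_function(instance, name))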
| Assigning functions to dynamically created buttons in kivy? | I am working on a program in which a list of buttons gets created dynamically, based on the items in a list. The code I am using for this is:
self.list_of_btns = []

def create(self, list=items): #Creates Categorie Buttons
    self.h = 1
    for i in list:
        self.h = self.h - 0.2
        self.btn = Button(text= f"{i}", size_hint=(.2,.22), pos_hint={"center_y":self.h, "center_x":.5})
        self.list_of_btns.append(self.btn)
        self.add_widget(self.btn)
Now I want to add a function to each button. The function is supposed to write the button name (i) to a .txt file. This part of the code gets handled in an external module.
The problem I am having right now is that I can't bind the buttons to their individual functions. I always get the error: File "kivy/_event.pyx", line 238, in kivy._event.EventDispatcher.__init__
TypeError: Properties ['command'] passed to __init__ may not be existing property names. Valid properties are ['always_release', 'anchors', 'background_color', 'background_disabled_down', 'background_disabled_normal', 'background_down', 'background_normal', 'base_direction', 'bold', 'border', 'center', 'center_x', 'center_y', 'children', 'cls', 'color', 'disabled', 'disabled_color', 'disabled_image', 'disabled_outline_color', 'ellipsis_options', 'font_blended', 'font_context', 'font_family', 'font_features', 'font_hinting', 'font_kerning', 'font_name', 'font_size', 'halign', 'height', 'ids', 'is_shortened', 'italic', 'last_touch', 'line_height', 'markup', 'max_lines', 'min_state_time', 'mipmap', 'motion_filter', 'opacity', 'outline_color', 'outline_width', 'padding', 'padding_x', 'padding_y', 'parent', 'pos', 'pos_hint', 'refs', 'right', 'shorten', 'shorten_from', 'size', 'size_hint', 'size_hint_max', 'size_hint_max_x', 'size_hint_max_y', 'size_hint_min', 'size_hint_min_x', 'size_hint_min_y', 'size_hint_x', 'size_hint_y', 'split_str', 'state', 'state_image', 'strikethrough', 'strip', 'text', 'text_language', 'text_size', 'texture', 'texture_size', 'top', 'underline', 'unicode_errors', 'valign', 'width', 'x', 'y']
| [
".bind() method can be used. In this example, partial is used in order to preset an argument so that each button does something unique. self.btn was changed to _btn because the references are being added to a list and self.btn was repeatedly assigned a new object. I didn't think this was intended, but it is not a critical part of this answer.\nThe self.h was changed to self.height in the dictionary describing the button because I think that may be the source of the error printed in the question.\nfrom functools import partial\n\n\nself.list_of_btns = []\n\ndef your_function(self, your_argument):\n print(your_argument)\n\ndef create(self, list=items): #Creates Categorie Buttons\n self.h = 1\n for i in list:\n self.h = self.h - 0.2\n _btn = Button(text= f\"{i}\", size_hint=(.2,.22), pos_hint={\"center_y\":self.height, \"center_x\":.5})\n myfun = partial(self.your_function, your_argument=i)\n # this should be a function, not function()\n _btn.bind(on_press=myfun)\n self.list_of_btns.append(_btn)\n self.add_widget(_btn)\n\n"
] | [
0
] | [] | [] | [
"dynamic",
"kivy",
"python"
] | stackoverflow_0074677171_dynamic_kivy_python.txt |
Q:
Create regularly-spaced vector points from irregular XY geographic data in Python
I have point (vector) coordinates in meters (x and y in 1-D arrays) which are irregularly spaced. I would like to re-sample the points so that they are regularly spaced by 10 m between each set of XY points.
I have managed to regularly re-sample the points in the X direction (see code below), however when trying to use the scipy.interpolate.interp1d function on the Y variable, the points are obviously no longer spaced by 10 m. The code I use is as follows:
f_bot = interpolate.interp1d(x, y)
xnew_bot = np.arange(np.min(x),np.max(x),10) # 10 m spaced
ynew_bot = f_bot(xnew_bot)
plt.plot(x, y, 'o', xnew_bot, ynew_bot,'-')
plt.show()
If one measures the space between each orange point (in say, QGIS) or by doing:
dist_total = np.hstack([0,np.cumsum(np.hypot(np.diff(xnew_bot),np.diff(ynew_bot)))])
diff_dist = np.diff(dist_total) # calculate distance difference between each points
diff_dist will show that points are still irregularly spaced due to the interpolation of the Y points. Another way to see is to compute the distance between each X points, which is exactly 10 m, and compute the same for the Y points which show they are irregular.
Is there a function or approach I could use to make sure both X and Y are spaced regularly? All I need is that each set of X Y points is spaced apart by 10 m, which should be simple to do but I can't find a better way so far! Any help would be appreciated.
Data for X and Y are as follows:
| x | y |
| -------- | -------- |
| -1091590.00 | 158697 |
| -1091580.00 | 158702 |
| -1091580.00 | 158708 |
| -1091570.00 | 158713 |
| -1091560.00 | 158719 |
| -1091550.00 | 158724 |
| ... | ... |
| -1079450.00 | 164674 |
| -1079440.00 | 164677 |
| -1079430.00 | 164680 |
| -1079420.00 | 164683 |
A:
Would you consider
finding the interpolation line with ST_LineInterpolatePoints,
measuring the length of the line [in meters] with ST_Length,
dividing it by 10 [m] to find the number of slots,
dividing the line by the number of slots to find the coordinates for each group of points with ST_LineSubstring,
dividing the number of points by the number of slots and assigning them to the coordinates from step 4 (you can find the distance between points and division coordinates with ST_Distance)
Grzegorz
A:
What you want is to create evenly spaced points from interpolation using your original data points.
There are plenty of examples here:
https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.interp1d.html
This is how you can use it; you have to specify max_len:
import numpy as np
from scipy import interpolate

f1 = interpolate.interp1d(x, y)
x_new = np.linspace(x[0], x[-1], max_len)
y_new = f1(x_new)
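Note that interpolating against x alone (as above) spaces the points evenly in x, not along the line. Since the question asks for ~10 m spacing along the path, here is a sketch of arc-length-based resampling, reusing the cumulative-distance idea from the question (assuming x and y are 1-D numpy arrays in metres):
import numpy as np

# cumulative distance along the original polyline
d = np.hstack([0, np.cumsum(np.hypot(np.diff(x), np.diff(y)))])

d_new = np.arange(0, d[-1], 10)   # one sample every 10 m along the path
x_new = np.interp(d_new, d, x)    # interpolate both coordinates against
y_new = np.interp(d_new, d, y)    # the same arc-length parameter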
| Create regularly-spaced vector points from irregular XY geographic data in Python | I have point (vector) coordinates in meters (x and y in 1-D arrays) which are irregularly spaced. I would like to re-sample the points so that they are regularly spaced by 10 m between each set of XY points.
I have managed to regularly re-sample the points in the X direction (see code below), however when trying to use the scipy.interpolate.interp1d function on the Y variable, the points are obviously no longer spaced by 10 m. The code I use is as follows:
f_bot = interpolate.interp1d(x, y)
xnew_bot = np.arange(np.min(x),np.max(x),10) # 10 m spaced
ynew_bot = f_bot(xnew_bot)
plt.plot(x, y, 'o', xnew_bot, ynew_bot,'-')
plt.show()
If one measures the space between each orange point (in say, QGIS) or by doing:
dist_total = np.hstack([0,np.cumsum(np.hypot(np.diff(xnew_bot),np.diff(ynew_bot)))])
diff_dist = np.diff(dist_total) # calculate distance difference between each points
diff_dist will show that points are still irregularly spaced due to the interpolation of the Y points. Another way to see is to compute the distance between each X points, which is exactly 10 m, and compute the same for the Y points which show they are irregular.
Is there a function or approach I could use to make sure both X and Y are spaced regularly? All I need is that each set of X Y points is spaced apart by 10 m, which should be simple to do but I can't find a better way so far! Any help would be appreciated.
Data for X and Y are as follows:
| x | y |
| -------- | -------- |
| -1091590.00 | 158697 |
| -1091580.00 | 158702 |
| -1091580.00 | 158708 |
| -1091570.00 | 158713 |
| -1091560.00 | 158719 |
| -1091550.00 | 158724 |
| ... | ... |
| -1079450.00 | 164674 |
| -1079440.00 | 164677 |
| -1079430.00 | 164680 |
| -1079420.00 | 164683 |
| [
"Would you consider\n\nfinding the interpolation line ST_LineInterpolatePoints\nmeasure the length of the line [in meters], ST_Length\ndivide it by 10 [m] to find number of slots,\ndivide the line by number of slots to find the coordinates for each group of points, ST_LineSubstring\ndivide number of points by number of slots and assign them to coordinates from p. 4 (you can find the distance between points and division coordinates with ST_Distance\n\nRegards,\nGrzegorz\n",
"What you want is to create evenly spaced points from interpolation using your original data points.\nThere's plenty of examples here:\nhttps://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.interp1d.html\nThis is how you can use it, you have to specify max_len\nfrom scipy import interpolate\nf1 = interpolate.interp1d(x, y)\nx_new = np.linspace(x[0], x[-1], max_len)\ny_new = f1(x_new)\n\n"
] | [
0,
0
] | [] | [] | [
"coordinates",
"gis",
"python",
"qgis",
"vector"
] | stackoverflow_0070771182_coordinates_gis_python_qgis_vector.txt |
Q:
How to cut out part of an image based on coordinates of a given circle
I'm trying to cut out part of an image based on the coordinates of some circles; my initial attempt has been to try this:
startx = circle[0]
starty = circle[1]
radius = circle[2]
recImage = cv2.rectangle(image,(startx-radius,starty-radius), (startx+radius,starty+radius), (0,0,255),2)
miniImage = recImage[startx-radius:startx+radius,starty-radius:starty+radius]
circle[0] and circle[1] being the x and y coords of the circle centre and circle[2] being the radius. The recImage should draw the rectangle and then the miniImage should be a smaller image of that rectangle. However they don't line up.
the image of squares to cut out
one of the actual cut outs
I was expecting them to line up as their starting and ending values are identical but they don't. Thanks
A:
There is a mistake in your code. OpenCV images are NumPy arrays indexed as [row (y), column (x)], but your slice puts the x range first. Compute the corner coordinates once, then slice with the y range first when cutting out the mini image.
Here is the updated code:
startx = circle[0]
starty = circle[1]
radius = circle[2]
# Calculate the coordinates of the top left corner of the rectangle
x1 = startx - radius
y1 = starty - radius
x2 = startx + radius
y2 = starty + radius
# Draw the rectangle on the image
recImage = cv2.rectangle(image, (x1, y1), (x2, y2), (0, 0, 255), 2)
# Cut out the mini image from the rectangle
miniImage = recImage[y1:y2, x1:x2]
# Show the mini image
cv2.imshow('Mini Image', miniImage)
cv2.waitKey(0)
cv2.destroyAllWindows()
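One more detail worth noting: cv2.rectangle draws on image in place (recImage is the same array), so the crop will include the red outline. If you want clean cut-outs, copy the image before drawing (a sketch):
clean = image.copy()               # keep an unmodified copy before drawing
cv2.rectangle(image, (x1, y1), (x2, y2), (0, 0, 255), 2)
miniImage = clean[y1:y2, x1:x2]    # crop without the red border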
| How to cut out part of an image based on coordinates of a given circle | I'm trying to cut out part of an image based on the coordinates of some circles; my initial attempt has been to try this:
startx = circle[0]
starty = circle[1]
radius = circle[2]
recImage = cv2.rectangle(image,(startx-radius,starty-radius), (startx+radius,starty+radius), (0,0,255),2)
miniImage = recImage[startx-radius:startx+radius,starty-radius:starty+radius]
circle[0] and circle[1] being the x and y coords of the circle centre and circle[2] being the radius. The recImage should draw the rectangle and then the miniImage should be a smaller image of that rectangle. However they don't line up.
the image of squares to cut out
one of the actual cut outs
I was expecting them to line up as their starting and ending values are identical but they don't. Thanks
| [
"There is a mistake in your code. You are using the coordinates of the center of the circle to draw the rectangle and cut out the mini image. However, the coordinates of the top left corner of the rectangle should be used to draw the rectangle and cut out the mini image.\nHere is the updated code:\nstartx = circle[0]\nstarty = circle[1]\nradius = circle[2]\n\n# Calculate the coordinates of the top left corner of the rectangle\nx1 = startx - radius\ny1 = starty - radius\nx2 = startx + radius\ny2 = starty + radius\n\n# Draw the rectangle on the image\nrecImage = cv2.rectangle(image, (x1, y1), (x2, y2), (0, 0, 255), 2)\n\n# Cut out the mini image from the rectangle\nminiImage = recImage[y1:y2, x1:x2]\n\n# Show the mini image\ncv2.imshow('Mini Image', miniImage)\ncv2.waitKey(0)\ncv2.destroyAllWindows()\n\n"
] | [
0
] | [] | [] | [
"image",
"opencv",
"python"
] | stackoverflow_0074677301_image_opencv_python.txt |
Q:
terminal user interface with python
What is the best TUI module to use with Python? I used prompt-toolkit and couldn't use many things in it, like Layout, and with Textual I can't use a view; there are so many errors.
I want to build a TUI for myself, and I want a well-documented, working TUI or CUI Python module.
A:
There are many TUI libraries available for Python, and the best one for you will depend on your specific needs and preferences. Some popular options include curses, npyscreen, urwid, and blessings. All of these libraries provide basic TUI functionality, such as creating and positioning text and interactive elements on the screen, and handling user input.
If you need more advanced features, such as multiple windows and panels, or rich text formatting, you may want to consider using a library that provides a higher-level interface, such as PyQt or PyGTK. These libraries can be more complex to use, but they offer a wider range of features and more flexibility in terms of design and layout.
Ultimately, the best TUI library for you will depend on the requirements of your specific project, so it's worth experimenting with different options to see which one works best for you.
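For a taste of the standard-library option, a minimal curses sketch (on Windows you would also need the third-party windows-curses package):
import curses

def main(stdscr):
    curses.curs_set(0)                     # hide the cursor
    stdscr.addstr(0, 0, "Hello TUI - press q to quit")
    stdscr.refresh()
    while stdscr.getch() != ord("q"):      # block until 'q' is pressed
        pass

curses.wrapper(main)                       # safe terminal setup/teardown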
A:
An AppSession is an interactive session, usually connected to one terminal. Within one such session, interaction with many applications can happen, one after the other.
The input/output device is not supposed to change during one session.
Warning: Always use the create_app_session function to create an instance, so that it gets activated correctly.
Parameters
input – Use this as a default input for all applications running in this session, unless an input is passed to the Application explicitly.
output – Use this as a default output.
| terminal user interface with python | What is the best TUI module to use with Python? I used prompt-toolkit and couldn't use many things in it, like Layout, and with Textual I can't use a view; there are so many errors.
I want to build a TUI for myself, and I want a well-documented, working TUI or CUI Python module.
| [
"There are many TUI libraries available for Python, and the best one for you will depend on your specific needs and preferences. Some popular options include curses, npyscreen, urwid, and blessings. All of these libraries provide basic TUI functionality, such as creating and positioning text and interactive elements on the screen, and handling user input.\nIf you need more advanced features, such as multiple windows and panels, or rich text formatting, you may want to consider using a library that provides a higher-level interface, such as PyQt or PyGTK. These libraries can be more complex to use, but they offer a wider range of features and more flexibility in terms of design and layout.\nUltimately, the best TUI library for you will depend on the requirements of your specific project, so it's worth experimenting with different options to see which one works best for you.\n",
"An AppSession is an interactive session, usually connected to one terminal. Within one such session, interaction with many applications can happen, one after the other.\nThe input/output device is not supposed to change during one session.\nWarning: Always use the create_app_session function to create an instance, so that it gets activated correctly.\nParameters\ninput – Use this as a default input for all applications running in this session, unless an input is passed to the Application explicitely.\noutput – Use this as a default output.\n"
] | [
0,
0
] | [] | [] | [
"command_line_interface",
"python",
"tui"
] | stackoverflow_0074677311_command_line_interface_python_tui.txt |
Q:
Can't open CSV file even with full file path
I'm using Python 3.5 and I'm having some problems opening a CSV file. I've tried entering the entire path but it still doesn't work, even though the file is clearly in the folder. (My code file is called 'simplecsvtest.py'.)
Here's the code snippet:
import csv
import sys

file = open(r"C:\python35\files\results.csv", 'rt')
try:
    reader = csv.reader(file, delimiter='\t')
    # ... some code here ...
finally:
    file.close()
And here's what PowerShell says:
PS C:\python35\files> python simplecsvtest.py
Traceback (most recent call last):
File "simplecsvtest.py", line 20, in
file = open(r"C:\python35\files\results.csv", 'rt')
FileNotFoundError: [Errno 2] No such file or directory: 'C:\\python35\\files\\results.csv'
Well, I'm very certain that 'results.csv' is in that folder: here's the filepath in Windows Explorer:
C:\Python35\files
(Note: The folder has capital 'P' for Python35, and I've tried having both capitalized and uncapitalized 'P' in the code, neither works)
The CSV file is a "Microsoft Excel Comma Separated Values Files", if that matters, but the extension is still csv. Can anyone tell me what's wrong?
A:
I would suggest creating a folder inside your project folder and then use a relative path :
file = open(r".\files\results.csv", 'rt')
. implies that the path is relative to your current directory
A:
I figured out a work-around solution myself:
Somehow, if I copy all the data from the csv and paste it in a new excel spreadsheet and save it as a csv, it works. I don't know why.
A:
Although the file path is absolute it wouldn't work this way.
You have to use "forward" slashes in the path name instead of "backward" slashes. As
file = open(r"C:/python35/files/results.csv", 'rt')
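For what it's worth, raw strings with backslashes (r"C:\...") are also valid Windows paths, so if open() still fails it is worth checking what the OS actually sees; a quick diagnostic sketch with pathlib (a hidden double extension like results.csv.csv is a common culprit):
from pathlib import Path

p = Path(r"C:\python35\files\results.csv")
print(p.exists())                 # does Python actually see this file?
for f in p.parent.iterdir():      # list the real names, extensions included
    print(repr(f.name))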
| Can't open CSV file even with full file path | I'm using Python 3.5 and I'm having some problems opening a CSV file. I've tried entering the entire path but it still doesn't work, even though the file is clearly in the folder. (My code file is called 'simplecsvtest.py'.)
Here's the code snippet:
import csv
import sys

file = open(r"C:\python35\files\results.csv", 'rt')
try:
    reader = csv.reader(file, delimiter='\t')
    # ... some code here ...
finally:
    file.close()
And here's what PowerShell says:
PS C:\python35\files> python simplecsvtest.py
Traceback (most recent call last):
File "simplecsvtest.py", line 20, in
file = open(r"C:\python35\files\results.csv", 'rt')
FileNotFoundError: [Errno 2] No such file or directory: 'C:\\python35\\files\\results.csv'
Well, I'm very certain that 'results.csv' is in that folder: here's the filepath in Windows Explorer:
C:\Python35\files
(Note: The folder has capital 'P' for Python35, and I've tried having both capitalized and uncapitalized 'P' in the code, neither works)
The CSV file is a "Microsoft Excel Comma Separated Values Files", if that matters, but the extension is still csv. Can anyone tell me what's wrong?
| [
"I would suggest creating a folder inside your project folder and then use a relative path : \nfile = open(r\".\\files\\results.csv\", 'rt') \n. implies that the path is relative to your current directory\n",
"I figured out a work-around solution myself:\nSomehow, if I copy all the data from the csv and paste it in a new excel spreadsheet and save it as a csv, it works. I don't know why.\n",
"Although the file path is absolute it wouldn't work this way.\nYou have to use \"forward\" slashes in the path name instead of \"backward\" slashes. As\nfile = open(r\"C:/python35/files/results.csv\", 'rt')\n\n"
] | [
0,
0,
0
] | [] | [] | [
"csv",
"python"
] | stackoverflow_0044631654_csv_python.txt |
Q:
How to create a functioning multipage streamlit app?
I am creating a web app using Streamlit. I have created a multipage app where the sidebar has a drop-down menu to go to a particular page. I have created a page that allows the user to input a sequence and count the number of characters (for example a DNA sequence and count the number of nucleotide bases). The architecture of my web app is as follows:
Web_App
|
|__app.py (main file)
|__multipage.py
|__home.py
|
|__apps (another folder)
|__nucleotide_eda.py
The problem I am facing right now: when I run Streamlit with the sequence page (nucleotide.py) as the home page (meaning just running the sequence page), the built-in calculations in nucleotide.py run fine (the user submits the query sequence and the A/T/G/C counts are given automatically, along with a bar plot). But when I add nucleotide.py as one page of the multipage app and run my main file (app.py), the built-in functions do not work. I only see the text box area where the user can give the input sequence.
I have given it multiple tries but still, the problem persists. My code looks like this:
app.py (main file):
import streamlit as st
from multiapp import MultiApp
from apps import home,nucleotide_eda
app = MultiApp()
# Add all your applications here
app.add_app("Home", home.app)
app.add_app("Nucleotide EDA", nucleotide_eda.app)
app.run()
Nucleotide.py (sequence file):
import streamlit as st
def app():
st.title('Exploratory Data Analysis of Genomic Sequence')
st.write('Nucleotide EDA allows you to perform Exploratory Data Analysis on any submitted genomic sequence')
st.header("Enter Your DNA Sequence in FASTA Format")
sequence_input = ">DNA\nATGCGCTAGGATACA"
sequence = st.text_area("Sequence Input", sequence_input, height=250)
sequence = sequence.splitlines()
sequence = sequence[1:]
sequence = ''.join(sequence)
st.write('''***''')
st.header("Input Query Sequence")
sequence
st.write('''***''')
st.header("Nucleotide Count")
def nucleotide_count(seq):
d = dict([('A', seq.count("A")),('T',seq.count("T")),('G',seq.count("G")),('C',seq.count("C"))])
return d
X = nucleotide_count(sequence)
X_label = list(X)
X_values = list(X.values())
X
st.write('''***''')
All I see is the text box prompting the user to give input. But once a sequence is submitted nothing happens. I expect that once a user submits the sequence, the web app should first print the input query using:
st.header("Input Query Sequence")
sequence
and then runs the "nucleotide_count" function and gives out the corresponding dictionary.
Any help would be much appreciated
A:
Use st.write().
st.header("Input Query Sequence")
st.write(sequence) # <----------------------------------- this
st.write('''***''')
st.header("Nucleotide Count")
def nucleotide_count(seq):
d = dict([('A', seq.count("A")),('T',seq.count("T")),('G',seq.count("G")),('C',seq.count("C"))])
return d
X = nucleotide_count(sequence)
X_label = list(X)
X_values = list(X.values())
st.write(X) # <----------------------------------- this
st.write('''***''')
Sample output
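As background (per Streamlit's documented behaviour, which is worth verifying for your version): "magic" rendering of a bare expression only happens in the main script, so inside app() in an imported module a line consisting of just sequence is silently ignored. That is why the explicit st.write() calls above are needed. A minimal sketch of the difference:
# nucleotide_eda.py (imported by the main script)
import streamlit as st

def app():
    sequence = "ATGC"
    sequence            # ignored: "magic" only applies to the main script
    st.write(sequence)  # always works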
| How to create a functioning multipage streamlit app? | I am creating a web app using Streamlit. I have created a multipage app where the sidebar has a drop-down menu to go to a particular page. I have created a page that allows the user to input a sequence and count the number of characters (for example a DNA sequence and count the number of nucleotide bases). The architecture of my web app is as follows:
Web_App
|
|__app.py (main file)
|__multipage.py
|__home.py
|
|__apps (another folder)
|__nucleotide_eda.py
The problem I am facing right now: when I run Streamlit with the sequence page (nucleotide.py) as the home page (meaning just running the sequence page), the built-in calculations in nucleotide.py run fine (the user submits the query sequence and the A/T/G/C counts are given automatically, along with a bar plot). But when I add nucleotide.py as one page of the multipage app and run my main file (app.py), the built-in functions do not work. I only see the text box area where the user can give the input sequence.
I have given it multiple tries but still, the problem persists. My code looks like this:
app.py (main file):
import streamlit as st
from multiapp import MultiApp
from apps import home,nucleotide_eda
app = MultiApp()
# Add all your applications here
app.add_app("Home", home.app)
app.add_app("Nucleotide EDA", nucleotide_eda.app)
app.run()
Nucleotide.py (sequence file):
import streamlit as st
def app():
st.title('Exploratory Data Analysis of Genomic Sequence')
st.write('Nucleotide EDA allows you to perform Exploratory Data Analysis on any submitted genomic sequence')
st.header("Enter Your DNA Sequence in FASTA Format")
sequence_input = ">DNA\nATGCGCTAGGATACA"
sequence = st.text_area("Sequence Input", sequence_input, height=250)
sequence = sequence.splitlines()
sequence = sequence[1:]
sequence = ''.join(sequence)
st.write('''***''')
st.header("Input Query Sequence")
sequence
st.write('''***''')
st.header("Nucleotide Count")
def nucleotide_count(seq):
d = dict([('A', seq.count("A")),('T',seq.count("T")),('G',seq.count("G")),('C',seq.count("C"))])
return d
X = nucleotide_count(sequence)
X_label = list(X)
X_values = list(X.values())
X
st.write('''***''')
All I see is the text box prompting the user to give input. But once a sequence is submitted nothing happens. I expect that once a user submits the sequence, the web app should first print the input query using:
st.header("Input Query Sequence")
sequence
and then runs the "nucleotide_count" function and gives out the corresponding dictionary.
Any help would be much appreciated
| [
"Use st.write().\n st.header(\"Input Query Sequence\")\n st.write(sequence) # <----------------------------------- this\n \n st.write('''***''')\n \n st.header(\"Nucleotide Count\") \n def nucleotide_count(seq):\n d = dict([('A', seq.count(\"A\")),('T',seq.count(\"T\")),('G',seq.count(\"G\")),('C',seq.count(\"C\"))])\n return d\n \n X = nucleotide_count(sequence)\n X_label = list(X)\n X_values = list(X.values())\n st.write(X) # <----------------------------------- this\n st.write('''***''')\n\nSample output\n\n"
] | [
0
] | [] | [] | [
"python",
"streamlit",
"web"
] | stackoverflow_0074673627_python_streamlit_web.txt |
Q:
How to remove whitespaces in a string except from between certain elements
I have a string similar to (the below one is simplified):
" word= {his or her} whatever "
I want to delete every whitespace except between {}, so that my modified string will be:
"word={his or her}whatever"
lstrip or rstrip doesn't work, of course. If I delete all whitespace, the whitespace between {} is deleted as well. I tried to look up solutions for limiting the replace function to certain areas, but even when I found something I wasn't able to implement it. There is some material on regex (I am not sure if it is relevant here) but I haven't been able to understand it.
EDIT: If I wanted to except the area between, say {} and "", that is:
if I wanted to turn this string:
" word= {his or her} and "his or her" whatever "
into this:
"word={his or her}and"his or her"whatever"
What would I change
re.sub(r'\s+(?![^{]*})', '', list_name) into?
A:
To solve this problem, you can use regular expressions to find and replace the whitespace characters. In particular, you can use the re.sub function to search for whitespace characters outside of the curly braces and replace them with an empty string.
Here is an example of how you can use re.sub to solve this problem:
import re
# Define the input string
input_str = " word= {his or her} whatever "
# Use a regular expression to search for whitespace characters outside of the curly braces
output_str = re.sub(r'\s+(?![^{]*})', '', input_str)
# Print the result
print(output_str)
This code will print the modified string as follows:
word={his or her}whatever
The regular expression r'\s+(?![^{]*})' matches the whitespace that you want to remove from the string. The negative lookahead assertion ensures that the matched whitespace is not followed by a closing } before any opening {, i.e. that it is not sitting inside a pair of curly brackets, so the whitespace between the curly brackets is not removed. The re.sub function replaces these matches with an empty string, effectively removing the whitespace characters from the input string.
You can use this approach to modify your string and remove the whitespace characters outside of the curly braces.
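Addressing the EDIT in the question, one common pattern (a sketch, not the only way) is to match the regions you want to protect first, {...} or "...", capture them, and let a replacement function put the capture back while whitespace matched elsewhere is dropped:
import re

s = ' word= {his or her} and "his or her" whatever '
out = re.sub(r'(\{[^{}]*\}|"[^"]*")|\s+', lambda m: m.group(1) or '', s)
print(out)  # word={his or her}and"his or her"whatever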
A:
Instead of going around with re, you can replace using string.replace, which is much easier and less complex when you are playing around with strings. Especially when you have multiple substitutions, with regex you end up with a bigger pattern.
st =" word= {his or her} whatever "
st2=""" word= {his or her} and "his or her" whatever """
new = " ".join(st2.split())
new = new.replace("= ", "=").replace("} ", "}").replace('" ' , '"').replace(' "' , '"')
print(new)
Some outputs
Example 1 output
word={his or her}whatever
Example 2 output
word={his or her}and"his or her"whatever
| How to remove whitespaces in a string except from between certain elements | I have a string similar to (the below one is simplified):
" word= {his or her} whatever "
I want to delete every whitespace except between {}, so that my modified string will be:
"word={his or her}whatever"
lstrip or rstrip doesn't work, of course. If I delete all whitespace, the whitespace between {} is deleted as well. I tried to look up solutions for limiting the replace function to certain areas, but even when I found something I wasn't able to implement it. There is some material on regex (I am not sure if it is relevant here) but I haven't been able to understand it.
EDIT: If I wanted to except the area between, say {} and "", that is:
if I wanted to turn this string:
" word= {his or her} and "his or her" whatever "
into this:
"word={his or her}and"his or her"whatever"
What would I change
re.sub(r'\s+(?![^{]*})', '', list_name) into?
| [
"To solve this problem, you can use regular expressions to find and replace the whitespace characters. In particular, you can use the re.sub function to search for whitespace characters outside of the curly braces and replace them with an empty string.\nHere is an example of how you can use re.sub to solve this problem:\nimport re\n\n# Define the input string\ninput_str = \" word= {his or her} whatever \"\n\n# Use a regular expression to search for whitespace characters outside of the curly braces\noutput_str = re.sub(r'\\s+(?![^{]*})', '', input_str)\n\n# Print the result\nprint(output_str)\n\nThis code will print the modified string as follows:\nword={his or her}whatever\n\nThe regular expression r'\\s+(?![^{]*})' matches the whitespace that you want to remove from the string. The negative lookahead assertion ensures that the match is not followed by a string of the form {...}, so that the whitespace between the curly brackets is not removed. The `re.sub function replaces these matches with an empty string, effectively removing the whitespace characters from the input string.\nYou can use this approach to modify your string and remove the whitespace characters outside of the curly braces.\n",
"See instead going arround re you can replace uisng string.replace. Which will be much more easier and less complex when you playing around strings. Espacillay when you have multiple substitutions you end up bigger regex.\nst =\" word= {his or her} whatever \"\nst2=\"\"\" word= {his or her} and \"his or her\" whatever \"\"\"\n\nnew = \" \".join(st2.split())\nnew = new.replace(\"= \", \"=\").replace(\"} \", \"}\").replace('\" ' , '\"').replace(' \"' , '\"')\nprint(new)\n\nSome outputs\nExample 1 output\nword={his or her}whatever\n\nExample 2 output\nword={his or her}and\"his or her\"whatever\n\n"
] | [
1,
1
] | [
"You can use by replace\ndef remove(string):\n return string.replace(\" \", \"\")\n\nstring = 'hell o whatever'\nprint(remove(string)) // Output: hellowhatever\n\n"
] | [
-2
] | [
"python",
"python_3.x",
"removing_whitespace",
"replace",
"string"
] | stackoverflow_0074675792_python_python_3.x_removing_whitespace_replace_string.txt |
Q:
print("Are you a human: "str(human)) what did i do wrong
(https://i.stack.imgur.com/IFtgk.png)
can someone help me plzzzzz
A:
Add '+' between the two values, like print("Are you a human: " + str(human))
A:
If you want to output with print this way, always remember to use + between values, even
when some of them are not strings:
human = True
print("are you a human " + str(human))
| print("Are you a human: "str(human)) what did i do wrong | (https://i.stack.imgur.com/IFtgk.png)
can someone help me plzzzzz
| [
"Add '+' between two variables , like print(\"Are You human\"+str(human))\n",
"if you want to output whit print always remember to use + between variables even\nthere are string\nhuman=True\nprint(\" are ypu a human \"+str(human))\n"
] | [
0,
0
] | [] | [] | [
"boolean",
"python"
] | stackoverflow_0074677299_boolean_python.txt |
Q:
Python (RenPy): Textbuttons executing a function they don't explicitly call every time they're pressed
In the game I'm working on, I use an array to track the current stats of the player's company, and the following function to edit the array.
init python:
#The following store item objects, which include an array of their own stats
#Stores currently owned equipment
Equipment = []
#Stores items available to buy
Items = []
#Stores currently equipped equipment
Equipped = []
#The company's current stats
SecArray = [0, 0, 0, 0, 0, 0]
#Called whenever moving something in or out of the currently equipped items.
#Pass the current item's stat array, the stat array, and a + or - symbol
def arrayedit(array, SecArray, symbol):
Notify("The stats have changed")
if symbol == "+":
SecArray[0] += array[0]
SecArray[1] += array[1]
SecArray[2] += array[2]
SecArray[3] += array[3]
SecArray[4] += array[4]
SecArray[5] += array[5]
if symbol == "-":
SecArray[0] -= array[0]
SecArray[1] -= array[1]
SecArray[2] -= array[2]
SecArray[3] -= array[3]
SecArray[4] -= array[4]
SecArray[5] -= array[5]
return()
If any items are in the "Equipment" array, however, their stats are added to the current stats every time a textbutton is clicked (so, for example, if an item has 3 in a stat, the player's current stats will increase by 3 every time any button is clicked, counting infinitely upward). Similarly, if any items are in the "Equipped" array, their current stats are subtracted from the player's current stats every time a textbutton is clicked. Items in the "Items" array do not have any effect.
The following code is for windows to shop and equip/dequip equipment.
screen shopping1():
frame:
xpos (config.screen_width*25/64) ypos (config.screen_height*11/64)
ysize (config.screen_height*31/64)
xsize (config.screen_width*36/64)
has side "c r b"
viewport:
yadjustment tutorials_adjustment
mousewheel True
vbox:
xpos (config.screen_width*2/5) ypos (config.screen_height*3/16)
ysize (config.screen_height/2)
xsize (config.screen_width/2)
for i in Items:
if i.kind == "Item":
if i.cost <= Money:
textbutton "[i.title] $[i.cost]":
action [AddToSet(Equipment, i), RemoveFromSet(Items, i), Hide("shopping1"), Return(i.cost)]
left_padding 20
xfill True
hovered Notify(i.hover)
else:
null height 10
text i.title alt ""
null height 5
for i in Policies:
if i.kind == "Policy":
if i.cost <= Money:
textbutton "[i.title] $[i.cost]":
action [AddToSet(OwnedPolicies, i), RemoveFromSet(Policies, i), Hide("shopping1"), Return(i.cost)]
left_padding 20
xfill True
hovered Notify(i.hover)
else:
null height 10
text i.title alt ""
null height 5
for i in Trainings:
if i.kind == "Training":
if i.cost <= Money:
textbutton "[i.title] $[i.cost]":
action [AddToSet(OwnedTrainings, i), RemoveFromSet(Trainings, i), Hide("shopping1"), Return(i.cost)]
left_padding 20
xfill True
hovered Notify(i.hover)
else:
null height 10
text i.title alt ""
null height 5
bar adjustment tutorials_adjustment style "vscrollbar"
textbutton _("Return"):
xfill True
action [Hide("shopping1")]
top_margin 10
screen equipmentedit():
frame:
xpos (config.screen_width*5/128) ypos (config.screen_height*2/64)
ysize (config.screen_height*47/64)
xsize (config.screen_width*19/64)
has side "c r b"
viewport:
yadjustment tutorials_adjustment
mousewheel True
vbox:
null height 10
text "Unequipped Items" alt ""
null height 5
for i in Equipment:
if i.kind == "Item":
textbutton "[i.title] $[i.cost]":
action [arrayedit(i.stats, SecArray, "+"), AddToSet(Equipped, i), RemoveFromSet(Equipment, i), Hide("equipmentedit"), Return(i.cost)]
left_padding 20
xfill True
else:
null height 10
text i.title alt ""
null height 5
null height 10
text "Equipped Items" alt ""
null height 5
for i in Equipped:
if i.kind == "Item":
textbutton "[i.title] $[i.cost]":
action [arrayedit(i.stats, SecArray, "-"), AddToSet(Equipment, i), RemoveFromSet(Equipped, i), Hide("equipmentedit"), Return(i.cost)]
left_padding 20
xfill True
else:
null height 10
text i.title alt ""
null height 5
bar adjustment tutorials_adjustment style "vscrollbar"
textbutton _("Return"):
xfill True
action [Hide("equipmentedit")]
top_margin 10
Outside of this, the arrays and functions used are not called or referenced elsewhere in the program. I believe the "arrayedit" function is being called for items in the Equipped and Equipment arrays every time a button is clicked, including the return buttons, but I'm unsure why. Any insight would be greatly appreciated!
A:
I had the same problem and I believe you're falling victim to Renpy's prediction, as demonstrated by this thread on GitHub: https://github.com/renpy/renpy/issues/3718.
Renpy runs through all the code in a screen when it's shown (and multiple other times) including searching any function calls in order to pre-load assets. Usually this shouldn't cause a problem because most of the Renpy calls utilise renpy.restart_interaction() which reverts any changes to variables. Unfortunately it can struggle when it comes to directly calling Python objects.
The solution I used for this was to have the screen element call an intermediate function which included the renpy.restart_interaction() call (for some reason putting the restart directly into my Python object's function was causing other problems).
Something like this:
init python:
def intermediateFunc(funcToCall, obj, arg):
funcToCall(obj, arg)
renpy.restart_interaction()
screen myScreen:
imagebutton:
# ...
action Function(intermediateFunc, MyObj.DoSomething, obj, "an argument")
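This also points at a pitfall in the original screens: writing action [arrayedit(i.stats, SecArray, "+"), ...] calls arrayedit while Ren'Py evaluates (and predicts) the screen, not when the button is clicked. Wrapping the call in Ren'Py's built-in Function action defers it until the button is actually pressed. A sketch adapted to the question's names (untested against the full project):
textbutton "[i.title] $[i.cost]":
    action [Function(arrayedit, i.stats, SecArray, "+"),
            AddToSet(Equipped, i), RemoveFromSet(Equipment, i),
            Hide("equipmentedit"), Return(i.cost)]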
| Python (RenPy): Textbuttons executing a function they don't explicitly call every time they're pressed | In the game I'm working on, I use an array to track the current stats of the player's company, and the following function to edit the array.
init python:
#The following store item objects, which include an array of their own stats
#Stores currently owned equipment
Equipment = []
#Stores items available to buy
Items = []
#Stores currently equipped equipment
Equipped = []
#The company's current stats
SecArray = [0, 0, 0, 0, 0, 0]
#Called whenever moving something in or out of the currently equipped items.
#Pass the current item's stat array, the stat array, and a + or - symbol
def arrayedit(array, SecArray, symbol):
Notify("The stats have changed")
if symbol == "+":
SecArray[0] += array[0]
SecArray[1] += array[1]
SecArray[2] += array[2]
SecArray[3] += array[3]
SecArray[4] += array[4]
SecArray[5] += array[5]
if symbol == "-":
SecArray[0] -= array[0]
SecArray[1] -= array[1]
SecArray[2] -= array[2]
SecArray[3] -= array[3]
SecArray[4] -= array[4]
SecArray[5] -= array[5]
return()
If any items are in the "Equipment" array, however, their stats are added to the current stats every time a textbutton is clicked (so, for example, if an item has 3 in a stat, the player's current stats will increase by 3 every time any button is clicked, counting infinitely upward). Similarly, if any items are in the "Equipped" array, their current stats are subtracted from the player's current stats every time a textbutton is clicked. Items in the "Items" array do not have any effect.
The following code is for windows to shop and equip/dequip equipment.
screen shopping1():
frame:
xpos (config.screen_width*25/64) ypos (config.screen_height*11/64)
ysize (config.screen_height*31/64)
xsize (config.screen_width*36/64)
has side "c r b"
viewport:
yadjustment tutorials_adjustment
mousewheel True
vbox:
xpos (config.screen_width*2/5) ypos (config.screen_height*3/16)
ysize (config.screen_height/2)
xsize (config.screen_width/2)
for i in Items:
if i.kind == "Item":
if i.cost <= Money:
textbutton "[i.title] $[i.cost]":
action [AddToSet(Equipment, i), RemoveFromSet(Items, i), Hide("shopping1"), Return(i.cost)]
left_padding 20
xfill True
hovered Notify(i.hover)
else:
null height 10
text i.title alt ""
null height 5
for i in Policies:
if i.kind == "Policy":
if i.cost <= Money:
textbutton "[i.title] $[i.cost]":
action [AddToSet(OwnedPolicies, i), RemoveFromSet(Policies, i), Hide("shopping1"), Return(i.cost)]
left_padding 20
xfill True
hovered Notify(i.hover)
else:
null height 10
text i.title alt ""
null height 5
for i in Trainings:
if i.kind == "Training":
if i.cost <= Money:
textbutton "[i.title] $[i.cost]":
action [AddToSet(OwnedTrainings, i), RemoveFromSet(Trainings, i), Hide("shopping1"), Return(i.cost)]
left_padding 20
xfill True
hovered Notify(i.hover)
else:
null height 10
text i.title alt ""
null height 5
bar adjustment tutorials_adjustment style "vscrollbar"
textbutton _("Return"):
xfill True
action [Hide("shopping1")]
top_margin 10
screen equipmentedit():
frame:
xpos (config.screen_width*5/128) ypos (config.screen_height*2/64)
ysize (config.screen_height*47/64)
xsize (config.screen_width*19/64)
has side "c r b"
viewport:
yadjustment tutorials_adjustment
mousewheel True
vbox:
null height 10
text "Unequipped Items" alt ""
null height 5
for i in Equipment:
if i.kind == "Item":
textbutton "[i.title] $[i.cost]":
action [arrayedit(i.stats, SecArray, "+"), AddToSet(Equipped, i), RemoveFromSet(Equipment, i), Hide("equipmentedit"), Return(i.cost)]
left_padding 20
xfill True
else:
null height 10
text i.title alt ""
null height 5
null height 10
text "Equipped Items" alt ""
null height 5
for i in Equipped:
if i.kind == "Item":
textbutton "[i.title] $[i.cost]":
action [arrayedit(i.stats, SecArray, "-"), AddToSet(Equipment, i), RemoveFromSet(Equipped, i), Hide("equipmentedit"), Return(i.cost)]
left_padding 20
xfill True
else:
null height 10
text i.title alt ""
null height 5
bar adjustment tutorials_adjustment style "vscrollbar"
textbutton _("Return"):
xfill True
action [Hide("equipmentedit")]
top_margin 10
Outside of this, the arrays and functions used are not called or referenced elsewhere in the program. I believe the "arrayedit" function is being called for items in the Equipped and Equipment arrays every time a button is clicked, including the return buttons, but I'm unsure why. Any insight would be greatly appreciated!
| [
"I had the same problem and I believe you're you're falling victim to Renpy's prediction, as demonstrated by this thread on GitHub: https://github.com/renpy/renpy/issues/3718.\nRenpy runs through all the code in a screen when it's shown (and multiple other times) including searching any function calls in order to pre-load assets. Usually this shouldn't cause a problem because most of the Renpy calls utilise renpy.restart_interaction() which reverts any changes to variables. Unfortunately it can struggle when it comes to directly calling Python objects.\nThe solution I used for this was to have the screen element call an intermediate function which included the renpy.restart_interaction() call (for some reason putting the restart directly into my Python object's function was causing other problems).\nSomething like this:\ninit python:\n def intermediateFunc(funcToCall, obj, arg):\n funcToCall(obj, arg)\n renpy.restart_interaction()\n\nscreen myScreen:\n imagebutton:\n # ...\n action Function(intermediateFunc, MyObj.DoSomething, obj, \"an argument\")\n\n"
] | [
0
] | [] | [] | [
"python",
"renpy"
] | stackoverflow_0059776715_python_renpy.txt |
Q:
How to extract HTML data from pandas dataframe column
I'm trying to extract certain elements from a block of webpages that I've extracted and put into a pandas column. I've tried lxml and can't get it to pull out very specific phrases in text. What is the pythonic way to go?
Tried this:
def scrape_details(s):
results = requests.get(s)
results2 = etree.parse(StringIO(str(results.content)), parser)
return results2.xpath('//p')
df['data'] = df['URL'].map(lambda x: scrape_details(x))
Returned:
0 The Kingsley School https://www.isc.co.uk/schools/england/warwickshire/royal-leamington-spa/the-kingsley-school/ warwickshire [[], [[]], [], [], [[], [], [], []], [], [[]], [[]], [[]], [[], [], []], [[], [<Element a at 0x7fd3c003fec0>, <Element a at 0x7fd3c0035a80>, <Element a at 0x7fd3c003fec0>, <Element a at 0x7fd3c0035a80>, <Element a at 0x7fd3c003fec0>, <Element a at 0x7fd3c0035a80>]], [[]], [[]], [[], []], [[]], [[], []], [[], []], [[], []], [[], []], [[]], [[]], [[]], [[]], [[]], [[]], [], [], []]
A:
What elements are you trying to extract? If it's general HTML elements, you could use the html.parser Python package. I just modified the generic parser class they provide near the bottom of the page a little:
import requests
import pandas as pd
from html.parser import HTMLParser
from html.entities import name2codepoint
class MyHTMLParser(HTMLParser):
def __init__(self):
self.parsed_html = []
super().__init__()
def handle_starttag(self, tag, attrs):
self.parsed_html.append(f"Start tag: {tag}")
for attr in attrs:
self.parsed_html.append(f" attr: {attr}")
def handle_endtag(self, tag):
self.parsed_html.append(f"End tag : {tag}")
def handle_data(self, data):
self.parsed_html.append(f"Data : {data.strip()}")
def handle_comment(self, data):
self.parsed_html.append(f"Comment : {data.strip()}")
def handle_entityref(self, name):
c = chr(name2codepoint[name])
self.parsed_html.append(f"Named ent: {c}")
def handle_charref(self, name):
if name.startswith('x'):
c = chr(int(name[1:], 16))
else:
c = chr(int(name))
self.parsed_html.append(f"Num ent : {c}")
def handle_decl(self, data):
self.parsed_html.append(f"Decl : {data.strip()}")
def scrape_details(url):
parser = MyHTMLParser()
html_text = requests.get(url).text
parser.feed(html_text)
return parser.parsed_html
df = pd.DataFrame(
{'URL': ['https://www.isc.co.uk/schools/england/warwickshire/royal-leamington-spa/the-kingsley-school/']})
df['data'] = df['URL'].map(lambda url: scrape_details(url))
for data in df['data']:
print('\n'.join(data))
some of result:
Data :
Decl : DOCTYPE html
Data :
Start tag: html
attr: ('lang', 'en')
Data :
Start tag: head
Data :
Start tag: meta
attr: ('charset', 'UTF-8')
End tag : meta
Data :
Start tag: title
Data : The Kingsley School, Royal Leamington Spa - ISC
End tag : title
Data :
Start tag: meta
attr: ('name', 'robots')
attr: ('content', 'index, follow')
End tag : meta
Data :
Start tag: link
attr: ('rel', 'canonical')
attr: ('href', '/schools/england/warwickshire/royal-leamington-spa/the-kingsley-school/')
End tag : link
Data :
Start tag: meta
attr: ('name', 'description')
attr: ('content', 'Information about Royal Leamington Spa, The Kingsley School')
End tag : meta
Data :
Start tag: meta
attr: ('name', 'viewport')
attr: ('content', 'width=device-width, initial-scale=1.0')
Data :
Start tag: meta
attr: ('name', 'format-detection')
attr: ('content', 'telephone-no')
End tag : meta
Data :
Comment : Google Tag Manager
Data :
Start tag: script
Data : (function(w,d,s,l,i){w[l]=w[l]||[];w[l].push({'gtm.start':
new Date().getTime(),event:'gtm.js'});var f=d.getElementsByTagName(s)[0],
j=d.createElement(s),dl=l!='dataLayer'?'&l='+l:'';j.async=true;j.src=
'https://www.googletagmanager.com/gtm.js?id='+i+dl;f.parentNode.insertBefore(j,f);
})(window,document,'script','dataLayer','GTM-TCCJT7P');
End tag : script
Data :
Comment : End Google Tag Manager
Data :
Start tag: script
attr: ('src', '/bundles/upper?v=C-4SuuHJ23ObmSV6TVgJvlUZTwUHbE9Mit8bUEVmwIo1')
End tag : script
Data :
Start tag: link
attr: ('href', '/bundles/site?v=_nmQ8YRQiebaCD8eMIdpKjF42e4TM9br-6hkul2424o1')
attr: ('rel', 'stylesheet')
End tag : link
Data :
Start tag: link
attr: ('rel', 'apple-touch-icon')
attr: ('sizes', '180x180')
attr: ('href', '/favicon/isc/apple-touch-icon.png')
Data :
Start tag: link
attr: ('rel', 'icon')
attr: ('type', 'image/png')
attr: ('sizes', '32x32')
attr: ('href', '/favicon/isc/favicon-32x32.png')
Data :
Start tag: link
attr: ('rel', 'icon')
attr: ('type', 'image/png')
attr: ('sizes', '16x16')
attr: ('href', '/favicon/isc/favicon-16x16.png')
Data :
Start tag: link
attr: ('rel', 'manifest')
attr: ('href', '/favicon/isc/site.webmanifest')
Data :
Start tag: link
attr: ('rel', 'mask-icon')
attr: ('href', '/favicon/isc/safari-pinned-tab.svg')
attr: ('color', '#5bbad5')
Data :
Start tag: meta
attr: ('name', 'msapplication-TileColor')
attr: ('content', '#ffc40d')
Data :
Start tag: meta
attr: ('name', 'msapplication-config')
attr: ('content', '/favicon/isc/browserconfig.xml')
Data :
Start tag: meta
attr: ('name', 'theme-color')
attr: ('content', '#ffffff')
Data :
Start tag: script
Data : var aId = 0; // mz values
var sId = 0;
var mID = 0;
End tag : script
Data :
Start tag: script
attr: ('src', 'https://www.google.com/recaptcha/api.js')
End tag : script
Data :
Start tag: link
attr: ('href', 'https://maxcdn.bootstrapcdn.com/font-awesome/4.7.0/css/font-awesome.min.css')
attr: ('rel', 'stylesheet')
attr: ('integrity', 'sha384-wvfXpqpZZVQGK6TAh5PVlGOfQNHSoD2xbE+QkPxCAFlNEevoEH3Sl0sibVcOQVnN')
attr: ('crossorigin', 'anonymous')
Data :
Start tag: script
attr: ('type', 'application/ld+json')
Data : {
"@context": "https://schema.org",
"@type": "School",
"name": "The Kingsley School",
"url": "https://www.thekingsleyschool.co.uk/",
"description": "Kingsley provides a friendly, supportive family environment where individuals are encouraged to enjoy learning, and to aim for and achieve
excellence. Our principal aim is to enable all our pupils to fulfil their potential wherever it may lie, through our wide range of subjects, lively extra-c
urricular activities, small classes and highly effective pastoral care.\r\n",
"address": {
"@type": "PostalAddress",
"addressCountry": "England",
"addressLocality": "Royal Leamington Spa",
"streetAddress": "Beauchamp Avenue",
"postalCode": "CV32 5RD",
"telephone": "+44 (0)1926 425127"
},
"geo": {
"@type": "GeoCoordinates",
"latitude": 52.2942769,
"longitude": -1.5379113
},
"email": "[email protected]"
}
End tag : script
UPDATE:
specific terms I want to extract:
Head's name: Mr James Murphy-O'Connor (Principal)
ISC associations: HMC, IAPS, GSA, AGBIS, ISBA
Religious affiliation: Church in Wales
Day/boarding type: Day and Full Boarding
Gender profile: Boys and girls taught separately (diamond structure)
Size: 1236
Boys - age range & pupil numbers:
Day: 3 to 18 (522)
Boarding: 7 to 18 (150)
Sixth form: (156)
Girls - age range & pupil numbers:
Day: 3 to 18 (456)
Boarding: 7 to 18 (108)
Sixth form: (104)
...
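For the labelled fields listed in the UPDATE, a parser-agnostic sketch (it assumes the page renders these fields as plain "Label: value" text, which the excerpt suggests; the real page structure may differ):
import requests
from bs4 import BeautifulSoup

url = "https://www.isc.co.uk/schools/england/warwickshire/royal-leamington-spa/the-kingsley-school/"
soup = BeautifulSoup(requests.get(url).content, "lxml")

labels = ["Head's name", "ISC associations", "Religious affiliation",
          "Day/boarding type", "Gender profile", "Size"]
details = {}
for line in soup.get_text("\n").splitlines():
    line = line.strip()
    for label in labels:
        if line.startswith(label + ":"):
            details[label] = line.split(":", 1)[1].strip()
print(details)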
| How to extract HTML data from pandas dataframe column | I'm trying to extract certain elements from a block of webpages that I've extracted and put into a pandas column. I've tried lxml and can't get it to pull out very specific phrases in text. What is the pythonic way to go?
Tried this:
def scrape_details(s):
results = requests.get(s)
results2 = etree.parse(StringIO(str(results.content)), parser)
return results2.xpath('//p')
df['data'] = df['URL'].map(lambda x: scrape_details(x))
Returned:
0 The Kingsley School https://www.isc.co.uk/schools/england/warwickshire/royal-leamington-spa/the-kingsley-school/ warwickshire [[], [[]], [], [], [[], [], [], []], [], [[]], [[]], [[]], [[], [], []], [[], [<Element a at 0x7fd3c003fec0>, <Element a at 0x7fd3c0035a80>, <Element a at 0x7fd3c003fec0>, <Element a at 0x7fd3c0035a80>, <Element a at 0x7fd3c003fec0>, <Element a at 0x7fd3c0035a80>]], [[]], [[]], [[], []], [[]], [[], []], [[], []], [[], []], [[], []], [[]], [[]], [[]], [[]], [[]], [[]], [], [], []]
| [
"What elements are you trying to extract? If its general HTML elements you could use the html.parser python package. I just modified the generic parser class they provide near the bottom of the page a lil:\nimport requests\nimport pandas as pd\n\n\n\n\n\n\n\n\nfrom html.parser import HTMLParser\nfrom html.entities import name2codepoint\n\n\nclass MyHTMLParser(HTMLParser):\n def __init__(self):\n self.parsed_html = []\n super().__init__()\n\n def handle_starttag(self, tag, attrs):\n self.parsed_html.append(f\"Start tag: {tag}\")\n for attr in attrs:\n self.parsed_html.append(f\" attr: {attr}\")\n\n def handle_endtag(self, tag):\n self.parsed_html.append(f\"End tag : {tag}\")\n\n def handle_data(self, data):\n self.parsed_html.append(f\"Data : {data.strip()}\")\n\n def handle_comment(self, data):\n self.parsed_html.append(f\"Comment : {data.strip()}\")\n\n def handle_entityref(self, name):\n c = chr(name2codepoint[name])\n self.parsed_html.append(f\"Named ent: {c}\")\n\n def handle_charref(self, name):\n if name.startswith('x'):\n c = chr(int(name[1:], 16))\n else:\n c = chr(int(name))\n self.parsed_html.append(f\"Num ent : {c}\")\n\n def handle_decl(self, data):\n self.parsed_html.append(f\"Decl : {data.strip()}\")\n\n\ndef scrape_details(url):\n parser = MyHTMLParser()\n html_text = requests.get(url).text\n parser.feed(html_text)\n return parser.parsed_html\n\n\ndf = pd.DataFrame(\n {'URL': ['https://www.isc.co.uk/schools/england/warwickshire/royal-leamington-spa/the-kingsley-school/']})\ndf['data'] = df['URL'].map(lambda url: scrape_details(url))\n\nfor data in df['data']:\n print('\\n'.join(data))\n\nsome of result:\nData : \nDecl : DOCTYPE html\nData :\nStart tag: html\n attr: ('lang', 'en')\nData :\nStart tag: head\nData :\nStart tag: meta\n attr: ('charset', 'UTF-8')\nEnd tag : meta\nData :\nStart tag: title\nData : The Kingsley School, Royal Leamington Spa - ISC\nEnd tag : title\nData :\nStart tag: meta\n attr: ('name', 'robots')\n attr: ('content', 'index, follow')\nEnd tag : meta\nData :\nStart tag: link\n attr: ('rel', 'canonical')\n attr: ('href', '/schools/england/warwickshire/royal-leamington-spa/the-kingsley-school/')\nEnd tag : link\nData :\nStart tag: meta\n attr: ('name', 'description')\n attr: ('content', 'Information about Royal Leamington Spa, The Kingsley School')\nEnd tag : meta\nData :\nStart tag: meta\n attr: ('name', 'viewport')\n attr: ('content', 'width=device-width, initial-scale=1.0')\nData :\nStart tag: meta\n attr: ('name', 'format-detection')\n attr: ('content', 'telephone-no')\nEnd tag : meta\nData :\nComment : Google Tag Manager\nData :\nStart tag: script\nData : (function(w,d,s,l,i){w[l]=w[l]||[];w[l].push({'gtm.start':\n new Date().getTime(),event:'gtm.js'});var f=d.getElementsByTagName(s)[0],\n j=d.createElement(s),dl=l!='dataLayer'?'&l='+l:'';j.async=true;j.src=\n 'https://www.googletagmanager.com/gtm.js?id='+i+dl;f.parentNode.insertBefore(j,f);\n })(window,document,'script','dataLayer','GTM-TCCJT7P');\nEnd tag : script\nData :\nComment : End Google Tag Manager\nData :\nStart tag: script\n attr: ('src', '/bundles/upper?v=C-4SuuHJ23ObmSV6TVgJvlUZTwUHbE9Mit8bUEVmwIo1')\nEnd tag : script\nData :\nStart tag: link\n attr: ('href', '/bundles/site?v=_nmQ8YRQiebaCD8eMIdpKjF42e4TM9br-6hkul2424o1')\n attr: ('rel', 'stylesheet')\nEnd tag : link\nData :\nStart tag: link\n attr: ('rel', 'apple-touch-icon')\n attr: ('sizes', '180x180')\n attr: ('href', '/favicon/isc/apple-touch-icon.png')\nData :\nStart tag: link\n attr: ('rel', 'icon')\n attr: ('type', 
'image/png')\n attr: ('sizes', '32x32')\n attr: ('href', '/favicon/isc/favicon-32x32.png')\nData :\nStart tag: link\n attr: ('rel', 'icon')\n attr: ('type', 'image/png')\n attr: ('sizes', '16x16')\n attr: ('href', '/favicon/isc/favicon-16x16.png')\nData :\nStart tag: link\n attr: ('rel', 'manifest')\n attr: ('href', '/favicon/isc/site.webmanifest')\nData :\nStart tag: link\n attr: ('rel', 'mask-icon')\n attr: ('href', '/favicon/isc/safari-pinned-tab.svg')\n attr: ('color', '#5bbad5')\nData :\nStart tag: meta\n attr: ('name', 'msapplication-TileColor')\n attr: ('content', '#ffc40d')\nData :\nStart tag: meta\n attr: ('name', 'msapplication-config')\n attr: ('content', '/favicon/isc/browserconfig.xml')\nData :\nStart tag: meta\n attr: ('name', 'theme-color')\n attr: ('content', '#ffffff')\nData :\nStart tag: script\nData : var aId = 0; // mz values\n var sId = 0;\n var mID = 0;\nEnd tag : script\nData :\nStart tag: script\n attr: ('src', 'https://www.google.com/recaptcha/api.js')\nEnd tag : script\nData :\nStart tag: link\n attr: ('href', 'https://maxcdn.bootstrapcdn.com/font-awesome/4.7.0/css/font-awesome.min.css')\n attr: ('rel', 'stylesheet')\n attr: ('integrity', 'sha384-wvfXpqpZZVQGK6TAh5PVlGOfQNHSoD2xbE+QkPxCAFlNEevoEH3Sl0sibVcOQVnN')\n attr: ('crossorigin', 'anonymous')\nData :\nStart tag: script\n attr: ('type', 'application/ld+json')\nData : {\n \"@context\": \"https://schema.org\",\n \"@type\": \"School\",\n \"name\": \"The Kingsley School\",\n \"url\": \"https://www.thekingsleyschool.co.uk/\",\n \"description\": \"Kingsley provides a friendly, supportive family environment where individuals are encouraged to enjoy learning, and to aim for and achieve\n excellence. Our principal aim is to enable all our pupils to fulfil their potential wherever it may lie, through our wide range of subjects, lively extra-c\nurricular activities, small classes and highly effective pastoral care.\\r\\n\",\n \"address\": {\n \"@type\": \"PostalAddress\",\n \"addressCountry\": \"England\",\n \"addressLocality\": \"Royal Leamington Spa\",\n \"streetAddress\": \"Beauchamp Avenue\",\n \"postalCode\": \"CV32 5RD\",\n \"telephone\": \"+44 (0)1926 425127\"\n },\n \"geo\": {\n \"@type\": \"GeoCoordinates\",\n \"latitude\": 52.2942769,\n \"longitude\": -1.5379113\n },\n \"email\": \"[email protected]\"\n}\nEnd tag : script\n\nUPDATE:\n\nspecific terms I want to extract:\n\nHead's name: Mr James Murphy-O'Connor (Principal)\nISC associations: HMC, IAPS, GSA, AGBIS, ISBA\n\nReligious affiliation: Church in Wales\n\nDay/boarding type: Day and Full Boarding\n\nGender profile: Boys and girls taught separately (diamond structure)\nSize: 1236\n\nBoys - age range & pupil numbers:\n\nDay: 3 to 18 (522)\n\nBoarding: 7 to 18 (150)\n\nSixth form: (156)\n\nGirls - age range & pupil numbers:\n\nDay: 3 to 18 (456)\n\nBoarding: 7 to 18 (108)\n\nSixth form: (104)\n\n...\n\n"
] | [
0
] | [] | [] | [
"pandas",
"python"
] | stackoverflow_0074675148_pandas_python.txt |
Q:
I have a problem with my python Minecraft copy
I was working with "Ursina Engine"
My project is to make a copy of Minecraft, but I found a problem: every time I run the program
and want to right-click to place a block, nothing happens.
Thanks to anyone who can help me find the issue and tell me how to fix it. Here is my code:
from ursina import *
from ursina.prefabs.first_person_controller import FirstPersonController
class Vovel(Button):
def __init__(self, position = (0,0,0)):
super().__init__(
parent=scene,
position=position,
model='cube',
origin_y = 0.5,
texture= 'white_cube',
color= color.white,
highlight_color = color.lime,
)
def Input(self, key):
if self.hovered:
if key == 'left mouse down':
vovel = Vovel(position= self.position + mouse.normal)
if key == 'right mouse down':
destroy(self)
app = Ursina()
for z in range(8):
for x in range(8):
vovel = Vovel(position= (x,0,z))
player = FirstPersonController()
app.run()
End.
A:
The name of the input function is wrong: Input should be lowercase input.
A:
The input function should be input and not Input; the rest of the code is absolutely correct. So, your code should be:
from ursina import *
from ursina.prefabs.first_person_controller import FirstPersonController
class Vovel(Button):
def __init__(self, position=(0, 0, 0)):
super().__init__(
parent=scene,
position=position,
model='cube',
origin_y=0.5,
texture='white_cube',
color=color.white,
highlight_color=color.lime,
)
def input(self, key):
if self.hovered:
if key == 'left mouse down':
vovel = Vovel(position=self.position + mouse.normal)
if key == 'right mouse down':
destroy(self)
app = Ursina()
for z in range(8):
for x in range(8):
vovel = Vovel(position=(x, 0, z))
player = FirstPersonController()
app.run()
This code works, you can place a block with left click and remove a block with right click!
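Why the capitalisation matters (assuming current Ursina behaviour): the engine dispatches events by looking up a method literally named input on every entity, so a method named Input is never called automatically. A minimal sketch:
from ursina import *

app = Ursina()

class Demo(Entity):
    def input(self, key):    # called by Ursina's event loop
        print('got key:', key)

    def Input(self, key):    # never called automatically
        print('this never runs')

Demo()
app.run()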
A:
You only have to swap left mouse down and right mouse down in the question's code, but I'm using this code for "Minecraft":
from ursina.prefabs.first_person_controller import *

app = Ursina()
FirstPersonController()
Sky()

def voxel(position: Vec3):
    Voxel = Entity(model="assets/block.obj", position=position, collider="box", texture="assets/sand_block.jpg", origin_y=0.5, scale=0.5, on_click=lambda: destroy(Voxel))

for x in range(20):
    for z in range(20):
        voxel(position=Vec3(x, 0, z))

def input(key):
    if key == "right mouse down":
        vox = voxel(position=Vec3(round(mouse.world_point.x), ceil(mouse.world_point.y), round(mouse.world_point.z)))

app.run()
| I have a problem with my python Minecraft copy | I was working with "Ursina Engine"
My project is to make a copy of Minecraft, but I found a problem: every time I run the program
and want to right-click to place a block, nothing happens.
Thanks to anyone who can help me find the issue and tell me how to fix it. Here is my code:
from ursina import *
from ursina.prefabs.first_person_controller import FirstPersonController
class Vovel(Button):
def __init__(self, position = (0,0,0)):
super().__init__(
parent=scene,
position=position,
model='cube',
origin_y = 0.5,
texture= 'white_cube',
color= color.white,
highlight_color = color.lime,
)
def Input(self, key):
if self.hovered:
if key == 'left mouse down':
vovel = Vovel(position= self.position + mouse.normal)
if key == 'right mouse down':
destroy(self)
app = Ursina()
for z in range(8):
for x in range(8):
vovel = Vovel(position= (x,0,z))
player = FirstPersonController()
app.run()
End.
| [
"The name of the input function is wrong. Input should be input\n",
"The input function should be input and not Input, rest of the code is absolutely correct. So, your code should be:\nfrom ursina import *\nfrom ursina.prefabs.first_person_controller import FirstPersonController\n\n\nclass Vovel(Button):\ndef __init__(self, position=(0, 0, 0)):\n super().__init__(\n parent=scene,\n position=position,\n model='cube',\n origin_y=0.5,\n texture='white_cube',\n color=color.white,\n highlight_color=color.lime,\n )\n\ndef input(self, key):\n if self.hovered:\n if key == 'left mouse down':\n vovel = Vovel(position=self.position + mouse.normal)\n if key == 'right mouse down':\n destroy(self)\n\n\napp = Ursina()\nfor z in range(8):\nfor x in range(8):\n vovel = Vovel(position=(x, 0, z))\nplayer = FirstPersonController()\napp.run()\n\nThis code works, you can place a block with left click and remove a block with right click!\n",
"You only have to replace left mouse down with right mouse down and right mouse down with left mouse down, but I'm using this code for \"Minecraft\":\n`\nfrom ursina.prefabs.first_person_controller import *\napp=Ursina()\nFirstPersonController()\nSky()\ndef voxel(position:Vec3):\n Voxel=Entity(model=\"assets/block.obj\", position=position, collider=\"box\", texture=\"assets/sand_block.jpg\",origin_y=0.5,scale=0.5,on_click=lambda:destroy(Voxel))\nfor x in range(20):\n for z in range(20):\n voxel(position=Vec3(x,0,z))\ndef input(key):\n if key==\"right mouse down\":\n vox=voxel(position=Vec3(round(mouse.world_point.x),ceil(mouse.world_point.y),round(mouse.world_point.z)))\napp.run()`\n\n"
] | [
6,
4,
0
] | [
"Ah now I'm understanding your problem, you have to change input to Input, the rest is fine.\n"
] | [
-1
] | [
"python",
"ursina",
"user_interface"
] | stackoverflow_0069450738_python_ursina_user_interface.txt |
Q:
Slash commands don't disappear (nextcord)
I'm developing a discord bot using nextcord.
When I register a slash command and delete it later, it still stays in the Discord command list.
What can I do to delete non-existent slash commands, or sync the bot's actual command list with Discord?
P.S. All of my commands are in different cogs.
I waited about 4 hours for Discord to register and sync the slash commands, but to no avail.
A:
You should try kicking your bot and then inviting it back to your server. If this doesn't work, regenerate your bot's token. This should sync it back with Discord, and the command should be gone.
A:
The solution is adding all servers to the default_guild_ids variable.
Use methods on_ready and on_guild_join
Example code:
class Bot(nextcord.ext.commands.Bot):
async def on_ready(self):
for guild in self.guilds:
self.default_guild_ids.append(guild.id)
async def on_guild_join(self, guild: nextcord.Guild):
self.default_guild_ids.append(guild.id)
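A related detail (assuming nextcord 2.x semantics; the guild id below is hypothetical): guild-scoped commands sync almost immediately, while global commands can take up to an hour to register or disappear, so pinning commands to a test guild during development avoids most of this lag:
import nextcord
from nextcord.ext import commands

TEST_GUILD_ID = 123456789  # hypothetical test-server id

bot = commands.Bot()

@bot.slash_command(guild_ids=[TEST_GUILD_ID], description="Ping!")
async def ping(interaction: nextcord.Interaction):
    await interaction.response.send_message("pong")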
| Slash commands don't disappear (nextcord) | I'm developing a discord bot using nextcord.
When I register a slash command and delete it later, it still stays in the Discord command list.
What can I do to delete non-existent slash commands, or sync the bot's actual command list with Discord?
P.S. All of my commands are in different cogs.
I waited about 4 hours for Discord to register and sync the slash commands, but to no avail.
| [
"You should try kicking your bot and then inviting it back to your server. If this doesn't work, regenerate your bot's token. This should sync it back with Discord, and the command should be gone.\n",
"The solution is adding all servers in default_guild_ids variable\nUse methods on_ready and on_guild_join\nExample code:\nclass Bot(nextcord.ext.commands.Bot):\n async def on_ready(self):\n for guild in self.guilds:\n self.default_guild_ids.append(guild.id)\n\n async def on_guild_join(self, guild: nextcord.Guild):\n self.default_guild_ids.append(guild.id)\n\n"
] | [
0,
0
] | [] | [] | [
"discord",
"nextcord",
"python"
] | stackoverflow_0074582360_discord_nextcord_python.txt |
Q:
Pyspark MapReduce - how to get number occurrences in a list of tuple
I have a list like:
A 2022-08-13
B 2022-08-14
B 2022-08-13
A 2022-05-04
B 2022-05-04
C 2022-08-14
...
and I applied the following map function to map each row to an initial count of 1:
map(lambda x: ((x.split(',')[0], x.split(',')[1]), 1))
To get this:
[
(('A', '2022-08-13'), 1),
(('B', '2022-08-14'), 1),
(('B', '2022-08-13'), 1),
(('A', '2022-05-04'), 1),
(('B', '2022-05-04'), 1),
(('C', '2022-08-14'), 1),
...
]
My end goal is to find the number of occurrences where two persons (denoted by the letter) have the same dates, to output something like this for the example above:
[
('A', 'B', 2),
('B', 'C', 1),
...
]
This is my code so far, but the reduceByKey is not working as expected:
shifts_mapped = worker_shifts.map(lambda x: (x.split(',')[1], 1))
shifts_mapped = worker_shifts.map(lambda x: ((x.split(',')[0], x.split(',')[1]), 1))
count = shifts_mapped.reduceByKey(lambda x, y: x[0][1] + y[0][1])
A:
Group by multiple times, first by "person", "date" and then by "date", "count" and collect persons with same date and count.
Then generate pair combinations, explode, and separate each pair.
I extended your sample dataset to include persons "D" & "E" same as "A" & "B" to generate more combinations.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

df = spark.createDataFrame(data=[["A","2022-08-13"],["E","2022-08-13"],["D","2022-08-13"],["B","2022-08-14"],["B","2022-08-13"],["D","2022-05-04"],["E","2022-05-04"],["A","2022-05-04"],["B","2022-05-04"],["C","2022-08-14"]], schema=["person", "date"])
df = df.groupBy("person", "date").count()
df = df.groupBy("date", "count") \
.agg(F.collect_list("person").alias("persons"))
@F.udf(returnType="array<struct<col1:string, col2:string>>")
def combinations(arr):
import itertools
return list(itertools.combinations(sorted(arr), 2))
df = df.withColumn("persons", combinations("persons"))
df = df.withColumn("persons", F.explode("persons"))
df = df.withColumn("person_1", F.col("persons").getField("col1")) \
.withColumn("person_2", F.col("persons").getField("col2"))
df = df.groupBy("person_1", "person_2").count()
Output:
+--------+--------+-----+
|person_1|person_2|count|
+--------+--------+-----+
|B |C |1 |
|D |E |2 |
|A |E |2 |
|A |D |2 |
|B |D |2 |
|A |B |2 |
|B |E |2 |
+--------+--------+-----+
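As for why the original reduceByKey failed (a sketch of the fix, assuming worker_shifts holds comma-separated lines as in the question): the lambda in reduceByKey receives two values for the same key, here the 1s, not (key, value) pairs, so indexing x[0][1] is wrong and plain addition is enough:
shifts_mapped = worker_shifts.map(lambda x: ((x.split(',')[0], x.split(',')[1]), 1))
count = shifts_mapped.reduceByKey(lambda x, y: x + y)  # values are the counts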
| Pyspark MapReduce - how to get number occurrences in a list of tuple | I have a list like:
A 2022-08-13
B 2022-08-14
B 2022-08-13
A 2022-05-04
B 2022-05-04
C 2022-08-14
...
and I applied the following map function to map each row to an initial count of 1:
map(lambda x: ((x.split(',')[0], x.split(',')[1]), 1))
To get this:
[
(('A', '2022-08-13'), 1),
(('B', '2022-08-14'), 1),
(('B', '2022-08-13'), 1),
(('A', '2022-05-04'), 1),
(('B', '2022-05-04'), 1),
(('C', '2022-08-14'), 1),
...
]
My end goal is to find the number of occurrences where two persons (denoted by the letter) have the same dates, to output something like this for the example above:
[
('A', 'B', 2),
('B', 'C', 1),
...
]
This is my code so far, but the reduceByKey is not working as expected:
shifts_mapped = worker_shifts.map(lambda x: (x.split(',')[1], 1))
shifts_mapped = worker_shifts.map(lambda x: ((x.split(',')[0], x.split(',')[1]), 1))
count = shifts_mapped.reduceByKey(lambda x, y: x[0][1] + y[0][1])
| [
"Group by multiple times, first by \"person\", \"date\" and then by \"date\", \"count\" and collect persons with same date and count.\nThen generate pair combinations, explode, and separate pair.\nI extended your sample dataset to include persons \"D\" & \"E\" same as \"A\" & \"B\" to generate more combinations.\ndf = spark.createDataFrame(data=[[\"A\",\"2022-08-13\"],[\"E\",\"2022-08-13\"],[\"D\",\"2022-08-13\"],[\"B\",\"2022-08-14\"],[\"B\",\"2022-08-13\"],[\"D\",\"2022-05-04\"],[\"E\",\"2022-05-04\"],[\"A\",\"2022-05-04\"],[\"B\",\"2022-05-04\"],[\"C\",\"2022-08-14\"]], schema=[\"person\", \"date\"])\n\ndf = df.groupBy(\"person\", \"date\").count()\n\ndf = df.groupBy(\"date\", \"count\") \\\n .agg(F.collect_list(\"person\").alias(\"persons\"))\n\[email protected](returnType=\"array<struct<col1:string, col2:string>>\")\ndef combinations(arr): \n import itertools\n return list(itertools.combinations(sorted(arr), 2))\n\ndf = df.withColumn(\"persons\", combinations(\"persons\"))\n\ndf = df.withColumn(\"persons\", F.explode(\"persons\"))\n\ndf = df.withColumn(\"person_1\", F.col(\"persons\").getField(\"col1\")) \\\n .withColumn(\"person_2\", F.col(\"persons\").getField(\"col2\"))\n\ndf = df.groupBy(\"person_1\", \"person_2\").count()\n\nOutput:\n+--------+--------+-----+\n|person_1|person_2|count|\n+--------+--------+-----+\n|B |C |1 |\n|D |E |2 |\n|A |E |2 |\n|A |D |2 |\n|B |D |2 |\n|A |B |2 |\n|B |E |2 |\n+--------+--------+-----+\n\n"
] | [
0
] | [] | [] | [
"apache_spark",
"pyspark",
"python"
] | stackoverflow_0074663401_apache_spark_pyspark_python.txt |
Q:
Ebay Scraping, filter out international results, select parents that do not have specific descendants
EDIT:
I have solved this thanks to @Driftr95
Here is the working code:
import xlwings as xw
from bs4 import BeautifulSoup
import requests
import statistics
@xw.func
def get_prices(url,args =[]):
url = requests.get(url).content
soup = BeautifulSoup(url,'lxml')
products = []
rsecSel = 'li:not(.srp-river-answer--REWRITE_START ~ li)'
iDetSel = f'div[id="srp-river-results"] {rsecSel} div.s-item__details'
results = soup.select(f'{iDetSel}:not(:has(span.s-item__location))')
for item in results:
price = item.find('span', class_='s-item__price').text.replace('$', '').replace(',', '')
if 'to' not in price:
price = float(price)
products.append(price)
mean = round(statistics.mean(products), 2)
median = round(statistics.median(products), 2)
return mean, median
I now have a working function in Excel that automatically looks up sold prices on eBay, and I can iterate it over a large number of products instantly!
I have some code that I put together to scrape ebay sold prices using BeautifulSoup and so far it is working pretty good. The only issue I currently have is that it also pulls prices for 2 categories that ebay adds to the search results page (International Sellers, and results matching fewer words)
I am struggling to filter these out. It's like I need to identify whether the listing (the parent) contains a specific descendant and then filter out that parent. I hope that is clear; here is a sample:
https://www.ebay.com/sch/57988/i.html?_from=R40&_nkw=Fjallraven%20nuuk%20parka&LH_Complete=1&LH_Sold=1&_udlo=50&_udhi=600&LH_PrefLoc=1
The picture is an example of an item that I would like to filter out. It has a Span Class for location. This class only exists if the item is from an international seller.
import xlwings as xw
import bs4 as bs
import requests
import statistics
@xw.func
def get_prices(url,args =[]):
url_base = requests.get(url).text
soup = bs.BeautifulSoup(url_base,'lxml')
products = []
results = soup.find('div', {'class': 'srp-river-results clearfix'}).find_all('div', {'class': ['s-item__details clearfix'] })
for item in results:
price = item.find('span', class_='s-item__price').text.replace('$', '').replace(',', '')
if 'to' not in price:
price = float(price)
products.append(price)
#def calculate_averages(products):
mean = round(statistics.mean(products), 2)
median = round(statistics.median(products), 2)
mode = round(statistics.mode(products), 2)
return mean, median, mode
I have tried several different methods but cannot seem to filter out the parents based on a class in one of the children.
A:
It has a Span Class for location. This class only exists if the item is from an international seller.
Assuming the class you mean is s-item__location you can use .select with the :has and :not pseudo-classes as below
iDetSel = 'div[id="srp-river-results"] div.s-item__details'
# results = soup.select(iDetSel) # --> your current resultset
results = soup.select(f'{iDetSel}:not(:has(span.s-item__location))')
or, if you want to only use find...:
results = soup.find_all(
lambda r: r.name == 'div' and
r.get('class') == ['s-item__details', 'clearfix'] and
r.find_parent('div', {'class': 'srp-river-results clearfix'}) and
not r.find('span', {'class': 's-item__location'})
)
(I find the .select-with-CSS-selectors method much more convenient.)
As an example:
url = 'https://www.ebay.com/sch/57988/i.html?_from=R40&_nkw=Fjallraven%20nuuk%20parka&LH_Complete=1&LH_Sold=1&_udlo=50&_udhi=600&LH_PrefLoc=1'
soup = BeautifulSoup(requests.get(url).content)
iDetSel = 'div[id="srp-river-results"] div.s-item__details'
selectors = [
('With Location', f'{iDetSel}:has(span.s-item__location)') ,
('Without Location', f'{iDetSel}:not(:has(span.s-item__location))')
]
for t, sel in selectors:
print(f'\n\n{t}')
for r in soup.select(sel):
print(' ', ' '.join(w for w in r.get_text(' ').split() if w)) # minimize whitespace
prints
With Location
$151.40 Best offer accepted +$16.72 shipping from Lithuania Free returns Sponsored
Without Location
$349.95 Buy It Now Free shipping Sponsored
$299.40 Was: Previous Price $499.00 40% off or Best Offer +$29.00 shipping Sponsored
$425.00 Best offer accepted +$13.45 shipping Sponsored
$329.99 Buy It Now +$19.99 shipping Sponsored
$349.30 Was: Previous Price $499.00 30% off or Best Offer +$29.00 shipping Sponsored
$296.65 Was: Previous Price $349.00 15% off Buy It Now +$20.00 shipping Sponsored
$339.99 Buy It Now +$17.87 shipping Sponsored
$361.00 Buy It Now +$10.51 shipping Sponsored
$202.00 16 bids +$25.05 shipping Sponsored
$236.00 Buy It Now +$5.99 shipping Extra 15% off Sponsored
$300.00 or Best Offer +$12.75 shipping Sponsored
$330.00 or Best Offer +$11.00 shipping Sponsored
$330.00 Best offer accepted +$11.00 shipping Sponsored
$289.00 Best offer accepted +$16.75 shipping Sponsored
Added EDIT: To only get the first section, you can do
rsecSel = 'li:not(.srp-river-answer--REWRITE_START ~ li)'
iDetSel = f'div[id="srp-river-results"] {rsecSel} div.s-item__details'
results = soup.select(f'{iDetSel}:not(:has(span.s-item__location))')
(Although, since the international sellers seem to be in a separate section, the :not(:has(span.s-item__location)) part might not be necessary...)
| Ebay Scraping, filter out international results, select parents that do not have specific descendants | EDIT:
I have solved this thanks to @Driftr95
Here is the working code:
import xlwings as xw
from bs4 import BeautifulSoup
import requests
import statistics
@xw.func
def get_prices(url,args =[]):
url = requests.get(url).content
soup = BeautifulSoup(url,'lxml')
products = []
rsecSel = 'li:not(.srp-river-answer--REWRITE_START ~ li)'
iDetSel = f'div[id="srp-river-results"] {rsecSel} div.s-item__details'
results = soup.select(f'{iDetSel}:not(:has(span.s-item__location))')
for item in results:
price = item.find('span', class_='s-item__price').text.replace('$', '').replace(',', '')
if 'to' not in price:
price = float(price)
products.append(price)
mean = round(statistics.mean(products), 2)
median = round(statistics.median(products), 2)
return mean, median
I now have a working function in Excel that automatically looks up sold prices on eBay, and I can iterate it over a large number of products instantly!
I have some code that I put together to scrape eBay sold prices using BeautifulSoup, and so far it is working pretty well. The only issue I currently have is that it also pulls prices for 2 categories that eBay adds to the search results page (International Sellers, and results matching fewer words).
I am struggling to filter these out. It's like I need to identify whether the listing (the parent) contains a specific descendant and then filter out that parent. I hope that is clear; here is a sample:
https://www.ebay.com/sch/57988/i.html?_from=R40&_nkw=Fjallraven%20nuuk%20parka&LH_Complete=1&LH_Sold=1&_udlo=50&_udhi=600&LH_PrefLoc=1
The picture is an example of an item that I would like to filter out. It has a Span Class for location. This class only exists if the item is from an international seller.
import xlwings as xw
import bs4 as bs
import requests
import statistics
@xw.func
def get_prices(url,args =[]):
url_base = requests.get(url).text
soup = bs.BeautifulSoup(url_base,'lxml')
products = []
results = soup.find('div', {'class': 'srp-river-results clearfix'}).find_all('div', {'class': ['s-item__details clearfix'] })
for item in results:
price = item.find('span', class_='s-item__price').text.replace('$', '').replace(',', '')
if 'to' not in price:
price = float(price)
products.append(price)
#def calculate_averages(products):
mean = round(statistics.mean(products), 2)
median = round(statistics.median(products), 2)
mode = round(statistics.mode(products), 2)
return mean, median, mode
I have tried several different methods but cannot seem to filter out the parents based on a class in one of the children.
| [
"\nIt has a Span Class for location. This class only exists if the item is from an international seller.\n\nAssuming the class you mean is s-item__location you can use .select with the :has and :not pseudo-classes as below\n iDetSel = 'div[id=\"srp-river-results\"] div.s-item__details'\n # results = soup.select(iDetSel) # --> your current resultset\n results = soup.select(f'{iDetSel}:not(:has(span.s-item__location))')\n\nor, if you want to only use find...:\n results = soup.find_all(\n lambda r: r.name == 'div' and \n r.get('class') == ['s-item__details', 'clearfix'] and\n r.find_parent('div', {'class': 'srp-river-results clearfix'}) and\n not r.find('span', {'class': 's-item__location'})\n )\n\n(I find the .select-with-CSS-selectors method much more convenient.)\n\nAs an example:\nurl = 'https://www.ebay.com/sch/57988/i.html?_from=R40&_nkw=Fjallraven%20nuuk%20parka&LH_Complete=1&LH_Sold=1&_udlo=50&_udhi=600&LH_PrefLoc=1'\nsoup = BeautifulSoup(requests.get(url).content)\n\niDetSel = 'div[id=\"srp-river-results\"] div.s-item__details' \nselectors = [\n ('With Location', f'{iDetSel}:has(span.s-item__location)') , \n ('Without Location', f'{iDetSel}:not(:has(span.s-item__location))') \n] \nfor t, sel in selectors:\n print(f'\\n\\n{t}')\n for r in soup.select(sel):\n print(' ', ' '.join(w for w in r.get_text(' ').split() if w)) # minimize whitespace\n\nprints\n\n\nWith Location\n $151.40 Best offer accepted +$16.72 shipping from Lithuania Free returns Sponsored\n\n\nWithout Location\n $349.95 Buy It Now Free shipping Sponsored\n $299.40 Was: Previous Price $499.00 40% off or Best Offer +$29.00 shipping Sponsored\n $425.00 Best offer accepted +$13.45 shipping Sponsored\n $329.99 Buy It Now +$19.99 shipping Sponsored\n $349.30 Was: Previous Price $499.00 30% off or Best Offer +$29.00 shipping Sponsored\n $296.65 Was: Previous Price $349.00 15% off Buy It Now +$20.00 shipping Sponsored\n $339.99 Buy It Now +$17.87 shipping Sponsored\n $361.00 Buy It Now +$10.51 shipping Sponsored\n $202.00 16 bids +$25.05 shipping Sponsored\n $236.00 Buy It Now +$5.99 shipping Extra 15% off Sponsored\n $300.00 or Best Offer +$12.75 shipping Sponsored\n $330.00 or Best Offer +$11.00 shipping Sponsored\n $330.00 Best offer accepted +$11.00 shipping Sponsored\n $289.00 Best offer accepted +$16.75 shipping Sponsored\n\n\n\nAdded EDIT: To only get the first section, you can do\n rsecSel = 'li:not(.srp-river-answer--REWRITE_START ~ li)'\n iDetSel = f'div[id=\"srp-river-results\"] {rsecSel} div.s-item__details'\n results = soup.select(f'{iDetSel}:not(:has(span.s-item__location))')\n\n(Although, since the international sellers seem to be in a separate section, the :not(:has(span.s-item__location)) part might not be necessary...)\n"
] | [
0
] | [] | [] | [
"beautifulsoup",
"python"
] | stackoverflow_0074673061_beautifulsoup_python.txt |
Q:
how to create dynamic database table using csv file in django or DRF
I am going to create a database table from a CSV file, without a model, in Django. The steps are:
After sending a CSV file by POST request, a database table will be created according to the CSV headers (name, university, score, total_score, etc.) and populated with the CSV file's data. The database table name should be derived from the CSV file name.
I searched but couldn't find a good solution.
Any help is appreciated.
Below is my code to read csv file
class UploadProductApiView(generics.CreateAPIView):
serializer_class = FileUploadSerializer
def post(self, request, *args, **kwargs):
serializer = self.get_serializer(data=request.data)
serializer.is_valid(raise_exception=True)
file = serializer.validated_data['file']
decoded_file = file.read().decode()
# upload_products_csv.delay(decoded_file, request.user.pk)
io_string = io.StringIO(decoded_file)
reader = csv.reader(io_string)
for row in reader:
print(row)
A:
You could always create a dynamic Django model: https://code.djangoproject.com/wiki/DynamicModels
With this approach you could create models on the fly and by running this snippet
from django.core.management import call_command
call_command('makemigrations')
call_command('migrate')
you could migrate the model to the database and use it for accessing and storing the csv data. After that you should create the model code with inspectdb:
python manage.py inspectdb TableName > output.py
Set the path to where you want your model file to be generated. I'm not certain whether this command overwrites or appends to the current file, so you can try appending to the current file first; if that doesn't work, write the output to a temp file and append it to your desired models.py file: https://www.geeksforgeeks.org/python-append-content-of-one-text-file-to-another/
After the whole process you will have the migrations generated and executed, and the model in models.py ready for use on server restart.
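For concreteness, here is a rough sketch of that dynamic-model pattern applied to the question (an editor's sketch, not taken from the linked wiki; 'myapp', the model-name derivation, and the CharField-only columns are assumptions):
import csv
import io
from django.db import models

def build_model_from_csv(file_name, decoded_file):
    # Derive the model/table name from the CSV file name, as the question asks
    model_name = file_name.rsplit('.', 1)[0].title().replace('_', '')
    reader = csv.reader(io.StringIO(decoded_file))
    headers = next(reader)
    # One CharField per CSV header; the type and width are simplifying assumptions
    attrs = {h: models.CharField(max_length=255, blank=True) for h in headers}
    attrs['__module__'] = 'myapp.models'  # 'myapp' must be an installed app
    return type(model_name, (models.Model,), attrs), reader

The returned class can then be migrated as described above and populated with the remaining reader rows.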
| how to create dynamic database table using csv file in django or DRF | I am going to create a database table from a CSV file, without a model, in Django. The steps are:
After sending a CSV file by POST request, a database table will be created according to the CSV headers (name, university, score, total_score, etc.) and populated with the CSV file's data. The database table name should be derived from the CSV file name.
I searched but couldn't find a good solution.
Any help is appreciated.
Below is my code to read csv file
class UploadProductApiView(generics.CreateAPIView):
serializer_class = FileUploadSerializer
def post(self, request, *args, **kwargs):
serializer = self.get_serializer(data=request.data)
serializer.is_valid(raise_exception=True)
file = serializer.validated_data['file']
decoded_file = file.read().decode()
# upload_products_csv.delay(decoded_file, request.user.pk)
io_string = io.StringIO(decoded_file)
reader = csv.reader(io_string)
for row in reader:
print(row)
| [
"You could always create a dynamic Django model: https://code.djangoproject.com/wiki/DynamicModels\nWith this approach you could create models on the fly and by running this snippet\nfrom django.core.management import call_command\ncall_command('makemigrations')\ncall_command('migrate')\n\nyou could migrate the model to the database and use it for accessing and storing the csv data. After that you should create the model code with inspectdb:\npython manage.py inspectdb TableName > output.py\n\nSet the path to the where you want your model file to be generated. I'm not certain if this command would overwrite or append to the current file, so you can try to append to current file, if it doesn't work make a temp file and write the output to it and append it to your desired models.py file: https://www.geeksforgeeks.org/python-append-content-of-one-text-file-to-another/\nAfter the whole process you will have migrations generated, executed and the model in models.py for use on server restart.\n"
] | [
0
] | [] | [] | [
"django",
"django_models",
"django_rest_framework",
"dynamic_programming",
"python"
] | stackoverflow_0074666791_django_django_models_django_rest_framework_dynamic_programming_python.txt |
Q:
How to sort a list of strings in terms of a duplicate re-ordered copy
Take these two lists of strings, for example.
names = ['Jack', 'Steve', 'Marc', 'Xavier', 'Bob']
names_copy = ['Steve', 'Marc', 'Xavier', 'Bob', 'Jack']
Essentially I'm trying to find a way to sort names_copy in the same way that names is sorted.
So, a sorted version of names_copy would result in ['Jack', 'Steve', 'Marc', 'Xavier', 'Bob']
NOTE: in my program, names_copy is being created with a map, so I cannot use .sort(), sorted() works however.
I understand the sorted() function takes in a key parameter indicating the sort order, but I'm not too sure how to use it here.
A:
The easiest way would be to do:
names_copy_sorted = sorted(names_copy, key=names.index)
This assumes that every item in names_copy is actually an item in names. But this solution isn't very efficient. It would be more efficient to create a dictionary that assigns a priority to the items from names and then uses those priorities as key. For example:
priorities = dict((n, i) for i, n in enumerate(names))
names_copy_sorted = sorted(names_copy, key=priorities.get)
If there are items in names_copy that aren't in names then you could adjust that with something like:
priorities = dict((n, i) for i, n in enumerate(names))
default = len(priorities)
def key(name): return priorities.get(name, default)
names_copy_sorted = sorted(names_copy, key=key)
This way the items in names_copy that are not in names are pushed to the back.
Be aware that duplicates in names are dealt with differently: names.index uses the first occurrence as priority, while the priorities.get version uses the last.
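To make that last point concrete (an illustration added by the editor, not part of the original answer):
names = ['Jack', 'Steve', 'Jack']
print(names.index('Jack'))  # 0 - the first occurrence wins

priorities = dict((n, i) for i, n in enumerate(names))
print(priorities['Jack'])   # 2 - later assignments overwrite, so the last occurrence wins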
| How to sort a list of strings in terms of a duplicate re-ordered copy | Take these two lists of strings, for example.
names = ['Jack', 'Steve', 'Marc', 'Xavier', 'Bob']
names_copy = ['Steve', 'Marc', 'Xavier', 'Bob', 'Jack']
Essentially I'm trying to find a way to sort names_copy in the same way that names is sorted.
So, a sorted version of names_copy would result in ['Jack', 'Steve', 'Marc', 'Xavier', 'Bob']
NOTE: in my program, names_copy is being created with a map, so I cannot use .sort(), sorted() works however.
I understand the sorted() function takes in a key parameter indicating the sort order, but I'm not too sure how to use it here.
| [
"The easiest way would be to do:\nnames_copy_sorted = sorted(names_copy, key=names.index)\n\nThis assumes that every item in names_copy is actually an item in names. But this solution isn't very efficient. It would be more efficient to create a dictionary that assigns a priority to the items from names and then uses those priorities as key. For example:\npriorities = dict((n, i) for i, n in enumerate(names))\nnames_copy_sorted = sorted(names_copy, key=priorities.get)\n\nIf there are items in names_copy that aren't in names then you could adjust that with something like:\npriorities = dict((n, i) for i, n in enumerate(names))\ndefault = len(priorities)\ndef key(name): return priorities.get(name, default)\n\nnames_copy_sorted = sorted(names_copy, key=key)\n\nThis way the items in names_copy that are not in names are pushed to the back.\nBe aware that duplicates in names are dealt with differently: names.index uses the first occurrence as priority, while the priorities.get version uses the last.\n"
] | [
0
] | [
"\nFirst we have a List that contains duplicates:\nCreate a dictionary, using the List items as keys. This will automatically remove any duplicates because dictionaries cannot have duplicate keys.\nThen, convert the dictionary back into a list:\nNow we have a List without any duplicates, and it has the same order as the original List.\n\nI hope this helps.\n"
] | [
-2
] | [
"python",
"python_3.x",
"sorting"
] | stackoverflow_0074673707_python_python_3.x_sorting.txt |
Q:
stable Baselines 3 model.predict with stepwise varying actions
I would like to train a gym model based on a custom environment.
The training loop looks like this:
obs = env.reset()
for i in range(1000):
action, _states = model.predict(obs, deterministic=True)
print(f"action: {action}")
obs, reward, done, info = env.step(action)
env.render()
if done:
obs = env.reset()
There are basic examples like this, e.g. here:
https://stable-baselines.readthedocs.io/en/master/guide/examples.html
Somewhere else (within the environment class)
I defined an action_space:
self.action_space = spaces.Discrete(5)
With this basic definition of the action_space the actions returned by model.predict
for each step seem to be just numbers from 0 to 4.
Now - to make the question a little more practical - assume my
environment describes a maze. My overall available actions in this case could be
realActions = [_UP, _DOWN, _LEFT, _RIGHT]
Now in a maze the available actions for each step are constantly changing.
For example at the upper wall of the maze the actions would only be:
realActions = [_DOWN, _LEFT, _RIGHT]
So I would try to take this into consideration:
env.render()
realActions = env.getCurrentAvailableActions()
#set gym action_space with reduced no. of options:
self.action_space = spaces.Discrete(len(realActions))
And in env.step I would execute realActions[action] in the maze to do the correct move.
Unfortunately the reassignment of self.action_space seems not to be recognized by my model.
There is another important point: the workaround of indexing into realActions instead of defining action_space itself with these values could never
train correctly, because the model would never know what
effect the action it generates has on the maze;
it does not see the mapping from its own action to realActions.
So my question is: does stable baselines / gym provide a practicable way to limit the action_spaces to dynamically (per step) available actions?
Thank you!
A:
Yes, it is possible to use dynamically changing action spaces in stable-baselines / gym. The key is to update the self.action_space attribute within your environment class to reflect the available actions at each step.
Here is an example of how you could do this:
class MazeEnv(gym.Env):
def __init__(self):
self.observation_space = spaces.Box(
low=0, high=255, shape=(4, 4, 3), dtype=np.uint8)
self.current_actions = [_UP, _DOWN, _LEFT, _RIGHT]
self.action_space = spaces.Discrete(len(self.current_actions))
def get_current_actions(self):
# Compute and return the available actions at the current state
return self.current_actions
def step(self, action):
# Use self.get_current_actions() to determine which action to take
real_action = self.current_actions[action]
...
In the above example, the MazeEnv class has a current_actions attribute which is used to store the available actions at each step. The get_current_actions method is used to compute and return the available actions at the current state, and the step method uses this information to determine which action to take.
To use this environment with a stable-baselines model, you would need to update the training loop as follows:
obs = env.reset()
for i in range(1000):
# Get the current available actions
current_actions = env.get_current_actions()
# Set the action space to the current available actions
env.action_space = spaces.Discrete(len(current_actions))
# Use the current action space to make a prediction
action, _states = model.predict(obs, deterministic=True)
print(f"action: {action}")
# Use the current available actions to determine which action to take
real_action = current_actions[action]
obs, reward, done, info = env.step(real_action)
env.render()
if done:
obs = env.reset()
In this updated training loop, the current_actions variable is used to store the available actions at each step, and the env.action_space attribute is updated to use these actions. This allows the model to make predictions based on the correct action space, and the real_action variable is used to determine which action to take in the environment.
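A hedged side note (an editor's addition, not part of the answer above): in stable-baselines3 the policy's output size is fixed when the model is constructed, so reassigning env.action_space inside the loop generally does not change what the network can output. The companion package sb3-contrib provides invalid-action masking for exactly this use case; a minimal sketch, assuming sb3-contrib is installed and that the environment exposes a hypothetical helper returning the indices of currently valid actions:
import numpy as np
from sb3_contrib import MaskablePPO
from sb3_contrib.common.wrappers import ActionMasker

def mask_fn(env):
    # Boolean mask over the full, fixed action space
    mask = np.zeros(env.action_space.n, dtype=bool)
    mask[env.available_action_indices()] = True  # hypothetical helper name
    return mask

env = ActionMasker(MazeEnv(), mask_fn)
model = MaskablePPO("MlpPolicy", env, verbose=1)
model.learn(10_000)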
| stable Baselines 3 model.predict with stepwise varying actions | I would like to train a gym model based on a custom environment.
The training loop looks like this:
obs = env.reset()
for i in range(1000):
action, _states = model.predict(obs, deterministic=True)
print(f"action: {action}")
obs, reward, done, info = env.step(action)
env.render()
if done:
obs = env.reset()
There are basic examples like this, e.g. here:
https://stable-baselines.readthedocs.io/en/master/guide/examples.html
Somewhere else (within the environment class)
I defined an action_space:
self.action_space = spaces.Discrete(5)
With this basic definition of the action_space the actions returned by model.predict
for each step seem to be just numbers from 0 to 4.
Now - to make the question a little more practical - assume my
environment describes a maze. My overall available actions in this case could be
realActions = [_UP, _DOWN, _LEFT, _RIGHT]
Now in a maze the available actions for each step are constantly changing.
For example at the upper wall of the maze the actions would only be:
realActions = [_DOWN, _LEFT, _RIGHT]
So I would try to take this into consideration:
env.render()
realActions = env.getCurrentAvailableActions()
#set gym action_space with reduced no. of options:
self.action_space = spaces.Discrete(len(realActions))
And in env.step I would execute realActions[action] in the maze to do the correct move.
Unfortunately the reassignment of self.action_space seems not to be recognized by my model.
There is another important point: the workaround of indexing into realActions instead of defining action_space itself with these values could never
train correctly, because the model would never know what
effect the action it generates has on the maze;
it does not see the mapping from its own action to realActions.
So my question is: does stable baselines / gym provide a practicable way to limit the action_spaces to dynamically (per step) available actions?
Thank you!
| [
"Yes, it is possible to use dynamically changing action spaces in stable-baselines / gym. The key is to use the self.observation_space attribute within your environment class to specify the available actions at each step.\nHere is an example of how you could do this:\nclass MazeEnv(gym.Env):\n def __init__(self):\n self.observation_space = spaces.Box(\n low=0, high=255, shape=(4, 4, 3), dtype=np.uint8)\n self.current_actions = [_UP, _DOWN, _LEFT, _RIGHT]\n self.action_space = spaces.Discrete(len(self.current_actions))\n\n def get_current_actions(self):\n # Compute and return the available actions at the current state\n return self.current_actions\n\n def step(self, action):\n # Use self.get_current_actions() to determine which action to take\n real_action = self.current_actions[action]\n ...\n\nIn the above example, the MazeEnv class has a current_actions attribute which is used to store the available actions at each step. The get_current_actions method is used to compute and return the available actions at the current state, and the step method uses this information to determine which action to take.\nTo use this environment with a stable-baselines model, you would need to update the training loop as follows:\nobs = env.reset()\nfor i in range(1000):\n # Get the current available actions\n current_actions = env.get_current_actions()\n\n # Set the action space to the current available actions\n env.action_space = spaces.Discrete(len(current_actions))\n\n # Use the current action space to make a prediction\n action, _states = model.predict(obs, deterministic=True)\n print(f\"action: {action}\")\n\n # Use the current available actions to determine which action to take\n real_action = current_actions[action]\n obs, reward, done, info = env.step(real_action)\n\n env.render()\n if done:\n obs = env.reset()\n\nIn this updated training loop, the current_actions variable is used to store the available actions at each step, and the env.action_space attribute is updated to use these actions. This allows the model to make predictions based on the correct action space, and the real_action variable is used to determine which action to take in the environment.\n"
] | [
1
] | [] | [] | [
"python",
"reinforcement_learning",
"stable_baselines"
] | stackoverflow_0074656974_python_reinforcement_learning_stable_baselines.txt |
Q:
Can anyone shed some light on why this code from the alpaca-py documentation does not work?
I am trying to stream bitcoin data using the alpaca-py trading documentation, but I keep getting an invalid syntax error. This is taken exactly from the alpaca-py documentation. Does anyone know what I am doing wrong?
from typing import Any
from alpaca.data.live import CryptoDataStream
wss_client = CryptoDataStream(key-id, secret-key)
# async handler
async def quote_data_handler(data: Any):
# quote data will arrive here
print(data)
wss_client.subscribe_quotes(quote_data_handler, "BTC")
wss_client.run()
A:
Take a look at the dashes in your parameters. They are usually a no-no in most languages, since the "-" (dash) usually refers to a minus, a binary operator: an operator that operates on two operands to produce a new value or result.
Make sure parameters are set before passing them.
Try the underscore instead as in: key_id = "".
Also useful is the following link to a comprehensive list of crypto pairs supported by Alpaca: https://alpaca.markets/support/alpaca-crypto-coin-pair-faq/#:~:text=For%20the%20initial%20launch%20of,%2C%20SOL%2C%20TRX%2C%20UNI)
Stay up to date on the above list, as its membership may be a bit volatile at the moment.
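Putting that together, a corrected version of the question's snippet might look like this (an editor's sketch with placeholder credentials; per the coin-pair FAQ linked above, the symbol likely needs the pair form such as "BTC/USD"):
from typing import Any
from alpaca.data.live import CryptoDataStream

key_id = "YOUR_KEY_ID"          # placeholder - use your own credentials
secret_key = "YOUR_SECRET_KEY"  # placeholder
wss_client = CryptoDataStream(key_id, secret_key)

# async handler
async def quote_data_handler(data: Any):
    print(data)  # quote data will arrive here

wss_client.subscribe_quotes(quote_data_handler, "BTC/USD")
wss_client.run()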
| Can anyone shed some light on why this code from the alpaca-py documentation does not work? | I am trying to stream bitcoin data using the alpaca-py trading documentation, but I keep getting an invalid syntax error. This is taken exactly from the alpaca-py documentation. Does anyone know what I am doing wrong?
from typing import Any
from alpaca.data.live import CryptoDataStream
wss_client = CryptoDataStream(key-id, secret-key)
# async handler
async def quote_data_handler(data: Any):
# quote data will arrive here
print(data)
wss_client.subscribe_quotes(quote_data_handler, "BTC")
wss_client.run()
| [
"\nTake a look at the dashes in your parameters. Usually a no-no in most languages since the \"-\" or dash usually refers to a minus which is a binary operator or an operator that operates on two operands to produce a new value or result.\"\nMake sure parameters are set before passing them.\nTry the underscore instead as in: key_id = \"\".\nAlso useful is the following link to a comprehensive list of crypto pairs supported by Alpaca: https://alpaca.markets/support/alpaca-crypto-coin-pair-faq/#:~:text=For%20the%20initial%20launch%20of,%2C%20SOL%2C%20TRX%2C%20UNI)\nStay up to date on the above list as it's membership may be a bit volatile at the moment.\n\n"
] | [
0
] | [] | [] | [
"python",
"websocket"
] | stackoverflow_0074089722_python_websocket.txt |
Q:
Python missing or unusable error while cross compiling GDB
I get this error while attempting to cross-compile GDB (using the --with-python flag):
checking for python: /usr/bin/python
checking for python2.7: no
configure: error: python is missing or unusable
I made sure I had python2.7 installed in /usr/bin. I even removed the package and installed it again. I tried using --with-python=/usr/bin and --with-python=/usr/local, but no luck. I know for sure that 2.7 is installed, though. Any idea on what to do?
A:
I had the same problem on Debian 6.0 when compiling GDB 7.4.1
The solution was to install python headers
sudo apt-get install python2.6-dev
and then configure with the right flag
./configure --with-python
A:
I had the same problem with gdb 7.4 and finally made it work after spending some time debugging.
By checking the file <gdb-source-path>/gdb/config.log, you will notice one line:
configure:11031: gcc -o conftest -g -O2 -I/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -I/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 conftest.c -lncurses -lz -lm -L/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/config -ldl -framework CoreFoundation -lpython2.7 -u _PyMac_Error Python.framework/Versions/2.7/Python >&5
Seems that the script python/python-config.py returned some invalid flags that caused the gcc command to fail.
The solution is to open <gdb-source-directory>/gdb/python/python-config.py and comment out these two lines:
# if getvar('LINKFORSHARED') is not None:
# libs.extend(getvar('LINKFORSHARED').split())
A:
I just came across a similar issue building gdb 7.8.1 using Continuum's Python 2.7, which, in my case, was installed in a non-standard location.
In this case, the solution was to provide an additional piece of configuration before running 'configure':
export LDFLAGS="-Wl,-rpath,<non-standard-Python-lib-location> -L<non-standard-Python-lib-location>"
configure --with-python=<non-standard-Python-executable-location>
A:
I hit this error building the ESP8266 SDK. Just did a
sudo apt-get install python-dev
and now it works.
A:
I came up with almost the same error building gdb 13.0.50 with python 3.8, for a more recent update. Running make after ./configure --with-python, it shows the python-missing error.
During compilation it seems to be looking for /usr/bin/python for Python 3.8, while we only have /usr/bin/python3.
So I quickly renamed the python binary from /usr/bin/python3 to /usr/bin/python, and it compiled and found python successfully. (A safer alternative is a temporary symlink rather than renaming the system binary.)
Don't forget to change it back for dependencies.
A:
Just adding my own solution since I cannot see it among the rest here. In my case I was able to solve this by using configure as follows:
./configure --with-python=/usr/bin/python3
This was on a Ubuntu 22.04 machine. I suspect --with-python requires a full path to an existing python binary, so this seems like a much easier solution for standard distributions.
| Python missing or unusable error while cross compiling GDB | I get this error while attempting to cross-compile GDB (using the --with-python flag):
checking for python: /usr/bin/python
checking for python2.7: no
configure: error: python is missing or unusable
I made sure I had python2.7 installed in /usr/bin. I even removed the package and installed it again. I tried using --with-python=/usr/bin and --with-python=/usr/local, but no luck. I know for sure that 2.7 is installed, though. Any idea on what to do?
| [
"I had the same problem on Debian 6.0 when compiling GDB 7.4.1\nThe solution was to install python headers\nsudo apt-get install python2.6-dev\n\nand then configure with the right flag\n./configure --with-python\n\n",
"I had the same problem with gdb 7.4 and finally made it worked after spending some time debugging.\nBy checking the file <gdb-source-path>/gdb/config.log, you will notice one line:\nconfigure:11031: gcc -o conftest -g -O2 -I/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -I/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 conftest.c -lncurses -lz -lm -L/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/config -ldl -framework CoreFoundation -lpython2.7 -u _PyMac_Error Python.framework/Versions/2.7/Python >&5\n\nSeems that the script python/python-config.py returned some invalid flags that caused the gcc command to fail.\nThe solution is to open <gdb-source-directory>/gdb/python/python-config.py and comment out these two lines:\n# if getvar('LINKFORSHARED') is not None:\n# libs.extend(getvar('LINKFORSHARED').split())\n\n",
"I just came across a similar issue building gdb 7.8.1 using Continuum's Python 2.7, which, in my case, was installed in a non-standard location.\nIn this case, the solution was to provide an additional piece of configuration before running 'configure':\nexport LDFLAGS=\"-Wl,-rpath,<non-standard-Python-lib-location> -L<non-standard-Python-lib-location>\"\nconfigure --with-python=<non-standard-Python-executable-location>\n\n",
"I hit this error building the ESP8266 SDK. Just did a \nsudo apt-get install python-dev \nand now it works.\n",
"I came up with almost the same error building gdb 13.0.50 with python 3.8 for a more recent udpate. Running make after ./configure --with-python it shows python missing error.\nDuring compilation it seems to be looking for usr/bin/python for python 3.8 while we only have usr/bin/python3.\nSo I quickly changed the name of the python directory from usr/bin/python3 to usr/bin/python and it compiled and found python successfully.\nDon't forget to change it back for dependencies\n",
"Just adding my own solution since I cannot see it among the rest here. In my case I was able to solve this by using configure as follows:\n./configure --with-python=/usr/bin/python3\n\nThis was on a Ubuntu 22.04 machine. I suspect --with-python requires a full path to an existing python binary, so this seems like a much easier solution for standard distributions.\n"
] | [
21,
13,
7,
6,
0,
0
] | [] | [] | [
"gdb",
"python"
] | stackoverflow_0010792844_gdb_python.txt |
Q:
Cogs loads but won't work in discord.py 2.0
I've provided the code of two files: bot.py runs the bot, and ping.py is a cog file.
The problem is that the Cog doesn't work: the bot doesn't respond to commands. In my ping.py file I have a ping command.
bot.py
import discord as ds
import asyncio
import os
from dotenv import load_dotenv
from discord.ext import commands
load_dotenv()
intents = ds.Intents.all()
intents.message_content = True
bot = commands.Bot(command_prefix='!', intents=intents)
@bot.event
async def on_message(message):
if message.author == bot.user:
return
username = str(message.author)
user_message = str(message.content)
channel = str(message.channel)
print(f"{username} said: '{user_message}' ({channel})")
async def load_cogs():
for filename in os.listdir('./cogs'):
if filename.endswith('.py'):
await bot.load_extension(f'cogs.{filename[:-3]}')
@bot.event
async def on_ready():
print(f'{bot.user} is now running.')
async def main():
await load_cogs()
async with bot:
TOKEN = os.getenv('TOKEN')
await bot.start(TOKEN)
asyncio.run(main())
ping.py
import discord
from discord.ext import commands
import asyncio
class ping(commands.Cog):
def __init__(self, bot):
self.bot = bot
@commands.command(name="ping", description="Returns Pong!")
async def ping(self, ctx):
await ctx.send("Pong!")
async def setup(bot: commands.Bot):
await bot.add_cog(ping(bot))
After running, the bot displays 0 errors. I tried changing intents from default() to all(), but it didn't help.
A:
You override the on_message event, and you don't have a process_commands in it, so the bot won't be processing any commands.
You can fix this by adding bot.process_commands inside the event.
@bot.event
async def on_message(message):
...
await bot.process_commands(message)
Or register it as a listener so it doesn't override the default event.
@bot.listen()
async def on_message(message):
...
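Applied to the question's handler, the first fix would look roughly like this:
@bot.event
async def on_message(message):
    if message.author == bot.user:
        return
    print(f"{message.author} said: '{message.content}' ({message.channel})")
    # Without this call, the overridden event swallows every command
    await bot.process_commands(message)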
| Cogs loads but won't work in discord.py 2.0 | I've provided the code of two files: bot.py runs the bot, and ping.py is a cog file.
The problem is that the Cog doesn't work: the bot doesn't respond to commands. In my ping.py file I have a ping command.
bot.py
import discord as ds
import asyncio
import os
from dotenv import load_dotenv
from discord.ext import commands
load_dotenv()
intents = ds.Intents.all()
intents.message_content = True
bot = commands.Bot(command_prefix='!', intents=intents)
@bot.event
async def on_message(message):
if message.author == bot.user:
return
username = str(message.author)
user_message = str(message.content)
channel = str(message.channel)
print(f"{username} said: '{user_message}' ({channel})")
async def load_cogs():
for filename in os.listdir('./cogs'):
if filename.endswith('.py'):
await bot.load_extension(f'cogs.{filename[:-3]}')
@bot.event
async def on_ready():
print(f'{bot.user} is now running.')
async def main():
await load_cogs()
async with bot:
TOKEN = os.getenv('TOKEN')
await bot.start(TOKEN)
asyncio.run(main())
ping.py
import discord
from discord.ext import commands
import asyncio
class ping(commands.Cog):
def __init__(self, bot):
self.bot = bot
@commands.command(name="ping", description="Returns Pong!")
async def ping(self, ctx):
await ctx.send("Pong!")
async def setup(bot: commands.Bot):
await bot.add_cog(ping(bot))
After running, the bot displays 0 errors. I tried changing intents from default() to all(), but it didn't help.
| [
"You override the on_message event, and you don't have a process_commands in it, so the bot won't be processing any commands.\nYou can fix this by adding bot.process_commands inside the event.\[email protected]\nasync def on_message(message):\n ...\n await bot.process_commands(message)\n\nOr register it as a listener so it doesn't override the default event.\[email protected]()\nasync def on_message(message):\n ...\n\n"
] | [
1
] | [] | [] | [
"discord",
"discord.py",
"python"
] | stackoverflow_0074677573_discord_discord.py_python.txt |
Q:
TypeError: unsupported type for timedelta microseconds component: InstrumentedAttribute
I am also getting an error while doing this task.
Models.py
CloudImageMaster
created_tmstmp = Column(DateTime(), default = datetime.now(timezone.utc))
ClientMaster
ttl = Column(BigInteger, nullable=False)
QUERY:-
db.query(CloudImageMaster).join(ClientMaster).filter(
(
CloudImageMaster.created_tmstmp + timedelta(microseconds=ClientMaster.ttl)
) < today
).all()
ERROR MESSAGE :-
TypeError: unsupported type for timedelta microseconds component: InstrumentedAttribute
I tried the above. It should work as per the code. What am I doing wrong here?
A:
Given a filter expression like MyModel.attr == something, the left hand side (LHS) can be thought of as belonging to the database side, the right hand side (RHS) as belonging to the application. What this means is that the LHS must be expressed in what SQLAlchemy regards as database constructs (ORM entities, tables, columns, database functions), while the RHS can be expressed as normal Python code.
This means that we can't subtract a timedelta (a Python construct) from a DateTime column (a database construct); we have to convert the timedelta to a database construct - a PostgreSQL interval. We can do this by using the make_interval function, dividing ttl by 1,000,000 to convert microseconds to seconds, as make_interval does not accept a microseconds argument.
from sqlalchemy import func
(
    db.query(CloudImageMaster)
    .join(ClientMaster)
    .filter(
        (
            CloudImageMaster.created_tmstmp
            + func.make_interval(0, 0, 0, 0, 0, 0, ClientMaster.ttl / 1000000)
        ) < today
    )
    .all()
)
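As a side note (an editor's sketch, not from the original answer): if the TTL did not vary per row, say a single value known up front, the arithmetic could stay on the application side and no interval conversion would be needed:
from datetime import datetime, timedelta, timezone

ttl_microseconds = 86_400_000_000  # hypothetical fixed TTL (one day)
cutoff = datetime.now(timezone.utc) - timedelta(microseconds=ttl_microseconds)
db.query(CloudImageMaster).filter(CloudImageMaster.created_tmstmp < cutoff).all()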
| TypeError: unsupported type for timedelta microseconds component: InstrumentedAttribute | I am also getting an error while doing this task.
Models.py
CloudImageMaster
created_tmstmp = Column(DateTime(), default = datetime.now(timezone.utc))
ClientMaster
ttl = Column(BigInteger, nullable=False)
QUERY:-
db.query(CloudImageMaster).join(ClientMaster).filter(
(
CloudImageMaster.created_tmstmp + timedelta(microseconds=ClientMaster.ttl)
) < today
).all()
ERROR MESSAGE :-
TypeError: unsupported type for timedelta microseconds component: InstrumentedAttribute
I tried the above. It should work as per the code. What am I doing wrong here?
| [
"Given a filter expression like MyMode.attr == something, the left hand side (LHS) can be thought of as belonging to the database side, the right hand side (RHS) as belonging to the application. What this means is that the RHS must be expressed in what SQLAlchemy regards as database constructs (ORM entities, tables, columns, database functions) while the LHS is expressed as normal Python code.\nThis means that we can't subtract a timedelta (a Python construct) from a Datetime column (a database construct); we have to convert the timedelta to a database construct - a PostgreSQL interval. We can do this by using the make_interval function, dividing ttl by 1000 as make_interval does not accept a microsecond argument.\nfrom sqlalchemy import func\n\ndb.query(CloudImageMaster)\n .join(ClientMaster)\n .filter(\n (\n CloudImageMaster.created_tmstmp\n + func.make_interval(0, 0, 0, 0, 0, 0, ClientMaster.ttl /1000)\n ) < today\n ).all()\n\n"
] | [
2
] | [] | [] | [
"fastapi",
"postgresql",
"python",
"python_3.x",
"sqlalchemy"
] | stackoverflow_0074623642_fastapi_postgresql_python_python_3.x_sqlalchemy.txt |
Q:
What's the fastest way to recursively search for files in python?
I need to generate a list of files with paths that contain a certain string by recursively searching. I'm doing this currently like this:
for i in iglob(starting_directory+'/**/*', recursive=True):
if filemask in i.split('\\')[-1]: # ignore directories that contain the filemask
filelist.append(i)
This works, but when crawling a large directory tree, it's woefully slow (~10 minutes). We're on Windows, so doing an external call to the unix find command isn't an option. My understanding is that glob is faster than os.walk.
Is there a faster way of doing this?
A:
Maybe not the answer you were hoping for, but I think these timings are useful. Run on a directory with 15,424 directories totalling 102,799 files (of which 3059 are .py files).
Python 3.6:
import os
import glob
def walk():
pys = []
for p, d, f in os.walk('.'):
for file in f:
if file.endswith('.py'):
pys.append(file)
return pys
def iglob():
pys = []
for file in glob.iglob('**/*', recursive=True):
if file.endswith('.py'):
pys.append(file)
return pys
def iglob2():
pys = []
for file in glob.iglob('**/*.py', recursive=True):
pys.append(file)
return pys
# I also tried pathlib.Path.glob but it was slow and error prone, sadly
%timeit walk()
3.95 s ± 13 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit iglob()
5.01 s ± 19.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit iglob2()
4.36 s ± 34 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
Using GNU find (4.6.0) on cygwin (4.6.0-1)
Edit: The below is on WINDOWS, on LINUX I found find to be about 25% faster
$ time find . -name '*.py' > /dev/null
real 0m8.827s
user 0m1.482s
sys 0m7.284s
Seems like os.walk is as good as you can get on windows.
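Tying the timings back to the question's filemask search, the os.walk version would look something like this (an editor's sketch; find_files is just an illustrative name, and it uses the same substring test as the question):
import os

def find_files(starting_directory, filemask):
    matches = []
    for root, dirs, files in os.walk(starting_directory):
        for name in files:
            if filemask in name:  # same test as the question's code
                matches.append(os.path.join(root, name))
    return matches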
A:
os.walk() uses scandir, which is the fastest, and we get a DirEntry object that can be used for many other purposes as well; below, for example, I am getting the modified time. The code below implements a recursive search using os.scandir().
import os
import time
def scantree(path):
"""Recursively yield DirEntry objects for given directory."""
for entry in os.scandir(path):
if entry.is_dir(follow_symlinks=False):
yield from scantree(entry.path)
else:
yield entry
for entry in scantree('/home/'):
if entry.is_file():
print(entry.path,time.ctime(entry.stat().st_mtime))
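For glob-style masks instead of a substring test, the entries yielded above combine naturally with fnmatch (an editor's sketch):
import fnmatch

py_files = [entry.path for entry in scantree('/home/')
            if entry.is_file() and fnmatch.fnmatch(entry.name, '*.py')]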
| What's the fastest way to recursively search for files in python? | I need to generate a list of files with paths that contain a certain string by recursively searching. I'm doing this currently like this:
for i in iglob(starting_directory+'/**/*', recursive=True):
if filemask in i.split('\\')[-1]: # ignore directories that contain the filemask
filelist.append(i)
This works, but when crawling a large directory tree, it's woefully slow (~10 minutes). We're on Windows, so doing an external call to the unix find command isn't an option. My understanding is that glob is faster than os.walk.
Is there a faster way of doing this?
| [
"Maybe not the answer you were hoping for, but I think these timings are useful. Run on a directory with 15,424 directories totalling 102,799 files (of which 3059 are .py files).\nPython 3.6:\nimport os\nimport glob\n\ndef walk():\n pys = []\n for p, d, f in os.walk('.'):\n for file in f:\n if file.endswith('.py'):\n pys.append(file)\n return pys\n\ndef iglob():\n pys = []\n for file in glob.iglob('**/*', recursive=True):\n if file.endswith('.py'):\n pys.append(file)\n return pys\n\ndef iglob2():\n pys = []\n for file in glob.iglob('**/*.py', recursive=True):\n pys.append(file)\n return pys\n\n# I also tried pathlib.Path.glob but it was slow and error prone, sadly\n\n%timeit walk()\n3.95 s ± 13 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)\n\n%timeit iglob()\n5.01 s ± 19.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)\n\n%timeit iglob2()\n4.36 s ± 34 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)\n\nUsing GNU find (4.6.0) on cygwin (4.6.0-1)\nEdit: The below is on WINDOWS, on LINUX I found find to be about 25% faster\n$ time find . -name '*.py' > /dev/null\n\nreal 0m8.827s\nuser 0m1.482s\nsys 0m7.284s\n\n\nSeems like os.walk is as good as you can get on windows.\n",
"os.walk() uses scandir which is the fastest and we get the file object that can be used for many other purposes as well like, below I am getting the modified time. Below code implement recursive serach using os.scandir()\nimport os\nimport time\ndef scantree(path):\n \"\"\"Recursively yield DirEntry objects for given directory.\"\"\"\n for entry in os.scandir(path):\n if entry.is_dir(follow_symlinks=False):\n yield from scantree(entry.path) \n else:\n yield entry\n \nfor entry in scantree('/home/'):\n if entry.is_file():\n print(entry.path,time.ctime(entry.stat().st_mtime))\n\n"
] | [
27,
0
] | [] | [] | [
"glob",
"python",
"search"
] | stackoverflow_0050948391_glob_python_search.txt |
Q:
printing values django templates using for loop
I have two interrelated models, Items and Broken:
class Items(models.Model):
id = models.AutoField(primary_key=True)
item_name = models.CharField(max_length=50, blank=False)
item_price = models.IntegerField(blank=True)
item_quantity_received = models.IntegerField(blank=False)
item_quantity_available = models.IntegerField(blank=True)
item_purchased_date = models.DateField(auto_now_add=True, blank=False)
item_units = models.CharField(max_length=50, blank=False)
def __str__(self):
return self.item_name
class Broken(models.Model):
item = models.ForeignKey(Items, default=1, on_delete=models.CASCADE)
item_quantity_broken = models.IntegerField(blank=True)
item_broken_date = models.DateField(auto_now_add=True, blank=False)
item_is_broken = models.BooleanField(default=True)
date_repaired = models.DateField(auto_now=True, blank=True)
def __str__(self):
return self.item.item_name
I wrote this view function to retrieve data to a table into a template:
def broken_items(request):
br = Broken.objects.select_related('item').all()
print(br.values_list())
context = {
'title': 'broken',
'items': br,
}
return render(request, 'store/broken.html', context)
this is the executing query:
SELECT "store_broken"."id",
"store_broken"."item_id",
"store_broken"."item_quantity_broken",
"store_broken"."item_broken_date",
"store_broken"."item_is_broken",
"store_broken"."date_repaired",
"store_items"."id",
"store_items"."item_name",
"store_items"."item_price",
"store_items"."item_quantity_received",
"store_items"."item_quantity_available",
"store_items"."item_purchased_date",
"store_items"."item_units"
FROM "store_broken"
INNER JOIN "store_items"
ON ("store_broken"."item_id" = "store_items"."id")
It looks like it gives me all the fields I want. In the debugger it shows data from both tables, so I wrote a for loop in the template:
{% for item in items %}
<tr>
<td>{{item.id}}</td>
<td>{{item.item_id}}</td>
<td>{{item.item_quantity_broken}}</td>
<td>{{item.item_broken_date}}</td>
<td>{{item.item_is_broken}}</td>
<td>{{item.date_repaired}}</td>
<td>{{item.item_name }}</td>
<td>{{item.item_item_quantity_received}}</td>
<td>{{item.item_quantity_available}}</td>
<td>{{item.item_purchased_date}}</td>
<td>{{item.items_item_units}}</td>
</tr>
{% endfor %}
The thing is, this loop only gives me data from the Broken table; I can't see data from the Items table.
Can someone help me find the reason why the other details are not showing?
A:
You loop over a list of Broken objects.
To access the related Items fields, go through the foreign key:
item.item.item_name
A:
Your items query is of Broken objects. So in order to access the Items values you need to change your table. For better understanding change your view like this:
brokens = Broken.objects.select_related('item').all()
context = {
'title': 'broken',
    'brokens': brokens,
}
and then your table:
{% for broken in brokens %}
<tr>
<td>{{broken.id}}</td>
<td>{{broken.item.pk}}</td> # This is the item id
<td>{{broken.item_quantity_broken}}</td>
<td>{{broken.item_broken_date}}</td>
<td>{{broken.item_is_broken}}</td>
<td>{{broken.date_repaired}}</td>
<td>{{broken.item.item_name}}</td>
<td>{{broken.item.item_quantity_received }}</td>
<td>{{broken.item.item_quantity_available}}</td>
<td>{{broken.item.item_purchased_date}}</td>
        <td>{{broken.item.item_units}}</td>
</tr>
{% endfor %}
| printing values django templates using for loop | I have two interrelated models, Items and Broken:
class Items(models.Model):
id = models.AutoField(primary_key=True)
item_name = models.CharField(max_length=50, blank=False)
item_price = models.IntegerField(blank=True)
item_quantity_received = models.IntegerField(blank=False)
item_quantity_available = models.IntegerField(blank=True)
item_purchased_date = models.DateField(auto_now_add=True, blank=False)
item_units = models.CharField(max_length=50, blank=False)
def __str__(self):
return self.item_name
class Broken(models.Model):
item = models.ForeignKey(Items, default=1, on_delete=models.CASCADE)
item_quantity_broken = models.IntegerField(blank=True)
item_broken_date = models.DateField(auto_now_add=True, blank=False)
item_is_broken = models.BooleanField(default=True)
date_repaired = models.DateField(auto_now=True, blank=True)
def __str__(self):
return self.item.item_name
I wrote this view function to retrieve data to a table into a template:
def broken_items(request):
br = Broken.objects.select_related('item').all()
print(br.values_list())
context = {
'title': 'broken',
'items': br,
}
return render(request, 'store/broken.html', context)
this is the executing query:
SELECT "store_broken"."id",
"store_broken"."item_id",
"store_broken"."item_quantity_broken",
"store_broken"."item_broken_date",
"store_broken"."item_is_broken",
"store_broken"."date_repaired",
"store_items"."id",
"store_items"."item_name",
"store_items"."item_price",
"store_items"."item_quantity_received",
"store_items"."item_quantity_available",
"store_items"."item_purchased_date",
"store_items"."item_units"
FROM "store_broken"
INNER JOIN "store_items"
ON ("store_broken"."item_id" = "store_items"."id")
It looks like it gives me all the fields I want. In the debugger it shows data from both tables, so I wrote a for loop in the template:
{% for item in items %}
<tr>
<td>{{item.id}}</td>
<td>{{item.item_id}}</td>
<td>{{item.item_quantity_broken}}</td>
<td>{{item.item_broken_date}}</td>
<td>{{item.item_is_broken}}</td>
<td>{{item.date_repaired}}</td>
<td>{{item.item_name }}</td>
<td>{{item.item_item_quantity_received}}</td>
<td>{{item.item_quantity_available}}</td>
<td>{{item.item_purchased_date}}</td>
<td>{{item.items_item_units}}</td>
</tr>
{% endfor %}
The thing is, this loop only gives me data from the Broken table; I can't see data from the Items table.
Can someone help me find the reason why the other details are not showing?
| [
"you loop over a List of Broken objects\nto access the related item objects\nitem.item.item_name\n",
"Your items query is of Broken objects. So in order to access the Items values you need to change your table. For better understanding change your view like this:\nbrokens = Broken.objects.select_related('item').all()\ncontext = {\n 'title': 'broken',\n 'brokens ': brokens,\n}\n\nand then your table:\n{% for broken in brokens %}\n <tr>\n <td>{{broken.id}}</td>\n <td>{{broken.item.pk}}</td> # This is the item id \n <td>{{broken.item_quantity_broken}}</td>\n <td>{{broken.item_broken_date}}</td>\n <td>{{broken.item_is_broken}}</td>\n <td>{{broken.date_repaired}}</td>\n <td>{{broken.item.item_name}}</td>\n <td>{{broken.item.item_quantity_received }}</td>\n <td>{{broken.item.item_quantity_available}}</td>\n <td>{{broken.item.item_purchased_date}}</td>\n <td>{{broken.item.items_item_units}}</td>\n </tr>\n {% endfor %}\n\n"
] | [
1,
1
] | [] | [] | [
"django",
"django_models",
"django_queryset",
"django_templates",
"python"
] | stackoverflow_0074677397_django_django_models_django_queryset_django_templates_python.txt |
Q:
Pandas: Better Way to Group By and Find Mean
I have a spreadsheet of stock prices for all companies, and I'd like to calculate the moving average more efficiently. As it stands I have some code that works, but takes a pretty long time to run. I'm wondering what are alternative ways to do the same thing, but more efficiently, or in a way that utilizes Pandas' strengths.
Here is the workflow I am trying to accomplish in my code:
I first want to take the 20 day rolling/moving average for each company, and add it as a column to the dataframe (sma_20).
From there I want to count the number of days a stock's price was over this 20 day average.
Finally, I want to convert this count into a percentage. For reference, there are 252 days in a trading year; I'd like to see, out of these 252 days, how many of them the stock was trading above its moving average.
prices_df['sma_20'] = prices_df.groupby('ticker').rolling(20)['closeadj'].mean().reset_index(0,drop=True)
prices_df['above_sma_20'] = np.where(prices_df.closeadj > prices_df.sma_20, 1, 0)
prices_df['above_sma_20_count'] = prices_df.groupby('ticker').rolling(252)['above_sma_20'].sum().reset_index(0,drop=True)
prices_df['above_sma_20_pct'] = prices_df['above_sma_20_count'] / 252
A:
I would rearrange the data into an n (dates) by m (tickers) array and use NumPy to handle the rolling mean.
Given a df with 100 companies and 253 days from yahoo finance,
import pandas as pd
import numpy as np
df_n = df.to_numpy()
sma_20 = np.cumsum(df_n, dtype=float, axis=0)
sma_20[20:] = sma_20[20:] - sma_20[:-20]
sma_20[19:] = sma_20[19:] / 20
sma_20[:19] = sma_20[:19] / np.arange(1, 20)[:, None]
print(sum(df_n > sma_20)/len(df_n))
>>>
[0.41897233 0.61660079 0.7312253 0.71936759 0.74703557 0.743083
0.52964427 0.53359684 0.52964427 0.45849802 0.64031621 0.63241107
0.59683794 0.66798419 0.77470356 0.56521739 0.64426877 0.60869565
0.46640316 0.45059289 0.61660079 0.743083 0.69565217 0.56916996
0.63241107 0.69565217 0.55731225 0.6284585 0.60869565 0.66798419
0.59683794 0.56126482 0.62055336 0.65612648 0.54150198 0.46245059
0.62055336 0.54545455 0.54545455 0.68379447 0.59683794 0.50988142
0.81422925 0.65217391 0.60869565 0.66798419 0.56126482 0.57312253
0.74703557 0.64822134 0.44664032 0.67588933 0.6284585 0.61264822
0.60474308 0.50197628 0.58498024 0.54545455 0.65612648 0.61660079
0.66007905 0.64822134 0.60869565 0.58893281 0.68774704 0.66403162
0.50988142 0.62055336 0.4743083 0.53754941 0.60869565 0.62055336
0.60869565 0.743083 0.43873518 0.6916996 0.71936759 0.61264822
0.59288538 0.49011858 0.58102767 0.5256917 0.59288538 0.45454545
0.49407115 0.55335968 0.49011858 0.64031621 0.6798419 0.54150198
0.59683794 0.67588933 0.56126482 0.60474308 0.45454545 0.61264822
0.56521739 0.48221344 0.40711462 0.68379447]
Assign the probability and corresponding company to a new dataframe,
df_result = pd.DataFrame(sum(df_n > sma_20)/len(df_n), columns=['probability'])
df_result['company'] = df.columns
df_result = df_result.sort_values(by='probability', ascending=False).reset_index(drop=True)
df_result
###
probability company
0 0.814229 FTNT
1 0.774704 ASML
2 0.747036 INTU
3 0.747036 GOOGL
4 0.743083 AVGO
.. ... ...
95 0.450593 BIIB
96 0.446640 JD
97 0.438735 PCAR
98 0.418972 ATVI
99 0.407115 ZM
[100 rows x 2 columns]
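An editor's sketch of a pandas-native shortcut that keeps the question's long format (it assumes, like the original code, that rows are sorted by date within each ticker): the rolling mean of a 0/1 column is already the fraction of days above the SMA, so the count and divide steps collapse into one.
g = prices_df.groupby('ticker')['closeadj']
prices_df['sma_20'] = g.transform(lambda s: s.rolling(20).mean())
above = (prices_df['closeadj'] > prices_df['sma_20']).astype(float)
prices_df['above_sma_20_pct'] = (
    above.groupby(prices_df['ticker']).transform(lambda s: s.rolling(252).mean())
)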
| Pandas: Better Way to Group By and Find Mean | I have a spreadsheet of stock prices for all companies, and I'd like to calculate the moving average more efficiently. As it stands I have some code that works, but takes a pretty long time to run. I'm wondering what are alternative ways to do the same thing, but more efficiently, or in a way that utilizes Pandas' strengths.
Here is the workflow I am trying to accomplish in my code:
I first want to take the 20 day rolling/moving average for each company, and add it as a column to the dataframe (sma_20).
From there I want to count the number of days a stock's price was over this 20 day average.
Finally, I want to convert this count into a percentage. For reference, there are 252 days in a trading year; I'd like to see, out of these 252 days, how many of them the stock was trading above its moving average.
prices_df['sma_20'] = prices_df.groupby('ticker').rolling(20)['closeadj'].mean().reset_index(0,drop=True)
prices_df['above_sma_20'] = np.where(prices_df.closeadj > prices_df.sma_20, 1, 0)
prices_df['above_sma_20_count'] = prices_df.groupby('ticker').rolling(252)['above_sma_20'].sum().reset_index(0,drop=True)
prices_df['above_sma_20_pct'] = prices_df['above_sma_20_count'] / 252
| [
"I would rearrange the data into n(date) by m(ticker) array, and use numpy to deal with rolling mean,\nGiven a df with 100 companies and 253 days from yahoo finance,\n\nimport pandas as pd\nimport numpy as np\n\ndf_n = df.to_numpy()\nsma_20 = np.cumsum(df_n, dtype=float, axis=0)\nsma_20[20:] = sma_20[20:] - sma_20[:-20]\nsma_20[19:] = sma_20[19:] / 20\nsma_20[:19] = sma_20[:19] / np.arange(1, 20)[:, None]\n\n\n\n\nprint(sum(df_n > sma_20)/len(df_n))\n>>>\n[0.41897233 0.61660079 0.7312253 0.71936759 0.74703557 0.743083\n 0.52964427 0.53359684 0.52964427 0.45849802 0.64031621 0.63241107\n 0.59683794 0.66798419 0.77470356 0.56521739 0.64426877 0.60869565\n 0.46640316 0.45059289 0.61660079 0.743083 0.69565217 0.56916996\n 0.63241107 0.69565217 0.55731225 0.6284585 0.60869565 0.66798419\n 0.59683794 0.56126482 0.62055336 0.65612648 0.54150198 0.46245059\n 0.62055336 0.54545455 0.54545455 0.68379447 0.59683794 0.50988142\n 0.81422925 0.65217391 0.60869565 0.66798419 0.56126482 0.57312253\n 0.74703557 0.64822134 0.44664032 0.67588933 0.6284585 0.61264822\n 0.60474308 0.50197628 0.58498024 0.54545455 0.65612648 0.61660079\n 0.66007905 0.64822134 0.60869565 0.58893281 0.68774704 0.66403162\n 0.50988142 0.62055336 0.4743083 0.53754941 0.60869565 0.62055336\n 0.60869565 0.743083 0.43873518 0.6916996 0.71936759 0.61264822\n 0.59288538 0.49011858 0.58102767 0.5256917 0.59288538 0.45454545\n 0.49407115 0.55335968 0.49011858 0.64031621 0.6798419 0.54150198\n 0.59683794 0.67588933 0.56126482 0.60474308 0.45454545 0.61264822\n 0.56521739 0.48221344 0.40711462 0.68379447]\n\n\n\n\nAssign the probability and corresponding company to a new dataframe,\ndf_result = pd.DataFrame(sum(df_n > sma_20)/len(df_n), columns=['probability'])\ndf_result['company'] = df.columns\ndf_result = df_result.sort_values(by='probability', ascending=False).reset_index(drop=True)\ndf_result\n###\n probability company\n0 0.814229 FTNT\n1 0.774704 ASML\n2 0.747036 INTU\n3 0.747036 GOOGL\n4 0.743083 AVGO\n.. ... ...\n95 0.450593 BIIB\n96 0.446640 JD\n97 0.438735 PCAR\n98 0.418972 ATVI\n99 0.407115 ZM\n\n[100 rows x 2 columns]\n\n"
] | [
0
] | [] | [] | [
"pandas",
"python"
] | stackoverflow_0074668648_pandas_python.txt |
Q:
Optimizing gaussian heatmap generation
I have a set of 68 keypoints (size [68, 2]) that I am mapping to gaussian heatmaps. To do this, I have the following function:
def generate_gaussian(t, x, y, sigma=10):
"""
Generates a 2D Gaussian point at location x,y in tensor t.
x should be in range (-1, 1).
sigma is the standard deviation of the generated 2D Gaussian.
"""
h,w = t.shape
# Heatmap pixel per output pixel
mu_x = int(0.5 * (x + 1.) * w)
mu_y = int(0.5 * (y + 1.) * h)
tmp_size = sigma * 3
# Top-left
x1,y1 = int(mu_x - tmp_size), int(mu_y - tmp_size)
# Bottom right
x2, y2 = int(mu_x + tmp_size + 1), int(mu_y + tmp_size + 1)
if x1 >= w or y1 >= h or x2 < 0 or y2 < 0:
return t
size = 2 * tmp_size + 1
tx = np.arange(0, size, 1, np.float32)
ty = tx[:, np.newaxis]
x0 = y0 = size // 2
# The gaussian is not normalized, we want the center value to equal 1
g = torch.tensor(np.exp(- ((tx - x0) ** 2 + (ty - y0) ** 2) / (2 * sigma ** 2)))
# Determine the bounds of the source gaussian
g_x_min, g_x_max = max(0, -x1), min(x2, w) - x1
g_y_min, g_y_max = max(0, -y1), min(y2, h) - y1
# Image range
img_x_min, img_x_max = max(0, x1), min(x2, w)
img_y_min, img_y_max = max(0, y1), min(y2, h)
t[img_y_min:img_y_max, img_x_min:img_x_max] = \
g[g_y_min:g_y_max, g_x_min:g_x_max]
return t
def rescale(a, img_size):
# scale tensor to [-1, 1]
return 2 * a / img_size[0] - 1
My current code uses a for loop to compute the gaussian heatmap for each of the 68 keypoint coordinates, then stacks the resulting tensors to create a [68, H, W] tensor:
x_k1 = [generate_gaussian(torch.zeros(H, W), x, y) for x, y in rescale(kp1.numpy(), frame.shape)]
x_k1 = torch.stack(x_k1, dim=0)
However, this method is super slow. Is there some way that I can do this without a for loop?
Edit:
I tried @Cris Luengo's proposal to compute a 1D Gaussian:
def generate_gaussian1D(t, x, y, sigma=10):
h,w = t.shape
# Heatmap pixel per output pixel
mu_x = int(0.5 * (x + 1.) * w)
mu_y = int(0.5 * (y + 1.) * h)
tmp_size = sigma * 3
# Top-left
x1, y1 = int(mu_x - tmp_size), int(mu_y - tmp_size)
# Bottom right
x2, y2 = int(mu_x + tmp_size + 1), int(mu_y + tmp_size + 1)
if x1 >= w or y1 >= h or x2 < 0 or y2 < 0:
return t
size = 2 * tmp_size + 1
tx = np.arange(0, size, 1, np.float32)
ty = tx[:, np.newaxis]
x0 = y0 = size // 2
g = torch.tensor(np.exp(-np.power(tx - mu_x, 2.) / (2 * np.power(sigma, 2.))))
g = g * g[:, None]
g_x_min, g_x_max = max(0, -x1), min(x2, w) - x1
g_y_min, g_y_max = max(0, -y1), min(y2, h) - y1
img_x_min, img_x_max = max(0, x1), min(x2, w)
img_y_min, img_y_max = max(0, y1), min(y2, h)
t[img_y_min:img_y_max, img_x_min:img_x_max] = \
g[g_y_min:g_y_max, g_x_min:g_x_max]
return t
but my output ends up being an incomplete gaussian.
I'm not sure what I'm doing wrong. Any help would be appreciated.
A:
You generate an NxN array g with a Gaussian centered on its center pixel. N is computed such that it extends by 3*sigma from that center pixel. This is the fastest way to build such an array:
tmp_size = sigma * 3
tx = np.arange(1, tmp_size + 1, 1, np.float32)
g = np.exp(-(tx**2) / (2 * sigma**2))
g = np.concatenate((np.flip(g), [1], g))
g = g * g[:, None]
What we're doing here is computing half a 1D Gaussian. We don't even bother computing the value of the Gaussian for the middle pixel, which we know will be 1. We then build the full 1D Gaussian by flipping our half-Gaussian and concatenating. Finally, the 2D Gaussian is built by the outer product of the 1D Gaussian with itself.
We could shave a bit of extra time by building a quarter of the 2D Gaussian, then concatenating four rotated copies of it. But the difference in computational cost is not very large, and this is much simpler. Note that np.exp is the most expensive operation here by far, so just by minimizing how often we call it, we significantly reduce the computational cost.
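A quick sanity check (sigma=10 here is just an example value): the half-and-flip construction agrees with evaluating the Gaussian directly over the full grid:
import numpy as np

sigma = 10
tmp_size = sigma * 3

# Half-and-flip construction from above
tx = np.arange(1, tmp_size + 1, 1, np.float32)
g = np.exp(-(tx**2) / (2 * sigma**2))
g = np.concatenate((np.flip(g), [1], g))
g = g * g[:, None]

# Direct evaluation over the full (2*tmp_size+1)^2 grid, for comparison
size = 2 * tmp_size + 1
r = np.arange(size, dtype=np.float32) - size // 2
direct = np.exp(-(r**2 + r[:, None]**2) / (2 * sigma**2))

print(np.allclose(g, direct))  # True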
However, the best way to speed up the complete code is to compute the array g only once, rather than anew for each key point. Note how your sigma doesn't change, so all the arrays g that are computed are identical. If you compute it only once, it no longer matters which method you use to compute it, since this will be a minimal portion of the total program anyway.
You could, for example, have a global variable _gaussian to hold your array, and have your function compute it only the first time it is called. Or you could separate your function into two functions, one that constructs this array, and one that copies it into an image, and call them as follows:
g = create_gaussian(sigma=3)
x_k1 = [
copy_gaussian(torch.zeros(H, W), x, y, g)
for x, y in rescale(kp1.numpy(), frame.shape)
]
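One possible sketch of that split, reusing the clipping logic from the question; the function bodies below are an illustration, not a prescribed implementation:
import numpy as np
import torch

def create_gaussian(sigma=10):
    # Build the (6*sigma+1)^2 Gaussian once, using the half-and-flip trick above
    tmp_size = sigma * 3
    tx = np.arange(1, tmp_size + 1, 1, np.float32)
    g = np.exp(-(tx**2) / (2 * sigma**2))
    g = np.concatenate((np.flip(g), [1], g))
    return torch.from_numpy(g * g[:, None])

def copy_gaussian(t, x, y, g):
    # Paste the precomputed Gaussian g into tensor t, centered at (x, y) in (-1, 1) coords
    h, w = t.shape
    tmp_size = (g.shape[0] - 1) // 2
    mu_x = int(0.5 * (x + 1.) * w)
    mu_y = int(0.5 * (y + 1.) * h)
    x1, y1 = mu_x - tmp_size, mu_y - tmp_size
    x2, y2 = mu_x + tmp_size + 1, mu_y + tmp_size + 1
    if x1 >= w or y1 >= h or x2 < 0 or y2 < 0:
        return t  # Gaussian falls entirely outside the image
    t[max(0, y1):min(y2, h), max(0, x1):min(x2, w)] = \
        g[max(0, -y1):min(y2, h) - y1, max(0, -x1):min(x2, w) - x1]
    return t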
On the other hand, you're likely best off using existing functionality. For example, DIPlib has a function dip.DrawBandlimitedPoint() [disclosure: I'm an author] that adds a Gaussian blob to an image. Likely you'll find similar functions in other libraries.
| Optimizing gaussian heatmap generation | I have a set of 68 keypoints (size [68, 2]) that I am mapping to gaussian heatmaps. To do this, I have the following function:
import numpy as np
import torch

def generate_gaussian(t, x, y, sigma=10):
"""
Generates a 2D Gaussian point at location x,y in tensor t.
x should be in range (-1, 1).
sigma is the standard deviation of the generated 2D Gaussian.
"""
h,w = t.shape
# Heatmap pixel per output pixel
mu_x = int(0.5 * (x + 1.) * w)
mu_y = int(0.5 * (y + 1.) * h)
tmp_size = sigma * 3
# Top-left
x1,y1 = int(mu_x - tmp_size), int(mu_y - tmp_size)
# Bottom right
x2, y2 = int(mu_x + tmp_size + 1), int(mu_y + tmp_size + 1)
if x1 >= w or y1 >= h or x2 < 0 or y2 < 0:
return t
size = 2 * tmp_size + 1
tx = np.arange(0, size, 1, np.float32)
ty = tx[:, np.newaxis]
x0 = y0 = size // 2
# The gaussian is not normalized, we want the center value to equal 1
g = torch.tensor(np.exp(- ((tx - x0) ** 2 + (ty - y0) ** 2) / (2 * sigma ** 2)))
# Determine the bounds of the source gaussian
g_x_min, g_x_max = max(0, -x1), min(x2, w) - x1
g_y_min, g_y_max = max(0, -y1), min(y2, h) - y1
# Image range
img_x_min, img_x_max = max(0, x1), min(x2, w)
img_y_min, img_y_max = max(0, y1), min(y2, h)
t[img_y_min:img_y_max, img_x_min:img_x_max] = \
g[g_y_min:g_y_max, g_x_min:g_x_max]
return t
def rescale(a, img_size):
# scale tensor to [-1, 1]
return 2 * a / img_size[0] - 1
My current code uses a for loop to compute the gaussian heatmap for each of the 68 keypoint coordinates, then stacks the resulting tensors to create a [68, H, W] tensor:
x_k1 = [generate_gaussian(torch.zeros(H, W), x, y) for x, y in rescale(kp1.numpy(), frame.shape)]
x_k1 = torch.stack(x_k1, dim=0)
However, this method is super slow. Is there some way that I can do this without a for loop?
Edit:
I tried @Cris Luengo's proposal to compute a 1D Gaussian:
def generate_gaussian1D(t, x, y, sigma=10):
h,w = t.shape
# Heatmap pixel per output pixel
mu_x = int(0.5 * (x + 1.) * w)
mu_y = int(0.5 * (y + 1.) * h)
tmp_size = sigma * 3
# Top-left
x1, y1 = int(mu_x - tmp_size), int(mu_y - tmp_size)
# Bottom right
x2, y2 = int(mu_x + tmp_size + 1), int(mu_y + tmp_size + 1)
if x1 >= w or y1 >= h or x2 < 0 or y2 < 0:
return t
size = 2 * tmp_size + 1
tx = np.arange(0, size, 1, np.float32)
ty = tx[:, np.newaxis]
x0 = y0 = size // 2
g = torch.tensor(np.exp(-np.power(tx - mu_x, 2.) / (2 * np.power(sigma, 2.))))
g = g * g[:, None]
g_x_min, g_x_max = max(0, -x1), min(x2, w) - x1
g_y_min, g_y_max = max(0, -y1), min(y2, h) - y1
img_x_min, img_x_max = max(0, x1), min(x2, w)
img_y_min, img_y_max = max(0, y1), min(y2, h)
t[img_y_min:img_y_max, img_x_min:img_x_max] = \
g[g_y_min:g_y_max, g_x_min:g_x_max]
return t
but my output ends up being an incomplete gaussian.
I'm not sure what I'm doing wrong. Any help would be appreciated.
| [
"You generate an NxN array g with a Gaussian centered on its center pixel. N is computed such that it extends by 3*sigma from that center pixel. This is the fastest way to build such an array:\ntmp_size = sigma * 3\ntx = np.arange(1, tmp_size + 1, 1, np.float32)\ng = np.exp(-(tx**2) / (2 * sigma**2))\ng = np.concatenate((np.flip(g), [1], g))\ng = g * g[:, None]\n\nWhat we're doing here is compute half a 1D Gaussian. We don't even bother computing the value of the Gaussian for the middle pixel, which we know will be 1. We then build the full 1D Gaussian by flipping our half-Gaussian and concatenating. Finally, the 2D Gaussian is built by the outer product of the 1D Gaussian with itself.\nWe could shave a bit of extra time by building a quarter of the 2D Gaussian, then concatenating four rotated copies of it. But the difference in computational cost is not very large, and this is much simpler. Note that np.exp is the most expensive operation here by far, so just minimizing how often we call it we significantly reduce the computational cost.\n\nHowever, the best way to speed up the complete code is to compute the array g only once, rather than anew for each key point. Note how your sigma doesn't change, so all the arrays g that are computed are identical. If you compute it only once, it no longer matters which method you use to compute it, since this will be a minimal portion of the total program anyway.\nYou could, for example, have a global variable _gaussian to hold your array, and have your function compute it only the first time it is called. Or you could separate your function into two functions, one that constructs this array, and one that copies it into an image, and call them as follows:\ng = create_gaussian(sigma=3)\nx_k1 = [\n copy_gaussian(torch.zeros(H, W), x, y, g)\n for x, y in rescale(kp1.numpy(), frame.shape)\n]\n\n\nOn the other hand, you're likely best off using existing functionality. For example, DIPlib has a function dip.DrawBandlimitedPoint() [disclosure: I'm an author] that adds a Gaussian blob to an image. Likely you'll find similar functions in other libraries.\n"
] | [
0
] | [] | [] | [
"computer_vision",
"python",
"pytorch"
] | stackoverflow_0074666177_computer_vision_python_pytorch.txt |