Dataset columns (name: dtype, min to max):
Web Development: int64, 0 to 1
Data Science and Machine Learning: int64, 0 to 1
Question: stringlengths, 28 to 6.1k
is_accepted: bool, 2 classes
Q_Id: int64, 337 to 51.9M
Score: float64, -1 to 1.2
Other: int64, 0 to 1
Database and SQL: int64, 0 to 1
Users Score: int64, -8 to 412
Answer: stringlengths, 14 to 7k
Python Basics and Environment: int64, 0 to 1
ViewCount: int64, 13 to 1.34M
System Administration and DevOps: int64, 0 to 1
Q_Score: int64, 0 to 1.53k
CreationDate: stringlengths, 23 to 23
Tags: stringlengths, 6 to 90
Title: stringlengths, 15 to 149
Networking and APIs: int64, 1 to 1
Available Count: int64, 1 to 12
AnswerCount: int64, 1 to 28
A_Id: int64, 635 to 72.5M
GUI and Desktop Applications: int64, 0 to 1
1
0
I want to crawl a webpage (news) and get only the latest links. I have crawler code which gets all the links from a website, takes about 2-3 hours to collect around 30,000 links, and stores them in a db. The next time I run the crawler, I want only the new links to be inserted. I know I can filter before inserting into the db, but I would rather the crawler fetch only new links instead of crawling the old links (basically the entire website) again. Is it possible to do something like that?
false
49,899,506
0.099668
0
0
1
You need some sort of cache. One solution that comes to mind is storing a local version of the website. When you want to add the new links, you can diff the new version against your locally stored version. Afterwards you can crawl only the diff...
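One way to sketch that cache idea: persist the set of URLs already crawled and skip them on the next run. The file name and function names here are hypothetical placeholders, not part of the original code.

```python
def load_seen(path="seen_links.txt"):
    """Load previously crawled URLs from disk (empty set on first run)."""
    try:
        with open(path) as f:
            return set(line.strip() for line in f)
    except FileNotFoundError:
        return set()

def filter_new(links, seen):
    """Return only the links that have not been crawled before."""
    return [url for url in links if url not in seen]
```

With this, the crawler still has to discover links, but only new ones are followed and inserted into the db.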
0
51
0
0
2018-04-18T12:16:00.000
python,web-scraping,web-crawler
Is there a way to crawl a webpage in Python where the crawler should fetch only the new links?
1
1
2
49,899,582
0
0
0
In influxdb 1.5, the /write path can accept multiple points in a POST request. What is a reasonable maximum payload size for this? 100 points? 1,000? 10,000? More?
false
49,908,395
0.099668
0
0
1
I had some problems with 25,000 and more points. The points were written by a little Python script out of a pandas dataframe. The code was close to the example from Influx (dataframe to InfluxDB with Python). It did not matter how many rows and columns were present; the error was reproducible based on the total number of points to be written. It is better to stay below 20,000 points per transfer to avoid exceptions.
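A simple way to stay under that ceiling is to write in chunks. A sketch; the 20,000 figure is the empirical value from above, not a documented InfluxDB limit:

```python
def chunked(points, size=20000):
    """Yield successive batches of at most `size` points."""
    for i in range(0, len(points), size):
        yield points[i:i + size]

# for batch in chunked(all_points):
#     client.write_points(batch)  # influxdb-python client, assumed configured
```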
0
1,591
0
1
2018-04-18T20:20:00.000
influxdb,influxdb-python
What's the maximum number of points I should write to influxdb in a single request?
1
1
2
53,751,719
0
0
0
I'm getting ModuleNotFoundError: No module named 'lxml' while trying to run a Python script on macOS. I've installed Python in /usr/local/bin. All the libraries are also installed successfully. I tried setting PATH to include the location where the libraries are installed, but it didn't help. Please advise.
true
49,912,414
1.2
0
0
0
This is fixed by setting the PYTHONPATH variable in OS X.
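To illustrate why this works: directories listed in PYTHONPATH are added to sys.path of any interpreter that inherits the variable, which is how Python finds packages installed outside the default site-packages. The directory name below is a placeholder.

```python
import os
import subprocess
import sys

# Launch a child interpreter with PYTHONPATH set and check that the
# directory shows up on its sys.path ("/tmp/mylibs" is a placeholder).
env = dict(os.environ, PYTHONPATH="/tmp/mylibs")
out = subprocess.run(
    [sys.executable, "-c", "import sys; print(sys.path)"],
    env=env, capture_output=True, text=True,
).stdout
found = "/tmp/mylibs" in out
```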
1
358
1
0
2018-04-19T03:50:00.000
python,macos
ModuleNotFoundError in Python3.6 Mac OS
1
1
1
49,923,516
0
0
0
I want to crawl IMDB and download the trailers of movies (either from YouTube or IMDB) that fit some criteria (e.g.: released this year, with a rating above 2). I want to do this in Python - I saw that there were packages for crawling IMDB and downloading YouTube videos. The thing is, my current plan is to crawl IMDB and then search youtube for '$movie_name' + 'trailer' and hope that the top result is the trailer, and then download it. Still, this seems a bit convoluted and I was wondering if there was perhaps an easier way. Any help would be appreciated.
true
49,957,297
1.2
1
0
0
There is no easier way. I doubt IMDB allows people to scrape their website freely, so your IP is probably going to get blacklisted, and to counter that you'll need proxies. Good luck and scrape respectfully. EDIT: Please take a look at @pds's answer below. My answer is no longer valid.
0
1,369
0
0
2018-04-21T15:23:00.000
python,youtube,web-crawler
Crawling IMDB for movie trailers?
1
1
2
49,957,379
0
0
0
Can we use two Amazon Lambda functions for one Lex bot if one Lambda function is in Python and the other in Node.js?
false
49,980,575
0
1
0
0
You can have two Lambda functions for two different intents. You cannot have two Lambda functions for the same intent.
1
506
0
0
2018-04-23T11:59:00.000
python,machine-learning,aws-lambda,chatbot,amazon-lex
Two amazon lambda functions for one lex bot
1
1
2
49,984,458
0
0
1
Trying to import 200 contacts from a CSV file to Telegram using Python 3 code. It works for the first 50 contacts, then stops and shows the following: telethon.errors.rpc_error_list.FloodWaitError: A wait of 101 seconds is required. Any idea how I can import the whole list without waiting? Thanks!
false
50,012,489
0
1
0
0
You cannot import a large number of people sequentially; Telegram will flag you as spam. As a result, you must sleep between your requests.
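A sketch of that back-off, assuming Telethon's FloodWaitError (which carries a .seconds attribute telling you how long to wait); the helper is written generically so the error class is passed in:

```python
import time

def with_flood_wait(func, flood_error, max_retries=3):
    """Call func(); on a flood-wait error, sleep for the number of
    seconds the server requested, then retry."""
    for _ in range(max_retries):
        try:
            return func()
        except flood_error as e:
            time.sleep(getattr(e, "seconds", 1))
    raise RuntimeError("still rate-limited after retries")
```

You would wrap each contact-import call in with_flood_wait rather than importing the whole list in one burst.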
0
287
0
0
2018-04-25T00:38:00.000
python,csv,telegram,telethon
can't import more than 50 contacts from csv file to telegram using Python3
1
1
1
50,310,718
0
1
0
Trying to make a script to scrape one or two articles (article URLs only) from different websites. I was able to make a Python script that uses BeautifulSoup to get a website's HTML, find the website's navbar menu via its class name, and loop through each website section. The problem is that each website has a different class name or XPath for the navbar menu and its sections. Is there a way to make the script work for multiple websites with as little human intervention as possible? Any suggestions are more than welcome. Thanks
true
50,018,551
1.2
0
0
1
Did it. I only needed Python and Selenium, an XPath for the navbar elements of each website and another XPath for all types of articles on the different website pages. I saved everything in a database and the rest is just customized for our specific needs. It wasn't that complicated in the end; thanks for the help <3
0
77
0
0
2018-04-25T09:23:00.000
python,web-scraping,beautifulsoup,scrape,scraper
Is it possible to automatically scrape articles from websites - Python & Beautiful Soup
1
1
1
51,704,866
0
1
0
I am trying to make a simple HTTP request to a server inside my company, from a dev server. I figured out that depending on the origin/destination server, I might or might not be forced to use the qualified name of the destination server, like srvdestination.com.company.world instead of just srvdestination. I am OK with this, but I don't understand why my DB connection works. Say I have srvorigin. To make an HTTP request, I must use the qualified name srvdestination.com.company.world. However, for the database connection, a connection string with the unqualified name is enough: psycopg.connect(host='srvdestination', ...). I understand that the protocols are different, but how does psycopg2 resolve the real name?
false
50,018,843
0
0
0
0
First, it all depends on how the name resolution subsystem of your OS is configured. If you are on Unix (you did not specify), this is governed by /etc/resolv.conf. There you can provide the OS with a search list: if a name does not have "enough" dots (the number is configurable), a suffix is appended and resolution is retried. The library you use to make the HTTP request may not query the OS for name resolution and may do its DNS resolution itself. In that case, it can only work with the information you give it (though it could also re-use the OS /etc/resolv.conf and the information in it), hence the need to use the full name. On the contrary, psycopg2 may use the OS resolution mechanism and hence handle "short" names just fine. Both libraries should have documentation on how they handle hostnames... otherwise you need to study their source code. I guess psycopg2 is a wrapper around the standard libpq library, written in C if I am not mistaken, which certainly uses the standard OS resolution process. I can understand the curiosity around this difference, but my advice is to keep short names when you type commands in the shell and the like (and even there it can be a problem), but always use FQDNs (Fully Qualified Domain Names) in your programs and configuration files. You will avoid a lot of problems.
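To see what the OS resolver does with a short name from Python, the standard library can help; a small sketch ("localhost" is used here only because it resolves everywhere):

```python
import socket

# getfqdn asks the OS resolver (which applies the /etc/resolv.conf
# search list on Unix) to expand a short name into a full one.
fqdn = socket.getfqdn("localhost")

# gethostbyname returns only the first address, which can explain
# seeing a different IP than expected.
addr = socket.gethostbyname("localhost")
```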
0
134
0
0
2018-04-25T09:37:00.000
python,dns,psycopg2
Name resolving in http requests
1
1
1
50,476,805
0
0
0
I am using tweepy to get tweets pertaining to a certain hashtag(s) and then I send them to a certain black box for some processing. However, tweets containing any URL should not be sent. What would be the most appropriate way of removing any such tweets?
false
50,044,690
0.321513
1
0
5
In your query, add -filter:links. This will exclude tweets containing URLs.
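For example (the hashtag is illustrative), with a client-side fallback that drops any tweet whose text still contains a URL:

```python
import re

# Twitter search operator: exclude tweets that contain links.
query = "#python -filter:links"
# results = api.search(q=query)  # tweepy call, client assumed configured

URL_RE = re.compile(r"https?://\S+")

def drop_url_tweets(texts):
    """Client-side fallback: remove any tweet text containing a URL."""
    return [t for t in texts if not URL_RE.search(t)]
```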
0
1,195
0
1
2018-04-26T13:47:00.000
python,twitter,tweepy
How do I filter out tweets containing any URL?
1
1
3
50,048,998
0
1
0
I was wondering if there was a way to create an instance without a Key-Pair for testing purposes.
false
50,047,929
0
1
0
0
You can create an instance without a key pair; however, you will not be able to SSH into it. Alternatively, you can start it with the SSM agent installed and running and use the EC2 SSM service to send shell commands.
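In boto3 a key pair is only attached if you pass KeyName to run_instances, so omitting it is all that's needed. A sketch; the AMI id and instance type are placeholders, and the actual AWS call is left commented out:

```python
# import boto3
# ec2 = boto3.client("ec2")

params = {
    "ImageId": "ami-12345678",   # placeholder AMI id
    "InstanceType": "t2.micro",
    "MinCount": 1,
    "MaxCount": 1,
    # no "KeyName" entry: the instance launches without a key pair
}
# ec2.run_instances(**params)
```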
0
225
0
0
2018-04-26T16:28:00.000
python,boto3
Boto3 Create Instance without a Key-Pair
1
1
1
50,048,241
0
0
0
I'm aware that I can get the address the email was sent from, but I wonder if I can get the user name of the sender too. I searched the email module documentation but I didn't find anything about it.
false
50,063,795
0
1
0
0
Short answer: no, you can't. Username of the sender remains between the SMTP server and the sender; it's never included in the data sent outside, unless the sender explicitly typed it into the email text. Note that there can be several hops between the originating SMTP server and the receiving SMTP server. IMAP servers are used to access received mail; they have no idea how it was sent.
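What is available is the display name in the From: header, which the stdlib can parse; note this is whatever the sender chose to put there, not their account name:

```python
from email.utils import parseaddr

# parseaddr splits a From: header value into (display name, address).
display_name, addr = parseaddr("Robert Paulson <rp@example.com>")
```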
0
24
0
0
2018-04-27T13:47:00.000
python,email
Is there any way I can get the user name from a MIME object in Python?
1
1
1
50,066,623
0
0
0
I need to make an app that uses Python to search for specific names on a website. For instance, I have to check if the string "Robert Paulson" is being used on a website. If it is, it returns True; else, False. Also, is there any library that can help me make that?
true
50,066,306
1.2
0
0
1
Since you have not attempted to build your application first, I am not going to post code for you. I will, however, suggest using: urllib2: a robust module for interacting with webpages, i.e. pulling back the HTML of a webpage. BeautifulSoup (from bs4 import BeautifulSoup): an awesome module to search HTML for what it is you're looking for. Good luck, my friend!
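A minimal sketch of the idea; note urllib2 is the Python 2 name, and in Python 3 the same opener lives in urllib.request. The URL and name are illustrative, and the fetch is left commented so the check itself stays testable:

```python
from urllib.request import urlopen  # Python 3 home of urllib2's opener

def page_contains(html, name):
    """True if `name` occurs anywhere in the page text."""
    return name in html

# html = urlopen("https://example.com").read().decode("utf-8", "replace")
# print(page_contains(html, "Robert Paulson"))
```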
0
469
0
2
2018-04-27T16:05:00.000
python,string,url,web
Finding specific names on a website using Python
1
1
2
50,066,699
0
0
0
I'm working on JSON Web Tokens and wanted to reproduce them using Python, but I'm struggling with how to calculate the HMAC_SHA256 of the texts using a public certificate (pem file) as a key. Does anyone know how I can accomplish that? Thanks
false
50,110,748
0.197375
1
0
2
In case anyone finds this question: the answer provided by the original poster works, but the idea is wrong. You don't use RSA keys with the HMAC method. The RSA key pair (public and private) is used for asymmetric algorithms, while HMAC is a symmetric algorithm. In HMAC, the two sides of the communication keep the same secret text (bytes) as the key. It can be a public_cert.pem as long as you keep it secret. But a public .pem is usually shared publicly, which makes it unsafe.
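For JWT's HS256 the key is just shared secret bytes, and the stdlib covers the computation; the secret and message below are illustrative:

```python
import hashlib
import hmac

secret = b"shared-secret"      # must stay private on both sides
message = b"header.payload"    # JWT signing input (base64url parts joined by '.')

# HMAC-SHA256 of the message under the shared secret.
signature = hmac.new(secret, message, hashlib.sha256).hexdigest()
```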
0
1,106
0
1
2018-05-01T02:43:00.000
python,jwt,hmac
How to calculate the HMAC(hsa256) of a text using a public certificate (.pem) as key
1
1
2
51,356,271
0
0
0
I am trying to run a project in Selenium with the Chrome driver, but I hadn't used it for a month (there was an update to Chrome). When I run the project, it opens the Chrome browser and then immediately closes. I am receiving the following error: Traceback (most recent call last): File "C:\Users\maorb\OneDrive\Desktop\Maor\python\serethd\tvil_arthur.py", line 27, in driver = webdriver.Chrome() File "C:\Program Files (x86)\Python36-32\lib\site-packages\selenium\webdriver\chrome\webdriver.py", line 67, in init desired_capabilities=desired_capabilities) File "C:\Program Files (x86)\Python36-32\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 91, in init self.start_session(desired_capabilities, browser_profile) File "C:\Program Files (x86)\Python36-32\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 173, in start_session 'desiredCapabilities': desired_capabilities, File "C:\Program Files (x86)\Python36-32\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 233, in execute self.error_handler.check_response(response) File "C:\Program Files (x86)\Python36-32\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 194, in check_response raise exception_class(message, screen, stacktrace) selenium.common.exceptions.WebDriverException: Message: session not created exception from disconnected: Unable to receive message from renderer (Session info: chrome=63.0.3239.108) (Driver info: chromedriver=2.36.540470 (e522d04694c7ebea4ba8821272dbef4f9b818c91),platform=Windows NT 10.0.16299 x86_64) I am using Chrome webdriver version 2.36 & Google Chrome version 63.0.3239.10. I tried using the latest Chrome & Chrome webdriver versions, but it just opens Chrome and does not run any of the code.
false
50,113,359
0
0
0
0
Usually people get this error when the script cannot find the chromedriver. Re-check the path you specified and pass an explicit executable_path.
0
1,266
0
0
2018-05-01T08:20:00.000
python,selenium,selenium-webdriver,selenium-chromedriver
python selenium not running with chrome driver & chrome version
1
1
1
50,156,979
0
0
0
This Twilio is really awesome. I just have a question to which I was not able to find a straightforward answer. All calls in Twilio (incoming and outgoing) take place through webhooks, i.e. you have to specify which URL an incoming call is redirected to. Suppose an incoming call to a Twilio number is redirected to the URL; the set of actions defined at that URL is provided as the response to the caller. Instead of this, is it possible for a real person to answer an incoming call to a Twilio number? I am already aware of the "dial" verb, where you can redirect the call to a different number. My exact question is whether it is possible to make a real person answer a call directly on the Twilio number itself.
false
50,150,568
0
0
0
0
Yes, a real person can answer a Twilio inbound call. The way you do that is to Forward that call to the real person's real phone number.
0
518
0
1
2018-05-03T08:39:00.000
python,twilio,twilio-api,twilio-twiml
Make real people answer calls in Twilio
1
1
2
50,224,548
0
0
0
I have a device which is sending packets with its own specific construction (header, data, CRC) through its Ethernet port. What I would like to do is communicate with this device using a Raspberry Pi and Python 3.x. I am already able to send raw Ethernet packets using the "socket" library; I've checked with Wireshark on my computer and everything seems to be transmitted as expected. But now I would like to read the incoming raw packets sent by the device and store them somewhere on my RPi to use later. I don't know how to use the "socket" library to read raw packets (I mean layer 2 packets); I only find tutorials for reading higher-level packets like TCP/IP. What I would like to do is something similar to what Wireshark does on my computer, that is to say, read all raw packets going through the Ethernet port. Thanks, Alban
false
50,151,655
0
1
0
0
Did you try using the ettercap package (ettercap-graphical)? It should be available via apt. Alternatively you can try tcpdump, or even check iptables.
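The question itself (reading layer 2 frames) can also be done with the stdlib alone on Linux via an AF_PACKET raw socket. A sketch; this requires root, is Linux-only, and the interface name is a placeholder:

```python
import socket

ETH_P_ALL = 0x0003  # from linux/if_ether.h: capture all protocols

def open_raw(interface="eth0"):
    """Open a layer-2 raw socket bound to one interface (root, Linux only)."""
    s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.ntohs(ETH_P_ALL))
    s.bind((interface, 0))
    return s

# s = open_raw()
# frame, addr = s.recvfrom(65535)  # one raw Ethernet frame, header included
```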
0
914
0
2
2018-05-03T09:34:00.000
python,linux,sockets,raspberry-pi,ethernet
Read raw ethernet packet using python on Raspberry
1
1
1
50,182,669
0
1
0
I am using a server (server_name.corp.com) inside a corporate company. On the server I am running a Flask server listening on 0.0.0.0:5000. The servers are not exposed to the outside world but are accessible via VPN. Now, when I run host server_name.corp.com on the box I get some ip1 (10.*.*.*). When I run ifconfig on the box it gives me ip2 (10.*.*.*). Also, if I run ping server_name.corp.com on the same box I get ip2. I can SSH into the server with ip1 but not ip2. I am able to access the Flask server at ip1:5000 but not at ip2:5000. I am not into networking, so I'm fully confused about why there are 2 different IPs and why I can access ip1:5000 from a browser but not ip2:5000. Also, what is the equivalent of the host command in Python (how do I get ip1 from Python)? I am using socket.gethostbyname('server_name.corp.com'), which gives me ip2.
false
50,166,145
0
0
0
0
The network status isn't quite clear from your statements. I can only say that if you want to get ip1 in Python, you could use the standard library module subprocess, which is commonly used to execute OS commands (see subprocess.Popen).
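A sketch of both routes: shelling out to the host command (assumed to be installed), and a pure-Python alternative that returns all addresses the resolver knows rather than just the first one. "localhost" is used below only because it resolves everywhere:

```python
import socket
import subprocess

def resolve_with_host(name):
    """Shell out to the `host` command and return its raw output."""
    return subprocess.run(["host", name], capture_output=True, text=True).stdout

# print(resolve_with_host("server_name.corp.com"))

# Pure-Python alternative: all IPv4 addresses, not only the first.
hostname, aliases, addresses = socket.gethostbyname_ex("localhost")
```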
0
84
0
0
2018-05-04T02:07:00.000
python,linux,networking,server,ip
Host command and ifconfig giving different ips
1
1
2
50,166,912
0
0
0
I tried to import socketserver and it asked me to install it, so I went with the command pip install socketserver, and it says: "Could not find a version that satisfies the requirement socketserver (from versions:) No matching distribution found for socketserver". Any sort of help would be appreciated.
false
50,211,310
0.53705
0
0
3
socketserver is a standard library module, so you don't need to install it. It looks like you are using Python 2, so use SocketServer; in Python 3 it was renamed to socketserver.
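If the same code must run on both versions, a version-agnostic import covers it; no pip install is involved either way:

```python
# The module ships with the standard library under both spellings.
try:
    import socketserver                   # Python 3
except ImportError:
    import SocketServer as socketserver   # Python 2
```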
1
5,929
0
2
2018-05-07T09:48:00.000
python,python-3.x,python-2.7,websocket,serversocket
No matching distribution found for socketserver
1
1
1
50,212,231
0
0
0
I've built server-client programs before (both sides built in Python so far). Recently I started building an app using Swift, and my goal is to add a backend to my apps using Python (my app is a chat app). I searched the Internet for tutorials, and I only saw two options to communicate between the server side and a mobile application. The first is to create a (REST) API (request-response); I can't use this solution because I want real-time chat. The second option was WebSockets (Socket.IO). So my question is: why not use plain socket technology (like I used when it was a Python server side talking to a Python client side -> import socket), rather than sockets over the web?
true
50,216,417
1.2
0
0
3
The following features you get if you use Socket.IO or socketcluster.io (which is developed on top of Socket.IO): scalability, as it will scale horizontally by adding more nodes (scale-out) and linearly (scale-up); reduced payload size, as the message payload is compressed; authorisation via middleware functions; and automatic reconnects if the connection drops. If you want to use your own implementation, then you have to take care of the above features and solve the problems that arise when the user base increases.
0
1,859
0
1
2018-05-07T14:17:00.000
python,swift,sockets,websocket,socket.io
Why Use Socket IO and not just Socket?
1
1
2
50,216,729
0
0
0
I asked a similar questions some days ago but it was a bit unclear so I deleted it and made this new one here. I have a project that fetches market data from cryptocurrency exchanges (Binance, Kraken, Poloniex, etc...). I want to be able to add additional exchanges while the project is up and running. For example I am pulling data every 10 seconds from Binance and Poloniex but now I want to add support for Kraken. How can I keep fetching data from the other two exchanges (add Kraken without restarting the program). I currently have 2 solutions in mind. Start the client that is fetching the data as a new process for each exchange Use importlib.import_module() to load new modules and handle every exchange in the same process (using asyncio) Also, what if I want to add functionality like fetching data from another API endpoint. Method 1 would probably require a restart, with method 2 I could reload all modules and update the class instances between the fetch calls. But I am unsure about the side effects this can cause. Maybe there is a default way how such a project is implemented?
false
50,223,336
0
0
0
0
I would consider designing a base class, named something like CryptoCurrencyExchangeParser. This class would have methods that allowed client code to extract data in a standard way, no matter which exchange it came from. Each specific exchange would then be a subclass, containing methods (possibly defined as abstract in the base class) to access and parse the data. Then I would write a factory function to load one parser, probably specified by a string. This factory function would use the python import machinery to find the code for the requested parser. The main program would keep a list of the active parsers. Such a design would mean that all of the parsers used exactly the same mechanism, whether they were loaded on program start or ten years later. Whether to use multiprocessing, multithreading or some other approach would be factored out as a separate design decision.
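A sketch of the design described above. The package and module layout (exchange_parsers.kraken etc.) is hypothetical, and the factory assumes each exchange module exposes a class named Parser:

```python
import importlib

class CryptoCurrencyExchangeParser:
    """Base class: one uniform interface over any exchange."""
    def fetch(self):
        raise NotImplementedError

def load_parser(name, package="exchange_parsers"):
    """Import `package.name` at runtime and instantiate its Parser class.

    Works identically whether called at startup or while the program
    is already running, which is the point of the design.
    """
    module = importlib.import_module(f"{package}.{name}")
    return module.Parser()

active_parsers = []  # the main program's list of live parsers
# active_parsers.append(load_parser("kraken"))  # added without a restart
```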
0
51
0
0
2018-05-07T22:26:00.000
python,api,fetch,python-asyncio,cryptocurrency
Python dynamically update/add/remove modules
1
1
1
50,223,595
0
0
0
I'm working with Python and Selenium to do some automation in the office, and I need to fill in an "upload file" dialog box (a windows "open" dialog box), which was invoked from a site using a headless chrome browser. Does anyone have any idea on how this could be done? If I wasn't using a headless browser, Pywinauto could be used with a line similar to the following, for example, but this doesn't appear to be an option in headless chrome: app.pane.open.ComboBox.Edit.type_keys(uploadfilename + "{ENTER}") Thank you in advance!
false
50,223,735
0
0
0
0
This turned out to not be possible. I ended up running the code on a VM and setting a registry key to allow automation to be run while the VM was minimized, disconnected, or otherwise not being interacted with by users.
0
533
0
1
2018-05-07T23:22:00.000
python,google-chrome,selenium,automation,headless
How can you fill in an open dialog box in headless chrome in Python and Selenium?
1
1
1
50,398,443
0
1
0
I have a virtualenv environment running python 3.5 Today, when I booted up my MacBook, I found myself unable to install python packages for my Django project. I get the following error: Could not fetch URL <package URL>: There was a problem confirming the ssl certificate: [SSL: TLSV1_ALERT_PROTOCOL_VERSION] tlsv1 alert protocol version (_ssl.c:646) - skipping I gather that TLS 1.0 has been discontinued, but from what I understand, newer versions of Python should be using TLS1.2, correct? Even outside of my environment, running pip3 trips the same error. I've updated to the latest version of Sierra and have updated Xcode as well. Does anyone know how to resolve this?
true
50,284,838
1.2
0
0
1
Here is the fix: run curl https://bootstrap.pypa.io/get-pip.py | python from within the appropriate virtual environment.
1
105
0
0
2018-05-11T04:04:00.000
python,django,macos
Python - Enable TLS1.2 on OSX
1
1
1
50,379,383
0
0
0
I've recently started freelance python programming, and was hired to write a script that scraped certain info online (nothing nefarious, just checking how often keywords appear in search results). I wrote this script with Selenium, and now that it's done, I'm not quite sure how to prepare it to run on the client's machine. Selenium requires a path to your chromedriver file. Am I just going to have to compile the py file as an exe and accept the path to his chromedriver as an argument, then show him how to download chromedriver and how to write the path? EDIT: Just actually had a thought while typing this out. Would it work if I sent the client a folder including a chromedriver.exe inside of said folder, so the path was always consistent?
false
50,301,263
0
0
0
0
Option 1) Deliver a Docker image, if the customer does not need to watch the browser during runs and can set up a Docker environment. The Docker image should include: Python; the dependencies for running your script, like selenium; a headless Chrome browser and a compatible Chrome webdriver binary; and your script (put it on GitHub and fetch it when the container starts, so the customer always gets your latest code). This approach's benefits: you only need to focus on the script (bug fixes and improvements) after delivery, and the customer always executes the same Docker command. Option 2) Deliver a shell script that does most of the setup automatically. It should: install Python (or leave that for the customer); install the Selenium library and anything else needed; install the latest Chrome webdriver binary (which is backward compatible); fetch your script from a code repo like GitHub, or simply deliver it as a packaged folder; and run your script. Option 3) Deliver your script and a user guide; the customer has to do much of the setup themselves. You can supply a config file along with your script for the customer to specify the Chrome driver binary path after they download it. Your script reads the path from this file, which is better than entering it on the command line every time.
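Option 3's config file can be handled with the stdlib. A sketch; the section name, key, and path are made up for illustration, and the file contents are inlined here instead of read from disk:

```python
import configparser

# In practice: config.read("settings.ini") next to the script.
config = configparser.ConfigParser()
config.read_string("[driver]\npath = C:\\tools\\chromedriver.exe\n")

driver_path = config["driver"]["path"]
# webdriver.Chrome(executable_path=driver_path)  # selenium call, commented
```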
0
582
0
0
2018-05-11T22:56:00.000
python,selenium
How to prepare Python Selenium project to be used on client's machine?
1
1
2
50,301,550
0
0
0
I hope this is not too opinionated because I'm looking for a finite answer. I'm developing a UDP server in Python 3.x that is utilizing asyncio. The server is going to work with a game engine to process pretty much every interaction a player makes in the game. Therefore, I have to authenticate the game client with the game server in some way as well ensure replay attacks do not happen on top of everything else that could harm or spoof the game server. When it comes to authenticating with UDP, I'm at a loss. My plan is to have both the game client and game server authenticate per user and game session. That means having something like a public key on the client end and a private key on the server end where the server can authenticate the client is approved. During that authentication, I am going to generate a symmetric key that the game server makes and passes it down to the game client so every packet sent can be verified with that symmetric key using HMAC. If not, all packets are dropped. My Question Is this the best approach? Creating some type of public key where something like a token is generated per session to ensure packets coming to the UDP server are from authenticated clients? My worry here is the keys are still stored in a Windows EXE file and still likely can be cracked and extracted or am I just too paranoid?
false
50,319,570
0
0
0
0
Going to answer my own question! No, it's not the best approach. The reason is that even with this process, my use of public/private keys was incorrect. I was not referring to encrypting data between the client and server; I was referring to creating a nonce while having some type of password on the client that matches the password on the server. This means the moment I send credentials from the client to the server, they are sent without encryption on the line or in the buffer. Once this authentication is accepted by the server, it hashes the password, stores it and then sends the nonce back to the client, also without encryption. The username, the password, and the nonce are vulnerable to man-in-the-middle attacks from both a third party and the client itself, which means an attacker can sniff the packets, grab the transmission and crack the user, or attempt replays since they have the nonce. Solution: my game client does not have SSL/TLS/DTLS options. This is why I actually posted the question, but did not clarify this. My server, as it's custom-written in Python, does have that ability. But I found a plugin for the game engine last night that allows AES encryption of the data being written to the buffer before the buffer is sent as a packet to the UDP server, which means I can encrypt the data in those packets in a secure way that the server can hopefully decrypt with a legit private key. Doing this will be a good option because it will likely be my only option. Now when a malicious user sniffs the traffic, they won't be able to see the credentials or the nonce. The only remaining issue is cracking the EXE, which I'm fine with for now.
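The per-packet check from the original plan (shared key plus nonce, constant-time compare) can still be sketched on its own; this covers only the verification step and assumes the key was already exchanged over an encrypted channel:

```python
import hashlib
import hmac

def sign(key, nonce, payload):
    """HMAC-SHA256 tag over nonce + payload."""
    return hmac.new(key, nonce + payload, hashlib.sha256).digest()

def verify(key, nonce, payload, tag, seen_nonces):
    """Drop packets with a replayed nonce or a bad tag."""
    if nonce in seen_nonces:
        return False  # replay attempt
    if not hmac.compare_digest(sign(key, nonce, payload), tag):
        return False  # forged or corrupted
    seen_nonces.add(nonce)
    return True
```

hmac.compare_digest is used instead of == so the comparison does not leak timing information.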
0
465
0
0
2018-05-13T18:53:00.000
python,ssl,udp
What is the best way to secure UDP server with HMAC?
1
1
1
50,331,500
0
0
0
I'm creating a discord bot, and I want to add it to group DM's so I can keep my server levels lower. However, you can't add people who aren't friends to group DM's. Is there a way to get a discord bot to accept friend requests?
true
50,337,904
1.2
1
0
6
Not possible. Bot accounts do not have permission to use Discord's relationships endpoint. This means no friending and no blocking, and by extension means no bots in group DMs.
0
7,073
0
2
2018-05-14T19:44:00.000
python,discord.py
How to make a discord bot add you as friend
1
1
1
50,338,842
0
0
0
Is there a way to transfer files from one S3 bucket to another S3 bucket using AWS Glue through a Python Script?
false
50,348,880
0
0
1
0
Create a crawler for your bucket; it will discover the schema of your data and add it as a table to the Glue Catalog. Use the job wizard and select your table as the source and a new table as the target. Glue will generate the code for you, where you have to select the destination of your data, specify the format, etc.
0
1,360
0
0
2018-05-15T11:18:00.000
python-3.x,amazon-s3,aws-glue
Transfer files within S3 buckets using AWS Glue
1
1
1
50,351,216
0
0
0
I am new to Mininet. I created a custom topology with 2 linear switches and 4 nodes. I need to write a Python module that accesses each node in that topology and does something, but I don't know how. Any ideas, please?
true
50,373,532
1.2
0
0
0
Try the following: s1.cmd('ifconfig s1 192.168.1.0') and h1.cmd('ifconfig h1 192.168.2.0')
0
155
0
0
2018-05-16T14:25:00.000
python,mininet,pox
How to access created nodes in a mininet topology?
1
1
1
50,385,916
0
0
0
I have a question regarding the Python API of Interactive Brokers. Can multiple asset and stock contracts be passed into reqMktData() function and obtain the last prices? (I can set the snapshots = TRUE in reqMktData to get the last price. You can assume that I have subscribed to the appropriate data services.) To put things in perspective, this is what I am trying to do: 1) Call reqMktData, get last prices for multiple assets. 2) Feed the data into my prediction engine, and do something 3) Go to step 1. When I contacted Interactive Brokers, they said: "Only one contract can be passed to reqMktData() at one time, so there is no bulk request feature in requesting real time data." Obviously one way to get around this is to do a loop but this is too slow. Another way to do this is through multithreading but this is a lot of work plus I can't afford the extra expense of a new computer. I am not interested in either one. Any suggestions?
false
50,374,498
0.197375
0
0
1
You can only specify 1 contract in each reqMktData call. There is no choice but to use a loop of some type. The speed shouldn't be an issue as you can make up to 50 requests per second, maybe even more for snapshots. The speed issue could be that you want too much data (> 50/s) or you're using an old version of the IB python api, check in connection.py for lock.acquire, I've deleted all of them. Also, if there has been no trade for >10 seconds, IB will wait for a trade before sending a snapshot. Test with active symbols. However, what you should do is request live streaming data by setting snapshot to false and just keep track of the last price in the stream. You can stream up to 100 tickers with the default minimums. You keep them separate by using unique ticker ids.
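The bookkeeping for the streaming approach (unique ticker ids mapped to last prices) can be sketched independently of the IB API itself. The tick-type constant and the callback shape below loosely mirror ibapi's tickPrice callback and should be treated as assumptions:

```python
LAST_PRICE = 4  # IB tick type for last trade price (assumed)

class LastPriceTracker:
    """Keeps the most recent last price per ticker id as ticks stream in."""
    def __init__(self):
        self.last = {}

    def on_tick(self, ticker_id, tick_type, price):
        # Ignore bid/ask/size ticks; record only last-price updates.
        if tick_type == LAST_PRICE:
            self.last[ticker_id] = price
```

The prediction engine then reads tracker.last between iterations instead of issuing fresh snapshot requests in a loop.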
0
2,183
0
6
2018-05-16T15:13:00.000
python-3.x,api,finance,quantitative-finance,interactive-brokers
Getting Multiple Last Price Quotes from Interactive Brokers's API
1
1
1
50,413,302
0
0
0
Title says it all really. I am running a program on a Linux EC2 instance with 4 threads. Three of these are listening to different websockets and the final one is webscraping and calling off a set of other functions when needed. Is it possible that if the GIL is owned by the 4th thread (i.e it is currently running its calculation through the single core) that websocket messages could be 'missed' by the threads listening? I am beginning to think it isn't possible, but have no understanding as to why. I have looked around, but to little avail.
true
50,379,706
1.2
0
0
0
Not really: even if your application is completely blocked, say by scheduling or by simply sleeping, the operating system will queue the incoming network messages. You might lose messages if, say, the TCP buffer starts to overflow, but I reckon that is unlikely in your case. You can test your idea by deliberately sleeping in the 4th thread for some time and seeing whether messages are dropped.
0
63
0
0
2018-05-16T20:52:00.000
python,multithreading,websocket,python-multithreading,gil
If I am listening to a websocket in one thread and running a function in another thread is it possible to miss messages
1
1
1
50,379,960
0
0
1
I need to perform image recognition thing on real time camera feed. The video is embedded in shtml hosted on a IP. How do I access the video and process things using openCV.
false
50,406,354
0
0
0
0
First, you need to find the actual path to the camera stream. It is an MJPEG or RTSP stream. Use the developer tools on that page to find the stream, something like http://ip/video.mjpg or rtsp://ip/live.sdp. When you find the stream URL, create a Python script which will use the stream, like capture = cv2.VideoCapture(stream_url). But, as noted, this is too broad, which means you are asking for a full tutorial. These are some directions; when you have some code, ask a question about issues with your code and you'll get an answer.
0
378
0
0
2018-05-18T07:49:00.000
python,opencv,camera,video-streaming
How do I capture a live IP Camera feed (eg: http://61.60.112.230/view/viewer_index.shtml?id=938427) to python application?
1
1
1
50,410,930
0
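As a sketch of the answer above: the helper below just builds a few common stream-URL guesses for an IP camera (these paths are typical defaults, not guaranteed for any particular model), and the commented lines show how the working URL would be handed to OpenCV's cv2.VideoCapture.

```python
def candidate_stream_urls(ip):
    """Build common MJPEG/RTSP stream URLs to try for an IP camera.

    These paths are typical defaults, not guaranteed for any given
    camera model; check the page with browser developer tools to
    find the real one.
    """
    return [
        f"http://{ip}/video.mjpg",
        f"http://{ip}/mjpg/video.mjpg",
        f"rtsp://{ip}/live.sdp",
    ]

# Once a working URL is known, OpenCV (the opencv-python package)
# can consume it directly:
#
#   import cv2
#   capture = cv2.VideoCapture(stream_url)
#   ok, frame = capture.read()  # `frame` is what you feed to recognition
```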
0
0
I am getting the API response from the URL as follows : I am not sure what encoding scheme is used. I am not sure how to check this and decode the string. I tried base64 decode and its not the one. To give you a background : I used Packet Capture App to capture the request sent by the APP and the response. So there was no API documentation. F/VDFb/tLplCXIgvPGlpppHawetuV1a5DtWOtmO1ZkQGN1sV8hZmieyIbMC7pjj4wh81IrsWFmOWJZBtmRmHnu/Y/c4lR9EXXAmO2h8hcB6W+ls6cE9S7GFun1lYw2EPBXzxJ+ST2HPaBMsjulnxTJjqftkSf/tOPJBXTQSjrxJqHpUAMfey5qpu8V/cZ/uFEhy5JmpNOZVtoKh+M3YPmKzc88XZS22+35It8HW7CXmzD1UHFE6tmNa3lfFfemqfQU+GMtga0pvU6c+0L1lJTY1HoH64Nf2u4xQ3nidT24ap6NUU4SOi3wg6VqLtSLaVwMWNuXcQmgoW5edj3L/ThGKGmq7ZVFKYO7InGhfxunNhTBbDB8QYxhDZ0GuyC+0pJXyGfcD0HItfeqnIJYqkr3uOaJVaGs//wyF2Q/RBivSvyXf9yRM8kvBIoNH/784XqIEwWnCH5Cqpn/Cvq//ktTz6Gs/atSfP+G5TdcNJ0hf3vDZ4Zle04vsDCGxREp83Wy/MIVN8apRpa5dJCFp0KC5SY3X5miO0Nq7UnGZkBl2zcVb9+ZKlVqgjr1hA1SCzQIArdae2rP14CqTZqP9HNs4DJGvYwYDwnDL4njf5rX9uzIJN5Xdm/+r6bN6I2/IZXRXIj2JU9x8VQFOlTCygR+rCVVkOUZNww0fF6MG3NCc\u003d Any help in this regard would be really helpful
false
50,427,125
0.197375
0
0
1
It is Base64, the error is the last character "\u003d" which is a UTF-16 "=", replace "\u003d"' with "=" and it decodes properly to binary. Trailing Base64 encoded "="characters are padding. Since it decode as Base64 and the trailing "=" character it sure seems to be Base64 encoded data. The Base64 decoded binary in a hex representation is: 17F54315BFED2E99425C882F3C6969A691DAC1EB6E5756B90ED58EB663B5664406375B15F2166689EC886CC0BBA638F8C21F3522BB1616639625906D9919879EEFD8FDCE2547D1175C098EDA1F21701E96FA5B3A704F52EC616E9F5958C3610F057CF127E493D873DA04CB23BA59F14C98EA7ED9127FFB4E3C90574D04A3AF126A1E950031F7B2E6AA6EF15FDC67FB85121CB9266A4D39956DA0A87E33760F98ACDCF3C5D94B6DBEDF922DF075BB0979B30F5507144EAD98D6B795F15F7A6A9F414F8632D81AD29BD4E9CFB42F59494D8D47A07EB835FDAEE314379E2753DB86A9E8D514E123A2DF083A56A2ED48B695C0C58DB977109A0A16E5E763DCBFD38462869AAED95452983BB2271A17F1BA73614C16C307C418C610D9D06BB20BED29257C867DC0F41C8B5F7AA9C8258AA4AF7B8E68955A1ACFFFC3217643F4418AF4AFC977FDC9133C92F048A0D1FFEFCE17A881305A7087E42AA99FF0AFABFFE4B53CFA1ACFDAB527CFF86E5375C349D217F7BC36786657B4E2FB03086C51129F375B2FCC21537C6A94696B9749085A74282E526375F99A23B436AED49C6664065DB37156FDF992A556A823AF5840D520B340802B75A7B6ACFD780AA4D9A8FF4736CE03246BD8C180F09C32F89E37F9AD7F6ECC824DE57766FFEAFA6CDE88DBF2195D15C88F6254F71F154053A54C2CA047EAC255590E519370C347C5E8C1B73427 To me that looks like random data which is what encrypted data looks like, if so without the decryption key you will not be able to further decrypt it. It is 512 bytes in length which is multiple of common encryption block sizes.
0
278
0
0
2018-05-19T16:04:00.000
python,json,encryption,base64,decode
How to check the Encoding scheme on the API response
1
1
1
50,427,755
0
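A minimal sketch of the decoding step described above, assuming the payload arrived as JSON text so that the trailing "=" shows up as the literal six characters backslash-u003d:

```python
import base64

def decode_payload(payload: str) -> bytes:
    """Decode a Base64 payload whose '=' padding was captured in its
    JSON-escaped form (the literal characters backslash-u003d)."""
    return base64.b64decode(payload.replace("\\u003d", "="))

# 'SGk=' is Base64 for b'Hi'; here the padding arrives escaped:
decoded = decode_payload("SGk\\u003d")
print(decoded, decoded.hex())
```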
1
0
I have built a chatbot using AWS Lex and lambda. I have a use case where in a user enters a question (For example: What is the sale of an item in a particular region). I want that once this question is asked, a html form/pop up appears that askes the user to select the value of region and item from dropdown menus and fills the slot of the question with the value selected by the user and then return a response. Can some one guide how can this be achieved? Thanks.
false
50,447,302
0
1
0
0
Lex has something called response cards where you can add all the possible values. These are called prompts. The user can simply select his/her choice and the slot gets filled. Lex response cards work in Facebook and Slack. In case of a custom channel, you will have to custom-develop the UI components.
0
224
0
0
2018-05-21T10:54:00.000
python-3.x,amazon-web-services,aws-lambda,chatbot,amazon-lex
Using aws lambda to render an html page in aws lex chatbot
1
1
1
50,462,554
0
0
0
I would like to write a script in Python to read a public Twitter profile. Specifically, I'd like to check for Tweets with images, and download those images (eventually, I'd like to add this as a cron job). I was looking into tweepy as a Twitter API wrapper. However, from what I understand, the Twitter API requires authentication even for actions that access public data - is that correct? Since all I need is to access a single public user timeline, going through the rigmarole of authenticating (and then having those credentials sitting on my computer in I'm not sure how secure a form) seems a little overkill. Are there other solutions out there (particularly Python-based) for reading public Twitter data?
true
50,459,620
1.2
1
0
3
Yes, Twitter does require, authentication to access any public/private data of user. You need to create an app on Twitter to access the data. The app is required to keep a check on the number of requests, etc. made by a particular client, to prevent any abuse. This authentication is a general process followed by other API providers as well and this is the only recommended way. Another advantage of creating a Twitter App is that other users can give permissions to your app and then you can access their private data as well such as DM, etc. Another approach is web-scraping, but I would consider it as unethical as twitter is already providing it's API. Also you would need to update your scraping script each time there is some front end change by the Twitter developers.
0
2,189
0
2
2018-05-22T04:07:00.000
python,twitter
Building script to access public Twitter data, without authentication
1
1
2
50,459,886
0
0
0
I am trying to retrieve fares between two different cities on a travel website. In the request headers, I see the following. 'x-api-idtoken':'null', 'x-api-key':'l7xx944d175ea25f4b9c903a583ea82a1c4c', Do they change/expire with time?
true
50,474,098
1.2
0
0
2
They could change over time and in many cases do. Access keys/tokens can be given a limited life span, but it is up to the service in question to control refresh, expiration and/or revocation. Given that the question does not contain an explicit reference to the web service involved, it is not possible to answer specifically for this case. Typically, the access management scheme is documented as part of the API.
0
834
0
0
2018-05-22T18:16:00.000
python,python-3.x,http-headers
x-api-key in request headers
1
1
1
52,833,525
0
0
0
I downloaded multiple modules (Discord API, cx_Freeze) (pip download, Windows 10) and now I wanted to use them. But when I want to import them, it says there isn’t any module. From my former Python using (before resetting computer) I‘ve added a pycache folder and it worked for one module. I‘m not able to reproduce it for other modules. What to do? I‘ve only one Python version (3.6.5) on PC. I‘ve checked the \site-packages folder and they‘re there.
true
50,546,451
1.2
1
0
0
If you are using python3 then try downloading the library using pip3 install libname, but if you are using python2 then install the library using pip2 install libname or just pip install libname. Try these commands and reply.
1
464
0
0
2018-05-26T19:45:00.000
python,python-3.x,api,module,python-import
Python: ModuleNotFound Error
1
1
2
50,546,470
0
0
0
I'm struggling to find a way of seeing the content of a xml file. I have done a lot of searching and the only progress I am making is to keep running my code without any results
false
50,550,467
0
0
0
0
Have a look at the BeautifulSoup package and use the lxml parser.
0
587
0
0
2018-05-27T08:45:00.000
python
How can i see the content of a xml file in python
1
1
2
50,550,515
0
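If installing BeautifulSoup is not an option, the standard library's xml.etree.ElementTree can also show a file's content; a minimal sketch (the sample XML and its tag names are made up for illustration):

```python
import xml.etree.ElementTree as ET

# A made-up XML snippet; for a file on disk use ET.parse("file.xml").getroot()
sample = "<books><book title='A'/><book title='B'/></books>"
root = ET.fromstring(sample)

# Walk the tree and pull out attribute values
titles = [book.get("title") for book in root.iter("book")]
print(root.tag, titles)
```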
1
0
I am trying to crawl some websites and while I am using headless chrome browser with selenium to render some HTLM that have embedded JS, I would also like to simply use requests, for the cases where there is no need for JS code rendering. Is there a way to know if the HTML needs to be rendered by a browser or if a simple requests.get() would give me the complete HTML content?
true
50,568,534
1.2
0
0
0
Any HTML content generated by script tags won't be retrieved by requests. The only way to know if a page needs to be rendered by a browser to generate its whole content is to check whether its HTML code has script tags. Still, if the information you are interested in is not generated by JS, requests.get() will serve you well.
0
44
0
1
2018-05-28T14:26:00.000
javascript,python,html,selenium,python-requests
How to know HTML page needs to be rendered by JS compiler?
1
1
1
55,472,306
0
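One rough way to automate the check described above, using only the standard library's html.parser: scan the fetched HTML for script tags. This is a heuristic, not a guarantee; a page can contain script tags whose output you don't actually need.

```python
from html.parser import HTMLParser

class ScriptDetector(HTMLParser):
    """Records whether any <script> tag appears in the HTML."""

    def __init__(self):
        super().__init__()
        self.has_script = False

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            self.has_script = True

def needs_js_rendering(html: str) -> bool:
    """Heuristic: pages with script tags *may* build content in JS."""
    detector = ScriptDetector()
    detector.feed(html)
    return detector.has_script
```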
1
0
I work with python and data mine some content which I categorize into different categories. Then I go to a specific webpage and submit manually the results. Is there a way to automate the process? I guess this is a "form-submit" thread but I haven't seen any relevant module in Python. Can you suggest me something?
false
50,589,351
0
0
0
0
If you want to make this automatic you have to see which params are sent in the form and make a request with those params to the endpoint, directly from your Python app, or find a package that simulates a browser and fills the form. But I think the correct way is making the request directly from your app.
0
64
0
0
2018-05-29T16:52:00.000
python,form-submit
How to fill textareas and select option (select tag) and hit submit (input tag) via python?
1
2
2
50,589,450
0
1
0
I work with python and data mine some content which I categorize into different categories. Then I go to a specific webpage and submit manually the results. Is there a way to automate the process? I guess this is a "form-submit" thread but I haven't seen any relevant module in Python. Can you suggest me something?
false
50,589,351
0
0
0
0
Selenium Webdriver is the most popular way to drive web pages, but Python also has beautifulsoup; Either library will work.
0
64
0
0
2018-05-29T16:52:00.000
python,form-submit
How to fill textareas and select option (select tag) and hit submit (input tag) via python?
1
2
2
50,589,446
0
0
0
I'm trying to make a Discord bot in Python that a user can request a unit every few minutes, and later ask the bot how many units they have. Would creating a google spreadsheet for the bot to write each user's number of units to be a good idea, or is there a better way to do this?
false
50,590,788
0
1
0
0
Using a database is the best option. If you're working with a small number of users and requests you could use something even simpler like a text file for ease of use, but I'd recommend a database. Easy to use database options include sqlite (use the sqlite3 python library) and MongoDB (I use the mongoengine python library for my Slack bot).
0
252
0
0
2018-05-29T18:31:00.000
python,python-3.x,discord.py
Discord bot with user specific counter
1
1
1
50,590,881
0
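A minimal sketch of the sqlite3 option mentioned above, with a per-user unit counter; the table and column names here are made up for illustration:

```python
import sqlite3

def open_db(path=":memory:"):
    """Open (or create) the counter database."""
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS units ("
        "user_id TEXT PRIMARY KEY, count INTEGER NOT NULL DEFAULT 0)"
    )
    return conn

def add_unit(conn, user_id):
    # Create the row on first sight of the user, then bump the counter.
    conn.execute("INSERT OR IGNORE INTO units (user_id, count) VALUES (?, 0)", (user_id,))
    conn.execute("UPDATE units SET count = count + 1 WHERE user_id = ?", (user_id,))
    conn.commit()

def get_units(conn, user_id):
    row = conn.execute("SELECT count FROM units WHERE user_id = ?", (user_id,)).fetchone()
    return row[0] if row else 0
```

In the bot, the request command would call add_unit and the query command would call get_units with the Discord user's id as the key.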
0
0
I have a list of S3 keys for the same bucket my_s3_bucket. What is the most efficient way to figure out which of those keys actually exist in aws S3. By efficient I mean with low latency and hopefully low network bandwidth usage. Note: the keys don't share the same prefix so filtering by a single prefix is not effective The two suboptimal approaches I can think of: Check the existence of each key, one-by-one List all keys in the bucket and check locally. This is not good if the total number of keys is large since listing the keys will still incur many network calls. Is there any better alternative?
true
50,638,573
1.2
1
0
2
To answer your question: there is no alternative exposed by the S3 API. Using multiple threads or asynchronous I/O are solid ways to reduce the real time required to make multiple requests, by doing them in parallel, as you mentioned. A further enhancement that might be worth considering would be to wrap this logic up in an AWS Lambda function that you could invoke with a bucket name and a list of object keys as arguments. Parallelize the bucket operations inside the Lambda function and return the results to the caller already parsed and interpreted, in one tidy response. This would put most of the bandwidth usage between the function and S3 on the AWS network within the region, which should be the fastest possible place for it to happen. Lambda functions are an excellent way to abstract away any AWS interaction that requires multiple API requests. This also allows your Lambda function to be written in a different language than the main project, if desired, because the language does not matter across that boundary -- it's just JSON crossing the border between the two. Some AWS interactions are easier to do (or to execute in complex series/parallel fashion) in some languages than in others, in my opinion, so for example, your function could be written in Node.JS even though your project is written in Python, and this would make no difference when it comes to invoking the function and using the response it generates.
0
208
0
1
2018-06-01T07:49:00.000
python,amazon-web-services,amazon-s3,boto3
What is the most efficient way in python of checking the existence of multiple s3 keys in the same bucket?
1
1
1
50,645,869
0
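The parallel approach mentioned above can be sketched with concurrent.futures; the actual existence check is injected as exists_fn, which with boto3 would wrap s3.head_object (not shown here, so the sketch stays self-contained):

```python
from concurrent.futures import ThreadPoolExecutor

def check_keys(keys, exists_fn, max_workers=20):
    """Run exists_fn(key) -> bool for many keys in parallel.

    With boto3, exists_fn would call
    s3.head_object(Bucket=bucket, Key=key) and map a 404 error to
    False; here it is a plain injected callable.
    """
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        results = list(pool.map(exists_fn, keys))
    return dict(zip(keys, results))
```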
0
0
Even after installing selenium using pip on Python 3.6.3, whenever I try to run a code with import selenium I get the message that ModuleNotFoundError: No module named 'selenium'. I usually use Anaconda Prompt and run my codes in Jupyter notebook, but I made the installation also in regular cmd. Does anyone have an idea about how to solve this?
false
50,660,585
0
0
0
0
I think you have both python 2.x and python 3.x installed on your system. When you do pip install selenium, the module gets installed for python 2.x. To install the module for python 3.x, use pip3 install selenium.
1
170
0
0
2018-06-02T19:47:00.000
python,selenium,anaconda,jupyter-notebook
Python not finding modules
1
1
2
50,660,627
0
0
0
I've started tinkering with python recently and I'm on my way to create my very first telegram bot mainly for managing my Raspberry Pi and a few things connected to it. The bot is done but I would like to send a message to all the users that have already interacted with the bot when it starts, basically saying something like "I'm ready!", but I haven't been able to find any information about it. Is there any specific method in the API already done to do this? Or should I create another file to store the chat_id from all the users and read it with python? Thank you all for your help!! Regards!
false
50,668,567
0.049958
1
0
1
You should save users' chat IDs in a database or file. After that, use a for loop to send_message one by one to all the users that you have in the database or file.
0
3,875
0
1
2018-06-03T16:31:00.000
python,telegram-bot,python-telegram-bot
Send a message to all users when the bot wake up
1
1
4
50,695,439
0
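A sketch of the loop suggested above; send_message is a stand-in for whatever call your bot library provides (e.g. bot.send_message in python-telegram-bot), and failures for individual users are skipped so one blocked chat doesn't stop the broadcast:

```python
def broadcast(chat_ids, send_message, text="I'm ready!"):
    """Send `text` to every stored chat id.

    send_message(chat_id, text) is a stand-in for the real API call.
    Failures for a single user (blocked bot, deleted account) are
    skipped so the loop keeps going.
    """
    delivered = []
    for chat_id in chat_ids:
        try:
            send_message(chat_id, text)
            delivered.append(chat_id)
        except Exception:
            continue
    return delivered
```

The chat ids themselves would be appended to the database or file the first time each user talks to the bot (e.g. in the /start handler).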
0
0
I ran python3 -m pip install -U discord.py but it only installed discord.py v0.16.x. How do I install the new discord.py rewrite v1.0? I uninstalled the old discord.py using pip uninstall discord.py and re-ran pip to install discord.py, only to get version v0.16.x again instead of the new v1.0 version.
false
50,686,388
0
0
0
0
If you're also having trouble with all the above, install Python using the Microsoft Store and use the commands above in Command Prompt.
1
46,947
0
13
2018-06-04T18:06:00.000
python,anaconda,discord,discord.py,discord.py-rewrite
How to install discord.py rewrite?
1
6
18
65,622,492
0
0
0
I ran python3 -m pip install -U discord.py but it only installed discord.py v0.16.x. How do I install the new discord.py rewrite v1.0? I uninstalled the old discord.py using pip uninstall discord.py and re-ran pip to install discord.py, only to get version v0.16.x again instead of the new v1.0 version.
false
50,686,388
0
0
0
0
pip install discord.py. Installing from the source [branch: master] is not recommended since it is in the testing phase (writing as of 5/30/2021). If you want to test out buttons then sure, go ahead and run pip install -U git+https://github.com/Rapptz/discord.py@master
1
46,947
0
13
2018-06-04T18:06:00.000
python,anaconda,discord,discord.py,discord.py-rewrite
How to install discord.py rewrite?
1
6
18
67,759,391
0
0
0
I ran python3 -m pip install -U discord.py but it only installed discord.py v0.16.x. How do I install the new discord.py rewrite v1.0? I uninstalled the old discord.py using pip uninstall discord.py and re-ran pip to install discord.py, only to get version v0.16.x again instead of the new v1.0 version.
false
50,686,388
0.022219
0
0
2
Open Command Prompt and type in; pip install discord.py or; pip install discord.py==1.0.1 and then if you want voice do; pip install discord.py[voice]
1
46,947
0
13
2018-06-04T18:06:00.000
python,anaconda,discord,discord.py,discord.py-rewrite
How to install discord.py rewrite?
1
6
18
56,053,192
0
0
0
I ran python3 -m pip install -U discord.py but it only installed discord.py v0.16.x. How do I install the new discord.py rewrite v1.0? I uninstalled the old discord.py using pip uninstall discord.py and re-ran pip to install discord.py, only to get version v0.16.x again instead of the new v1.0 version.
false
50,686,388
0
0
0
0
Open CMD and go to the folder of your discord bot eg: cd C:\Users\max\Desktop\DiscordBot Next type this in CMD: pip install discord.py That should work
1
46,947
0
13
2018-06-04T18:06:00.000
python,anaconda,discord,discord.py,discord.py-rewrite
How to install discord.py rewrite?
1
6
18
63,932,305
0
0
0
I ran python3 -m pip install -U discord.py but it only installed discord.py v0.16.x. How do I install the new discord.py rewrite v1.0? I uninstalled the old discord.py using pip uninstall discord.py and re-ran pip to install discord.py, only to get version v0.16.x again instead of the new v1.0 version.
false
50,686,388
0
0
0
0
Go to python.org and click on download python. Now, open and run it. Make sure you add python to the path. Click on Install. Once installed you can close it. Go to git-scm.com and click on Download for Windows. Once installed open it. Make sure Use git from windows command prompt is selected. Then after clicking on next on everything, click on install. Once install is finished, hit finish. Copy your script address. In my case, it was Local//Programs//Python//Python39//Scripts Open your command prompt and type cd Local//Programs//Python//Python39//Scripts. Paste your address there, mine will not work for you. Hit Enter. Then type py -3 -m pip install -U discord.py. Hit enter again. Once the install is finished close the command prompt. Now you are ready to go ;)
1
46,947
0
13
2018-06-04T18:06:00.000
python,anaconda,discord,discord.py,discord.py-rewrite
How to install discord.py rewrite?
1
6
18
64,515,264
0
0
0
I ran python3 -m pip install -U discord.py but it only installed discord.py v0.16.x. How do I install the new discord.py rewrite v1.0? I uninstalled the old discord.py using pip uninstall discord.py and re-ran pip to install discord.py, only to get version v0.16.x again instead of the new v1.0 version.
false
50,686,388
0
0
0
0
Easy, open Command Prompt and type "pip install discord.py". If you do that you're most probably also going to want "pip install requests". If the pip command doesn't work, open your Python installer and make sure to click on the add-to-environment-variables option! That's all, you're good to go! Use Visual Studio Code or Atom; they are the best I have used so far for my bot!
1
46,947
0
13
2018-06-04T18:06:00.000
python,anaconda,discord,discord.py,discord.py-rewrite
How to install discord.py rewrite?
1
6
18
62,136,173
0
0
0
I'm new comer of Selenium, and I can use selenium with Chromedriver to do basic auto-test now, the code works fine, but the problem is Chrome browser always update automatically at the backend, and code always fail to run after Chrome update. I know I need to download new chromedriver to solve this issue, but I wonder if there's any way to solve this issue without disabling chromebrowser update? tks. I'm using Windows 10 / Chrome Version 67 / Python 3.6.4 / Selenium 3.12.0
false
50,692,358
0
0
0
0
For me this resolved the issue: pip install --upgrade --force-reinstall chromedriver-binary-auto
0
32,246
0
10
2018-06-05T05:00:00.000
python,google-chrome,selenium,selenium-webdriver,selenium-chromedriver
How to work with a specific version of ChromeDriver while Chrome Browser gets updated automatically through Python selenium
1
2
8
70,550,827
0
0
0
I'm new comer of Selenium, and I can use selenium with Chromedriver to do basic auto-test now, the code works fine, but the problem is Chrome browser always update automatically at the backend, and code always fail to run after Chrome update. I know I need to download new chromedriver to solve this issue, but I wonder if there's any way to solve this issue without disabling chromebrowser update? tks. I'm using Windows 10 / Chrome Version 67 / Python 3.6.4 / Selenium 3.12.0
false
50,692,358
0
0
0
0
Maybe this will help you. I managed to use ChromeDriver version 96.0.4664.45 editing in Jupyter; I was using PyCharm before and it didn't respond.
0
32,246
0
10
2018-06-05T05:00:00.000
python,google-chrome,selenium,selenium-webdriver,selenium-chromedriver
How to work with a specific version of ChromeDriver while Chrome Browser gets updated automatically through Python selenium
1
2
8
70,526,550
0
0
0
Trying to switch from rabbitMQ to activeMQ keeping kombu library but my script just hang on. Does kombu supports activeMQ?
false
50,739,516
0
0
0
0
I've never heard of anyone using it with ActiveMQ, and given it is seemingly designed for RabbitMQ I'd guess not, as RabbitMQ is not based on an official AMQP version but on the draft AMQP 0.9 versions. ActiveMQ implements AMQP v1.0, which is the official AMQP version, and so that library would need to implement that in order to be compatible. The Apache Qpid project supplies a number of clients that can be used against ActiveMQ.
0
206
0
0
2018-06-07T10:59:00.000
python,activemq,amqp,kombu
Is it possible to use kombu as client library for activeMQ?
1
1
1
50,742,888
0
0
0
My influxdb measurement have 24 Field Keys and 5 tag keys. I try to do 'select last(cpu) from mymeasurement', and found result : When there is no client throwing data into it, it'll take around 2 seconds to got the result But when I run 95 client throwing data (per 5 seconds) into it, the query will take more than 10 seconds before it show the result. is it normal ? Note : My system is a Centos7 VM in xenserver with 4 vcore CPU and 8 GB ram, the top command show 30% cpu while that clients throw datas.
true
50,743,503
1.2
0
0
1
Some ideas: Check your vCPU configuration on other VMs running on the same host. Other VMs you might have that don't need the extra vCPUs should only be configured with one vCPU, for a latency boost. If your DB server requires 4 vCPUs and your host already has very little CPU% used during queries, you might want to check the storage and memory configurations of the VM in case your server is slow due to swap partition use, especially if your swap partition is located on a Virtual Disk over the network via iSCSI or NFS. It might also be a memory allocation issue within the VM and server application. If you have XenTools installed on the VM, try on a system without the XenTools installed to rule out latency issues related to the XenTools driver.
0
642
0
0
2018-06-07T14:14:00.000
influxdb,influxdb-python
influxDB query speed
1
1
1
50,922,084
0
0
0
I just got acquainted with the keyboard buttons for telegram bots using the .KeyboardButton from the Telegram API documentation but I have an issue; so far I've only been able to design the buttons such that the output after clicking on the button is the same as it's caption/placeholder which is really not helpful for what I want to do. I have commands set and all already so for example I want the output of a button labeled "Rules" to be /rules in order to initiate the command action instead of the bot output being "Rules". I'm working with Python although I'm open to anyone working on the same stuff in other languages.
false
50,755,566
-0.099668
0
0
-1
You may add a space character to the KeyboardButton text and then parse it from the message, or, if you don't like spaces appearing at the start, just add them to the end. That alone is not enough, so a soft hyphen might be used; its hex is \xC2\xAD (it doesn't show up, at least in my Telegram). That works for me; hope Mr. Durov won't change that. To clarify: the question is about keyboard_markup, which creates buttons below the text input area; they produce instant input, which prints the button caption in the chat, and the bot may parse those as commands (which normally start with a "/" character), so the question is how to distinguish them properly. This is not the recommended route (as it seems). With inline_markup, you create buttons that produce callbacks with custom data, so there will be no user input in the chat.
0
4,783
0
4
2018-06-08T07:31:00.000
python,telegram,telegram-bot,python-telegram-bot,php-telegram-bot
How to make telegram keyboard button issue commands?
1
1
2
66,487,402
0
0
0
What's the best approach in python to do something like what happens when you run jupyter notebook, in other words, run a server (for example, with http.server) on some available port if the default one isn't available? Is it common to just catch the error if starting the server fails and try a different port until it works?
false
50,770,485
0.197375
0
0
1
You can use port 0 - this will bind your server to some port that is currently known by the kernel to be available. However, that creates a service-discovery problem: how will your clients know which port number the server is listening on? If it's only you, it shouldn't be a big deal.
1
69
0
1
2018-06-09T02:49:00.000
python,server
find available port to run server
1
1
1
50,770,554
0
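A minimal sketch of the port-0 trick described above; note the returned port is only known to have been free at bind time, so another process could grab it before your server does -- ideally, keep the socket open and hand it to the server directly.

```python
import socket

def find_free_port(host="127.0.0.1"):
    """Bind to port 0 so the kernel picks a free port, then report it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind((host, 0))
        return s.getsockname()[1]

port = find_free_port()
print("would start the server on port", port)
```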
0
0
There is a website that claims to predict the approximate salary of an individual on the basis of the following criteria presented in the form of individual drop-down Age : 5 options Education : 3 Options Sex : 3 Options Work Experience : 4 Options Nationality: 12 Options On clicking the Submit button, the website gives a bunch of text as output on a new page with an estimate of the salary in numerals. So, there are technically 5*3*3*4*12 = 2160 data points. I want to get that and arrange it in an excel sheet. Then I would run a regression algorithm to guess the function this website has used. This is what I am looking forward to achieve through this exercise. This is entirely for learning purposes since I'm keen on learning these tools. But I don't know how to go about it? Any relevant tutorial, documentation, guide would help! I am programming in python and I'd love to use it to achieve this task! Thanks!
true
50,776,071
1.2
0
0
1
If you are uncomfortable asking them for the database as roganjosh suggested :) use Selenium. Write a script in Python that controls the WebDriver and repeatedly sends requests for all possible combinations. The script is pretty simple, just a nested loop for each type of parameter/drop-down. If you are sure that the values of each type do not depend on each other, check what request is sent to the server. If it is simply URL-encoded, like age=...&sex=...&..., then Selenium is not needed. Just generate such URLs for all possible combinations and call the server.
1
50
0
1
2018-06-09T15:59:00.000
python,selenium,selenium-webdriver,web-scraping,regression
How to write a python program that 'scrapes' the results from a website for all possible combinations chosen from the given drop down menus?
1
1
1
50,776,223
0
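A sketch of the nested-loop idea above using itertools.product instead of explicit nesting; the option values are placeholders, only the counts (5, 3, 3, 4, 12) come from the question:

```python
from itertools import product

# Placeholder option values; only the counts (5, 3, 3, 4, 12) are
# taken from the drop-downs described in the question.
ages = ["18-25", "26-35", "36-45", "46-55", "56+"]
education = ["school", "bachelor", "master"]
sex = ["male", "female", "other"]
experience = ["0-2", "3-5", "6-10", "10+"]
nationality = [f"country_{i}" for i in range(12)]

combos = list(product(ages, education, sex, experience, nationality))
print(len(combos))  # one request per combination

# If the form turns out to be plain URL-encoded, each combo maps to a
# query string such as (hypothetical parameter names):
#   f"?age={a}&edu={e}&sex={s}&exp={x}&nat={n}"
```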
0
0
I plan to install Python and Selenium on MacOS 10.12.1. What would be the better choice - Python 2.7 or 3.6.5? On windows I use 2.7, because I read that it works faster. But are there any proven obstacles while working with Python 3 and Selenium on Mac?
false
50,792,163
0
0
0
0
I have been using Python 3 for quite a long time now. There is not really a performance issue that is quite noticeable. I would suggest Python 3 or which ever is the newer one because as they keep developing they will be adding new things that you might need and remove some older vulnerabilities. I would suggest you upgrade to Python 3. There might be some getting used to the new syntax but it can be quickly come over as you keep developing.
1
31
0
0
2018-06-11T07:10:00.000
python,macos
MacOS Sierra 10.12.1 - selenium - which version of Python?
1
1
1
50,792,233
0
0
0
There is plenty of info on how to use what seems to be third-party packages that allow you to access your sFTP by inputting your credentials into these packages. My dilemma is this: How do I know that these third-party packages are not sharing my credentials with developers/etc? Thank you in advance for your input.
true
50,806,632
1.2
1
0
0
Thanks everyone for the comments. To distill it: unless you do a code review yourself or you get the sftp package from a verified vendor (i.e. packages made by Amazon for AWS), you cannot assume that these packages are "safe" and won't post your info to a third-party site.
0
53
0
0
2018-06-11T22:00:00.000
python,security,sftp,paramiko
Security of SFTP packages in Python
1
1
1
50,827,731
0
0
0
Can someone point me the right direction to where I can sync up a live video and audio stream? I know it sound simple but here is my issue: We have 2 computers streaming to a single computer across multiple networks (which can be up to hundreds of miles away). All three computers have their system clocks synchronized using NTP Video computer gathers video and streams UDP to the Display computer Audio computer gathers audio and also streams to the Display computer There is an application which accepts the audio stream. This application does two things (plays the audio over the speakers and sends network delay information to my application). I am not privileged to the method which they stream the audio. My application displays the video and two other tasks (which I haven't been able to figure out how to do yet). - I need to be able to determine the network delay on the video stream (ideally, it would be great to have a timestamp on the video stream from the Video computer which is related to that system clock so I can compare that timestamp to my own system clock). - I also need to delay the video display to allow it to be synced up with the audio. Everything I have found assumes that either the audio and video are being streamed from the same computer, or that the audio stream is being done by gstreamer so I could use some sync function. I am not privileged to the actual audio stream. I am only given the amount of time the audio was delayed getting there (network delay). So intermittently, I am given a number as the network delay for the audio (example: 250 ms). I need to be able to determine my own network delay for the video (which I don't know how to do yet). Then I need to compare to see if the audio delay is more than the video network delay. Say the video is 100ms ... then I would need to delay the video display by 150ms (which I also don't know how to do). ANY HELP is appreciated. 
I am trying to pick up where someone else has left off in this design so it hasn't been easy for me to figure this out and move forward. Also being done in Python ... which further limits the information I have been able to find. Thanks. Scott
false
50,807,114
0
0
0
0
A typical way to synch audio and video tracks or streams is to have a timestamp for each frame or packet, which is relative to the start of the streams. This way you know that no matter how long it took to get to you, the correct audio to match with the video frame which is 20001999 (for example) milliseconds from the start is the audio which is also timestamped as 20001999 milliseconds from the start. Trying to synch audio and video based on an estimate of the network delay will be extremely hard as the delay is very unlikely to be constant, especially on any kind of IP network. If you really have no timestamp information available, then you may have to investigate more complex approaches such as 'markers' in the stream metadata or even some intelligent analysis of the audio and video streams to synch on an event in the streams themselves.
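To make the timestamp approach concrete, here is a hedged Python sketch — the packet rates and millisecond values are invented for the example, and a real player would also buffer frames until their partner arrives:

```python
import bisect

def nearest_audio(video_ts_ms, audio_timestamps):
    """Return the audio timestamp closest to a video frame's timestamp.

    Both streams are assumed to carry timestamps in milliseconds relative
    to the same stream start, as described above.
    """
    i = bisect.bisect_left(audio_timestamps, video_ts_ms)
    # the closest audio packet is either just before or just after the frame
    candidates = audio_timestamps[max(0, i - 1):i + 1]
    return min(candidates, key=lambda ts: abs(ts - video_ts_ms))

# toy values: audio packets every 20 ms, video frames every 40 ms
audio = list(range(0, 200, 20))
video = list(range(0, 200, 40))
pairs = [(v, nearest_audio(v, audio)) for v in video]
print(pairs[:3])  # [(0, 0), (40, 40), (80, 80)]
```

With real streams the display side would hold each video frame until the matching (or nearest) audio timestamp has been played, which is exactly the delay the asker is trying to compute from network-delay estimates.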
0
3,020
1
0
2018-06-11T22:56:00.000
python,video,udp,gstreamer,ntp
How to sync 2 streams from separate sources
1
1
1
50,833,346
0
0
0
The module 'discord' I installed in the command prompt which I ran as administrator can't be found. It's located in my site-packages directory along with some other modules such as setuptools which when I import, are imported successfully without error. However, discord which is in the same directory doesn't. In the environment variables I have Path which I've specified to site-packages but I still receive the error that discord cannot be found.
false
50,848,070
0
1
0
0
That error is common on Python version 3.7 If you are trying to run your bot or discord app with Python 3.7 and getting an error such as Invalid Syntax, we recommend that you install Python 3.6.x instead. Discord.py isn't supported on 3.7 due to asyncio not supporting it. Remember to add Python to PATH! Install an older version of PY to run discord!
0
2,176
0
0
2018-06-14T00:33:00.000
python,module,discord
I Installed a Module Called Discord and When Using "import discord" I Get the Error: "ModuleNotFoundError: No module named 'discord'"
1
1
2
51,535,001
0
1
0
I'm using bs4 and urllib.request in python 3.6 to webscrape. I have to open tabs / be able to toggle an "aria-expanded" in button tabs in order to access the div tabs I need. The button tab when the tab is closed is as follows with <> instead of --: button id="0-accordion-tab-0" type="button" class="accordion-panel-title u-padding-ver-s u-text-left text-l js-accordion-panel-title" aria-controls="0-accordion-panel-0" aria-expanded="false" When opened, the aria-expanded="true" and the div tab appears underneath. Any idea on how to do this? Help would be super appreciated.
false
50,882,732
0
0
0
0
BeautifulSoup is used to parse HTML/XML content. You can't click around on a webpage with it. I recommend you look through the document to make sure it isn't just moving the content from one place to the other. If the content is loaded through AJAX when the button is clicked then you will have to use something like selenium to trigger the click. An easier option could be to check what url the content is fetched from when you click the button and make a similar call in your script if possible.
0
1,681
0
0
2018-06-15T21:13:00.000
python-3.x,dom,web-scraping,beautifulsoup,urlopen
Accessing Hidden Tabs, Web Scraping With Python 3.6
1
1
2
50,882,877
0
0
0
I am a Python newbie and I have a controller that gets POST requests. I try to print the request it receives to a log file. I am able to print the body, but how can I extract the whole request, including the headers? I am using request.POST.get() to get the body/data from the request. Thanks
false
50,903,244
-0.099668
0
0
-1
request.POST should give you the POST body. If it is a GET, use request.GET. If the request body is JSON, use request.data.
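In Django specifically, the request headers live in request.META (keys like HTTP_USER_AGENT), so they can be printed from there. For illustration outside any framework, here is a stdlib-only sketch of a server that prints a received POST request including its headers — the port is chosen automatically and the payload is made up:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class DumpHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        # print the full request: request line, headers, then body
        print(self.requestline)
        print(self.headers)
        print(body.decode("utf-8", errors="replace"))
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass  # silence the default access log

server = HTTPServer(("127.0.0.1", 0), DumpHandler)  # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

url = "http://127.0.0.1:%d/" % server.server_port
resp = urllib.request.urlopen(url, data=b"name=scott")
reply = resp.read()
print(reply)  # b'ok'
server.shutdown()
```

The same idea in a Django view would be printing request.META along with request.body.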
0
264
0
0
2018-06-18T05:46:00.000
python,django
How to print all received post requests including headers in python
1
1
2
50,903,304
0
1
0
I have to write automation scripts using python and Robot framework. I have installed python, Robotframework, RIDE, wxpython. I have installed the sikuli library but when I import it in my project, the library is not imported. I have tried 'Import Library Spec XML'. My question is from where do I import this .xml or how do I create it?
false
50,949,412
0
0
0
0
First check whether Sikuli is installed in the Python directory's \Lib\site-packages. The Robot test should contain something like:

*** Settings ***
Documentation    Sikuli Library Demo
Library          SikuliLibrary    mode=NEW

*** Test Cases ***
Sample_Sikuli_Test
    blabh blabh etc
0
1,441
0
0
2018-06-20T13:27:00.000
python-2.7,robotframework,sikuli
Unable to import sikuli library in RIDE
1
1
2
51,034,898
0
0
0
How do I pass a complete folder from a master node to a worker node in spark? I am using one master node and one worker node in a standalone cluster. sc.addFile() passes a file from master to worker, but I want to pass a folder. Thanks for the help.
false
50,950,170
0
0
0
0
There is another method available: public void addFile(String path, boolean recursive). recursive - if true, a directory can be given in path. Currently directories are only supported for Hadoop-supported filesystems. In PySpark the same call is sc.addFile(path, recursive=True).
0
66
0
0
2018-06-20T14:05:00.000
python,apache-spark,pyspark,spark-streaming
How to pass folder from master node to worker node?
1
1
1
50,951,871
0
1
0
I have a Docker image which runs a python subprocess, which is a node.js server exposing an end point /check. The whole thing is put inside a Kubernetes pod and uses /check as the readinessProbe endpoint. Now at some point, I want to close this endpoint or force-fail all the requests coming at it. Ideally, I want to do this via higher-level entities (i.e. Kubernetes lifecycle hooks) so as not to touch the lower-level implementation (such as opening a new endpoint /stop that switch some boolean flag and force the /check to fail) Is that possible at all? If not, what is the best alternative?
true
50,966,018
1.2
0
0
1
Is that possible at all? If not, what is the best alternative?

I believe there are a few:

- remote address filtering
- magic headers
- a formal proxy container

remote address

Requests to /check coming from kubernetes will come from the Node's SDN IP address (so if a Node's SDN subnet is 10.10.5.0/24, then requests will come from 10.10.5.1), so you could permit the checks from the .1 of the /24 assigned to the Pod.

magic headers

The httpGet readinessProbe allows httpHeaders:, so you could turn on HTTP Basic auth for /check and then put the - name: Authorization value: Basic xxyyzz== in the httpHeaders:.

a formal proxy container

Add a 2nd container to the Pod that runs haproxy and filters /check requests to return 401 or 404 or whatever you want. Since all containers in a Pod share the same networking namespace, configuring haproxy to speak to your node.js server will be super trivial, and your readinessProbe (as well as liveness) can continue to use the URL because only kubernetes will have access to it by using the non-haproxy container's port. To complete that loop, point the Service at the haproxy container's port.
0
50
1
0
2018-06-21T10:15:00.000
python,node.js,docker,kubernetes
Close off an endpoint on node.js server by via Kubernetes
1
1
1
50,981,018
0
0
0
I am trying to send an ACK with a data payload using the socket library but I cannot understand how to do it. Is this supported and if not, what are the alternatives?
false
50,989,064
0
0
0
0
You can't do this. The socket library uses high-level APIs like bind(), listen(), accept(), etc. — the ACKs will be handled for you by the TCP stack.
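To illustrate the point: with the socket library you just send payload bytes and the kernel's TCP stack generates the ACK segments for you (and when the peer replies promptly, the kernel typically piggybacks the ACK on the reply's data segment anyway). A toy localhost example — there is no API here to attach data to the ACK itself:

```python
import socket
import threading

def server(listener):
    conn, _ = listener.accept()
    data = conn.recv(1024)        # the kernel has already ACKed these bytes
    conn.sendall(b"got " + data)  # any "reply" is just new payload data
    conn.close()

listener = socket.socket()
listener.bind(("127.0.0.1", 0))   # port 0: pick a free port
listener.listen(1)
threading.Thread(target=server, args=(listener,), daemon=True).start()

client = socket.socket()
client.connect(listener.getsockname())
client.sendall(b"hello")
reply = client.recv(1024)
print(reply)  # b'got hello'
client.close()
listener.close()
```

Crafting TCP segments with custom ACK flags would require raw sockets or a packet-crafting library, which is outside what the standard socket API offers.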
0
3,315
0
0
2018-06-22T13:28:00.000
python,tcp
How to send ACK with data payload using python
1
1
3
50,989,676
0
0
0
I have this large XML file on my drive. The file is too large to be opened with sublimetext or other text editors. It is also too large to be loaded in memory by the regular XML parsers. Therefore, I don't even know what's inside of it! Is it just possible to "print" a few rows of the XML file (as if it was some sort of text document) so that I have an idea of the nodes/content? I am surprised not to find an easy solution to that issue. Thanks!
false
50,994,801
0.066568
0
0
1
This is one of the few things I ever do on the command line: the "more" command is your friend. Just type more big.xml
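Since the question is tagged python, the same idea in code: stream the file and stop after a few lines, so nothing close to the whole document is ever loaded. The file written below is just a stand-in for the real large XML:

```python
import os
import tempfile
from itertools import islice

def head(path, n=10):
    """Return the first n lines of a file without reading the rest."""
    with open(path, "r", encoding="utf-8", errors="replace") as f:
        return list(islice(f, n))

# build a toy "big" XML file for the demo
path = os.path.join(tempfile.gettempdir(), "big_demo.xml")
with open(path, "w", encoding="utf-8") as f:
    f.write("<root>\n")
    for i in range(100000):
        f.write("  <item id='%d'/>\n" % i)
    f.write("</root>\n")

for line in head(path, 3):
    print(line, end="")
```

Because the file object is iterated lazily, this works the same on a multi-gigabyte file as on the toy one.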
0
1,845
0
0
2018-06-22T19:56:00.000
python,xml
how to print the first lines of a large XML?
1
1
3
50,995,481
0
1
0
How can I send a POST request with a csv or a text file to the server running on localhost using cURL? I have tried curl -X POST -d @file.csv http://localhost:5000/upload but I get { "message": "The browser (or proxy) sent a request that this server could not understand." } My server is a flask_restful API. Thanks a lot in advance.
false
50,998,620
0.197375
0
0
2
Curl's default Content-Type is application/x-www-form-urlencoded so your problem is probably that the data you are POSTing is not actually form data. It might work if you set the content type header properly: -H "Content-Type: text/csv" Though it does depend on the server.
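The Python-side equivalent, for comparison — a sketch that builds the request with the Content-Type header set but does not actually send it (that would need the Flask server from the question running); the URL and payload are taken from / invented for the example:

```python
import urllib.request

csv_bytes = b"id,name\n1,Alice\n2,Bob\n"
req = urllib.request.Request(
    "http://localhost:5000/upload",
    data=csv_bytes,
    headers={"Content-Type": "text/csv"},
    method="POST",
)
# urllib normalizes header keys to "Xxxx-yyyy" capitalization
print(req.get_method())                # POST
print(req.get_header("Content-type"))  # text/csv
```

Calling urllib.request.urlopen(req) would then perform the same upload as the curl command with the -H flag.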
0
23,637
0
14
2018-06-23T06:36:00.000
python-3.x,curl,flask,flask-restful
POST csv/Text file using cURL
1
1
2
50,998,749
0
0
0
A third party monitoring app expects the host header to be set to a certain value in order to recognise the message I send it (it defaults to localhost when sending locally). So I need to set it manually. My syslog message is sent via a simple python socket connection (socket.send) and I have tried prepending the date/time followed by the host to the message string I pass in as a parameter. However the 3rd party system still detects it as localhost, indicating that it is not setting the header. How do I set the header of a syslog message via python?
false
51,008,666
0
0
0
0
The hostname is added to the message by the local syslog server, i.e. is not set inside Python at all. Thus, your local system needs to be configured properly in order to send the correct hostname when forwarding syslog messages to other systems.
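That said, if the receiver parses plain RFC 3164 lines, the hostname is just a field in the message text, so when you build the line yourself you can put any hostname there. A hedged sketch — the facility/severity values and names are examples, and whether the receiver honours this depends entirely on how it parses:

```python
import time

def build_syslog_line(message, hostname, facility=1, severity=6):
    """Format an RFC 3164 style line: <PRI>TIMESTAMP HOSTNAME TAG: MSG."""
    pri = facility * 8 + severity  # PRI encoding per RFC 3164
    t = time.localtime()
    # RFC 3164 timestamp: "Mmm dd hh:mm:ss" with a space-padded day
    timestamp = "%s %2d %02d:%02d:%02d" % (
        time.strftime("%b", t), t.tm_mday, t.tm_hour, t.tm_min, t.tm_sec)
    return "<%d>%s %s %s" % (pri, timestamp, hostname, message)

line = build_syslog_line("myapp: started", "reporting-host")
print(line)  # e.g. <14>Jul 21 10:42:01 reporting-host myapp: started

# sending it would then look like:
# import socket
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.sendto(line.encode("utf-8"), ("monitor.example.com", 514))
```

If the message was previously sent without the PRI/timestamp prefix, many receivers fall back to the connection's source address — which would explain the "localhost" the asker is seeing.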
0
250
0
0
2018-06-24T09:38:00.000
python,sockets,syslog
How to set host header field in a syslog message?
1
1
1
51,008,937
0
0
0
Connectivity to pypi.org is blocked on my corporate Windows laptop, hence I am not able to install pip, selenium, etc. Is there any other way to achieve this?

ERROR
H:\script>python get-pip.py
Collecting pip
Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ConnectTimeoutError(, 'Connection to pypi.org timed out. (connect timeout=15)')': /simple/pip/
Operation cancelled by user
true
51,035,406
1.2
0
0
2
Use proxy to install python packages on corporate servers behind firewall. something like this: pip install --proxy www-proxy.xxxxxxx.com:8080 psutil
0
450
0
1
2018-06-26T05:25:00.000
python
Not able to install python modules such as pip,selenium etc
1
1
1
53,090,042
0
1
0
I want to speed up the loading time for pages on selenium because I don't need anything more than the HTML (I am trying to scrape all the links using BeautifulSoup). Using PageLoadStrategy.NONE doesn't work to scrape all the links, and Chrome no longer supports PageLoadStrategy.EAGER. Does anyone know of a workaround to get PageLoadStrategy.EAGER in python?
false
51,087,832
0.099668
0
0
1
You can only use normal or none as the pageLoadStrategy in chromedriver. So either choose none and handle everything yourself, or wait for the page load as it normally happens.
0
6,458
0
3
2018-06-28T16:41:00.000
python-3.x,selenium,web-scraping,selenium-chromedriver,pageloadstrategy
"Eager" Page Load Strategy workaround for Chromedriver Selenium in Python
1
1
2
51,088,883
0
1
0
I am implementing the soap toolkit api. After submitting credit card info it redirects me towards the 3rd party page to get the password. I want to avoid that redirection. Is it possible to load that page within my domain, or on my custom page using an IFRAME?
false
51,091,419
0
0
0
0
Not exactly an answer but more of a request for clarifying information: When you say "it redirects me towards the 3rd party page to get the password", are you talking about the 3D-Secure process? I think I understand what you are trying to accomplish, but in order to use 3D Secure, I think the idea is to guide the end-user through the process of the additional Verification (security) process. So I think the redirection is required as a matter of how the Business Process (or 3D-Secure) works.
0
150
0
0
2018-06-28T21:22:00.000
python,soap-client,cybersource
Cybersource soap toolkit api avoid redirection on 3rd party domain
1
1
2
51,107,797
0
1
0
I need to make a simple API using Python. There are many tutorials to make a REST API using Django REST framework, but I don't need a REST service, I just need to be able to process POST requests. How can I do that? I'm new to Python. Thank you!
false
51,134,093
0
0
0
0
Well if you don't need the whole DRF stuff then just don't use it. Django is built around views which take HTTP requests (whatever the verb - POST, GET etc) and return HTTP responses (which can be html, json, text, csv, binary data, whatever), and are mapped to urls, so all you have to do is to write your views and map them to urls.
0
5,025
0
0
2018-07-02T10:30:00.000
python,django,api,post
Python simple API application without Django REST framework
1
1
2
51,134,824
0
0
0
I am using search_ext_s() method of python-ldap to search results on the basis of filter_query, upon completion of search I get msg_id which I passed in result function like this ldap_object.result(msg_id) this returns tuple like this (100, attributes values) which is correct(I also tried result2, result3, result4 method of LDAP object), But how can I get response code for ldap search request, also if there are no result for given filter_criteria I get empty list whereas in case of exception I get proper message like this ldap.SERVER_DOWN: {u'info': 'Transport endpoint is not connected', 'errno': 107, 'desc': u"Can't contact LDAP server"} Can somebody please help me if there exists any attribute which can give result code for successful LDAP search operation. Thanks, Radhika
true
51,174,045
1.2
1
0
0
An LDAP server simply may not return any results, even if there was nothing wrong with the search operation sent by the client. With python-ldap you get an empty result list. Most times this is due to access control hiding directory content. In general the LDAP server won't tell you why it did not return results. (There are some special cases where ldap.INSUFFICIENT_ACCESS is raised but you should expect the behaviour to be different when using different LDAP servers.) In python-ldap if the search operation did not raise an exception the LDAP result code was ok(0). So your application has to deal with an empty search result in some application-specific way, e.g. by also raising a custom exception handled by upper layers.
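A minimal sketch of that last suggestion — wrapping the (possibly empty) result list in an application-level check; the exception name and message are made up for the example:

```python
class NoEntriesFound(Exception):
    """Raised when an LDAP search completed successfully but matched nothing."""

def require_entries(results):
    # results: the list returned by search_ext_s()/result() on success
    if not results:
        raise NoEntriesFound(
            "search succeeded (result code 0) but returned no entries")
    return results

try:
    require_entries([])
except NoEntriesFound as exc:
    print(exc)  # search succeeded (result code 0) but returned no entries
```

Upper layers can then distinguish "no match" from transport failures such as ldap.SERVER_DOWN, which python-ldap already raises as exceptions.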
0
549
0
0
2018-07-04T12:45:00.000
python-ldap
search_s search_ext_s search_s methods of python-ldap library doesn't return any Success response code
1
1
1
51,365,948
0
0
0
Is there a way to use the github3.py python library to access github with an SSH key? I'm trying to create a service that writes to some repositories using a machine user for security reasons.
true
51,192,262
1.2
1
0
1
Unfortunately, the GitHub API doesn't provide a way to authenticate with SSH keys. Thus, github3.py provides no way to login using SSH keys.
0
138
0
0
2018-07-05T13:13:00.000
python,github-api,github3.py
github3.py login using ssh keys
1
1
1
51,211,308
0
0
0
I captured some Internet packets via Wireshark; now I want to extract only the payload length from the total length of the packet, using Python. I can get the full length of the packet using pkt.length or pkt.captured_length, but I didn't find any command for extracting only the payload size.
true
51,279,053
1.2
0
0
2
Payload size must be calculated based on the embedded protocol headers and lengths. For example, the IP total length (which is most likely what you are seeing as "packet length") is the length of the entire IP datagram, including IP header, embedded protocol headers, and data. To find the payload length you must, just as an IP stack would:

- determine the length of the IP header (likely 20, but it can have options) by multiplying the low order nibble of the first byte by 4.
- Determine the embedded protocol header based on the value of the 9th byte in the IP header.
- Determine the header length of the embedded protocol header; for instance, if this is TCP, multiplying the high order nibble of the twelfth byte by 4 to determine the total header length of the TCP header including options.

If you add up these values, you can then subtract them from the IP total length (packet length) to determine the payload length. Of course, from an IP point of view, you could just subtract the length of the IP header from the total length; from IP's point of view, everything inside of it is just payload. :)
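The arithmetic above can be written directly against the raw packet bytes — a sketch assuming you have the IPv4 packet as a bytes object (here a hand-crafted minimal IP+TCP packet stands in for a real capture):

```python
import struct

def payload_length(packet: bytes) -> int:
    ihl = (packet[0] & 0x0F) * 4              # low nibble of byte 0, in 32-bit words
    total_length = struct.unpack("!H", packet[2:4])[0]
    proto = packet[9]                          # embedded protocol number
    if proto == 6:                             # TCP: data offset is the high nibble
        tcp_hdr = (packet[ihl + 12] >> 4) * 4  # of the byte at offset 12 of the TCP header
        return total_length - ihl - tcp_hdr
    if proto == 17:                            # UDP: fixed 8-byte header
        return total_length - ihl - 8
    return total_length - ihl                  # otherwise treat everything as payload

# minimal 20-byte IP header + 20-byte TCP header + 5 payload bytes
ip = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 45, 0, 0, 64, 6, 0,
                 b"\x7f\x00\x00\x01", b"\x7f\x00\x00\x01")
tcp = struct.pack("!HHIIBBHHH", 1234, 80, 0, 0, 5 << 4, 0, 0, 0, 0)
print(payload_length(ip + tcp + b"hello"))  # 5
```

With pyshark (which the question's pkt.length suggests), the raw bytes can be obtained by capturing with use_json=True and include_raw=True, though the exact accessor depends on the pyshark version.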
0
1,999
0
1
2018-07-11T06:55:00.000
python,wireshark,packet-capture,payload
Extracting Packet payload length
1
1
1
51,286,456
0
0
1
I am trying to divide a city into n squares. Right now, I'm calculating the coordinates for all square centres and using the ox.graph_from_point function to extract the OSM data for each of them. However, this is getting quite long at high n due to the API pausing times. My question: Is there a way to download all city data from OSM, and then divide the cache file into squares (using ox.graph_from_point or other) without making a request for each? Thanks
false
51,287,813
0.197375
0
0
1
Using OSMnx directly - no, there isn't. You would have to script your own solution using the existing tools OSMnx provides.
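One way to script it, as suggested: compute the square centres yourself from the city's bounding box and feed each one to ox.graph_from_point, so the per-point calls stay but the geometry is under your control. The helper and coordinates below are made up for the example:

```python
# hypothetical helper -- not part of OSMnx
def square_centers(north, south, east, west, n):
    """Centres of an n x n grid of squares covering the bounding box."""
    dy = (north - south) / n
    dx = (east - west) / n
    return [(south + (i + 0.5) * dy, west + (j + 0.5) * dx)
            for i in range(n) for j in range(n)]

centers = square_centers(45.52, 45.50, -122.66, -122.68, 2)
print(len(centers))  # 4

# each (lat, lng) centre could then be passed to OSMnx, e.g.:
# for lat, lng in centers:
#     G = ox.graph_from_point((lat, lng), distance=500)
```

To avoid re-downloading, the other common route is one ox.graph_from_place call for the whole city followed by ox.truncate_graph_bbox on each square's bounds, assuming the full graph fits in memory.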
0
161
0
1
2018-07-11T14:15:00.000
python-3.x,osmnx
OSMnx: Divide a cache ox.graph into equal squares without redownloading each
1
1
1
51,637,340
0
0
0
Hello, I would like to ask if it is possible to update my app with a socket server, like in other apps: when there is an update, it downloads and updates the code without downloading the whole app again.
true
51,313,649
1.2
0
0
0
To answer your question directly, "yes, it is possible". Python is a general purpose language and can do that kind of thing readily enough. The "how" will depend on your app.
0
35
0
0
2018-07-12T20:07:00.000
python,python-3.x,updates
UPDATE my script in python with python socket
1
1
1
51,313,884
0
1
0
Pretty self-explanatory, trying to do a small web scraper from a google search, but when trying to import, using "from google import search" I get the error: ModuleNotFoundError: No module named 'google'. When trying to install google again from the command prompt using "pip install google" I get the error: Requirement already satisfied: google in c:\users\dsimard\python\lib\site-packages (2.0.1) Requirement already satisfied: beautifulsoup4 in c:\users\dsimard\python\lib\site-packages (from google) (4.6.0) I am writing this program in eclipse.
false
51,331,133
0
0
0
0
Try from googlesearch import search
1
432
0
0
2018-07-13T18:40:00.000
python,eclipse
getting error "modulenotfounderror: No module named 'google'" but when trying to install through command prompt it says requirement already satisfied
1
1
1
51,573,332
0
1
0
So I have this website that I made in web2py and launched using python anywhere. I am using godaddy to host the domain. When I go to the website though it is not https and is not trusted. So, I bought the trusted site trustmark from godaddy and they are telling me to add a line of javascript into the code of the website to make it trusted. But, I can't figure out where to put this line of code. What file do I put this code in in web2py to make the entire site https?
false
51,341,005
0.197375
0
0
1
Add the specified <script> tag near the bottom of the /views/layout.html file of the web2py app, which will result in it being included in every page that extends that layout (if you serve some pages with a different layout or without extending a layout, then you'll need to separately include the code in those views as well).
0
54
0
0
2018-07-14T15:50:00.000
https,web2py,pythonanywhere,godaddy-api,trusted-sites
How to make a web2py site a trusted site
1
1
1
51,341,232
0
0
0
I am trying to install discord.py with voice support into Pythonista on my iPad using StaSh. The problem is that when I enter the command pip install discord.py[voice] like it says to in the documentation, I get an error that says Error: Failed to fetch package release urls. Can anyone help me figure out what the issue is here? Any help is greatly appreciated. Thanks!
false
51,343,961
0
0
0
0
In discord.py docs, it says using this command: pip install -U discord.py[voice] try that.
1
784
0
3
2018-07-14T22:46:00.000
python,pip,python-3.6,discord.py,pythonista
Discord.py[voice] giving installation error
1
2
3
66,851,297
0
0
0
I am trying to install discord.py with voice support into Pythonista on my iPad using StaSh. The problem is that when I enter the command pip install discord.py[voice] like it says to in the documentation, I get an error that says Error: Failed to fetch package release urls. Can anyone help me figure out what the issue is here? Any help is greatly appreciated. Thanks!
false
51,343,961
0
0
0
0
Add quotemarks around "discord.py[VOICE]" and see if it works
1
784
0
3
2018-07-14T22:46:00.000
python,pip,python-3.6,discord.py,pythonista
Discord.py[voice] giving installation error
1
2
3
66,851,237
0
1
0
Actually I have 2 questions. I added Google, Facebook and Twitter sign-in to my Android app. I use Firebase sign-in for register and login. After that I will use my own Python server. Now, I want to add auto sign-in. Namely, after the first login, it won't show the login page again and will open the other pages automatically. I searched but I didn't find a sample for this structure. How can I do auto sign-in with Facebook, Google and Twitter in my Android app? And how will my server know the login was successful, so it can give the user's data to clients securely?
false
51,347,328
0
1
0
0
You need to do a web-service call from Android side just after login from firebase, stating in your server that this user has logged in to your app. You can store the access token provided by firebase or you can generate yours on web service call and thereby authenticate user with that token for user specific pages.
0
460
0
1
2018-07-15T10:05:00.000
android,firebase-authentication,facebook-login,google-signin,python-server-pages
Auto sign in with Google, Facebook and Twitter in Android App
1
1
2
51,347,386
0
0
0
I want to get the data from my elasticsearch node for my code; I am using the elasticsearch-dsl library to query the data from elasticsearch. Now I want the data to be sorted according to "@timestamp", which can be done using the sort api. But the data that I am getting back has more than 10000 documents. I cannot use scan with sort to get large data, as sort doesn't work with scan in elasticsearch-dsl. Is there a way to use the scroll api in elasticsearch-dsl, or any other way to get more than 10000 documents sorted by "@timestamp"?
false
51,396,258
0.379949
0
0
2
scroll does work with sort, you just need to call it with preserve_order: s.params(preserve_order=True).scan() Hope this helps!
0
1,242
0
1
2018-07-18T07:39:00.000
python-2.7,elasticsearch,elasticsearch-dsl,elasticsearch-dsl-py
Get result sorted by "@timestamp" in python using elasticsearch-dsl
1
1
1
51,428,739
0
0
0
I was using Python to insert some data into elasticsearch (the elasticsearch version is > 6.0). The code can be seen as:

from datetime import datetime
from elasticsearch import Elasticsearch
from elasticsearch import TransportError

es = Elasticsearch("localhost:9200")

data = {
    "http_code": "404",
    "count": "10"
}

try:
    es.index(index="http_code", doc_type="error_code", body=data)
except TransportError as e:
    print(e.info)

but we will have a problem like: {u'status': 406, u'error': u'Content-Type header [] is not supported'} I have read that in the new elasticsearch version the header needs to be set; for example on the command line we can use:

curl -XPUT 'localhost:9200/customer/external/1?pretty' -d '{ "name": "John Doe" }'

but in python how can we set the header? Does anyone know about that?
false
51,398,525
0
0
0
0
TRY THIS: python -m pip install --upgrade 'elasticsearch>=7.16,<8' it should resolve your problem.
0
213
0
1
2018-07-18T09:33:00.000
python,elasticsearch
elasticsearch.index error with header not supported
1
1
1
72,467,426
0
0
0
I'm trying to create a program that incorporates a kind of JSON-exchange messaging system where there is an A or central device, of which we know its static IP, and one or more B devices, which have a dynamic and unstable IP. The communication I propose should be viable not only from B to A (including answer or not from B) but also from A to B, with the same results. My messaging skills are limited to ZMQ, so I had thought about two possible situations: On the first one, based on PUSH-PULL sockets, A has a PULL socket and another PUSH socket, in the same way as B. As a heartbeat, B sends a JSON every X seconds/minutes saying "This is my IP". That information is considered reliable within a time range and, on the one hand, we solve the communication from B to A (fairly easy) and on the other hand A has saved an IP with which to try to contact B with a minimum of reliability. On the second one, I thought about REQ-REP sockets. In this configuration, A has a REP socket, capable of receiving and responding to requests from B, which in turn has an associated REQ socket for performing the requests. The communication from B to A, again, is simple. Conversely, the idea would be to have B throwing REQ requests as a heartbeat (every half second, for example), asking "Is there something for me? Is there something for me?" Much more reliable than the first but much less efficient at the network level (I don't know the real cost of this type of calls). Is there a better proposal to start with? I want to emphasize that I use ZMQ because it is what I know, but if there are much better tools / more adapted to this type of situation, I will be happy to know and / or work with them. I know Google has a kind of API for Android but I'm not talking specifically about smartphones.
false
51,428,065
0
0
0
0
Remember that the end that PUSHes or REQuests or whatever is totally independent of the end that binds or connects. So the device that has the stable IP address can bind its zmq sockets and everything else connects their zmq sockets to that IP address, but you can then choose which end is PUSH or PULL, or REQ or REP
0
87
0
0
2018-07-19T16:49:00.000
json,python-3.x,zeromq,messaging,pyzmq
Best way of communicating two devices knowing only one IP
1
1
2
51,430,389
0
0
0
I'm using element.location in selenium under python 3 to find the X, Y coordinates of a given element but I don't get exact values? Any ideas why or what I should do?
false
51,455,748
0
0
0
0
It could depend on the size of the window. I have tried with different sizes and the locations were different. Try to set the size for the browser: browser.set_window_size(1920, 1080) Hope this will help you.
0
38
0
0
2018-07-21T11:47:00.000
python,selenium,selenium-webdriver,ui-automation
How to get correct Location of web elements in selenium python
1
1
1
51,482,825
0
0
0
I am pretty new to Python in general and recently started messing with the Google Cloud environment, specifically with the Natural Language API. One thing that I just cant grasp is how do I make use of this environment, running scripts that use this API or any API from my local PC in this case my Anaconda Spyder environment? I have my project setup, but from there I am not exactly sure, which steps are necessary. Do I have to include the authentication somehow in the Script inside Spyder? Some insights would be really helpful.
false
51,455,781
-0.099668
0
0
-1
First install the API by pip install or conda install in the scripts directory of anaconda and then simply import it into your code and start coding.
1
2,057
0
0
2018-07-21T11:51:00.000
python,google-cloud-platform
How do I use Google Cloud API's via Anaconda Spyder?
1
1
2
51,455,824
0
1
0
We have a Linux server with Jupyterhub installed and can be accessed by users over browser, similarly we are able to access Rstudio. Is it possible to install Spyder on the Linux server and provide access via web browser. Multiple users will be accessing it simultaneously. We are not looking for remote desktop or SSH solution. Thanks
true
51,465,212
1.2
0
0
2
(Spyder maintainer here) Spyder can't work inside a web browser because it's a pure desktop application, sorry.
0
1,496
0
0
2018-07-22T12:38:00.000
python,anaconda,spyder
Access Python Spyder in Browser like Rstudio or Jupyter notebook (NOT SSH)
1
1
1
51,467,329
0
0
0
After I installed pycurl, an error like this occurred when I tested my code.

OS: Ubuntu 16.04 LTS
python: 3.6.5
curl: 7.47.0-1ubuntu2.8
pycurl: 7.43.0.1

Is there any way I can solve that? Thank you!
false
51,472,336
0
0
0
0
Anyway, I solved the problem finally. Here are the 2 errors I faced:

pycurl.error: (1, 'Protocol https not supported or disabled in libcurl')
ImportError: pycurl: libcurl link-time ssl backend (openssl) is different from compile-time ssl backend (none/other).

Solution:

1) apt-get install openssl
2) export PYCURL_SSL_LIBRARY=openssl
3) easy_install pycurl

For some reason, pip ignores the PYCURL_SSL_LIBRARY. I have to use easy_install.
0
768
0
1
2018-07-23T05:56:00.000
python,pycurl
pycurl.error: (1, 'Protocol https not supported or disabled in libcurl') Ubuntu
1
1
2
51,504,001
0
1
0
I am basically trying to start an HTTP server which will respond with content from a website which I can crawl using Scrapy. In order to start crawling the website I need to log in to it and to do so I need to access a DB with credentials and such. The main issue here is that I need everything to be fully asynchronous and so far I am struggling to find a combination that will make everything work properly without many sloppy implementations. I already got Klein + Scrapy working but when I get to implementing DB accesses I get all messed up in my head. Is there any way to make PyMongo asynchronous with twisted or something (yes, I have seen TxMongo but the documentation is quite bad and I would like to avoid it. I have also found an implementation with adbapi but I would like something more similar to PyMongo). Trying to think things through the other way around I'm sure aiohttp has many more options to implement async db accesses and stuff but then I find myself at an impasse with Scrapy integration. I have seen things like scrapa, scrapyd and ScrapyRT but those don't really work for me. Are there any other options? Finally, if nothing works, I'll just use aiohttp and instead of Scrapy I'll do the requests to the website to scrape manually and use beautifulsoup or something like that to get the info I need from the response. Any advice on how to proceed down that road? Thanks for your attention, I'm quite a noob in this area so I don't know if I'm making complete sense. Regardless, any help will be appreciated :)
true
51,525,645
1.2
0
0
0
Is there any way to make pymongo asynchronous with twisted No. pymongo is designed as a synchronous library, and there is no way you can make it asynchronous without basically rewriting it (you could use threads or processes, but that is not what you asked, also you can run into issues with thread-safeness of the code). Trying to think things through the other way around I'm sure aiohttp has many more options to implement async db accesses and stuff It doesn't. aiohttp is a http library - it can do http asynchronously and that is all, it has nothing to help you access databases. You'd have to basically rewrite pymongo on top of it. Finally, if nothing works, I'll just use aiohttp and instead of scrapy I'll do the requests to the websito to scrap manually and use beautifulsoup or something like that to get the info I need from the response. That means lots of work for not using scrapy, and it won't help you with the pymongo issue - you still have to rewrite pymongo! My suggestion is - learn txmongo! If you can't and want to rewrite it, use twisted.web to write it instead of aiohttp since then you can continue using scrapy!
0
263
0
0
2018-07-25T18:37:00.000
python,mongodb,asynchronous,server,scrapy
Async HTTP server with scrapy and mongodb in python
1
1
1
51,525,888
0
0
0
I'm building an RPC server in golang that uses msgpack. The client is built in Python using the mprpc library (msgpack over TCP with gevent). My issue is, being an absolute noob in networking, I discovered that I can't use the same address/port with multiple clients running at once on the same computer (the socket is already bound, I guess; it just stalls and times out). I have looked around quite a bit, but I'm not sure what I should be doing to be able to have multiple clients on the same machine talk to a server (msgpack back and forth). Is this a case where I need to use ZeroMQ? Or requests over HTTP? Thanks!
false
51,527,846
0
0
0
0
If you want a two-way connection, then HTTP is not suitable for this, because HTTP is designed so that the server only responds to requests, which prevents the server from issuing a request itself. There are other solutions that provide a two-way connection (server to client and client to server at the same time). WebSocket is the first thing that comes to my mind. Of course, ZeroMQ can also do this.
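Aside from the WebSocket/ZeroMQ question, the original "socket already bound" confusion usually isn't real on the client side: only the server binds the port, while each client socket is assigned its own ephemeral local port by the OS, so any number of clients on one machine can talk to the same server port. A minimal Python sketch (a hypothetical echo server, not the Go/msgpack one from the question) demonstrating this:

```python
import socket
import threading

def run_echo_server(host="127.0.0.1"):
    """Start a TCP echo server on an ephemeral port; returns (socket, port)."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, 0))          # port 0 -> the OS picks a free port
    srv.listen(5)

    def serve():
        while True:
            try:
                conn, _ = srv.accept()
            except OSError:
                return           # server socket was closed
            data = conn.recv(1024)
            conn.sendall(data)   # echo the payload back
            conn.close()

    threading.Thread(target=serve, daemon=True).start()
    return srv, srv.getsockname()[1]

def connect_client(port, payload):
    """Each client socket gets its own ephemeral local port from the OS."""
    c = socket.create_connection(("127.0.0.1", port))
    local_port = c.getsockname()[1]  # the client never binds the server port
    c.sendall(payload)
    reply = c.recv(1024)
    c.close()
    return local_port, reply
```

Running several clients in a row shows each one connecting from a different local port while all of them target the same server port.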
1
1,344
0
0
2018-07-25T21:22:00.000
python,sockets,tcp,rpc,msgpack
RPC over TCP with multiple clients on same machine
1
1
2
51,528,128
0
0
0
I am trying to stream an online video using VLC media player, but the URL I'm receiving is random, so that URL needs to be passed to the VLC media player's online stream. Are there any APIs that enable the random online video to be played? A brief overview of the project I'm building: I have a device that will receive the URL from the server and play it on a screen. Earlier I was playing it via a web browser, but this time I want to implement it using a media player. Thus my question: is there any API for VLC media player that can be used to stream online videos? BTW, I'm using Python to write my scripts.
false
51,536,989
0.099668
0
0
1
If the URL is to the stream / video file, you can just open it directly in VLC like you would anything else. If it's to an HTML document, you'll need to extract the URL of the actual stream.
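If the URL does point at the stream itself, one way to open it "directly in VLC" from Python is simply to launch the VLC binary as a subprocess. A minimal sketch — it assumes a `vlc` executable on PATH and uses VLC's `--play-and-exit` flag; the function names are mine:

```python
import subprocess

def build_vlc_command(url, vlc_binary="vlc"):
    """Compose a VLC invocation that opens the given stream URL directly."""
    return [vlc_binary, "--play-and-exit", url]

def play_stream(url):
    # Launch VLC as a child process; assumes the `vlc` binary is on PATH.
    # Returns the Popen handle so the caller can wait() or terminate() it.
    return subprocess.Popen(build_vlc_command(url))
```

For tighter control (pause, volume, playlists) the python-vlc bindings to libVLC are the usual next step, but a plain subprocess call is enough to hand a fresh URL to the player each time the device receives one.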
0
8,408
0
3
2018-07-26T10:46:00.000
python,video,media,vlc
VLC Media Player API
1
1
2
51,537,047
0
0
0
I am using a Windows client to perform automated UI tests. Each time I start the test using cmd or Eclipse, the browser opens up and goes to the given URL, but it does not continue; instead, a small window opens with a "Chromedriver.exe stopped working" message on it. How can I solve this issue? On this Windows client I am using: Python 2.7.15, Selenium 3.5.0, Robot Framework 3.0.2, Windows 7 Professional, and Chrome. P.S.: I already tried upgrading and downgrading both Chrome and ChromeDriver; I also tried Selenium 3.13.0, then downgraded again to Selenium 3.5.0.
true
51,599,659
1.2
0
0
1
I figured out the problem. The ChromeDriver version on the system was an old one, despite the fact that I had downloaded the newest. I managed to delete the old one from my system; it works fine now.
0
234
0
0
2018-07-30T17:51:00.000
python,selenium,selenium-chromedriver
chromedriver.exe stopped working issue
1
1
1
51,605,292
0
1
0
I have an S3 bucket which has a large number of zip files, with sizes in GBs. I need to calculate the data length of all the zip files. I went through boto3 but didn't find how to do it. I am not sure if it can directly read a zip file or not, but I have a process in mind: Connect to the bucket. Read zip files from the bucket folder (let's say the folder is Mydata). Extract the zip files to another folder named Extracteddata. Read the Extracteddata folder and perform actions on the files. Note: Nothing should be downloaded to local storage; the whole process runs from S3 to S3. Any suggestions are appreciated.
false
51,604,689
0
0
1
0
This is not possible. You can upload files to Amazon S3 and you can download files. You can query the list of objects and obtain metadata about the objects. However, Amazon S3 does not provide compute, such as zip compression/decompression. You would need to write a program that: downloads the zip file, extracts the files, and performs actions on the files. This is probably best done on an Amazon EC2 instance, which would have low-latency access to Amazon S3. You could do it with an AWS Lambda function, but it has a limit of 500MB disk storage and 5 minutes of execution, which doesn't seem applicable to your situation. If you are particularly clever, you might be able to download part of each zip file (a 'ranged GET') and interpret the zip's central directory to obtain a listing of the files and their sizes, thus avoiding having to download the whole file.
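If the archives fit in memory, the "download, extract, act" steps can at least avoid local disk by streaming each object into a buffer. With boto3 the buffer would be filled by something like s3.get_object(Bucket=..., Key=...)["Body"].read() (not exercised here, since it needs AWS access); the zip-inspection part below is pure standard library:

```python
import io
import zipfile

def zip_member_sizes(zip_bytes):
    """Return {member name: uncompressed size} for a zip held in memory.

    `zip_bytes` would come from an S3 object body in real use; nothing
    is ever written to local disk.
    """
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        return {info.filename: info.file_size for info in zf.infolist()}
```

Summing the returned sizes over every key in the bucket gives the total uncompressed data length without extracting anything to storage.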
0
8,072
0
1
2018-07-31T02:36:00.000
python,amazon-web-services,amazon-s3,boto3
Read zip files from amazon s3 using boto3 and python
1
1
2
51,604,927
0
0
0
How can I use a single Elasticsearch connection object across multiple Python files? I tried making it global, but it didn't work.
false
51,626,940
0
0
0
0
You have to pass the Elasticsearch object as an argument. So when you call a method or a function from a different Python file, you pass this object as a function/method argument. I doubt you can achieve something like that by using global variables.
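As an alternative to threading the object through every call, module-level state does work across files, because Python caches imported modules: every file that imports the same module sees the same state. The usual pattern is a small dedicated module that lazily creates the one client. A sketch — the module name and the injectable factory are my own; in real code the factory would build an Elasticsearch(...) client from the elasticsearch package:

```python
# es_client.py -- hypothetical shared module; every other file would do
# `from es_client import get_client` and call get_client().
_client = None

def get_client(factory=None):
    """Return the single shared client, creating it on first use.

    `factory` is injectable so the pattern is testable; in real code it
    would default to something like
    lambda: Elasticsearch(["http://localhost:9200"]).
    """
    global _client
    if _client is None:
        if factory is None:
            raise RuntimeError("no client factory configured yet")
        _client = factory()
    return _client
```

Because the module is imported once per process, every caller in every file gets the identical object back.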
1
34
0
0
2018-08-01T06:46:00.000
python,elasticsearch,connection
Using single elastic search connection object across multiple python files?
1
1
1
51,627,172
0
0
0
Context: I implemented tests which use docker-py to create Docker networks and run Docker containers. The test runner used to execute the tests is pytest. The test setup depends on Python (a Python package on my dev machine), on my dev machine's Docker daemon, and on my dev machine's static IP address. In my dev machine's runtime context the tests run just fine (via plain invocation of the test runner pytest). Now I would like to migrate the tests into GitLab CI. GitLab CI runs the job in a Docker container which accesses the CI server's Docker daemon via a /var/run/docker.sock mount. The IP address of the Docker container used by GitLab CI to run the job is not static, unlike in my dev machine context. But I need the IP address for the creation of the Docker networks in the test. Question: How can I get the appropriate IP address of the Docker container the tests are executed in, using Python?
false
51,629,194
-0.197375
1
0
-1
When you run Docker you can share the same network as the host if you ask for it: add --network host to the run command. That parameter makes the Python networking code behave the same as if you ran the code outside a container.
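If --network host is not an option (it often isn't on shared CI runners), a common way to ask "which IP would this container use for outbound traffic?" from Python itself is the UDP getsockname trick: connect() on a datagram socket sends no packets, it only makes the kernel choose a source address. A sketch — the probe address 8.8.8.8 is a conventional, arbitrary external host, not something that gets contacted:

```python
import socket

def outbound_ip(probe_addr=("8.8.8.8", 80)):
    """Return the IP this host/container would use for outbound traffic.

    connect() on SOCK_DGRAM transmits nothing; it only asks the kernel
    which local address routes to probe_addr. Falls back to loopback if
    no route exists at all.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(probe_addr)
        return s.getsockname()[0]
    except OSError:
        return "127.0.0.1"  # no route available; loopback fallback
    finally:
        s.close()
```

Inside a GitLab CI job container this yields the container's own address on its Docker network, which can then be fed into the docker-py network setup.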
0
2,714
1
2
2018-08-01T08:57:00.000
python,docker,gitlab,dockerpy
How can I get the host ip in a docker container using Python?
1
1
1
51,630,443
0
0
0
I want to download approximately 50 PDF files from the Internet using a Python script. Can Google APIs help me in any way?
false
51,654,956
0
0
0
0
I am going to assume that you are downloading from Google Drive. You can only download one file at a time; you can't batch the download of the actual files themselves. You could look into some kind of multithreading system and download the files at the same time that way, but you may run into quota issues.
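The multithreading idea can be sketched with the standard library alone. The fetch callable is injected so the concurrency logic is testable without network access; in a real script it might wrap urllib.request.urlopen(url).read() (an assumption about the setup, not exercised here):

```python
from concurrent.futures import ThreadPoolExecutor

def download_all(urls, fetch, max_workers=8):
    """Fetch many URLs concurrently; returns {url: content}.

    `fetch` is any callable url -> bytes. Threads suit this workload
    because downloading is I/O-bound, not CPU-bound.
    """
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # pool.map preserves input order, so zip pairs results correctly
        results = pool.map(fetch, urls)
        return dict(zip(urls, results))
```

Keeping max_workers modest (well under the 50 files) is one way to stay friendly to per-user rate limits such as the Drive quota mentioned above.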
0
59
0
0
2018-08-02T13:30:00.000
python,python-3.x,web-scraping,google-api
how to download many pdf files from google at once using python?
1
1
1
51,663,714
0
0
0
How can I get the embed of a message into a variable, given the ID of the message, in discord.py? I get the message with uzenet = await client.get_message(channel, id), but I don't know how to get its embed.
true
51,688,392
1.2
1
0
5
To get the first Embed of your message (which, as you said, is a dict()): embedFromMessage = uzenet.embeds[0]. To turn that dict() into a discord.Embed object: embed = discord.Embed.from_data(embedFromMessage)
0
5,406
0
1
2018-08-04T18:15:00.000
python,python-3.x,discord,discord.py
Discord.py get message embed
1
1
2
55,555,605
0
0
0
In my Python script I want to connect to a remote server every time. How can I use my Windows credentials to connect to the server without typing a user ID and password? By default it should read the user ID/password from the local system and connect to the remote server. I tried getuser() and getpass(), but I have to enter the password every time. I don't want to enter the password; it should be taken automatically from the local system's stored credentials. Any suggestions?
false
51,690,162
0
0
0
0
I am sorry, this is not exactly an answer, but I have looked on the web and I do not think you can write code to automatically open Remote Desktop without having to enter the credentials. Could you please edit the question to include your code?
0
183
0
0
2018-08-04T22:59:00.000
python,python-3.x,login,python-requests
How to use Windows credentials to connect remote desktop
1
1
1
51,690,444
0