Dataset columns (the records that follow list their field values in this order, one value per line):
Web Development: int64 (0 to 1)
Data Science and Machine Learning: int64 (0 to 1)
Question: string (lengths 28 to 6.1k)
is_accepted: bool (2 classes)
Q_Id: int64 (337 to 51.9M)
Score: float64 (-1 to 1.2)
Other: int64 (0 to 1)
Database and SQL: int64 (0 to 1)
Users Score: int64 (-8 to 412)
Answer: string (lengths 14 to 7k)
Python Basics and Environment: int64 (0 to 1)
ViewCount: int64 (13 to 1.34M)
System Administration and DevOps: int64 (0 to 1)
Q_Score: int64 (0 to 1.53k)
CreationDate: string (lengths 23 to 23)
Tags: string (lengths 6 to 90)
Title: string (lengths 15 to 149)
Networking and APIs: int64 (1 to 1)
Available Count: int64 (1 to 12)
AnswerCount: int64 (1 to 28)
A_Id: int64 (635 to 72.5M)
GUI and Desktop Applications: int64 (0 to 1)
0
0
Is there any way to handle a message deleted by a user, in a one-to-one chat or in a group the bot is a member of? There is a method for the edited message update, but not for deleted messages.
true
48,484,272
1.2
1
0
9
No. There is no way to track whether messages have been deleted or not.
0
1,993
0
8
2018-01-28T07:49:00.000
telegram-bot,python-telegram-bot
handle deleted message by user in telegram bot
1
2
2
48,485,447
0
0
0
I'm looking for a way to implement a partially undirected graph, that is, a graph where edges can be directed (or not) and carry different types of arrow (>, *, #, etc.). My problem is that when I try to use an undirected graph from NetworkX and store the arrow type as an attribute, I can't find an efficient way to tell NetworkX whether that attribute (arrow type) goes from a to b or from b to a. Does anyone know how to handle this?
false
48,503,540
0
0
0
0
I guess you can use a directed graph and store the direction as an edge attribute, if you don't otherwise need the graph to be undirected.
0
215
0
0
2018-01-29T14:27:00.000
python,graph,networkx
Partially undirect graphs in Networkx
1
2
2
48,504,499
0
0
0
I'm looking for a way to implement a partially undirected graph, that is, a graph where edges can be directed (or not) and carry different types of arrow (>, *, #, etc.). My problem is that when I try to use an undirected graph from NetworkX and store the arrow type as an attribute, I can't find an efficient way to tell NetworkX whether that attribute (arrow type) goes from a to b or from b to a. Does anyone know how to handle this?
true
48,503,540
1.2
0
0
0
After searching a lot of different sources, the only way I've found to do a partially undirected graph is through adjacency matrices. NetworkX has good tools to move between a graph and an adjacency matrix (in pandas and NumPy array format). The disadvantage is that if you need NetworkX functions you have to program them yourself, or convert the adjacency matrix to NetworkX format and then convert it back to your previous adjacency matrix.
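A minimal sketch of that round trip, assuming a NetworkX 2.x install with pandas available; the graph, node names and the "arrow" edge attribute are invented for illustration:

```python
import networkx as nx

G = nx.DiGraph()                      # directed graph, so a->b and b->a differ
G.add_edge("a", "b", arrow=">")       # store the arrow type per directed edge
G.add_edge("b", "a", arrow="*")

# Graph -> adjacency matrix (pandas DataFrame); nx.to_numpy_array works too.
adj = nx.to_pandas_adjacency(G)       # 1 where an edge exists, 0 otherwise
print(adj)

# ... manipulate the matrix directly here ...

# Adjacency matrix -> graph again, whenever a NetworkX function is needed.
G2 = nx.from_pandas_adjacency(adj, create_using=nx.DiGraph)
print(list(G2.edges()))
```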
0
215
0
0
2018-01-29T14:27:00.000
python,graph,networkx
Partially undirect graphs in Networkx
1
2
2
48,583,218
0
1
0
I am trying to give temporary download access to a bucket in my S3. Using boto3.generate_presigned_url(), I have only managed to download a specific file from that bucket, but not the bucket itself. Is there any option to do so, or is my only option to download the bucket content, zip it, upload it, and give access to the zip?
false
48,517,407
0
0
0
0
Have you tried cycling through the list of items in the bucket? Do an aws s3 ls <bucket_name_with_Presigned_URL> and then use a for loop to get each item. Hope this helps.
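A rough boto3 sketch of that per-object idea (rather than the CLI): list the keys in the bucket and generate one presigned URL per key. The bucket name and expiry below are placeholders.

```python
import boto3

s3 = boto3.client("s3")
bucket = "my-example-bucket"   # hypothetical bucket name

urls = []
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket):
    for obj in page.get("Contents", []):
        url = s3.generate_presigned_url(
            "get_object",
            Params={"Bucket": bucket, "Key": obj["Key"]},
            ExpiresIn=3600,            # one hour of temporary access
        )
        urls.append(url)

print("\n".join(urls))
```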
0
1,170
0
0
2018-01-30T08:57:00.000
python,amazon-web-services,amazon-s3,boto3,pre-signed-url
boto3 python generate pre signed url for a whole bucket
1
1
1
52,748,706
0
0
0
I was working with the boto3 module in Python and I have created a bot which finds publicly accessible buckets, but this is done for a single user with his credentials. I am thinking of advancing the features and making the bot fetch all the publicly accessible buckets across every user account. I would like to know if this is possible; if yes, how, and if not, why?
false
48,537,478
0.099668
0
0
1
This is not possible. There is no way to discover the names of all of the millions of buckets that exist. There are known to be at least 2,000,000,000,000 objects stored in S3, a number announced several years ago and probably substantially lower than the real number now. If each bucket had 1,000,000 of those objects, that would mean 2,000,000 buckets to hold them. You lack both the time and the permission to scan them all, and intuition suggests that AWS Security would start to ask questions, if you tried.
0
2,004
0
0
2018-01-31T08:15:00.000
python-2.7,amazon-s3,boto3,s3-bucket
Find all the s3 public buckets
1
1
2
48,553,966
0
1
0
I have automation scripts where the implicitly_wait is parametrized so that the user will be able to set it. I have a default value of 20 seconds which I am aware of but there is a chance that the user has set it with a different value. In one of my methods I would like to change the implicitly_wait (to lower it as much as possible) and return it to the value before the method was called. In order to do so I would like to save the implicitly_wait value before I change it. This is why I am looking for a way to reach to it.
false
48,542,904
0
1
0
0
After reading through the Selenium code and playing in the interpreter, it appears there is no way to retrieve the current implicit_wait value. This is a great opportunity to add a wrapper to your framework. The wrapper should be used any time a user wants to change the implicit wait value. The wrapper would store the current value and provide a 'getter' to retrieve the current value. Otherwise, you can submit a request to the Selenium development team...
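One way to implement the wrapper the answer suggests, as a sketch only: remember the value yourself, since Selenium exposes no getter. The class and method names are made up, and the default of 20 seconds matches the question.

```python
from selenium import webdriver

class TimeoutTrackingDriver:
    """Thin wrapper that records the last implicit wait it applied."""

    def __init__(self, driver, default_wait=20):
        self.driver = driver
        self.implicit_wait = None
        self.set_implicit_wait(default_wait)

    def set_implicit_wait(self, seconds):
        self.driver.implicitly_wait(seconds)
        self.implicit_wait = seconds          # the value the "getter" returns

    def get_implicit_wait(self):
        return self.implicit_wait


# Usage: temporarily lower the wait inside a method, then restore it.
wrapped = TimeoutTrackingDriver(webdriver.Chrome())
previous = wrapped.get_implicit_wait()
wrapped.set_implicit_wait(1)
# ... do the fast lookups here ...
wrapped.set_implicit_wait(previous)
```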
0
31
0
0
2018-01-31T13:04:00.000
python,python-3.x,selenium
How can I view the implicitly_wait that the webdriver was set with?
1
1
1
48,543,491
0
0
0
I am trying to model the spread of information on Twitter, so I need the number of tweets with specific hashtags and the time each tweet was posted. If possible I would also like to restrict the time period for which I am searching. So if I were examining tweets with the hashtag #ABC, I would like to know that there were 1,400 tweets from 01/01/2015 - 01/08/2015, and then the specific time for each tweet. I don't need the actual tweet itself, though. From what I've read so far, it looks like the Twitter API restricts the total number of tweets you can pull and limits how far back I can search. Anyone know if there's a way for me to get this data?
false
48,549,453
-0.099668
1
0
-1
The Twitter API provides historical data for a hashtag only up to the past 10 days. There is no limit on the number of tweets, but they have put a limitation on time. There is no way to get historical data related to a hashtag past 10 days unless: you have access to their premium API (Twitter has recently launched its premium API, where you get access only if you meet their criteria; to date they have granted access to very limited users); you purchase the data from data providers like Gnip; or you have internal contacts in Twitter ;)
0
1,686
0
1
2018-01-31T18:51:00.000
python,r,twitter,tweepy,twython
How can I get the number of tweets associated with a certain hashtag, and the timestamp of those tweets?
1
1
2
48,580,106
0
1
0
I'm using python-social-auth to allow users to login via SAML; everything's working correctly, except for the fact that if a logged-in user opens the SAML login page and logs in again as a different user, they'll get an association with both of the SAML users, rather than switch login. I understand the purpose behind this (since it's what you can normally do to associate the user with different auth services) but in this case I need to enforce a single association (ie. if you're logged in with a given SAML IdP, you cannot add another association for the same user with the same provider). Is there any python-social-auth solution for this, or should I cobble together something (for instance, preventing logged-in users from accessing the login page)?
true
48,559,911
1.2
0
0
0
There's no standard way to do it in python-social-auth; there are a few alternatives: Override the login page and, if there's a user authenticated, log them out first, or show an error, whatever fits your project. Add a pipeline function, set near the top, that will act if user is not None; you can raise an error, log out the user, etc. Override the backend and extend the auth_allowed method in it to return False if there's a valid user instance at self.strategy.request.user. This will halt the auth flow and AuthForbidden will be raised.
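A possible sketch of the second alternative (a pipeline function) for a Django project using social-core; the function name and module path are invented, so the settings entry is only illustrative:

```python
# Hypothetical pipeline step placed near the top of SOCIAL_AUTH_PIPELINE.
from social_core.exceptions import AuthForbidden

def forbid_reassociation(backend, user=None, *args, **kwargs):
    """Stop the flow if an already-authenticated user starts a new login."""
    if user is not None and getattr(user, "is_authenticated", False):
        raise AuthForbidden(backend)

# settings.py (illustrative module path):
# SOCIAL_AUTH_PIPELINE = (
#     'myapp.pipeline.forbid_reassociation',
#     # ... the default pipeline entries follow ...
# )
```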
0
187
0
0
2018-02-01T09:58:00.000
django,python-social-auth
Python-social-auth: do not reassociate existing users
1
1
1
48,586,214
0
0
0
Good evening, I have to work on an XML file. The problem is that the elements in the file end with a different format than usual, for example: <1ELEMENT> text <\1ELEMENT>. I use the function root=etree.parse('filepath'), and by manually changing the \ to / in the text outside the compiler, the function works correctly. The big problem is that I need to automate that replacing process, and the only solution I thought of is importing the file as an array, replacing \ with /, and building a new XML file; but it seems a little bit clunky. Summing up, I need to know if there is a function to replace the terms I mentioned above before using root=etree.parse('filepath').
true
48,571,060
1.2
0
0
-1
You could load the file, do the replacement, e.g. string_containing_modified_data = data_as_string.replace('<\\', '</'), and then use etree.fromstring(string_containing_modified_data) to parse the XML. If possible, you should try to fix the writer, but I understand if you don't have the opportunity to do so.
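A minimal sketch of that suggestion with lxml, assuming the only defect is that closing tags start with <\ instead of </; the file path is a placeholder, and note that element names starting with a digit (as in the example) would still be rejected by a strict parser.

```python
from lxml import etree

with open("filepath", "rb") as f:
    data_as_string = f.read().decode("utf-8")

# Turn the malformed closing tags back into well-formed ones.
string_containing_modified_data = data_as_string.replace("<\\", "</")

# Parse from bytes so any encoding declaration in the file is honoured.
root = etree.fromstring(string_containing_modified_data.encode("utf-8"))
print(etree.tostring(root, pretty_print=True).decode("utf-8"))
```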
1
353
0
0
2018-02-01T20:18:00.000
python,xml,xml-parsing
(python) parsing xml file but the elements end with \
1
2
3
48,571,357
0
0
0
Good evening, I have to work on an XML file. The problem is that the elements in the file end with a different format than usual, for example: <1ELEMENT> text <\1ELEMENT>. I use the function root=etree.parse('filepath'), and by manually changing the \ to / in the text outside the compiler, the function works correctly. The big problem is that I need to automate that replacing process, and the only solution I thought of is importing the file as an array, replacing \ with /, and building a new XML file; but it seems a little bit clunky. Summing up, I need to know if there is a function to replace the terms I mentioned above before using root=etree.parse('filepath').
false
48,571,060
0
0
0
0
This isn't an XML file. Given that the format of the file is garbage, are you sure the content isn't garbage too? I wouldn't want to work with data from such an untrustworthy source. If you want to parse this data you will need to work out what rules it follows. If those rules are something fairly similar to XML rules then it might be that converting it to XML and then parsing the XML is a reasonable way to go about this; if not, you might be better off writing a parser from scratch. But before you do so, try to persuade the people responsible for this nonsense of the benefits of conforming to standards.
1
353
0
0
2018-02-01T20:18:00.000
python,xml,xml-parsing
(python) parsing xml file but the elements end with \
1
2
3
48,572,589
0
1
0
I'm trying to scrape Facebook public page likes data using Python. My scraper uses the post number in order to scrape the likes data. However, some posts have more than 6000 likes and I can only scrape 6000 likes; I have been told that this is due to a Facebook restriction which doesn't allow scraping more than 6000 per day. How can I continue scraping the likes for a post from the point where the scraper stopped?
false
48,577,599
-0.099668
0
0
-1
In the tags I see facebook-graph-api, which has limitations. Why don't you use requests + lxml? It would be much easier, and as you want to scrape public pages, you don't even have to log in, so it could be easily solved.
0
1,695
0
0
2018-02-02T07:19:00.000
python,facebook-graph-api,scrape
scrape facebook likes with python
1
1
2
48,578,008
0
0
0
I'm coming from NetBeans and evaluating other, more flexible IDEs that support more languages (i.e. Python) than just PHP and related ones. I kept an eye on Eclipse, which seems to be the best choice; at the time I was not able to find an easy solution to keep the original project on my machine and automatically send / synchronize the files to the remote server via SFTP. All solutions seem to be outdated or stupid (like mounting an SMB partition or manually sending the file via an FTP client)! I'm not going to believe that an IDE like Eclipse doesn't have a smart solution for what I consider a basic feature of an IDE, so I think I missed something... On the Eclipse forums I've seen the same question asked lots of times but without any answer! Any suggestions are strongly appreciated; otherwise I think the only solution is to stick to one IDE per language I use, which seems incredible in 2018. I'm developing on macOS and the most interesting solution (KDevelop) fails to build with MacPorts. Thank you very much.
false
48,599,891
0.379949
1
0
2
RSE is a very poor solution, as you noted it's a one-shot sync and is useless if you want to develop locally and only deploy occasionally. For many years I used the Aptana Studio suite of plugins which included excellent upload/sync tools for individual files or whole projects, let you diff everything against a remote file structure over SFTP when you wanted and exclude whatever you wanted. Unfortunately, Aptana is no longer supported and causes some major problems in Eclipse Neon and later. Specifically, its editors are completely broken, and they override the native Eclipse editors, opening new windows that are blank with no title. However, it is still by far the best solution for casual SFTP deployment...there is literally nothing else even close. With some work it is possible to install Aptana and get use of its publishing tools while preventing it from destroying the rest of your workspace. Install Aptana from the marketplace. Go to Window > Preferences > Install/Update, then click "Uninstall or update". Uninstall everything to do with Aptana except for Aptana Studio 3 Core and the Aptana SecureFTP Library inside that. This gets rid of most, but not all of Aptana's editors, and the worst one is the HTML editor which creates a second HTML content type in Eclipse that cannot be removed and causes all kinds of chaos. But there is a workaround. Exit Eclipse. Go into the eclipse/plugins/ directory and remove all plugins beginning with com.aptana.editor.* EXCEPT FOR THE FOLLOWING which seem to be required: com.aptana.editor.common.override_1.0.0.1351531287.jar com.aptana.editor.common_3.0.3.1400201987.jar com.aptana.editor.diff_3.0.0.1365788962.jar com.aptana.editor.dtd_3.0.0.1354746625.jar com.aptana.editor.epl_3.0.0.1398883419.jar com.aptana.editor.erb_3.0.3.1380237252.jar com.aptana.editor.findbar_3.0.0.jar com.aptana.editor.idl_3.0.0.1365788962.jar com.aptana.editor.text_3.0.0.1339173764.jar Go back into Eclipse. Right-clicking a project folder should now expose a 'Publish' option that lets you run Aptana's deployment wizard and sync to a remote filesystem over SFTP. Hope this helps...took me hours of trial and error, but finally everything works. For the record I am using Neon, not Oxygen, so I can't say definitively whether it will work in later versions.
0
1,140
0
1
2018-02-03T17:13:00.000
java,php,python,eclipse
Eclipse Oxygen: How to automatically upload php files on remote server
1
1
1
48,876,177
0
1
0
I have the bulk of my web application in React (front-end) and Node (server), and am trying to use Python for certain computations. My intent is to send data from my Node application to a Python web service in JSON format, do the calculations in my Python web service, and send the data back to my Node application. Flask looks like a good option, but I do not intend to have any front-end usage for my Python web service. Would appreciate any thoughts on how to do this.
false
48,600,583
0
0
0
0
In terms of thoughts: 1) You can build a REST interface to your python code using Flask. Make REST calls from your nodejs. 2) You have to decide if your client will wait synchronously for the result. If it takes a relatively long time you can use a web hook as a callback for the result.
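A bare-bones sketch of option 1: a Flask endpoint the Node application can POST JSON to and read JSON back from. The route name and the computation are placeholders; on the Node side the call would just be a normal HTTP POST (fetch, axios, etc.).

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/compute", methods=["POST"])
def compute():
    payload = request.get_json(force=True)       # JSON body sent by the Node app
    numbers = payload.get("numbers", [])
    result = sum(numbers)                        # stand-in for the real computation
    return jsonify({"result": result})

if __name__ == "__main__":
    app.run(port=5000)
```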
1
415
0
0
2018-02-03T18:27:00.000
python,node.js,rest,api,web-services
Python web service with React/Node Application
1
1
1
48,600,936
0
1
0
I'm trying to decide if I should use gevent or threading to implement concurrency for web scraping in python. My program should be able to support a large (~1000) number of concurrent workers. Most of the time, the workers will be waiting for requests to come back. Some guiding questions: What exactly is the difference between a thread and a greenlet? What is the max number of threads \ greenlets I should create in a single process (with regard to the spec of the server)?
false
48,608,845
0
0
0
0
A Python thread is an OS thread, controlled by the OS, which means it's a lot heavier since it needs a context switch; a green thread is lightweight, and since it lives in userspace the OS does not create or manage it. I think you can use gevent. Gevent = event loop (libev) + coroutines (greenlet) + monkey patching. Gevent gives you "threads" without using OS threads, so you can write normal code but have async IO. Make sure you don't have CPU-bound stuff in your code.
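A small sketch of that gevent pattern: monkey-patch first, then run the I/O-bound workers in a greenlet pool. The URL list and pool size are placeholders.

```python
from gevent import monkey
monkey.patch_all()          # must run before other imports that touch sockets

import requests
from gevent.pool import Pool

urls = ["https://example.com/page/%d" % i for i in range(100)]

def fetch(url):
    resp = requests.get(url, timeout=10)
    return url, resp.status_code

pool = Pool(1000)           # upper bound on concurrently running greenlets
for url, status in pool.imap_unordered(fetch, urls):
    print(url, status)
```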
1
1,130
0
1
2018-02-04T14:02:00.000
python,multithreading,concurrency,python-multithreading,gevent
Python Threading vs Gevent for High Volume Web Scraping
1
1
2
57,702,150
0
1
0
I've started working a lot with Flask-SocketIO in Python with Eventlet and am looking for a solution to handle concurrent requests/threading. I've seen that it is possible with gevent, but how can I do it if I use eventlet?
true
48,611,425
1.2
0
0
5
The eventlet web server supports concurrency through greenlets, same as gevent. No need for you to do anything, concurrency is always enabled.
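For completeness, a minimal Flask-SocketIO sketch running on eventlet; as the answer says, nothing concurrency-specific is required, and the event names here are just examples.

```python
from flask import Flask
from flask_socketio import SocketIO, emit

app = Flask(__name__)
# async_mode is optional; Flask-SocketIO auto-detects eventlet if it is installed.
socketio = SocketIO(app, async_mode="eventlet")

@socketio.on("task")
def handle_task(data):
    # Long-running I/O here runs in its own greenlet and does not block other clients.
    emit("task_done", {"echo": data})

if __name__ == "__main__":
    socketio.run(app, host="0.0.0.0", port=5000)
```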
0
2,111
0
2
2018-02-04T18:10:00.000
python,socket.io,webserver,flask-socketio,eventlet
Handle concurrent requests or threading Flask SocketIO with eventlet
1
1
2
48,616,158
0
0
0
I have created simple test cases using selenium web driver in python. I want to log the execution of the test cases at different levels. How do I do it? Thanks in advance.
false
48,617,220
0
1
0
0
I created a library in Python for logging info messages and screenshots to an HTML file, called selenium-logging. There is also a video explanation of the package on YouTube (25s) called "Python HTML logging".
0
1,240
0
1
2018-02-05T06:53:00.000
python,selenium-webdriver
Logging in selenium python
1
1
1
69,818,465
0
1
0
I'm having a little trouble figuring out if I should split the API for admins and for users. So: Admins should log in using /admin/login with a POST request, and users just /login. Admins should access/edit/etc. resources on /admin/resourceName and users just /resourceName.
true
48,646,826
1.2
0
0
1
You should only have one endpoint, not one for each type of user. What if you have moderators? Will you also create a /mods/login ? What each user should and shouldn't have access to should be sorted out with permissions.
0
40
0
1
2018-02-06T15:46:00.000
python,rest,falconframework
Should I make an API for users and an API for admins?
1
1
1
48,646,870
0
1
0
I have the below error while running my code on an Amazon EC2 instance, and when trying to import the h5py package I get a permission denied error: ImportError: load_weights requires h5py
false
48,676,037
0
0
0
0
Just solve it using sudo pip install h5py.
0
539
0
0
2018-02-08T01:26:00.000
python-3.x,amazon-web-services,h5py
Import error requires h5py
1
1
1
48,676,038
0
0
0
Need to extract specific data from grafana dashboard. Grafana is connected to graphite in the backend. Seems there is no API to make calls to grafana directly. Any help? Ex: I need to extract AVG CPU value from graph of so and so server.
false
48,683,976
0.197375
0
0
1
The only way I found in Grafana 7.1 was to: open the dashboard and then inspect the panel; open the query tab and click on refresh; then use the URL and parameters shown there in your own query to the API. Note: first you need to create an API key in the UI with the proper role and add the bearer token to the request headers.
0
2,210
0
4
2018-02-08T11:04:00.000
python-3.x,graphite,grafana-api
How can we extract data from grafana dashboard?
1
1
1
62,026,844
0
1
0
My server in Python (Tornado) sends CSV content on a GET request. I want to specify the content type of the response as "text/csv", but when I do this the file is downloaded when I send the GET request from my browser. How can I specify the header "Content-Type: text/csv" without making it a downloadable file, and just show the content in my browser?
false
48,685,070
0.379949
0
0
2
The content-type header is what tells the browser how to display a given file. It doesn't know how to display text/csv, so it has no choice but to treat it as an opaque download. If you want the file to be displayed as plain text, you need to tell the browser that it has content-type text/plain. If you need to tell other clients that the content type is text/csv, you need some way to distinguish clients that understand that content type from those that do not. The best way to do this is with the Accept request header. Clients that understand CSV would send Accept: text/csv in their request, and then the server would respond with content-type text/plain or text/csv depending on whether CSV appears in the accept header. Using the Accept header may require modifications to the client, which may or may not be possible for you. If you can't update the clients to send the Accept header, then you'll have to use a hackier workaround. You can either use a different url (add ?type=plain or ?type=csv) or try to detect browsers based on their user agent.
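A rough Tornado sketch of the Accept-header idea from the answer; the handler name, URL and CSV payload are invented for illustration.

```python
import tornado.ioloop
import tornado.web

CSV_BODY = "name,value\nfoo,1\nbar,2\n"

class CsvHandler(tornado.web.RequestHandler):
    def get(self):
        accept = self.request.headers.get("Accept", "")
        if "text/csv" in accept:
            # CSV-aware clients ask for it explicitly and get the real type.
            self.set_header("Content-Type", "text/csv")
        else:
            # Browsers render text/plain inline instead of downloading it.
            self.set_header("Content-Type", "text/plain")
        self.write(CSV_BODY)

def make_app():
    return tornado.web.Application([(r"/data", CsvHandler)])

if __name__ == "__main__":
    make_app().listen(8888)
    tornado.ioloop.IOLoop.current().start()
```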
0
50
0
0
2018-02-08T12:04:00.000
python,http,request,tornado
GET response - Do NOT send a downloadable file
1
1
1
48,722,882
0
0
0
I have packages stored in an S3 bucket. I need to read the metadata file of each package and pass the metadata to a program. I used boto3.resource('s3') to read these files in Python. The code took a few minutes to run, whereas if I use aws cli sync, it downloads these metafiles much faster than boto. My guess was that if I do not download and just read the meta files, it should be faster. But that isn't the case. Is it safe to say that the aws cli is faster than using boto?
false
48,692,483
0
1
0
0
It's true that the AWS CLI uses boto, but the CLI is not a thin wrapper, as you might expect. When it comes to copying a tree of S3 data (which includes the multipart chunks behind a single large file), it is quite a lot of logic to make a wrapper that is as thorough and fast, and that does things like seamlessly pick up where a partial download has left off, or efficiently sync down only the changed data on the server. The implementation in the awscli code is more thorough than anything in the Python or Java SDKs, as far as I have seen. I've seen several developers who were too proud to call the CLI from their code, but thus far all such attempts I have seen have failed to measure up. Love to see a counterexample, though.
0
4,228
0
3
2018-02-08T18:32:00.000
python,amazon-s3,boto,boto3
Is aws CLI faster than using boto3?
1
1
3
63,806,918
0
1
0
I have some 1000 html pages. I need to update the names which is present at the footer of every html page. What is the best possible efficient way of updating these html pages instead of editing each name in those html pages one by one. Edit: Even if we use some sort of scripts, we have to make changes to every html file. One possible way could be using Editor function. Please share if anybody has any another way of doing it.
false
48,697,322
0
0
0
0
You can use DOMDocument and DOMXPath to parse the HTML file (you can use PHP's file_get_contents to read the file). It looks like I can't post links.
0
1,154
0
0
2018-02-09T01:15:00.000
javascript,php,python,html,css
How to update static content in multiple HTML pages
1
1
3
60,838,059
0
0
0
I'm learning about basic back-end and server mechanics and how to connect it with the front end of an app. More specifically, I want to create a React Native app and connect it to a database using Python(simply because Python is easy to write and fast). From my research I've determined I'll need to make an API that communicates via HTTP with the server, then use the API with React Native. I'm still confused as to how the API works and how I can integrate it into my React Native front-end, or any front-end that's not Python-based for that matter.
false
48,698,110
0.197375
0
0
2
You have to create a flask proxy, generate JSON endpoints then use fetch or axios to display this data in your react native app. You also have to be more specific next time.
0
6,331
0
2
2018-02-09T03:04:00.000
python,rest,http,react-native,server
How to create a Python API and use it with React Native?
1
1
2
48,718,794
0
1
0
I have two instances. One is on the public subnet and the other is on the private subnet of AWS. On the private system, I am performing some computation, and the public system is acting as the API endpoint. My overall idea is this: when a request comes to the public server, the parameters should be forwarded to the private system, the computation will be done there, and the result will be sent back to the public server and from there fed back to the user. On the private system, some Python code is running. I set up Apache-Flask on the private system. So the idea is that when requests come in, the parameters will be extracted on the public server and another HTTP request will be fired to the private system. Computation will be done there and the response will be returned, which will then be returned to the client system. I have two questions: Is this a good approach? Is there a better way to implement the whole scenario?
false
48,702,061
0
0
0
0
This is a commonly used pattern when separating Web Servers and App Servers in traditional Web Application setup, keeping the Web Servers in public subnets (Or keeping internet accessible) and the business rules kept in App Servers in the private network. However, it also depends on the complexity of the system to justify the separation having multiple servers. One of the advantages of this approach is you can scale these servers separately.
0
59
0
0
2018-02-09T08:58:00.000
python,amazon-web-services,http,server
Best way for communicating between two servers
1
1
2
48,702,358
0
0
0
I'm working with splinter and Python and I'm trying to setup some automation and log into Twitter.com Having trouble though... For example the password field's "name=session[password]" on Twitter.com/login and the username is similar. I'm not exactly sure of the syntax or what this means, something with a cookie... But I'm trying to fill in this field with splinters: browser.fill('exampleName','exampleValue') It isn't working... Just curious if there is a work around, or a way to fill in this form? Thanks for any help!
false
48,709,350
0
1
0
0
What's the purpose of doing this rather than using the official API? Scripted logins to Twitter.com are against the Terms of Service, and Twitter employs multiple techniques to detect and disallow them. Accounts showing signs of automated login of this kind are liable to suspension or requests for security re-verification.
0
59
0
0
2018-02-09T15:43:00.000
python,python-3.x,twitter,login,splinter
How to log into Twitter with Splinter Python
1
1
1
48,716,354
0
0
0
I am trying to use websocket.WebSocketApp, however it's coming up with the error: module 'websocket' has no attribute 'WebSocketApp'. I had a look at previous solutions for this, and tried to uninstall websocket and install websocket-client, and it still comes up with the same error. My file's name is MyWebSocket, so I don't think it has anything to do with that. Can anyone help me please?
false
48,730,108
0.099668
0
0
1
Just installing websocket-client==1.2.0 is ok. I encountered this problem when I was using websocket-client==1.2.3
0
7,871
0
1
2018-02-11T09:32:00.000
python,websocket,pip
AttributeError: module 'websocket' has no attribute 'WebSocketApp' pip
1
1
2
70,608,299
0
0
0
Can anyone of you help me with an automation task which involves connecting through rdp and automating certain task in a particular application which is stored in that server. I have found scripts for rdp connection and for Windows GUI automation seperately. But in the integration, I have become a bit confused. It will be great if anyone can help me with the python library name :)
true
48,764,814
1.2
0
0
2
It is not possible to automate an RDP window using pywinauto, as the RDP window itself is just an image of a desktop. Printing the control identifiers of the RDP window gives only the UI of that screen. The solution is to install Python + pywinauto on the remote machine.
0
1,322
1
0
2018-02-13T10:40:00.000
python-3.x,user-interface,rdp,pywinauto
GUI Automation in RDP
1
1
1
49,934,460
0
0
0
I have a python project with Selenium that I was working on a year ago. When I came back to work on it and tried to run it I get the error ImportError: No module named selenium. I then ran pip install selenium inside the project which gave me Requirement already satisfied: selenium in some/local/path. How can I make my project compiler (is that the right terminology?) see my project dependencies?
false
48,766,723
0
0
0
0
Is it possible that you're using e.g. Python 3 for your project, and selenium is installed for e.g. Python 2? If that is the case, try pip3 install selenium
0
141
0
0
2018-02-13T12:20:00.000
python,selenium,import
Import error "No module named selenium" when returning to Python project
1
1
1
48,767,767
0
0
0
I got this error in Python 3.6: ModuleNotFoundError: No module named 'oauth2client.client'. I tried pip3.6 install --upgrade google-api-python-client, but I don't know how to fix it. Please tell me how to fix it, thanks.
false
48,780,634
0.761594
0
0
5
Use below code, this worked for me: pip3 install --upgrade oauth2client
0
7,084
0
2
2018-02-14T06:04:00.000
python-3.x
ModuleNotFoundError: No module named 'oauth2client.client'
1
1
1
52,187,177
0
1
0
I am developing a desktop application that must send a specified url to a Flask application hosted online, and subsequently receive data from the same Flask app. 2 applications communicating back & forth. I am able to make GET and POST requests to this Flask app, but I am unaware of how to construct specific URL's which include arguments for the Flask app to receive via request.args.get() Thus far my ability hasn't been entirely erroneous. I can send a request GET / HTTP/1.1\nHost : \r\n which in turn receives something like b'HTTP/1.0 200 OK\r\n' Which is well and good, I got the encoding part down. Beyond this I am at a loss as the Flask view function needs to acquire an argument arg from a specific url - something like myFlaskApp.com/viewfunction?h=arg What would be an at least decent form if not a minimal / pragmatic way of practicing this kind of communication? I haven't much code to show for this one; I would like to leave any stratagem open for debate. I hope you can understand. Thank you! P.S. +<3 if you also show me how to receive and decode the Flask server's view function return value on my app client. Assumed to be an arbitrary string.
false
48,795,392
0
0
0
0
If your HTTP client is written in python the simplest solution would be to use a higher level HTTP library like requests or urllib2. If you want to get the path mappings against your Flask app views you could print them by introspecting the app object and export them to json or some other format and use them in your client. In your sockets example just use GET /?arg=value HTTP/1.1\nHost : \r\n.
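A sketch of both sides under that suggestion, using the requests library on the client; the host name comes from the question and the view path and argument name are otherwise placeholders.

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/viewfunction")
def viewfunction():
    value = request.args.get("h", "")        # picks "arg" out of ...?h=arg
    return "you sent: " + value

# Client side (run separately once the Flask app is reachable):
#
#   import requests
#   resp = requests.get("http://myFlaskApp.com/viewfunction", params={"h": "arg"})
#   print(resp.text)    # the view function's return value, already decoded
#
if __name__ == "__main__":
    app.run()
```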
0
113
0
0
2018-02-14T20:09:00.000
python,sockets,flask
Python - Using socket to construct URL for external Flask server's view function
1
1
1
48,795,531
0
0
0
I am trying to use the Select function in Selenium for Python 3 to help with navigating through the drop down boxes. However, when I try to import org.openqa.selenium.support.ui.Select I get an error message: "No module named 'org'" Would appreciate any help on this. I saw there was a similar question posted a few weeks ago but the link to that question is now broken. Thanks!
true
48,812,910
1.2
0
0
4
The path 'org.openqa.selenium.support.ui.Select' is a Java descriptor. In Python, make sure you have the Python Selenium module installed with pip install selenium, and then import it with import selenium. For the Select function specifically, you can import that with the following from selenium.webdriver.support.ui import Select Then you'll be able to use it like this: select = Select(b.find_element_by_id(....))
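A slightly fuller sketch of the answer; the URL and the element id "month" are made-up examples of a page containing a <select> drop-down, and the find_element_by_id call matches the Selenium 3-era API used in the answer.

```python
from selenium import webdriver
from selenium.webdriver.support.ui import Select

browser = webdriver.Chrome()
browser.get("https://example.com/form")        # hypothetical page with a <select>

select = Select(browser.find_element_by_id("month"))
select.select_by_visible_text("January")       # or select_by_value / select_by_index
```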
0
5,709
0
1
2018-02-15T17:20:00.000
python,selenium
Selenium / Python - No module named 'org'
1
1
1
48,813,013
0
1
0
I made a server using Python on a laptop, and I made a client using Java on the same laptop. They connected and they communicated. But when I made a client using Java on another laptop, the client didn't find the server. What is wrong, and what could I do?
false
48,852,421
0
0
0
0
On the laptop running the server: the client can connect using localhost:<port> or 0.0.0.0:<port>. Connecting from another laptop (same network): you have to connect to <pc-server-local-ip>:<port>. To get <pc-server-local-ip>, on the laptop running your server: Windows: type ipconfig in the console, the value next to IPv4; Linux / Mac: type ifconfig in the console, the value next to inet.
0
27
0
0
2018-02-18T13:52:00.000
java,python,server,client
python server and java client(another PC) Error
1
1
1
48,852,566
0
0
0
In order to test our server we designed a test that sends a lot of requests with JSON payload and compares the response it gets back. I'm currently trying to find a way to optimize the process by using multi threads to do so. I didn't find any solution for the problem that I'm facing though. I have a url address and a bunch of JSON files (these files hold the requests, and for each request file there is an 'expected response' JSON to compare the response to). I would like to use multi threading to send all these requests and still be able to match the response that I get back to the request I sent. Any ideas?
true
48,880,508
1.2
0
0
0
Well, you have couple of options: Use multiprocessing.pool.ThreadPool (Python 2.7) where you create pool of threads and then use them for dispatching requests. map_async may be of interest here if you want to make async requests, Use concurrent.futures.ThreadPoolExecutor (Python 3) with similar way of working with ThreadPool pool and yet it is used for asynchronously executing callables, You even have option of using multiprocessing.Pool, but I'm not sure if that will give you any benefit since everything you will be doing is I/O bound, so threads should do just fine, You can make asynchronous requests with Twisted or asyncio but that may require a bit more learning if you are not accustomed to asynchronous programming.
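A compact sketch of the concurrent.futures option, keeping a mapping from each submitted request file to its response so they can be compared per request; the file names, URL and comparison are placeholders.

```python
import json
from concurrent.futures import ThreadPoolExecutor, as_completed

import requests

URL = "https://example.com/api"          # server under test (placeholder)
REQUEST_FILES = ["case1.json", "case2.json", "case3.json"]

def send_case(path):
    with open(path) as f:
        payload = json.load(f)
    resp = requests.post(URL, json=payload, timeout=30)
    return resp.json()

with ThreadPoolExecutor(max_workers=20) as pool:
    futures = {pool.submit(send_case, path): path for path in REQUEST_FILES}
    for future in as_completed(futures):
        path = futures[future]                       # which request this result belongs to
        with open(path.replace(".json", ".expected.json")) as f:
            expected = json.load(f)
        actual = future.result()
        print(path, "OK" if actual == expected else "MISMATCH")
```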
1
177
0
0
2018-02-20T08:13:00.000
python,multithreading,python-2.7,asynchronous
using threading for multiple requests
1
1
2
48,881,300
0
0
0
We are currently trying to process user input and checking if user has entered a food item using elastic search. With elastic search we are able to get results for wide range of terms: Garlic , Garlic Extract etc... How should we handle use cases E.g. Blueberry Dish-washing soap Or Apple based liquid soap . How do we omit these searches ? As I search Blueberry Dish-washing soap I still get search results related to Blueberry
true
48,891,679
1.2
0
0
2
Your objective requires that you perform part of speech tagging on your query, and then use those tags to identify nouns. You would then need to compare the extracted nouns to a pre-curated list of food strings and, after identifying those that are not food, remove the clauses of which those nouns are the subject and /or the phrases of which they are the object. This functionality is not built into elasticsearch. Depending on what language you are processing your queries with, there are various libraries for part of speech tagging and string manipulation. Updated answer: Just read through this and realized this answer isn't very good. The best way to solve this problem is with document/phrase vectorization. Vectorized properly, you should be able to encode the noun phrases 'Blueberry' and 'Blueberry dishwashing soap' as very different vectors, and then you can take all sorts of approaches as far as inferring classifications from those vectors.
0
35
0
1
2018-02-20T18:11:00.000
python,elasticsearch,nlp
How to filter out elastic searches for invalid inputs
1
1
1
48,898,306
0
1
0
What is the current best practice and method of loading a webpage (that has 10 - 15 seconds worth of server side script). User clicks a link > server side runs > html page is returned (blank page for 10 - 15 seconds). User clicks a link > html page is immediately returned (with progress bar) > AJAX post request to the server side > complete script > return result to html. Other options (threading?) I am running Google App Engine (Python) Standard Environment.
false
48,896,407
0.197375
0
0
2
Best Practice would be for the the script to not take 10-15 seconds. What is your script doing? Is it generating something that you can pre-compute and cache or save in Google Cloud Storage? If you're daisy-chaining datastore queries together, is there something you can do to make them happen async in tandem? If it really has to take 10-15 seconds, then I'd say option 2 is must: User clicks a link > html page is immediately returned (with progress bar) > AJAX post request to the server side > complete script > return result to html.
0
73
0
3
2018-02-21T00:27:00.000
javascript,python,html,ajax,google-app-engine
Best practice for loading webpage with long server side script
1
2
2
48,897,992
0
1
0
What is the current best practice and method of loading a webpage (that has 10 - 15 seconds worth of server side script). User clicks a link > server side runs > html page is returned (blank page for 10 - 15 seconds). User clicks a link > html page is immediately returned (with progress bar) > AJAX post request to the server side > complete script > return result to html. Other options (threading?) I am running Google App Engine (Python) Standard Environment.
true
48,896,407
1.2
0
0
1
The way we're doing it is using the Ajax approach (the second one) which is what everyone else does. You can use Task Queues to run your scripts asynchronously and return the result to front end using FCM (Firebase Cloud Messaging). You should also try to break the script into multiple task queues to make it run faster.
0
73
0
3
2018-02-21T00:27:00.000
javascript,python,html,ajax,google-app-engine
Best practice for loading webpage with long server side script
1
2
2
48,899,196
0
1
0
Right now, I am generating the Allure report through the terminal by running the command allure serve {folder that contains the json files}, but this way the HTML report will only be available on my local machine because: the JSON files that generated the report are on my computer, and I ran the command through the terminal (if I kill the terminal, the report is gone). I have tried saving the Allure report as "Webpage, Complete", but the results did not carry over to the page; all I was seeing was blank fields. So, what I'm trying to do is: after I execute the command to generate the report, I want to have an HTML file of the report that I can store, save to my computer, or send through email, so I do not have to execute the command to see previous reports (as much as possible in one HTML file).
true
48,914,528
1.2
1
0
5
It doesn't work because the Allure report, as you've seen, is not a simple webpage. It's a local Jetty server instance that serves the generated report, which you then open in the browser. Here are some solutions for your needs: one server (your local PC, a remote one, or some CI environment) where you can generate the report and share it with your team (the server should be running all the time); or share the Allure report folder as files ({folder that contains the json files}) with your teammates, have them set up the allure tool, and run the allure serve command locally. Hope it helps.
0
10,030
0
7
2018-02-21T20:04:00.000
python,automation,frameworks,allure
Is there a way to export Allure Report to a single html file? To share with the team
1
2
6
48,926,889
0
1
0
Right now, I am generating the Allure report through the terminal by running the command allure serve {folder that contains the json files}, but this way the HTML report will only be available on my local machine because: the JSON files that generated the report are on my computer, and I ran the command through the terminal (if I kill the terminal, the report is gone). I have tried saving the Allure report as "Webpage, Complete", but the results did not carry over to the page; all I was seeing was blank fields. So, what I'm trying to do is: after I execute the command to generate the report, I want to have an HTML file of the report that I can store, save to my computer, or send through email, so I do not have to execute the command to see previous reports (as much as possible in one HTML file).
false
48,914,528
0
1
0
0
Allure generates the HTML report in a temp folder after execution; you can upload it to a service like Netlify and it will generate a URL to share.
0
10,030
0
7
2018-02-21T20:04:00.000
python,automation,frameworks,allure
Is there a way to export Allure Report to a single html file? To share with the team
1
2
6
63,722,118
0
0
0
I am basically working on a personal project, but I'm stuck at some point. I am trying to make a login request to hulu.com using Python's requests module, but the problem is Hulu needs a cookie and a CSRF token. When I inspected the request with an HTTP debugger it showed me the action URL and some request headers, and the cookie and the CSRF token were already there. But how can I do that with the requests module? I mean getting the cookies and the CSRF token before the POST request? Any ideas? Thanks
true
48,940,807
1.2
0
0
0
First create a session, then make a GET request and use session.cookies.get_dict(); it will return a dict and it should have the appropriate values you need.
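A generic sketch of that flow; the login URL, form field names and where the CSRF token shows up differ per site, so treat all of them as placeholders to confirm in the HTTP debugger.

```python
import requests

session = requests.Session()

# GET first, so the server sets its cookies on the session.
resp = session.get("https://example.com/login")
print(session.cookies.get_dict())            # cookies that will be sent back automatically

# The CSRF token is often one of those cookies (or hidden in the HTML form).
csrf_token = session.cookies.get_dict().get("csrf_token", "")

payload = {"username": "me", "password": "secret", "csrf_token": csrf_token}
login = session.post("https://example.com/login", data=payload)
print(login.status_code)
```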
0
882
0
0
2018-02-23T03:48:00.000
python,post,cookies,request
How to get cookies before making request in Python
1
1
1
48,940,818
0
0
0
I am creating a REST API. Basic idea is to send data to a server and the server gives me some other corresponding data in return. I want to implement this with SSL. I need to have an encrypted connection between client and server. Which is the best REST framework in python to achieve this?
true
48,942,393
1.2
0
0
3
You can choose any framework to develop your API. If you want SSL on your API endpoints, you need to set up SSL with the web server that is hosting your application. You can obtain a free SSL cert using Let's Encrypt; you will however need a domain in order to be able to get a valid SSL certificate. The SSL connection between client and server does not depend on the framework you choose. Web servers like Apache HTTPD and Nginx act as the public-facing reverse proxy to your Python web application. Configuring SSL on your web server will give you encrypted communication between client and server.
0
6,205
0
1
2018-02-23T06:32:00.000
python,rest,django-rest-framework,flask-restful,falcon
REST API in Python over SSL
1
1
2
48,942,911
0
0
0
Basically, I want to use Python to query my IB order history and do some analysis afterwards. But I could not find any existing API for me to query these data; does anyone have experience doing this?
false
48,942,917
1
0
0
6
You have to use flex queries for that purpose. It has full transaction history including trades, open positions, net asset value history and exchange rates.
0
4,976
0
5
2018-02-23T07:13:00.000
python,api,interactive-brokers
Interactive brokers: How to retrieve transaction history records?
1
2
2
51,470,089
0
0
0
Basically, I want to use Python to query my IB order history and do some analysis afterwards. But I could not find any existing API for me to query these data; does anyone have experience doing this?
true
48,942,917
1.2
0
0
2
The TWS API doesn't have this functionality. You can't retrieve order history, but you can get open orders using the reqOpenOrders request and capture executions in real time by listening to the execDetails event - just write them to a file and analyse them afterwards.
0
4,976
0
5
2018-02-23T07:13:00.000
python,api,interactive-brokers
Interactive brokers: How to retrieve transaction history records?
1
2
2
49,012,298
0
0
0
Hi i am new to GRPC and i want to send one message from server to client first. I understood how to implement client sending a message and getting response from server. But i wanna try how server could initiate a message to connected clients. How could i do that?
false
48,969,107
0
0
0
0
Short answer: you can't gRPC is a request-response framework based on HTTP2. Just as you cannot make a website that initiates a connection to a browser, you cannot make a gRPC service initiating a connection to the client. How would the service even know who to talk to? A solution could be to open a gRPC server on the client. This way both the client and the server can accept connections from one another.
0
325
0
0
2018-02-25T00:38:00.000
python,grpc
How to let server send the message first in GRPC using python
1
1
1
49,018,750
0
1
0
I was working with Pyrebase( python library for firebase) and was trying .stream() method but when I saw my firebase dashboard it showed 100 connection limit reached. Is there any way to remove those concurrent connection?
false
48,973,464
0
0
0
0
There is a limit of 100 concurrent connections to the database for Firebase projects that are on the free Spark plan. To raise the limit, upgrade your project to a paid plan.
0
399
0
0
2018-02-25T12:34:00.000
python,rest,firebase,firebase-realtime-database
Firebase connection limit reached
1
1
1
48,975,506
0
0
0
I am using a python gRPC client and make request to a service that responds a stream. Last checked the document says the iterator.next() is sync and blocking. Have things changed now ? If not any ideas on overcoming this shortcoming ? Thanks Arvind
true
48,979,972
1.2
0
0
1
Things have not changed; as of 2018-03 the response iterator is still blocking. We're currently scoping out remedies that may be ready later this year, but for the time being, calling next(response_iterator) is only way to draw RPC responses.
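A workaround sketch rather than a gRPC feature: drain the blocking response iterator in a background thread and hand items to the main thread through a queue. Here response_iterator stands for whatever your stub's server-streaming call returned; the sentinel and the commented-out usage are illustrative.

```python
import queue
import threading

def drain(response_iterator, out_queue):
    for response in response_iterator:        # next() blocks, but only in this thread
        out_queue.put(response)
    out_queue.put(None)                       # sentinel: the stream has finished

def consume_in_background(response_iterator):
    out_queue = queue.Queue()
    t = threading.Thread(target=drain, args=(response_iterator, out_queue), daemon=True)
    t.start()
    return out_queue

# Usage sketch (the stub and call name are hypothetical):
# q = consume_in_background(stub.SomeServerStreamingCall(request))
# item = q.get(timeout=1.0)   # main thread stays free to do other work
```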
0
1,426
0
1
2018-02-26T00:34:00.000
python,grpc
Is grpc server response streaming still blocking?
1
1
2
49,501,641
0
0
0
Does someone have a solution for detecting and mitigating TCP SYN Flood attacks in the SDN environment based on POX controller?
false
49,003,874
0
0
0
0
As I understand it, you may need to prepare a third-party program to collect flow information (e.g. sFlow) and write a program to communicate with the SDN controller. The SDN controller covers all traffic on the switches; it doesn't handle events above L4 in the general case.
0
613
1
0
2018-02-27T08:05:00.000
python,sdn,pox
Python Code to detect and mitigate TCP SYN Flood attacks in SDN and POX controller
1
1
1
49,146,800
0
0
0
I want to use the Lyft driver API like the Mystro Android app does; however, I've searched everywhere and all I could find is the Lyft API. To elaborate on what I'm trying to achieve: I want an API that will allow me to integrate with the Lyft driver app, not the Lyft rider app. I want to be able, for example, to view nearby ride requests as a driver. The Mystro Android app has this feature; how is it done?
false
49,011,180
0.197375
0
0
1
The Mystro app does not have any affiliation with either Uber or Lyft nor do they use their APIs to interact with a driver (as neither Uber or Lyft have a publicly accessible driver API like this). They use an Android Accessibility "feature" that let's the phone look into and interact with other apps you have running. So basically Mystro uses this accessibility feature (Google has since condemned the use of the accessibility feature like this) to interact with the Uber and Lyft app on the driver's behalf.
0
249
0
0
2018-02-27T14:36:00.000
android,python,ios,node.js,lyft-api
How do I use Lyft driver API like Mystro android app?
1
1
1
52,992,307
0
0
0
Both an existing raspberry pi 3 assistant-sdk setup and a freshly created one are producing identical errors at all times idle or otherwise. The lines below are repeating over and do not seem to be affected by the state of the assistant. Replicates across multiple developer accounts, devices and projects. Present with both the stock hotword example and modified scripts that worked previously. All cases are library assistant and python 3 on raspberry pi 3 model B running raspbian stretch. [9780:9796:ERROR:assistant_ssdp_client.cc(210)] Failed to parse header: LOCATION: about:blank [9780:9796:ERROR:assistant_ssdp_client.cc(76)] LOCATION header doesn't contain a valid url
false
49,041,313
0.197375
0
0
1
This fixed it for me: pip3 install google-assistant-library==0.1.0
0
126
0
0
2018-03-01T01:33:00.000
python,raspberry-pi,raspberry-pi3,google-assistant-sdk
Assistant SDK on raspberry pi 3 throwing repeated location header errors
1
1
1
49,223,023
0
1
0
At the moment i am working on an odoo project and i have a kanban view. My question is how do i put a kanban element to the bottom via xml or python. Is there an index for the elements or something like that?
false
49,046,224
0
0
0
0
I solved it myself. I just added _order = 'finished asc' to the class. finished is a record of type Boolean and tells me if the Task is finished or not.
0
63
0
0
2018-03-01T09:10:00.000
python-2.7,odoo-8,odoo
Is there a way to put a kanban element to the bottom in odoo
1
1
1
49,046,718
0
0
0
I'm just starting with Selenium in python, and I have set up an ActionChains object and perform()ed a context click. How do I tell whether a context menu of any sort has actually popped up? For example, can I use the return value in some way? The reason is that I want to disable the context menu in some cases, and want to test if this has actually been done.
true
49,058,060
1.2
0
0
2
Selenium cannot see or interact with native context menus. I recommend testing this in a JavaScript unit test, where you can assert that event.preventDefault() was called. It's arguably too simple/minor of a behavior to justify the expense of a Selenium test anyway.
0
155
0
0
2018-03-01T20:20:00.000
python,selenium,contextmenu
Selenium: how to check if context menu has appeared
1
1
1
49,062,808
0
1
0
I would like to use the ActionChains functionality of Selenium. Below is my code, but it does not work when the right-click menu opens: the ARROW_DOWN and ENTER are applied to the main window, not the right-click menu. How can the ARROW_DOWN and ENTER code be applied to the right-click menu? Browser = webdriver.Chrome() actionChain = ActionChains(Browser) actionChain.context_click(myselect[0]).send_keys(Keys.ARROW_DOWN).send_keys(Keys.ENTER).perform()
false
49,063,955
0.099668
0
0
1
Selenium cannot see or interact with native context menus.
0
913
0
0
2018-03-02T06:38:00.000
python-3.x
(Python Selenium with Chrome) How to click in the right click menu list
1
1
2
49,070,495
0
0
0
As the title says, I'm looking for a way to send an AT command to a remote XBee and read the response. My code is in Python and I'm using the digi-xbee library. Another question: my goal in using an AT command is to get the node ID of that remote XBee device when it sends me a message. I don't want to do a full scan of the network, I just want to get its node ID, and obviously the node ID doesn't come within the frame, so I have to send it an AT command so it sends me back its node ID. If you have any suggestions that may help, please tell me, I'm open to any helpful idea. PS. I tried to use read_device_info() within the callback function that launches when data is received, but it didn't work. It works outside the function but not inside! Thanks in advance
false
49,088,925
0
0
0
0
There is a way to send a command to a remoted xbee: First, connect to the local XBee and then send a command to the local Xbee so the local Xbee can send a remote_command to the remoted XBee. Here are the details: Create a bytearray of the command. For e.g: My command is: 7E 00 10 17 01 00 13 A2 00 41 47 XX XX FF FE 02 50 32 05 C5, generated using XCTU. It is a remote AT command to set the pin DIO12 of the remoted XBee to digital out, high [5]. Create a raw bytearray of it. raw = bytearray([0x7E,0x00,0x10,0x17,0x01,0x00,0x13,0xA2,0x00,0x41,0x47,0xXX,0xXX, 0xFF,0xFE,0x02,0x50,0x32,0x05,0xC5]) Create a packet: using from digi.xbee.packets.common import RemoteATCommandPacket ATpacket = RemoteATCommandPacket.create_packet(raw, OperatingMode.API_MODE) Send the packet to the local XBee: device.send_packet(ATpacket) Bonus: A more simple way to create a packet: ATpacket = RemoteATCommandPacket(1,XBee16BitAddress.from_hex_string("0013A2004147XXXX"),XBee16BitAddress.from_hex_string("FFFE"),2,"P2",bytearray([0x05]))
0
777
1
0
2018-03-03T20:31:00.000
python,xbee
how to send remote AT command to xbee device using python digi-xbee library
1
2
2
52,888,170
0
0
0
As the title says, I'm looking for a way to send an AT command to a remote XBee and read the response. My code is in Python and I'm using the digi-xbee library. Another question: my goal in using an AT command is to get the node ID of that remote XBee device when it sends me a message. I don't want to do a full scan of the network, I just want to get its node ID, and obviously the node ID doesn't come within the frame, so I have to send it an AT command so it sends me back its node ID. If you have any suggestions that may help, please tell me, I'm open to any helpful idea. PS. I tried to use read_device_info() within the callback function that launches when data is received, but it didn't work. It works outside the function but not inside! Thanks in advance
false
49,088,925
0
0
0
0
When you receive a message you get an xbee_message object. First you must define a data receive callback function and add it to the device. On that message you call remote_device.get_64bit_addr().
0
777
1
0
2018-03-03T20:31:00.000
python,xbee
how to send remote AT command to xbee device using python digi-xbee library
1
2
2
49,214,426
0
0
0
We are trying to convert a gRPC protobuf message so that it finally becomes a JSON object for processing in Python. The data sent across from the server in serialized format is around 35MB and there are around 15K records. But when we convert the protobuf message into a string (using MessageToString) it is around 135 MB, and when we convert the protobuf message into a JSON string (using MessageToJson) it is around 140MB. The time taken for conversion is around 5 minutes for each. It does not add any value when we take so much time to convert data on the client side. Any thoughts, suggestions or caveats that we are missing would be helpful. Thanks.
false
49,091,459
0
1
0
0
Fixed the issue by only picking the fields that is needed when deserializing the data, rather than deserialize all the data returned from the server.
0
1,175
0
0
2018-03-04T02:49:00.000
python,json,protocol-buffers,grpc
Converting gRPC protobuf message to json runs for long
1
1
1
50,177,889
0
0
0
I want to share my local WebSocket on the internet, but ngrok only supports HTTP, and my ws.py address is ws://localhost:8000/. It works fine on localhost, but I don't know how to use this on the internet.
false
49,129,451
0.197375
0
0
1
You can use ngrok http 8000 to access it. It will work. Although ws is altogether a different protocol than http, ngrok handles it internally.
0
2,555
0
2
2018-03-06T11:12:00.000
python,websocket,localhost,ngrok,serve
how to use ws(websocket) via ngrok
1
1
1
52,701,751
0
1
0
I want to put a jpg into a dropzone from another window. Can I do that? In my test I open a new window (my HTML with the jpg) and I want to drag and drop it into the dropzone on my main window. I get the error: Message: stale element reference: element is not attached to the page document. Maybe there is another solution for placing this file, e.g. from disk? I've tried several ways, including loading the file from disk and sending it using send_keys.
false
49,148,607
0
0
0
0
I solved the problem by creating a script in AutoIT.
0
87
0
0
2018-03-07T09:40:00.000
python,selenium,drag-and-drop,webdriver
Can i use drag and drop from other window? Python Selenium
1
1
1
49,151,725
0
1
0
I have a query as to whether what I want to achieve is doable, and if so, perhaps someone could give me some advice on how to achieve it. I have set up a health check on Route 53 for my server, and I have arranged it so that if the health check fails, the user will be redirected to a static website I have set up at a backup site. I also have a web scraper regularly collecting data, and my question is: would there be a way to use the data I have collected and, depending on its value, either pass or fail the health check, therefore determining which site the user would be diverted to? I have discussed this with AWS support and they have said that their policies and conditions are there by design and, long story short, would not support what I am trying to achieve. I'm a pretty novice programmer so I'm not sure if it's possible to work this, but this is my final hurdle, so any advice or help would be hugely appreciated. Thanks!
true
49,198,057
1.2
0
0
2
Make up a filename. Let's say healthy.txt. Put that file on your web server, in the HTML root. It doesn't really matter what's in the file. Verify that if you go to your site and try to download it using a web browser, it works. Configure the Route 53 health check as HTTP and set the Path for the check to use /healthy.txt. To make your server "unhealthy," just delete the file. The Route 53 health checker will get a 404 error -- unhealthy. To make the server "healthy" again, just re-create the file.
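A minimal sketch of how the scraper could toggle that file based on the collected data; the path under the web root is an assumption:

    import os

    HEALTH_FILE = "/var/www/html/healthy.txt"   # path under your web root - an assumption

    def set_health(healthy):
        if healthy:
            # Re-create the file so Route 53 gets a 200 for /healthy.txt
            open(HEALTH_FILE, "w").close()
        elif os.path.exists(HEALTH_FILE):
            # Remove it so the health checker gets a 404 and marks the endpoint unhealthy
            os.remove(HEALTH_FILE)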
0
42
0
0
2018-03-09T16:24:00.000
python,amazon-web-services,amazon-route53,health-monitoring
Intentionally Fail Health Check using Route 53 AWS
1
1
1
49,201,020
0
0
0
In the docs for heapq, it's written that heapq.heappushpop(heap, item): Push item on the heap, then pop and return the smallest item from the heap. The combined action runs more efficiently than heappush() followed by a separate call to heappop(). Why is it more efficient? Also, is it considerably more efficient?
true
49,228,574
1.2
0
0
4
heappop pops out the first element, then moves the last element to fill the first place, then does a sinking operation, moving that element down through consecutive exchanges, thus restoring the heap; it is O(log n). Then heappush places the element in the last place and bubbles it up, like the sinking in heappop but in reverse - another O(log n). heappushpop pops out the first element, but instead of moving the last element to the top, it places the new element at the top and then does the sinking motion, which is almost the same operation as heappop - just one O(log n), as above. Even though they are all O(log n), it is easy to see that heappushpop is faster than heappop followed by heappush.
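A small demonstration (arbitrary numbers) that the combined call gives the same result while doing one sift instead of two:

    import heapq

    heap = [3, 5, 7, 9]
    heapq.heapify(heap)

    a = heap[:]
    combined = heapq.heappushpop(a, 4)      # one sift-down from the root

    b = heap[:]
    heapq.heappush(b, 4)                    # sift-up...
    separate = heapq.heappop(b)             # ...then another sift-down

    print(combined, separate)   # 3 3 - same value returned
    print(a == b)               # True - both heaps end up identical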
0
963
0
2
2018-03-12T05:21:00.000
python-3.x,heap
How is heapq.heappushpop more efficient than heappop and heappush in python
1
2
2
49,232,244
0
0
0
In the docs for heapq, it's written that heapq.heappushpop(heap, item): Push item on the heap, then pop and return the smallest item from the heap. The combined action runs more efficiently than heappush() followed by a separate call to heappop(). Why is it more efficient? Also, is it considerably more efficient?
false
49,228,574
0.197375
0
0
2
heappushpop pushes an element and then pops the smallest element. If the element you're pushing is smaller than the heap's minimum, then there's no need to do any operations, because we know that the element we're trying to push (which is smaller than the heap min) would be the one popped if we did it in two operations. This is efficient, isn't it?
0
963
0
2
2018-03-12T05:21:00.000
python-3.x,heap
How is heapq.heappushpop more efficient than heappop and heappush in python
1
2
2
57,665,038
0
1
0
I am required to send the POS receipt to the customer while validating a POS order. The challenge is that the ticket is defined in point_of_sale/xml/pos.xml and the receipt template name is <t t-name="PosTicket">. How can I send this via email to the customer?
false
49,235,894
0
1
0
0
You can create a wizard at the time of validation of the POS order which pops up after validating the order. In that popup, enter the customer's email address, and on submit the receipt is forwarded directly to that customer.
0
205
0
0
2018-03-12T12:59:00.000
python-3.x,odoo,point-of-sale,odoo-11
Send POS Receipt Email to Customer While Validating POS Order
1
1
1
52,870,087
0
0
0
I am playing around with scapy (a module for Python). I want to build packets and send them across my local network from one host to another. When I build my packet like this, I do not receive anything on my destination host: packet = Ether() / IP(dst='192.168.0.6') / TCP(dport=8000) => sendp(packet). However, when I build it like this it works: packet = IP(dst='192.168.0.6') / TCP(dport=8000), send(packet). I capture the packets on my destination host with the help of Wireshark. Why doesn't the Ethernet variant work? I have all my PCs connected with ethernet cables... Thanks for help!
true
49,243,269
1.2
0
0
1
send() uses Scapy's routing table (which is copied from the host's routing table when Scapy is started), while sendp() uses the provided interface, or conf.iface when no value is specified. So you should either set conf.iface = [iface] ([iface] being the interface you want to use), or specify sendp([...], iface=[iface]).
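A minimal sketch of both fixes the answer mentions; the interface name is an assumption:

    from scapy.all import Ether, IP, TCP, sendp, conf

    conf.iface = "eth0"   # interface name is an assumption - use your actual NIC
    packet = Ether() / IP(dst="192.168.0.6") / TCP(dport=8000)
    sendp(packet)         # or explicitly: sendp(packet, iface="eth0")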
0
404
1
1
2018-03-12T19:41:00.000
python,wireshark,scapy
Can't send ethernet packages across my LAN
1
1
1
49,250,191
0
0
0
I am trying to click on an element but getting the error: Element is not clickable at point (x,y.5) because another element obscures it. I have already tried moving to that element first and then clicking and also changing the co-ordinates by minimizing the window and then clicking, but both methods failed. The possible duplicate question has answers which I have already tried and none of them worked for me. Also, the same code is working on a different PC. How to resolve it?
false
49,252,880
-0.099668
0
0
-2
I found that sometimes the webpage is not fully loaded and the answer is as simple as adding a time.sleep(2)
0
15,329
0
11
2018-03-13T09:46:00.000
python,selenium,selenium-webdriver
Element is not clickable at point (x,y.5) because another element obscures it
1
2
4
68,574,224
0
0
0
I am trying to click on an element but getting the error: Element is not clickable at point (x,y.5) because another element obscures it. I have already tried moving to that element first and then clicking and also changing the co-ordinates by minimizing the window and then clicking, but both methods failed. The possible duplicate question has answers which I have already tried and none of them worked for me. Also, the same code is working on a different PC. How to resolve it?
true
49,252,880
1.2
0
0
11
There is possibly one thing you can do. It is very crude though, I'll admit it straight away. You can simulate a click on the element directly preceding the element in need, and then simulate a key press [TAB] and [ENTER]. Actually, I've been seeing that error recently. I was using the usual .click() command provided by bare selenium - like driver.find_element_by_xpath(xpath).click(). I've found that using ActionChains solved that problem. Something like ActionChains(driver).move_to_element(element).click().perform() worked for me. You will need: from selenium.webdriver.common.action_chains import ActionChains
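A minimal sketch of the ActionChains approach described above; the URL and locator are hypothetical, and the selenium-3 style find_element_by_xpath call is assumed to match the version in use:

    from selenium import webdriver
    from selenium.webdriver.common.action_chains import ActionChains

    driver = webdriver.Chrome()                 # assumes chromedriver is on PATH
    driver.get("https://example.com")           # placeholder URL

    # Hypothetical locator - substitute the element you actually need to click.
    element = driver.find_element_by_xpath("//button[@id='submit']")
    ActionChains(driver).move_to_element(element).click().perform()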
0
15,329
0
11
2018-03-13T09:46:00.000
python,selenium,selenium-webdriver
Element is not clickable at point (x,y.5) because another element obscures it
1
2
4
49,261,182
0
0
0
I develop HTTP GET web services (REST) in a distributed microservices architecture. For performance reasons, I need caching on the clients of the web services. Is there an urllib-like library that uses the HTTP cache headers of the web services to cache? Note: requests-cache does not seem to read HTTP headers.
false
49,273,441
0
0
0
0
Why do we need to cache the HTTP headers? Normally, only GET responses are worth caching on the client.
0
13
0
0
2018-03-14T09:02:00.000
python-3.x,rest,web-services,urllib,microservices
urllib-like library that caches accordingly to HTTP headers in python?
1
1
1
49,355,304
0
0
0
I have a remote cron job that scrapes data using selenium every 30 minutes. Roughly 1 in 10 times the selenium script fails. When the script fails, I get an error output instead (various selenium error messages). Does this cause the cron job to stop? Shouldn't crontab try to run the script again in 30 minutes? After a failed attempt, when I type crontab -l, it still shows my cron job. How do I ensure that the crontab tries again in 30 minutes?
false
49,283,567
0
1
0
0
ANSWER: The website I was scraping was sophisticated enough to find out I was using selenium, because cron was running the job every 30 minutes on the dot. So they flagged my VM's IP address after the 4th or 5th attempt. My solution was simple: add randomness to the interval at which I scraped the website using random.uniform and time.sleep - now I have no issues scraping.
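A minimal sketch of that jitter; the delay range and the entry point are assumptions:

    import random
    import time

    # Wait a random 0-10 minutes so requests don't land exactly on the half hour.
    time.sleep(random.uniform(0, 600))
    run_scraper()   # hypothetical entry point of the selenium script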
0
306
0
0
2018-03-14T16:56:00.000
python-3.x,selenium,cron
Does cron job persist/run again if the python script fails?
1
2
2
49,321,214
0
0
0
I have a remote cron job that scrapes data using selenium every 30 minutes. Roughly 1 in 10 times the selenium script fails. When the script fails, I get an error output instead (various selenium error messages). Does this cause the cron job to stop? Shouldn't crontab try to run the script again in 30 minutes? After a failed attempt, when I type crontab -l, it still shows my cron job. How do I ensure that the crontab tries again in 30 minutes?
false
49,283,567
0
1
0
0
Who is sending the error output? If it's the cron daemon, then your job should be dead; if the selenium process itself is sending the mail, then it may still be running, and stuck.
0
306
0
0
2018-03-14T16:56:00.000
python-3.x,selenium,cron
Does cron job persist/run again if the python script fails?
1
2
2
49,287,487
0
1
0
Let's say I have used chrome_options.add_argument("--headless") with selenium, but now I want the browser to open. Is this possible? Thanks
false
49,290,655
0
0
0
0
No, it isn’t possible. The --headless option is a command-line flag used to instantiate the browser, meaning it is being told to execute headlessly for the entirety of its existence.
0
51
0
0
2018-03-15T02:31:00.000
python,selenium,selenium-chromedriver
How to open headless browser in selenium?
1
1
1
49,301,624
0
0
0
During the installation of exchangelib, the installer tries to connect to the internet to get dependencies. On this computer it is not possible to open the firewall to provide that access - it is a very restricted system. Is there a way to do an offline installation of exchangelib? Best Regards Klaus Heubisch
false
49,293,207
0
0
0
0
You have a couple of different possibilities. I think the most simple one is to create a virtualenv on a system that does have Internet access and install exchangelib and its dependencies there. You can then copy that virtualenv to the system with no Internet access. Virtualenvs contain absolute paths, so you would need to either copy it to the same path on the other server, or make the virtualenv relocatable.
0
224
0
2
2018-03-15T06:54:00.000
python-3.x,exchange-server,exchangelib
I cannot install exchangelib on a very restricted system which has no internet connection and it is not possible to create one
1
1
1
49,296,477
0
0
0
I am trying to test my Python script using Jenkins. The issue I am facing is with the test report generation. I have created a folder 'test_reports' in my Jenkins workspace: C:\Program Files (x86)\Jenkins\jobs\PythonTest\test_reports. But when I run the script from Jenkins I get the error: ERROR: Step ‘Publish JUnit test result report’ failed: No test report files were found. Configuration error? How do I actually configure the test report? Is the xml file generated automatically? Any help would be greatly appreciated
false
49,361,114
0
1
0
0
This was an expected result because the script file I wrote was not a unit-test module. It was just a normal Python file (it wasn't supposed to create any XML results). Once I created the script using the unittest framework and imported the XML runner, I was able to generate the XML files of the results.
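A minimal sketch of such a unittest module; it assumes the unittest-xml-reporting package (imported as xmlrunner) and writes into the test_reports folder mentioned in the question:

    import unittest
    import xmlrunner   # from the unittest-xml-reporting package

    class SmokeTest(unittest.TestCase):
        def test_addition(self):
            self.assertEqual(1 + 1, 2)

    if __name__ == "__main__":
        # Writes JUnit-style XML into test_reports/ for the Jenkins publisher to pick up.
        unittest.main(testRunner=xmlrunner.XMLTestRunner(output="test_reports"))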
0
648
0
0
2018-03-19T10:49:00.000
python,testing,jenkins,report
How to configure xml Report in jenkins
1
1
1
49,377,372
0
0
0
I am trying to pull invoices by accounts and have not managed to find a way to link the two. Am I missing something? I tried going through Contacts, but they don't seem to have an Account or Account ID to match on. I am using PyXero for this; however, this doesn't seem relevant - it's more about the data from the Xero API. Thanks
false
49,382,835
0
0
0
0
I've figured it out - these details only appear when pulling an invoice one by one or paginated in the line items column.
0
127
0
0
2018-03-20T11:18:00.000
python,xero-api
Xero Api matching Accounts with Invoices
1
1
1
49,387,718
0
0
0
Good day. I have a question about handling accepted connections. I have Python's Tornado IOLoop and a listening socket. When a new client is connected and this connection is accepted by the Tornado handler, client interaction begins. That interaction includes multiple requests/responses, so there is a reason to poll the accepted socket for available bytes. How do I do this polling the correct way? The direct way is to use epoll/select, but this is like reinventing IOLoop. But is it correct to create an IOLoop for each new connection?
true
49,390,862
1.2
0
0
0
I've looked at how "tornado.web" does it. It works with the default IOLoop instance, and that instance accepts connections and handles (processes) the new sockets that were created after connections were accepted. The second part is done by IOStream. So the answer is to use the same IOLoop object and not to poll sockets manually.
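A minimal sketch of that pattern using Tornado's TCPServer, which wraps the same IOStream/IOLoop machinery; the port is an assumption:

    from tornado.ioloop import IOLoop
    from tornado.iostream import StreamClosedError
    from tornado.tcpserver import TCPServer

    class EchoServer(TCPServer):
        async def handle_stream(self, stream, address):
            # The shared IOLoop polls every accepted socket; no manual select/epoll needed.
            try:
                while True:
                    line = await stream.read_until(b"\n")
                    await stream.write(line)
            except StreamClosedError:
                pass

    server = EchoServer()
    server.listen(8888)            # port is an assumption
    IOLoop.current().start()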
0
120
1
0
2018-03-20T17:40:00.000
python,select,tornado,epoll,ioloop
IOLoop/epoll/select for accepted connections
1
1
1
49,399,600
0
0
0
I have tried downloading small files from Google Colaboratory. They are easily downloaded, but whenever I try to download files which have large sizes it shows an error. What is the way to download large files?
false
49,428,332
0
0
0
0
Google colab doesn't allow you to download large files using files.download(). But you can use one of the following methods to access it: The easiest one is to use github to commit and push your files and then clone it to your local machine. You can mount google-drive to your colab instance and write the files there.
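A minimal sketch of the Drive option; the filename is an assumption:

    from google.colab import drive
    import shutil

    drive.mount('/content/drive')   # prompts for authorization once
    # Filename is an assumption - copy your large output into Drive,
    # then download it from drive.google.com in the browser.
    shutil.copy('model_weights.h5', '/content/drive/My Drive/')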
0
7,373
0
11
2018-03-22T12:08:00.000
python-3.x,tensorflow,gpu,google-colaboratory
How to download large files (like weights of a model) from Colaboratory?
1
1
4
49,431,101
0
0
0
I'm using the warcio library to read and write warc files. When trying to write a record of a response object from requests.get(URL,stream=False), warcio is writing only HTTP headers to the record but not the payload. However, when stream mode is enabled it works fine. Is there a way store the payload when stream mode is not enabled?
true
49,429,211
1.2
0
0
0
I've found a workaround, but I'm not sure if it's the correct way. Instead of making the request object streamable, I've made the payload streamable with BytesIO(response.text.encode()), and this seems to work.
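A minimal sketch of that workaround; the URL and output filename are placeholders, and the warcio calls are assumed from its WARCWriter API:

    from io import BytesIO
    import requests
    from warcio.warcwriter import WARCWriter
    from warcio.statusandheaders import StatusAndHeaders

    resp = requests.get("https://example.com", stream=False)   # placeholder URL

    with open("output.warc.gz", "wb") as fh:
        writer = WARCWriter(fh, gzip=True)
        http_headers = StatusAndHeaders("200 OK", list(resp.headers.items()), protocol="HTTP/1.1")
        record = writer.create_warc_record(
            resp.url, "response",
            payload=BytesIO(resp.text.encode()),   # the streamable-payload workaround
            http_headers=http_headers,
        )
        writer.write_record(record)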
0
461
0
1
2018-03-22T12:52:00.000
python,python-3.x,python-requests,warc
Creating a warc record with requests.get() response using warcio
1
1
1
49,430,305
0
0
0
I am writing a piece of code where it is vital that the browser stays open; however, I need to be able to close windows to stop the browser from over-populating. I have been using the webbrowser module, but it seems that webbrowser doesn't have a way of closing a tab once opened. Any ideas? Remember the browser must stay open, so killing all tabs will close the browser. I must only close the tabs that were opened by my code! Any help would be greatly appreciated. Sorry if this isn't in the right place, feel free to move it. Software: Python 3.6.4 (32 bit) Modules used: Time, Random and Webbrowser.
false
49,457,439
0
0
0
0
Webbrowser is a limited API module for interfacing with popular browsers. The way I see it, you have a few options: find a module pertaining to the particular browser you're dealing with; work with the API of the browser(s) you're using directly; or request the feature for webbrowser in the future, which won't help you now, as they likely won't implement it any time soon.
0
42
0
0
2018-03-23T19:56:00.000
python
I need to be able to close an internet tab, but i cannot close the browser
1
1
1
49,458,231
0
0
0
I'm using ldap3. I can connect and read all attributes without any issue, but I don't know how to display the photo of the attribute thumbnailPhoto. If I print(conn.entries[0].thumbnailPhoto) I get a bunch of binary values like b'\xff\xd8\xff\xe0\x00\x10JFIF.....'. I have to display it on a bottle web page. So I have to put this value in a jpeg or png file. How can I do that?
false
49,458,945
0.099668
0
0
1
The easiest way is to save the raw byte value in a file and open it with a picture editor. The photo is probably a jpeg, but it can be in any format.
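A minimal sketch, assuming the ldap3 entry from the question and that the photo is a JPEG (the b'\xff\xd8\xff\xe0...JFIF' prefix suggests it is):

    def save_thumbnail(entry, path="thumbnail.jpg"):
        # entry is an ldap3 Entry, e.g. conn.entries[0] from the question
        photo_bytes = entry.thumbnailPhoto.value   # raw bytes of the attribute
        with open(path, "wb") as f:
            f.write(photo_bytes)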
0
4,282
0
1
2018-03-23T22:11:00.000
python-3.x,ldap
how to get and display photo from ldap
1
1
2
49,470,034
0
0
0
Am using Python 3.6.5rcs , pip version 9.0.1 , selenium 3.11.0. The Python is installed in C:\Python and selenium is in C:\Python\Lib\site-packages\selenium. The environment variables have been set. But the code from selenium import webdriver gives an unresolved reference error. Any suggestion on how to fix the problem.
false
49,482,586
0.066568
0
0
1
I used this command to resolve my error. pip install webdriver_manager
1
8,895
0
1
2018-03-26T00:59:00.000
python,selenium,pycharm
Pycharm Referenced Error With Import Selenium Webdriver
1
3
3
68,728,420
0
0
0
Am using Python 3.6.5rcs , pip version 9.0.1 , selenium 3.11.0. The Python is installed in C:\Python and selenium is in C:\Python\Lib\site-packages\selenium. The environment variables have been set. But the code from selenium import webdriver gives an unresolved reference error. Any suggestion on how to fix the problem.
false
49,482,586
0.066568
0
0
1
I found this worked for me. I'm using PyCharm Community 2018.1.4 on Windows. Navigate to: File->Settings->Project: [project name] -> Project Interpreter On this page click the configuration wheel at the top which should provide a drop down menu. Click "Add" and a window should appear called "Add Python Interpreter" You will be defaulted onto "Virtualenv Environment" tab. There should be a checkbox called "Inherit global site-packages". Check this. Click OK. All your installed packages should be added.
1
8,895
0
1
2018-03-26T00:59:00.000
python,selenium,pycharm
Pycharm Referenced Error With Import Selenium Webdriver
1
3
3
51,881,788
0
0
0
Am using Python 3.6.5rcs , pip version 9.0.1 , selenium 3.11.0. The Python is installed in C:\Python and selenium is in C:\Python\Lib\site-packages\selenium. The environment variables have been set. But the code from selenium import webdriver gives an unresolved reference error. Any suggestion on how to fix the problem.
true
49,482,586
1.2
0
0
3
PyCharm > Preferences > Project Interpreter. Then hit the '+' to install the package to your project path. Or you can add that path to your PYTHONPATH environment variable in your project.
1
8,895
0
1
2018-03-26T00:59:00.000
python,selenium,pycharm
Pycharm Referenced Error With Import Selenium Webdriver
1
3
3
49,482,631
0
0
0
I am testing complex, non-public webpages with python-selenium, which have interconnected iframes. To properly click on a button or select a given element in a different iframe, I have to switch to that iframe. Now, as the contents of the pages might reload, to reach the correct iframe I constantly have to check whether it is loaded yet; otherwise I have to go back to the default content, do the check again, etc. I find this a completely annoying and user-unfriendly behavior of selenium. Is there a basic workaround to find e.g. an element in ANY iframe? Because I do not care about iframes. I care about elements...
false
49,492,516
0
0
0
0
Unfortunately the API is built that way and you can't do anything about it. Each iframe is a separate document, so searching for an object in every iframe would mean Selenium has to switch to every iframe and do that for you. You can, however, build a workaround by storing the iframe paths and using helper methods to automatically switch to that iframe hierarchy in your code. Selenium won't help you here, but you can ease your pain by writing helper methods designed for your needs.
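A minimal sketch of such a helper; it only scans top-level iframes (nested frames would need recursion) and uses selenium-3 style calls:

    from selenium.common.exceptions import NoSuchElementException

    def find_in_any_frame(driver, by, value):
        # Try the default content first, then each top-level iframe.
        driver.switch_to.default_content()
        try:
            return driver.find_element(by, value)
        except NoSuchElementException:
            pass
        for frame in driver.find_elements_by_tag_name("iframe"):
            driver.switch_to.default_content()
            driver.switch_to.frame(frame)
            try:
                return driver.find_element(by, value)
            except NoSuchElementException:
                continue
        driver.switch_to.default_content()
        raise NoSuchElementException(value)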
0
717
0
0
2018-03-26T13:21:00.000
python,selenium,iframe
Is there a workaround to avoid iframes in selenium testing?
1
1
2
49,492,946
0
1
0
I'm crawling web pages to create a search engine and have been able to crawl close to 9300 pages in 1 hour using Scrapy. I'd like to know how much more can I improve and what value is considered as a 'good' crawling speed.
false
49,494,093
-0.066568
0
0
-1
I'm no expert but I would say that your speed is pretty slow. I just went to google, typed in the word "hats", pressed enter and: about 650,000,000 results (0.63 seconds). That's gonna be tough to compete with. I'd say that there's plenty of room to improve.
0
453
0
5
2018-03-26T14:38:00.000
python,scrapy,web-crawler
What is a good crawling speed rate?
1
2
3
49,523,000
0
1
0
I'm crawling web pages to create a search engine and have been able to crawl close to 9300 pages in 1 hour using Scrapy. I'd like to know how much more can I improve and what value is considered as a 'good' crawling speed.
false
49,494,093
0
0
0
0
It really depends but you can always check your crawling benchmarks for your hardware by typing scrapy bench on your command line
0
453
0
5
2018-03-26T14:38:00.000
python,scrapy,web-crawler
What is a good crawling speed rate?
1
2
3
70,224,507
0
0
0
os.path.ismount() will verify whether the given path is mounted on the local Linux machine. Now I want to verify whether the path is mounted on a remote machine. Could you please help me with how to achieve this? For example: my dev machine is xx:xx:xxx, and I want to verify whether '/path' is mounted on yy:yy:yyy. How can I achieve this using the os.path.ismount() function?
false
49,504,741
0
0
0
0
If you have access to both machines, then one way could be to leverage python's sockets. The client on the local machine would send a request to the server on the remote machine, then the server would do os.path.ismount('/path') and send back the return value to the client.
0
128
0
0
2018-03-27T05:06:00.000
python,python-2.7
Verify mountpoint in the remote server
1
1
1
49,505,061
0
0
0
A robot is connected to a network with restricted outbound traffic. Only inbound traffic is allowed, from one specific IP address (our IP, e.g. 111.111.111.111). All outgoing traffic is forbidden. There are settings and DHCP corresponding to the external IP (e.g. 222.222.222.222). We want to connect to Pepper from the IP 111.111.111.111. The connection through SSH is fine with ssh [email protected] and a password, but we cannot connect through Choregraphe or Python scripts. This is very important because we want to be able to connect to the robot remotely to upload different Choregraphe applications. This is the error when we are trying to connect with a Python script: [W] 18872 qimessaging.transportsocket: connect: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond ... RuntimeError: Failed to connect to service ALBehaviorManager on machine 1296211e-1921-3131-909b-69afa37ааа28. All endpoints are unavailable. Choregraphe hangs and crashes after a certain period of time. Can you give me some advice?
true
49,508,903
1.2
0
0
3
NAOqi connections go through port 9559 by default, so you could check whether that one is blocked. If you are unable to connect through port 9559, you can do a port forwarding. But I think this is a more network related question.
0
490
1
0
2018-03-27T09:18:00.000
python,networking,connection,pepper,choregraphe
How to connect Choregraphe/Python script to remote Pepper robot from different network?
1
1
1
49,528,384
0
0
0
I want to include google correlate into my application using Python but I require its API to do so. Please help me where to look at or share me some insights about it. Thanks.
false
49,526,421
0
0
0
0
Google correlate data is valid up till 2017 March, not sure if it's deprecated but it definitely won't be useful if you're after up-to-date correlations
0
218
0
0
2018-03-28T04:59:00.000
python-3.x
Is there any google correlate API that I can refer to?
1
1
1
55,298,815
0
0
0
So this is a bit of a tricky situation. Using Three.js/ReactJS and canvas. Scenario: When I click and drag a sphere beyond its boundaries, a tooltip shows a warning message over the mouse pointer. When I release the mouse, the tooltip disappears. When I click and drag the sphere back to a position inside the boundaries, the tooltip is not displayed once inside the boundaries. Bear in mind this is tied into the state of the app handled by React, and in this instance the tooltip gets updated in the state when the conditions are met. The tooltip element is present; however, the attributes and content get updated on a click and hold when the sphere is out of bounds... Using ActionChains(page.driver).move_to_element_with_offset(sphere_order_panel, -1047, 398).click_and_hold().move_to_element_with_offset(sphere_order_panel, -1633, 265).click_and_hold().perform() clicks on the element and drags it to the position outside of its boundaries, but the tooltip is NOT updated... I've put a breakpoint on the page; once I manually click into the page, my sphere gets snapped to my mouse location (meaning click_and_hold is indeed working), but I check the HTML and verify that the tooltip is not updated. However, if I manually use my mouse and click on the sphere, the tooltip will update! Is selenium automation not executing the click_and_hold correctly? I don't think this is the case. Is there a way to add the mouse pointer to the page using selenium? Or is there a way to use execute_script() to run JavaScript on the page to satisfy my conditions and get the tooltip to update? I'm really stuck on this, and this is a tricky situation (for me at least). Any help greatly appreciated.
false
49,539,286
0
0
0
0
To get around my issue, I had to do this:
    chain = ActionChains(page.driver).move_to_element_with_offset(sphere_order_panel, -1047, 398).click_and_hold()
    chain = chain.move_to_element_with_offset(sphere_order_panel, -1047, 398)
    chain.perform()
0
198
0
0
2018-03-28T16:04:00.000
python,reactjs,selenium,canvas,three.js
Python Selenium: Show Tooltip on Mouse Pointer (Three.js/React/Canvas)
1
1
1
49,634,904
1
0
0
In select, there is a list for error sockets, and epoll has an event for ERROR. But the selectors module just has events for EVENT_READ and EVENT_WRITE. Therefore, how can I know about an error socket without such an event?
true
49,547,266
1.2
0
0
6
An error on the socket will always result in the underlying socket being signaled as readable (at least). For example, if you are waiting for data from a remote peer, and that peer closes its end of the connection (or abends, which does the same thing), the local socket will get the EVENT_READ marking. When you go to read it, you would then get zero bytes (end of file), telling you that the peer is gone (or at least finished sending). Similarly, if you were waiting to send data and the peer resets the connection, you will get an EVENT_WRITE notification. When you then go to attempt a send, you will get an error from the send (which, in python, means an exception). The only thing you lose here from select is the ability to detect exceptional conditions: the xlist from select.select or POLLPRI from select.poll. If you needed those, you would need to use the lower-level select module directly. (Priority/out of band data is not commonly used so this is not an unreasonable choice.) So the simplified interface provided by selectors really loses no "error" information. If there is an error on the socket that would have caused a POLLERR return from select.poll, a RST from the remote, say, you will get a EVENT_READ or EVENT_WRITE notification and whatever error occurred will be re-triggered as soon as you attempt send or recv. A good rule of thumb to keep in mind with select, poll and friends is that a result indicating "readable" really means "will not block if you attempt to read". It doesn't mean you will actually get data back from the read; you may get an error instead. Likewise for "writable": you may not be able to send data, but attempting the write won't block.
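A minimal sketch showing an error surfacing through a plain EVENT_READ notification; the host is a placeholder:

    import selectors
    import socket

    sel = selectors.DefaultSelector()
    sock = socket.create_connection(("example.com", 80))   # placeholder host
    sock.setblocking(False)
    sel.register(sock, selectors.EVENT_READ)

    for key, events in sel.select(timeout=5):
        if events & selectors.EVENT_READ:
            try:
                data = key.fileobj.recv(4096)
            except OSError as exc:
                print("socket error surfaced on the read attempt:", exc)
            else:
                if not data:
                    print("peer closed the connection (EOF)")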
0
624
0
2
2018-03-29T02:53:00.000
python-3.x,sockets
why python selectors module has no event for socket error
1
1
1
49,563,017
0
0
0
To access Google Drive files, you need to call google.colab.auth.authenticate_user(), which presents a link to an authentication screen, which gives a key you need to paste in the original notebook. Is it possible to skip this altogether? After all, the notebook is already 'linked' to a specific account. Is it possible to save this token hardcoded in the notebook for future runs? Is it possible to create a token that can access only some files (useful when sharing the notebook with others - you want to give access only to some data files)? Is it possible to simplify the process (make it a single click, without needing to copy-paste the token)?
true
49,548,471
1.2
0
0
2
Nope, there's no way to avoid this step at the moment. No, there's no safe way to save this token between runs. Sharing the notebook doesn't share the token. Another user executing your notebook will go through the auth flow as themselves, and will only be able to use the token they get for Drive files they already have access to. Sadly, not right now. :)
0
763
0
3
2018-03-29T05:12:00.000
python,google-authentication,google-colaboratory
When accessing google driver from google colab, is it possible to eliminate, or simplify authentication?
1
1
1
49,583,498
0
0
0
In cases where I need to cancel an order, I need to know whether to void or refund the transaction. I'm trying to learn whether the transaction has settled using the Transaction Details API. transactionDetailsResponse.transaction.transactionStatus seems like it might be the right thing to look at. Does anyone know what the possible values are for transactionStatus? At this point, I'm only in the sandbox where there is only one value, capturedPendingSettlement. Do transactions in the sandbox settle?
true
49,558,617
1.2
0
0
0
That is the right place to look. The possible values for that field are: authorizedPendingCapture, capturedPendingSettlement, communicationError, refundSettledSuccessfully, refundPendingSettlement, approvedReview, declined, couldNotVoid, expired, generalError, failedReview, settledSuccessfully, settlementError, underReview, voided, FDSPendingReview, FDSAuthorizedPendingReview, returnedItem. Items in the sandbox do settle.
0
70
0
0
2018-03-29T14:25:00.000
python,authorize.net
How can I tell if an authorize.net transaction has settled?
1
1
1
49,566,293
0
1
0
I'm developing a chatbot using Heroku and Python. I have a file fetchWelcome.py in which I have written a function. I need to import the function from fetchWelcome into my main file. I wrote "from fetchWelcome import fetchWelcome" in the main file. But because we need to mention all the dependencies in the requirements file, it shows an error. I don't know how to mention a user-defined requirement. How can I import the function from another file into the main file? Both files (main.py and fetchWelcome.py) are in the same folder.
false
49,561,062
0
0
0
0
If we need to import a function from fileName.py into main.py, write "from .fileName import functionName". That way we don't need to list any dependency in the requirements file.
0
694
0
0
2018-03-29T16:33:00.000
python,heroku
Heroku Python import local functions
1
1
2
49,571,369
0
1
0
Is there a way to use Celery for: (1) queueing an HTTP call to an external URL with form parameters (HTTP POST to the URL); (2) handling the external URL's HTTP response (200, 404, 400 etc.), so that if the response is an error / non-200-ish response it will retry a certain number of times and then stop as needed; (3) adding a task/job/work item into the Celery queue using a REST API, passing the URL to call and the form parameters?
false
49,608,179
0.066568
0
0
1
You can use the Flower REST API to do the same: Flower is a monitoring tool for Celery, but it comes with a REST API to add tasks and so on: https://flower.readthedocs.io/en/latest/index.html
0
7,519
1
4
2018-04-02T08:53:00.000
python,rest,celery
Celery REST API
1
1
3
58,300,556
0
0
0
The Windows command netsh interface show interface shows all network connections and their names. A name could be Wireless Network Connection, Local Area Network or Ethernet etc. I would like to change an IP address with netsh interface ip set address "Wireless Network Connection" static 192.168.1.3 255.255.255.0 192.168.1.1 1 from a Python script, but I need the network interface name. Is it possible to get this information like we can get a hostname with socket.gethostname()? Or can I change an IP address with Python in another way?
false
49,624,485
0.379949
0
0
2
I don't know of a Python netsh API. But it should not be hard to do with a pair of subprocess calls. First issue netsh interface show interface, parse the output you get back, then issue your set address command. Or am I missing the point?
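A minimal sketch of that pair of subprocess calls; the interface name and addresses are the placeholders from the question, and the text output of netsh is locale dependent, so the parsing is left to you:

    import subprocess

    # List the interfaces; parse the names out of this output.
    output = subprocess.check_output(
        ["netsh", "interface", "show", "interface"], universal_newlines=True)
    print(output)

    # Once you know the interface name, set the address (values are placeholders).
    subprocess.check_call([
        "netsh", "interface", "ip", "set", "address",
        "Wireless Network Connection", "static",
        "192.168.1.3", "255.255.255.0", "192.168.1.1", "1",
    ])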
0
2,892
1
0
2018-04-03T07:25:00.000
python,static-ip-address
How to find out Windows network interface name in Python?
1
1
1
49,625,033
0
1
0
My website is hosted on Google App Engine using standard Python. In request handlers, I am setting the HTTP header "cache-control: max-age=3600, public", so the frontend server "Google Frontend" caches the response for 1 hr (which I want, to save cost). In rare cases the content of a page changes and I want the content in the frontend cache to be invalidated. How can I do that? Is there any API for it? I can't alter the URLs, as they get 90% of their traffic from Google Search.
false
49,658,369
0.379949
0
0
2
When you set cache-control via the header or meta tag, that tells the browser to store the response. So, the next time, it will not even ping your server. This means that you cannot invalidate that cache after set. What you need is a backend cache. Frameworks like Django, Flask, etc. make this easy. You can set a template cache, so it responds quickly, without much processing. You can store the response in GAE's memcache, and send from there. You can easily invalidate that cache, because you have complete control over it. Alternatively, you could change the url. Google reads your canonical meta tag to get the desired indexed url, so you can add a query string, etc., but still save the Google score to that indexed url.
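A minimal sketch of the memcache option on GAE standard (Python 2.7); render_page is a hypothetical placeholder for whatever builds the response:

    from google.appengine.api import memcache

    def get_page(key):
        html = memcache.get(key)
        if html is None:
            html = render_page()                   # hypothetical expensive render
            memcache.set(key, html, time=3600)     # cache server-side for an hour
        return html

    def invalidate_page(key):
        memcache.delete(key)                       # call this when the content changes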
0
130
1
0
2018-04-04T18:53:00.000
python,google-app-engine
How to invalidate cashed URL response from GAE "server: Google Frontend"
1
1
1
49,659,939
0
0
0
I'm using a Python 2.7.10 virtualenv when running Python code in IntelliJ. I need to install the requests[security] package. However, I'm not sure how to add that [security] option/config when installing the requests package using the Package installer in the File > Project Structure settings window.
false
49,679,283
0
0
0
0
Was able to install it by doing: Activating the virtualenv in the 'Terminal' tool window: source <virtualenv dir>/bin/activate Executing a pip install requests[security]
1
852
0
1
2018-04-05T18:38:00.000
python,python-2.7,intellij-idea,virtualenv
How to Install requests[security] in virtualenv in IntelliJ
1
1
1
49,679,964
0
0
0
I am using an Azure HTTP-triggered function to perform a task. I am passing the function key as an HTTP header parameter, and my payload is JSON with some data that invokes downstream procedures. I am using urllib (a Python lib) for this request, and this is the response I am getting, even though the function is getting triggered: urllib.error.HTTPError: HTTP Error 417: Expectation Failed
false
49,682,697
0
0
0
0
This was more of a firewall issue. We have been trying to connect to Azure Analysis Services from ADW, and we had added IP filtering (our corporate public IP) for AAS. Then, when the function's procedure tried to connect to AAS, it was facing an IP issue (this IP is NOT the corporate public IP). We added that IP and now things are working fine.
1
410
0
0
2018-04-05T22:51:00.000
python,azure,azure-functions
Azure HTTP trigger function call returning 417 error code
1
1
1
49,696,760
0
0
1
I am currently working with a large graph, with 1.5 million nodes and 11 million edges. For the sake of speed, I checked the benchmarks of the most popular graph libraries: igraph, graph-tool, NetworkX and Networkit. It seems igraph, graph-tool and Networkit have similar performance, and I eventually used igraph. With the directed graph built with igraph, the PageRank of all vertices can be calculated in 5 secs. However, when it came to betweenness and closeness, the calculation took forever. In the documentation, it says that by specifying "cutoff", igraph will ignore all paths longer than the cutoff value. I am wondering if there is a rule of thumb to choose the best cutoff value?
true
49,713,991
1.2
0
0
0
The cutoff really depends on the application and on the network parameters (# nodes, # edges). It's hard to talk about a closeness threshold, since it depends greatly on other parameters (# nodes, # edges, ...). One thing you can know for sure is that every closeness centrality is somewhere between 2/[n(n-1)] (the minimum, attained on a path) and 1/(n-1) (the maximum, attained on a clique or star). Perhaps a better question would be about the Freeman centralization of closeness (which is a somewhat normalized version of closeness that you can better compare between various graphs). Suggestion: you can do a grid search over different cutoff values and then choose the one that makes more sense based on your application.
0
676
0
1
2018-04-08T03:04:00.000
python,igraph
Cutoff in Closeness/Betweenness Centrality in python igraph
1
1
1
51,268,892
0
0
0
I want to know, after finding the closest match from the text section of the response table, how ChatterBot generates the "in_response_to" list and the "in_response_to_contains" list. If somebody could enlighten me on this it would be a great help.
true
49,719,767
1.2
0
0
0
The in_response_to list is generated based on previous input statements that the bot receives. So, for example, let's say that the following interaction occurs: User: "Hello, how are you?" Bot: "I am well, how are you." User: "I am also well." In this case, the bot would learn based on how the user responded to it. So "I am also well." is added to the in_response_to list of "I am well, how are you." For the second part of your question, in_response_to__contains is an attribute used to tell the chat bot to query the database for statements where the in_response_to field contains a particular response.
0
98
0
1
2018-04-08T15:51:00.000
python-3.x,chatterbot
how chatterbot is creating the in_response_to and in_response_to_contains list
1
1
1
50,184,310
0
0
0
I have Tika code on a server. I want to create an SFTP session with another server that holds the files and run Apache Tika against that server. I am using Python as the back end. Will this work? Is my approach correct? Thanks
true
49,776,224
1.2
1
0
0
So, what I was planning to do was not ideal. Apache Tika needs to scan physical files to fetch metadata, so I made a bridge: I started by pulling the files over the SFTP sessions to the server where the Tika code was hosted.
0
73
0
0
2018-04-11T13:19:00.000
python,sftp,apache-tika
using apache tika for scanning documents on servers using sftp
1
1
1
50,650,506
0
1
0
I'm working on some automation work; as per my requirement I need to click on Chrome's physical buttons like left nav, right nav, bookmarks, menu etc. I can do it with shortcuts, but my requirement is to click on the browser buttons. Any ideas would be helpful. Thanks in advance.
false
49,799,864
0
0
0
0
This can't be done with Selenium WebDriver, and I think also not with the standalone Selenium server. Selenium only allows you to interact with the DOM. The only way to achieve what you want is to use an automation tool that actually runs directly in the OS that you use. Java can be used to write such a program. I would, however, recommend not going this route. Instead, try to convince whoever is responsible for your requirements to re-think and allow other means of achieving back and forward actions.
0
1,530
0
0
2018-04-12T15:00:00.000
java,python,google-chrome,selenium,selenium-chromedriver
Selenium click chrome physical buttons like menu, left, right navigation, bookmarks
1
1
3
49,800,000
0
1
0
I've got an issue with scrapy and python. I have several links and I crawl data from each of them in one script using a loop. But the order of the crawled data is random, or at least doesn't match the link order, so I can't match the URL of each subpage with the output data. Like: crawled url, data1, data2, data3. Data1, data2, data3 => that part is OK, because it comes from one loop, but how can I add the current URL to the loop's output, or can I set the order of the link list, so the first link in the list is crawled first, the second is crawled second...
false
49,896,079
-0.066568
0
0
-1
OK, it seems that the solution is in the settings.py file in Scrapy: DOWNLOAD_DELAY = 3 (the delay between requests). It should be uncommented; by default it's commented out.
0
79
0
0
2018-04-18T09:29:00.000
python,scrapy
Scrapy - order of crawled urls
1
2
3
49,899,202
0
1
0
I've got an issue with scrapy and python. I have several links and I crawl data from each of them in one script using a loop. But the order of the crawled data is random, or at least doesn't match the link order, so I can't match the URL of each subpage with the output data. Like: crawled url, data1, data2, data3. Data1, data2, data3 => that part is OK, because it comes from one loop, but how can I add the current URL to the loop's output, or can I set the order of the link list, so the first link in the list is crawled first, the second is crawled second...
false
49,896,079
0
0
0
0
time.sleep() - would it be a solution?
0
79
0
0
2018-04-18T09:29:00.000
python,scrapy
Scrapy - order of crawled urls
1
2
3
49,898,314
0