input: string (length 0 to 27.7k)
CreationDate: string (length 29)
__index_level_0__: int64 (0 to 179k)
I am implementing a simple program in C++ and I am facing a problem. I have a Student class with fname and lname attributes. I am returning a Student instance from the findLoser method. I am able to print the fname but not the lname, and it is not giving any error. I tried printing the length of lname from the getter and it gives the correct result, and I also tried printing lname letter by letter using indexing and that works, but when I try to print the whole lname nothing is printed. Below are the header and remaining files.

#ifndef CLIST_HPP
#define CLIST_HPP
#include <string>
using namespace std;

class Student {
private:
    string fname;
    string lname;
    bool seen;
public:
    Student(){};
    Student(string fname, string lname);
    string getLname(){ return this->lname; };
    string getFname(){ return this->fname; };
    void setSeen(){ seen = !seen; };
    bool getSeen(){ return seen; };
};

class Cell {
public:
    Cell* next;
    Cell* prev;
    Student* student;
};

class Clist {
public:
    friend class Cell;
    Clist(){ current = nullptr; tracker = nullptr; temp = nullptr; count = 0; }
    void print();
    Cell* current;
    Cell* tracker;
    Cell* temp;
    int count;
    Student* findLoser();
    void insert(string fname, string lname);
};
#endif

Above is the header file.

#include <iostream>
#include <fstream>
#include <string>
#include "Clist.hpp"
using namespace std;

Student::Student(string fname, string lname){
    this->fname = fname;
    this->lname = lname;
    this->seen = false;
}

Student* Clist::findLoser(){
    tracker = current;
    int i = current->student->getLname().length() - 1;
    while(true){
        while(i > 0){
            tracker = tracker->next;
            i--;
        }
        if(tracker->student->getSeen()){
            return tracker->student;
        }
        else{
            tracker->student->setSeen();
        }
    }
}

void Clist::print(){
    tracker = current;
    int i = 0;
    while(i < 6){
        cout << tracker->student->getLname() << endl;
        tracker = tracker->next;
        i++;
    }
}

void Clist::insert(string fname, string lname){
    Student* student = new Student(fname, lname);
    Cell* newCell = new Cell();
    newCell->student = student;
    if(current == nullptr){
        current = newCell;
        newCell->next = newCell;
        newCell->prev = newCell;
    }
    else{
        temp = current->prev;
        newCell->next = current;
        temp->next = newCell;
        current->prev = newCell;
    }
    count++;
}

bool checkIsFileEmpty(ifstream& pFile) {
    return pFile.peek() == std::ifstream::traits_type::eof();
}

int main(){
    ifstream inputFile("late.txt");
    if(checkIsFileEmpty(inputFile)){
        cout << "Congratulations!!!! to EveryOne No one was Late" << endl;
        return 0;
    }
    string name;
    string firstName;
    string lastName;
    Student* loserStudent;
    Clist* list = new Clist();
    while(true){
        if (inputFile.eof()){
            break;
        }
        getline(inputFile, name);
        size_t spacePos = name.find(' ');
        firstName = name.substr(0, spacePos);
        lastName = name.substr(spacePos + 1);
        list->insert(firstName, lastName);
    }
    loserStudent = list->findLoser();
    cout << loserStudent->getLname();
    cout << loserStudent->getFname();
}

And above is the main file.

Joe Smith
Paul Jones
Joseph Bryson
Jim Galin
Brian Morrisy
Jane Doe
Raj Patel

This is the text file. Could you please look into the code and point out what I am doing wrong that stops the lname from printing?
2024-03-16 15:09:28.853000000
25,985
I am trying to extract data from an AWS RDS instance. This RDS is in a different account and VPC. When I test the connection it shows successful, and my Glue crawler can crawl the data successfully. But when I run a Glue job with the same connection, I get an SSL handshake error. The RDS uses MySQL and had the certificate 'rds-ca-rsa2048-g1' but was reverted back to 'rds-ca-2019'.
2024-03-05 19:35:05.823000000
124,176
Your code has incorrect syntax; the proper syntax is:

background-image: linear-gradient(to right, red, blue);

linear-gradient specifies that you are creating a linear gradient. to right defines the direction of the gradient, going from left to right. red and blue are the colors of the gradient, where the gradient starts with red and ends with blue. You can also use hex codes. In your case it is this line:

background: linear-gradient(#f7f7f7, #eee)

It needs a direction.
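Applied to that line, a corrected version (direction added, original colors kept) might look like:

background: linear-gradient(to right, #f7f7f7, #eee);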
2024-03-21 20:19:19.620000000
103,554
From the Firebase FAQ: Which FCM APIs were deprecated on June 20, 2023, and what should I do if I am using those APIs? The following APIs/SDKs will be affected by the deprecation (relevant table row):

API Name: Upstream messaging via XMPP
API Endpoint: fcm-xmpp.googleapis.com:5235
Impact on users: API calls to FirebaseMessaging.send in the app won't trigger upstream messages to the app server after 6/21/2024.
Action Required: Implement this functionality in your server logic. For example, some developers implement their own HTTP/gRPC endpoint and call the endpoint directly to send messages from their clients to the app server. See this gRPC Quick start for an example implementation of upstream messaging using gRPC.

So you will have to implement your own HTTP or gRPC endpoint for this logic.
2024-03-20 13:22:26.353000000
1,673
Use multi_dropdown for selecting multiple options for filtering data or picking various preferences.
2024-03-18 12:14:19.177000000
107,882
dataset: Hi everyone, I need to find a model to answer the question of whether young people are going to the theater less than before. Could you help me find which model fits this dataset best in R?

Age: numerical value.
ParentsRelevance: ordinal value from 0 to 4. The question: do you agree that the theater is important in your home? 0 is agree 100% and 4 is totally disagree.
PersonaRelevance: ordinal value from 0 to 4. The question: do you agree that the theater is important for yourself? 0 is agree 100% and 4 is totally disagree.
Theatervisitfrequency: ordinal value from 0 to 6. 0 is never and 6 is always visiting the theater.
NotGoingReason: counts how many reasons per age.

Thanks a lot. I need to find the model with the best fit for this dataset in R.
2024-03-20 15:26:35.303000000
25,136
I'm trying to plot a scatter plot with two marginal distribution plots. I have two datasets and I want to compare the two in a single plot. The problem is that I can plot scatter plots of both datasets, but I can't plot the distribution plots of the second dataset in the marginal axes. Thanks in advance. I have come this far: I can plot the scatter and distribution plots of the first dataset, and the scatter plot of the second dataset, using the code below.

g = sns.JointGrid(data=data1, x="observed", y="predicted")
g.plot(sns.scatterplot, sns.distplot)
g.ax_joint.scatter(data=data2, x="observed", y="predicted", c='r')

The figure below shows what I get running my code. When I try to add the distribution plots of the second dataset with the code below, they replace those of the first dataset. I want to have both of them for comparison purposes.

g.ax_marg_x.hist(data=data2, x="observed")
g.ax_marg_y.hist(data=data2, x="observed", orientation="horizontal")

This is what I get by doing so.
2024-02-29 10:06:39.530000000
27,152
Let's say I have a page with a grid consisting of 2 rows, and a button which modifies the grid size by adding a 3rd row, or removing the 3rd row if it has already been added, like this:

private void Button_Clicked(object sender, EventArgs e)
{
    int count = parentGrid.RowDefinitions.Count;
    if (count == 2)
    {
        parentGrid.RowDefinitions.Add(new RowDefinition(GridLength.Star));
    }
    else
    {
        parentGrid.RowDefinitions.Remove(parentGrid.RowDefinitions[count - 1]);
    }
}

<Grid x:Name="parentGrid" RowDefinitions="40, *">
    <Button Text="Change grid" Clicked="Button_Clicked" />
    <ProgressCircle Grid.Row="1" />
</Grid>

Now, the height of the element contained in the first row of the grid will be impacted whenever the button is clicked and will either increase or decrease. What I am trying to achieve is to animate that height increase/decrease, for example doing something like this:

public ProgressCircle()
{
    SizeChanged += ProgressCircle_SizeChanged;
}

private void ProgressCircle_SizeChanged(object? sender, EventArgs e)
{
    // Here I would like the circle to slowly either shrink or increase to its target dimension.
    this.ScaleTo(1, 2000);
}

I just started using animations though and I'm not sure how to approach this; thanks in advance for any input.
2024-02-24 18:17:22.893000000
78,431
I have a Gatsby project using Bun. Today, suddenly, gatsby develop is showing me code changes I've removed. For example, there's a React component A rendering another component B. Even if I remove B from A, the site still shows B and even tracks changes in B in live preview. I've tried deleting the .cache and public dirs and even node_modules, and tried gatsby clean as well. I can't see mention of any other cache location in the Gatsby docs. I've tried other browsers as well. Any idea why it's picking up old code?
2024-02-15 00:13:33.023000000
81,285
correct_idx is a boolean array. outp2 is a tensor with the same shape as correct_idx; train_preds is a tensor whose shape matches the number of True entries in correct_idx. I want to achieve the equivalent of this numpy result:

outp2[correct_idx] = train_preds

Can you please suggest something that will also be compatible with gradient calculation?

correct_idxs2 = tf.convert_to_tensor(correct_idx)
outp2 = tf.where(correct_idxs2, train_preds, tf.constant(float('nan'), dtype=tf.float32))

This results in a high dimension.
2024-03-07 19:41:23.107000000
168,303
The reason is that the HTML content has been consumed by the first call to BeautifulSoup. So, when you pass html (the http.client.HTTPResponse instance) to BeautifulSoup a second time, there's nothing left to read. You could read the response yourself and use it multiple times like this:

from urllib.request import urlopen
from bs4 import BeautifulSoup

url = [LINK]
with urlopen(url) as response:
    if response.status == 200:
        content = response.read()
        bsObj2 = BeautifulSoup(content, "html.parser")
        bsObj3 = BeautifulSoup(content, "html.parser")
        assert bsObj2 == bsObj3
2024-02-15 10:12:32.890000000
52,946
counter/255 divides one integer by another. Just like in C or C++, this results in an integer, not a floating-point value. As such, 255 fragment shader invocations will get the same result, which is probably not the intent of the code. As to the more orderly arrangement in one picture vs. the other, that's all implementation-dependent.
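If a smooth gradient was intended, a hedged GLSL fix (assuming counter is an int) is to force floating-point division:

// force floating-point division so each invocation can get a distinct value
float shade = float(counter) / 255.0;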
2024-02-27 15:24:51.097000000
159,290
One possible fix is to make sure that the Gateway Endpoint you created to access S3 buckets is permissive enough, even if for some weird reason it is not enough to give permission on the bucket where Elastic Beanstalk stores files and metadata. I solved it by setting the endpoint policy as:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "*",
            "Resource": "arn:aws:s3:*"
        }
    ]
}
2024-03-11 17:27:18.160000000
124,107
I am currently experiencing the following issue and would greatly appreciate any assistance; thank you in advance.

Issue description: the Ribbon Workbench enable rule is not firing only when Group By is applied on an editable grid. The enable rule fires in all other cases, such as single-record selection, and even multiple-record selection (Select All) when Group By is not applied. Surprisingly, it also works for manual single-record selection even when grouping is applied; it fails only for 'Select All' when grouping is applied. Note: the editable grid is formed by applying the Editable Grid component on a normal subgrid.

Detailed explanation: I've added an 'Approve' button to the editable subgrid using Ribbon Workbench. The visibility logic of the button is controlled by an enable rule, which involves a Custom Rule calling JavaScript logic. The visibility works seamlessly with both normal subgrids and editable subgrids, except in one scenario: when Group By is applied to the editable grid. In the working scenario, when all records are selected in the editable grid and Group By is set to 'no Grouping', the 'Approve' button is visible; deselecting all records hides it. However, in the non-working scenario, when all records are selected in the editable grid and Group By is set to 'Start Date', the 'Approve' button should be visible, but it remains hidden. The issue lies in the enable rule not firing in this case, specifically when Group By is applied on 'Start Date'.
2024-03-08 12:49:35.113000000
53,727
I have some real-time CloudWatch events in a log group and I want to filter them for a specific KEYWORD and send them to SQS. I don't want to use Lambda or Kinesis; since the logs are real-time and are created every second, Lambda and Kinesis are not an option for me. I don't see any option in CloudWatch to send logs to SQS.
2024-03-21 07:22:15.723000000
154,844
From [LINK]: By default, all User models can access Filament locally. However, when deploying to production, you must update your App\Models\User.php to implement the FilamentUser contract, ensuring that only the correct users can access your panel:

public function canAccessPanel(Panel $panel): bool
{
    return str_ends_with($this->email, '[USER].com') && $this->hasVerifiedEmail();
}
2024-03-22 16:52:18.670000000
63,790
I'm working on a NodeJS integration to manage Leads. I'm currently able to create Leads and an Activity connected to the Lead using the endpoint /crm.activity.todo.add.json. Now I need to notify the Responsible of the Activity to attend that activity, so I'm trying to create a reminder for it. I can't find any way to achieve this in the Bitrix docs. However, I notice that the Activity has the fields NOTIFY_TYPE and NOTIFY_VALUE ([LINK]). I've also analyzed the web app to see if I could intercept the URLs it uses under the hood, and I noticed that it uses this PHP page ([LINK]) and sends this form data:

Action: crm.activity.todo.updatePingOffsets
Form data:
- ownerTypeId: 1 // lead type
- ownerId: 555 // lead id
- id: 1111 // task id
- value[0]: 30 // minutes

and it returned a 400 error. Any help is welcome.
2024-02-23 22:42:34.433000000
144,948
Why is the md-dialog always on top? DEMO. I tried z-index etc. and nothing works. Can you help me, CSS masters?

<hot-toast-container>
  <div style="position: fixed; z-index: 9999; top: 0; right: 0; bottom: 0; left: 0; pointer-events: none;">
    <div style="position: relative; height: 100%;">
      <div>
        <hot-toast>
          <div style="position: absolute; top: 50%; left: 50%">toast test</div>
        </hot-toast>
      </div>
    </div>
  </div>
</hot-toast-container>
<md-dialog open>
  <div slot="content">test</div>
</md-dialog>
2024-03-11 18:35:31.393000000
18,752
ChatGPT fixed the error, but I'm not sure if that's the correct way:

useEffect(() => {
  let isUnmounted = false;
  const changeRouteTimeout = setTimeout(() => {
    if (!isUnmounted) {
      sessionStorage.removeItem("redirectedFlag");
      redirect("/trainees");
    }
  }, 7500);
  return () => {
    isUnmounted = true;
  };
}, []);
2024-02-19 12:49:19.180000000
107,998
According to the manual, when defining

http.oauth2Login(oauth2 -> {
    oauth2.loginPage("/login/oauth2");
});

you need to provide a @Controller with a @GetMapping("/login/oauth2") that is capable of rendering the custom login page. You could write something like that in a servlet:

@Controller
public class LoginController {
    private final Pattern bizIpsPattern;

    public LoginController(@Value("biz-ips-pattern") String pattern) {
        this.bizIpsPattern = Pattern.compile(pattern);
    }

    @GetMapping("/login/oauth2")
    public RedirectView getAuthorizationCodeInitiationUrl(HttpServletRequest request) {
        // Instead of throwing, you could as well decide to redirect to the com IDP
        final var header = Optional.ofNullable(request.getHeader("X-Forwarded-For")).orElseThrow(() -> new MissingForwardedForException());
        final var matcher = bizIpsPattern.matcher(header);
        return matcher.matches() ? new RedirectView("/oauth2/authorization/idpbiz") : new RedirectView("/oauth2/authorization/idpcom");
    }

    @ResponseStatus(HttpStatus.UNAUTHORIZED)
    static final class MissingForwardedForException extends RuntimeException {
        private static final long serialVersionUID = 8215954933514492489L;

        MissingForwardedForException() {
            super("X-Forwarded-For header is missing");
        }
    }
}

In a reactive application, the controller method would use ServerWebExchange instead of HttpServletRequest:

@GetMapping("/login/oauth2")
public Mono<RedirectView> getAuthorizationCodeInitiationUrl(ServerWebExchange exchange) {
    final var header = exchange.getRequest().getHeaders().getOrEmpty("X-Forwarded-For");
    if (header.size() < 1) {
        throw new MissingForwardedForException();
    }
    final var matcher = bizIpsPattern.matcher(header.get(0));
    return Mono.just(matcher.matches() ? new RedirectView("/oauth2/authorization/idpbiz") : new RedirectView("/oauth2/authorization/idpcom"));
}

The SecurityWebFilterChain would be configured slightly differently too:

http.exceptionHandling(exceptionHandling -> {
    exceptionHandling.authenticationEntryPoint(new RedirectServerAuthenticationEntryPoint("/login/oauth2"));
});
2024-02-22 14:41:45.773000000
33,002
I have User and Holiday models related this way:

User model:

class User extends Authenticatable
{
    public function holidays(): HasMany
    {
        return $this->hasMany(Holiday::class);
    }
}

Holiday model:

class Holiday
{
    protected $fillable = [
        'dayoff', // DATE YYYY-MM-DD
        'user_id',
    ];

    public function user(): BelongsTo
    {
        return $this->belongsTo(User::class);
    }
}

Given $month and $year, I would like to get all the users and their related holidays (dayoff). How can I achieve that?

Edit: I re-wrote the below

use Illuminate\Database\Eloquent\Builder;

$users = User::whereHas('holidays', function (Builder $query) use ($year, $month) {
    $query->whereYear('dayoff', $year)->whereMonth('dayoff', $month);
})->get();

But this gives the list of users without their related holidays (the list of dayoffs of each user).
2024-02-19 12:39:44.263000000
87,145
Turns out the answer is yes; the particular field is .HostConfig.AutoRemove, so for example:

docker inspect --format='{{.HostConfig.AutoRemove}}' $id
2024-03-23 14:59:03.793000000
28,873
From your code, I see that by brackets you mean parentheses. This option is configurable using arrow-function-parentheses (see the Prettier documentation). In your case, you want: arrowParens: "avoid"
2024-02-25 08:42:06.233000000
11,499
I inherited a project that requires efficient data handling and query execution. I'm interested in optimizing performance and I'd like to know about the feasibility of parallelizing query execution using the Memgraph C++ client. Is it possible to execute multiple queries in parallel directly through the Memgraph C++ client? If it can be done, what is the recommended approach to implement this? Are there specific patterns to follow (e.g., managing connections, handling concurrency, etc.)?
2024-02-15 10:37:33.610000000
83,723
I am using EventBridge to deliver events to my Lambda. My Lambda stores the data associated with the event in the database. I also have one EventBridge schedule which keeps triggering these events every 15 minutes. Now an issue is happening: suppose my Lambda got an event for Event A and it timed out due to too much data to process, so it tries to run again. At the same time, my 15-minute scheduler tries to process the same event again since it's still not completed. I know there is a way to disable retries, but I don't want to disable all retries, e.g. for cases where the target (Lambda) is not available or some other issue occurs. I just want to target the rerun on timeout. Though I can maintain state for the event, my only worry is that if the Lambda crashes or times out before I change the state, my event will be left in an invalid state. Is there a way to disable rerunning the Lambda on timeout for the event?
2024-03-05 08:05:31.393000000
28,461
You should set the port, database, username, and password in the .env file with the correct settings. That means you need a database user that can access the database, and the database itself must exist.
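For reference, a Laravel-style .env sketch (the variable names and values here are assumptions; adjust to your setup):

DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=your_database
DB_USERNAME=your_user
DB_PASSWORD=your_password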
2024-03-20 09:59:59.210000000
142,799
You cannot do it with ion-datetime, but you can use ion-picker to get the result you want. The HTML will be something like this (showDateTime is a boolean that you will use to show and hide the ion-picker):

<ion-picker [isOpen]="showDateTime" [columns]="monthYearPickerColumns" [buttons]="monthYearPickerButtons"></ion-picker>

For the months, the years, and the buttons, you can use the following configuration:

monthOptions = Array.from(new Array(12), (x, i) => i + 1).map(val => ({
  text: (val + '').padStart(2, '0'),
  value: val,
}));

yearOptions = Array.from(new Array(176), (x, i) => i + 1924).map(val => ({
  text: `${val}`,
  value: val,
}));

public monthYearPickerColumns: PickerColumn[] = [
  {
    name: 'month',
    options: this.monthOptions,
    selectedIndex: this.monthOptions.findIndex(option => option.value === ((new Date).getMonth() + 1))
  },
  {
    name: 'year',
    options: this.yearOptions,
    selectedIndex: this.yearOptions.findIndex(option => option.value === ((new Date).getFullYear()))
  },
];

public monthYearPickerButtons: PickerButton[] = [
  {
    text: 'Cancel',
    role: 'cancel',
  },
  {
    text: 'Ok',
    handler: (data) => {
      console.log(data.month.value);
      console.log(data.year.value);
    },
  },
];
2024-02-15 00:16:10.863000000
56,686
If you are using React, I would recommend the TextareaAutosize library. It's nothing but a simple textarea under the hood, but it frees you of the hassle of styling issues, and if you already have styling for textarea you don't need to add anything either.

On the terminal:

npm install react-textarea-autosize

In code:

import TextareaAutosize from 'react-textarea-autosize';
// If you use CommonJS syntax:
// var TextareaAutosize = require('react-textarea-autosize').default;

React.renderComponent(
  <div>
    <TextareaAutosize />
  </div>,
  document.getElementById('element')
);
2024-02-29 23:51:45.107000000
178,704
I had to do a similar transition in my project from Webpack to Vite. We were using HtmlWebpackPlugin with .ejs templates and also tried to solve the transition with the same plugin as you did, but never got it working correctly. I don't remember the original problem we faced with that particular plugin, but I eventually ended up writing my own simple plugin for the requirements we had: [LINK]. I hope this helps if someone is facing similar problems.
2024-02-28 08:27:24.627000000
145,391
I am building an app where the user can pick multiple images, and I lay all the images out in a gallery view. I need to show an "i" icon on images that have very low quality. My question is: how do I check the quality of an image in Flutter?
2024-02-27 06:36:12.520000000
87,627
I tried a realtime data streams project using Kafka, Elasticsearch, Kibana, and Postgres with Docker Compose and Flink. My data streams like this: Kafka -> Flink -> Elasticsearch and Postgres. When I write Kafka stream data into Elasticsearch, on the Kibana dev tools console (GET index/_search or GET index) I can't find new data until I cancel the Flink job: Flink job starts -> can't find new data on Kibana -> cancel Flink job -> now I can see new data on Kibana. Part of my code is:

DataStream<Transaction> transactionStream = env.fromSource(source, WatermarkStrategy.noWatermarks(), "Kafka source");

transactionStream.sinkTo(
    new Elasticsearch7SinkBuilder<Transaction>()
        .setHosts(new HttpHost("localhost", 9200, "http"))
        .setEmitter((transaction, runtimeContext, requestIndexer) -> {
            String json = convertTransactionToJson(transaction);
            IndexRequest indexRequest = Requests.indexRequest()
                .index("transactions")
                .id(transaction.getTransactionId())
                .source(json, XContentType.JSON);
            requestIndexer.add(indexRequest);
        })
        .build()
).name("Elasticsearch Sink");

The Postgres DB update is fine. I use a Mac, Java version 11, Flink 1.18.0, flink-connector-kafka 3.0.1-1.18, flink-sql-connector-elasticsearch7 3.0.1-1.17.

What I tried: attaching the setBulkFlushInterval(30000) option, because I found this log: WARN org.apache.flink.connector.elasticsearch.sink.ElasticsearchWriter - Writer was closed before all records were acknowledged by Elasticsearch. But another error occurs: Unable to parse response body for Response{requestLine=POST /_bulk?timeout=1m HTTP/1.1, host=[LINK], response=HTTP/1.1 200 OK}

I also cloned the original code repository; my code is exactly the same as this repository: [LINK]. So I cloned it and tried to execute it, but the same problem happens. On his YouTube channel this problem doesn't happen. What can I do about this?
2024-03-11 14:18:00.633000000
140,130
I'm attempting to determine the time based on the timezone specified in each row using Polars. Consider the following code snippet:

df = pl.DataFrame({
    "time": [datetime(2023, 4, 3, 2), datetime(2023, 4, 4, 3), datetime(2023, 4, 5, 4)],
    "tzone": ["Asia/Tokyo", "America/Chicago", "Europe/Paris"]
}).with_columns(c.time.dt.replace_time_zone("UTC"))

df.with_columns(
    tokyo=c.time.dt.convert_time_zone("Asia/Tokyo").dt.hour(),
    chicago=c.time.dt.convert_time_zone("America/Chicago").dt.hour(),
    paris=c.time.dt.convert_time_zone("Europe/Paris").dt.hour()
)

In this example, I've computed the time separately for each timezone to achieve the desired outcome, which is [11, 22, 6], corresponding to the hour of the time column according to the tzone timezone. Even then, it is difficult to collect the information from the correct column. Unfortunately, the following simple attempt to dynamically pass the timezone from the tzone column directly into the convert_time_zone function does not work:

df.with_columns(c.time.dt.convert_time_zone(c.tzone).dt.hour())
# TypeError: argument 'time_zone': 'Expr' object cannot be converted to 'PyString'

What would be the most elegant approach to accomplish this task?
2024-03-05 10:39:59.400000000
51,377
Initially I have one node at position (0, 0). In the next iteration I want to add node 2 at position (1, 0) and add an edge between them. So after the 2nd iteration I will have a graph with nodes at (0, 0) and (1, 0) connected by an edge. I then want to use this graph as the starting point for the next iteration: add another node and more edges so that I have nodes at (0, 0), (1, 0), (2, 0). Can anyone provide sample code for this using NetworkX? I am new to Python as well as NetworkX! I have tried using a for loop, but I did not manage to use each result as the initial point for the next iteration.
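A minimal sketch of the loop described above (variable names are placeholders; this assumes plain integer node labels with positions kept in a separate dict):

import networkx as nx

G = nx.Graph()
pos = {}

for i in range(3):            # three iterations: nodes 0, 1, 2
    G.add_node(i)
    pos[i] = (i, 0)           # node i sits at (i, 0)
    if i > 0:
        G.add_edge(i - 1, i)  # link the new node to the previous one

print(list(G.edges()))        # [(0, 1), (1, 2)]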
2024-02-23 03:47:20.010000000
19,663
I want to build an Image Cropper myself. Because I need to zoom into the image and also detect drags on the borders/corners of the Cropping Window, I need to use Modifier.transformable() and Modifier.pointerInput(detectDragGestures {}) together. But one of them always blocks the other one. Is there a way to use those two Modifier together? Or do I build my own Modifier.transformable to get it to work?
2024-03-08 16:27:15.103000000
60,097
I see in many places that the fastest solution to a problem (often complex ones) sometimes has this line: int&& a = something; What does it really mean? To test, I tried int&& a = 5; against int b = 5;, and both a and b print the same value, 5. However, in the disassembly I see

void function() {
    int&& a = 5;
    int b = 5;
}

as

1   function():
2     push rbp
3     mov rbp, rsp
4     mov DWORD PTR [rbp-16], 5
5     lea rax, [rbp-16]
6     mov QWORD PTR [rbp-8], rax
7     mov DWORD PTR [rbp-12], 5
8     nop
9     pop rbp
10    ret

(lines 4, 5, 6 correspond to int&&): 3 instructions against 1. So, what operations is it performing, and how does it make code faster? I searched int&& online, but could not find any reference or simple example explaining this.
2024-02-10 06:17:50.670000000
150,942
You also get the error ReferenceError: require is not defined if one of the files you import has the term mock in it, e.g. generate-user-mock.ts. This is a hidden contract that enters some sort of special mocking mode. Renaming the file to e.g. generate-user-data.ts solves the problem.
2024-03-05 11:15:22.430000000
31,869
As per Microsoft : Use the special data source filename :memory: to create an in-memory database. When the connection is closed, the database is deleted. When using :memory:, each connection creates its own database. The linked page suggests a different connection string format for persistent in-memory databases: Data Source=InMemorySample;Mode=Memory;Cache=Shared
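A short usage sketch with Microsoft.Data.Sqlite (the table-free keep-alive pattern shown here is illustrative):

using Microsoft.Data.Sqlite;

// A shared, named in-memory database lives as long as at least one connection stays open.
using var keepAlive = new SqliteConnection("Data Source=InMemorySample;Mode=Memory;Cache=Shared");
keepAlive.Open();

// Other connections with the same connection string now see the same database.
using var conn = new SqliteConnection("Data Source=InMemorySample;Mode=Memory;Cache=Shared");
conn.Open();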
2024-02-26 18:52:57.207000000
162,099
Install LimeReport-qt-6-4 1.7.6, a Python binding for LimeReport, on Windows. I tried to install it on Ubuntu 23.10 using pip install LimeReport-qt-6-4 and it worked (Python 3.11 and PySide6 6.6.2). I'm trying to install it on Windows but it stops with "Installing build dependencies ... error". I tried different versions of Python (3.9, 3.11, 3.12) and PySide6 (6.4.2 and 6.6.2) but it still doesn't work. Any suggestions how to install it on Windows? This is the full error message I got:

Collecting LimeReport-qt-6-4
  Using cached LimeReport-qt-6-4-1.7.6.tar.gz (15.5 MB)
  Installing build dependencies ... error
  error: subprocess-exited-with-error

  × pip subprocess to install build dependencies did not run successfully.
  exit code: 1
  [9 lines of output]
  Collecting cmake>=3.18
    Using cached cmake-3.28.3-py2.py3-none-win_amd64.whl.metadata (6.5 kB)
  Collecting setuptools>=42
    Using cached setuptools-69.1.1-py3-none-any.whl.metadata (6.2 kB)
  Collecting wheel
    Using cached wheel-0.42.0-py3-none-any.whl.metadata (2.2 kB)
  ERROR: Ignored the following versions that require a different python version: 6.3.0 Requires-Python <3.11,>=3.6; 6.3.1 Requires-Python <3.11,>=3.6; 6.3.2 Requires-Python <3.11,>=3.6; 6.4.0 Requires-Python <3.11,>=3.6; 6.4.0.1 Requires-Python <3.12,>=3.7; 6.4.1 Requires-Python <3.12,>=3.7; 6.4.2 Requires-Python <3.12,>=3.7; 6.4.3 Requires-Python <3.12,>=3.7; 6.5.0 Requires-Python <3.12,>=3.7; 6.5.1 Requires-Python <3.12,>=3.7; 6.5.1.1 Requires-Python <3.12,>=3.7; 6.5.2 Requires-Python <3.12,>=3.7; 6.5.3 Requires-Python <3.12,>=3.7
  ERROR: Could not find a version that satisfies the requirement shiboken6==6.4.2 (from versions: 6.6.0, 6.6.1, 6.6.2)
  ERROR: No matching distribution found for shiboken6==6.4.2
  [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error

× pip subprocess to install build dependencies did not run successfully.
exit code: 1
See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
2024-03-10 15:31:22.187000000
94,943
Use os.scandir instead of os.walk. Your use of os.walk loads the entire directory tree into memory, and then iterates over it. os.scandir returns an iterator that steps through the tree without loading it all into memory.
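A minimal sketch of a lazy recursive walk with os.scandir (the function name is illustrative):

import os

def iter_files(root):
    # Yields file paths one at a time instead of materializing the whole tree.
    with os.scandir(root) as it:
        for entry in it:
            if entry.is_dir(follow_symlinks=False):
                yield from iter_files(entry.path)
            else:
                yield entry.path

for path in iter_files("."):
    print(path)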
2024-03-11 15:41:39.267000000
101,492
Two users get the same error ("This item isn't available"). I am able to download and update the internal test. Both users have signed up to the program and I have only made an internal test track; both their emails are on the list. I ran out of ideas, so I've also started closed beta testing, but that'll take a few days. What I've checked: the email list (both emails are on the list); enabled developer mode on the phones; enabled internal app sharing in the Google Play settings; both phones are shown as compatible and have a correct version of Android.
2024-02-09 23:34:08.993000000
65,375
I am getting a list of docs when hitting my Solr query. The strange issue I am facing is that when I try to explain the score calculation for a document, the score comes out different from what is shown in the doc list. Example: score for a doc in the docs:

[{ "id": "1DMOXQ3VN7FF", "score": 6.[HASH] }, { "id": "6WNYN6NVVPDU", "score": 6.699659 }]

Now the score for doc id 6WNYN6NVVPDU when I try to explain it:

{ "match": true, "value": 10.562147, "description": "some score calc logic here" }

You can see two scores for the same doc. I have never seen this in Solr before. What is the possible explanation here? Thanks!
2024-03-14 13:07:51.463000000
5,586
I have the following razor.cs code that calls a function in a scoped service. The function in the scoped service reads a dynamically changing process value from a REST API. When I call the function once from my razor.cs page, I can read the value without problems. When I call the function repeatedly (cyclically), I see that the value changes correctly on each call inside the service function (from the log), but the value on the razor page only shows the first value read and is not updated on the following calls. What could be wrong?

My razor page code:

<table>
    <tr>
        <td>Process item</td>
        <td>@valuescreen</td>
    </tr>
</table>

My razor.cs code:

public string valuescreen = "";

public async Task Readvaluefromscopedservice()
{
    for (int i = 0; i < 10; i++)
    {
        valuescreen = await Task.Run(() => ServiceAPI.ReadProcessvalue(url));
        StateHasChanged();
        Thread.Sleep(1000);
    }
}

My function in the scoped service (ServiceAPI):

public string valuefrom_API = "";

public string ReadProcessvalue(string url)
{
    // some code that delivers the value from an API
    using (StreamWriter sw = File.AppendText(logpath))
    {
        // in the logfile I can see the updated 10 values for each call
        sw.WriteLine("itemvaluer=" + valuescreen + " " + Convert.ToString(DateTime.Now));
    }
    return valuefrom_API;
}
2024-02-23 10:13:22.030000000
90,257
I have an element that has position:fixed and width:100%. The problem is that none of the parent elements have a fixed width, so my element is actually larger than any of its parent elements. The implementation of my code is quite complex, but I think I was able to simplify things. In the code below you will notice that the green box is larger than its parent elements.

export default function App() {
  return (
    <div style={{ width: "100%", padding: "0px 15px", boxSizing: "border-box", height: "80px" }}>
      <div style={{ width: "100%", border: "1px solid blue", height: "80px" }}>
        <div style={{ width: "100%", border: "1px solid transparent", height: "80px" }}>
          <div style={{ width: "inherit", height: "25px" }}>
            <div style={{ width: "inherit", height: "25px" }}>
              <div style={{ width: "inherit", position: "fixed", height: "25px", border: "1px solid green" }} />
            </div>
          </div>
        </div>
      </div>
    </div>
  );
}
2024-03-26 18:42:06.103000000
115,743
I have been searching to no avail about the autocomplete feature seen in the screenshot above, which seems to have suddenly appeared around half a year ago. It provides auto-completion for PowerShell cmdlets and certain other external commands (such as git), and only appears within the VS Code integrated terminal where PowerShell is used (no such feature for cmd.exe or other shells). For me the most important question is: how can it be permanently disabled? It was causing much pain and disruption when using the integrated terminal, such as double typing of each and every character, or outright permanent terminal deadlock (needing to kill and restart).
2024-03-19 17:47:09.493000000
35,070
Lately I'm working on a project that uses raw pointers in some places in the code. Everything seemed fine and I wanted to upgrade my code by using smart pointers, just for safety like everyone says, but it turned out to give wrong results. I was looking for the bug for hours until I realized this fatal inconvenience of shared_ptr, which I will summarize with the little experiment shown below.

This is the experiment code:

int main()
{
    // code 1
    int arr[5] = { 1, 5, 6, 7, 9 };
    cout << &arr[2] << endl;
    shared_ptr<int> ptr = make_shared<int>(arr[2]);
    cout << ptr << "\n";

    // code 2
    cout << &arr[2] << endl;
    int* ptr1 = &arr[2];
    cout << ptr1 << "\n";
}

Basically, what I want is a pointer that points to an element in the array (my project uses vector, but this experiment with an array shows the same issue) and holds the same address as that element. For example, in code 2 (the old method using a raw pointer) the two cout statements give the same result, but not in code 1, which just prints two separate addresses. After debugging for about half a minute, I realized that make_shared<int>(arr[2]) in code 1 creates a pointer that points to an address holding the same value as the element in the array, but does not point to the element in the array itself, which is quite inconvenient because I was expecting it to act like code 2.

Additional info: when I asked ChatGPT whether code 1 would give the same result, it answered yes, which proves this problem is not obvious to many people. By the way, any suggestions for a fix? Thank you.
2024-03-10 11:38:08.003000000
17,132
You may split the initial string into an array of tokens and use the handy hasAny function to check if two arrays intersect in some elements.

WITH ['Man', 'Sisters'] AS filterwords
SELECT title
FROM VALUES (
    'title String',
    'The Colour of Magic',
    'The Light Fantastic',
    'Equal Rites',
    'Wyrd Sisters',
    'Reaper Man'
) t
WHERE hasAny(splitByWhitespace(t.title), filterwords)

The result is:

title
Wyrd Sisters
Reaper Man
2024-03-22 06:58:22.517000000
154,284
Thanks to [USER]'s comment. My actual code defined format_read_func_mapping in the module's global namespace, and that interfered with the patch. After I moved format_read_func_mapping into my_func as a local variable, the mock/patch works.
2024-02-22 08:56:18.627000000
101,499
Adding the attributes data-keyboard="false" and data-backdrop="static" to your modal div will keep the Escape key from closing the window.
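For example (Bootstrap 3/4-style markup assumed; note that Bootstrap 5 renames these attributes to data-bs-keyboard and data-bs-backdrop):

<div class="modal fade" id="myModal" data-keyboard="false" data-backdrop="static" tabindex="-1">
  <!-- modal dialog content -->
</div>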
2024-03-19 16:34:25.673000000
83,241
You can use to_struct and unnest() to convert the returned list to separate columns:

df.with_columns(
    cnt=pl.col('x').map_elements(usesobject)
).with_columns(
    pl.col('cnt').list.to_struct(fields=['countof10', 'countof12'])
).unnest('cnt')

x    | countof10 | countof12
---  | ---       | ---
i64  | i64       | i64
11   | 1         | 0
22   | 1         | 1
2024-02-28 09:10:22.103000000
72,078
Node.js doesn't support ES modules by default. You have to change "module": "ESNext" to "module": "commonjs" in the tsconfig.json in your bot folder. If you want to keep ESNext as your TS output module type, you should add --experimental-modules to your start command, as the package.json below shows (other options omitted):

{
  "scripts": {
    "start": "node --experimental-modules ./lib/index.js"
  }
}

You can test your project by running:

cd /yourprojectroot/bot
npm install
npm run build
npm run start

The outcome of your changes will determine whether you see error output or the server starts.
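For reference, the relevant tsconfig.json fragment for the CommonJS route (other compiler options omitted):

{
  "compilerOptions": {
    "module": "commonjs"
  }
}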
2024-02-07 07:50:46.447000000
2,401
I was just in need of same thing and asked on discord and got not satisfying answer below. To set a cron trigger value from an environment variable like {{CRON_SCHEDULE}} instead of a hardcoded cron expression, you would typically need to ensure that the environment variable is correctly set in the environment where Hasura is running. However, based on the provided documentation and search results, there isn't a direct mention of using environment variables specifically for cron schedules in the creation of cron triggers through the Hasura Console, CLI, or Metadata API. In general, Hasura allows the use of environment variables for various configurations, especially when defining webhook URLs for event triggers, which can be set to use environment variables directly in the console or through the Hasura metadata files. For cron triggers, the documentation primarily focuses on specifying the cron schedule directly in the configuration. If your goal is to dynamically set the cron schedule based on different environments (e.g., development, staging, production), you might consider a couple of approaches: Use Hasura Metadata API or CLI for Automation: You could automate the deployment or update of your cron triggers using the Hasura Metadata API or CLI. In your CI/CD pipeline, you can have a step that replaces a placeholder in your metadata definition files with the appropriate value from an environment variable before applying the metadata. This way, you can maintain different cron schedules for different environments outside of Hasura's configuration. External Configuration Management: Manage your cron trigger configurations externally. You can have a script or a small service that reads the cron schedule from environment variables and then uses the Hasura Metadata API to create or update the cron triggers accordingly. This script or service can be part of your deployment process. As of my last update, directly using environment variables to specify the cron schedule in the Hasura Console or through metadata files without some form of preprocessing or external management isn't documented. It's always a good idea to check the latest Hasura documentation or reach out to the Hasura community for new features or workarounds that might have been introduced.
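To make the first approach concrete, a hypothetical CI step (the metadata file path and variable name are assumptions) could substitute the placeholder before applying metadata:

# Replace the placeholder in the exported metadata, then apply it.
sed -i "s|{{CRON_SCHEDULE}}|${CRON_SCHEDULE}|g" metadata/cron_triggers.yaml
hasura metadata apply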
2024-03-23 23:13:15.970000000
31,324
I solved this problem by deleting non-ASCII symbols from external files in my project (deleting Russian comments in the jslib file, in my case). Unity version: 2022.3.10f1
2024-03-16 10:58:23.553000000
97,589
<Extension Name="CSVASCII" Type="Microsoft.ReportingServices.Rendering.DataRenderer.CsvReport,Microsoft.ReportingServices.DataRendering">
    <OverrideNames>
        <Name Language="en-US">CSV (ASCII)</Name>
    </OverrideNames>
    <Configuration>
        <DeviceInfo>
            <Encoding>ASCII</Encoding>
        </DeviceInfo>
    </Configuration>
</Extension>

It was necessary for me to add the name override, or I got two CSV options. You can also put Visible="false" inside the Extension tag. I cannot find anything on ms.com about configuring these, but I did find this article that shows the name override: [LINK]
2024-03-13 18:36:33.867000000
9,996
initialValues only works on first render. After that, if you want to set values, use form.setFieldsValue for multiple values or form.setFieldValue for a single field. If you want to reset the form, use form.resetFields. I removed some of the code without changing the actual requirement. I create the form instance in the Account component instead of ModalForm and pass form as a prop to the ModalForm component and connect it to the form. Also, there's no need to use Form.Provider: you can directly pass the onFinish function as a prop to ModalForm and connect it to the form. There's also no need for the useResetFormOnCloseModal hook. Here's the complete code:

import { EyeInvisibleOutlined, EyeOutlined, PlusOutlined } from '[USER]-design/icons';
import { Button, Form, type FormInstance, Input, Modal, Space, Table } from 'antd';
import { useState } from 'react';

interface UserData {
  id: string;
  name: string;
  studentid: string;
  password: string;
}

interface ModalFormProps {
  open: boolean;
  onCancel: () => void;
  mode: 'add' | 'edit';
  form: FormInstance;
  onFinish: (values: UserData) => void;
}

const ModalForm = ({ open, onCancel, mode, form, onFinish }: ModalFormProps) => {
  return (
    <Modal title={`${mode === 'add' ? 'Add' : 'Edit'} Account`} open={open} onOk={form.submit} onCancel={onCancel}>
      <Form form={form} layout='vertical' onFinish={onFinish}>
        <Form.Item label='Name' name='name' rules={[{ required: true, message: 'Please input the name!' }]}>
          <Input />
        </Form.Item>
        <Form.Item label='Student ID' name='studentid' rules={[{ required: true, message: 'Please input the student ID!' }]}>
          <Input />
        </Form.Item>
        <Form.Item label='Password' name='password' rules={[{ required: true, message: 'Please input the password!' }]}>
          <Input.Password />
        </Form.Item>
      </Form>
    </Modal>
  );
};

const Account = () => {
  const [showPasswordsFor, setShowPasswordsFor] = useState<Array<string>>([]);
  const [modalOpen, setModalOpen] = useState(false);
  const [modalMode, setModalMode] = useState<'add' | 'edit'>('add');
  const [form] = Form.useForm<UserData>();

  const togglePasswordVisibility = (key: string) => {
    setShowPasswordsFor((prevState) => (prevState.includes(key) ? prevState.filter((k) => k !== key) : [...prevState, key]));
  };

  const handleAdd = () => {
    setModalOpen(true);
    form.resetFields();
    setModalMode('add');
  };

  const handleEdit = (record: UserData) => {
    setModalOpen(true);
    form.setFieldsValue(record);
    setModalMode('edit');
  };

  const handleCancel = () => {
    setModalOpen(false);
  };

  const onFinish = (values: UserData) => {
    console.log('Received values of form: ', values);
    setModalOpen(false);
  };

  const columns = [
    { width: '8%', title: '#', key: 'index', render: (_: string, __: UserData, index: number) => index + 1 },
    { width: '20%', title: 'Name', dataIndex: 'name', key: 'name' },
    { width: '30%', title: 'Student ID', dataIndex: 'studentid', key: 'studentid' },
    {
      width: '30%',
      title: 'Password',
      dataIndex: 'password',
      key: 'password',
      render: (text: string, record: UserData) => (
        <Space>
          {showPasswordsFor.includes(record.id) ? text : '*'.repeat(12)}
          {showPasswordsFor.includes(record.id) ? (
            <EyeInvisibleOutlined onClick={() => togglePasswordVisibility(record.id)} />
          ) : (
            <EyeOutlined onClick={() => togglePasswordVisibility(record.id)} />
          )}
        </Space>
      )
    },
    {
      width: '12%',
      title: 'Action',
      key: 'action',
      render: (_: string, record: UserData) => (
        <Space size='middle'>
          <a onClick={() => handleEdit(record)}>Edit</a>
        </Space>
      )
    }
  ];

  return (
    <Space direction='vertical' size={16}>
      <Button type='primary' icon={<PlusOutlined />} onClick={handleAdd}>
        Add Account
      </Button>
      <Table
        scroll={{ x: 540 }}
        size='small'
        columns={columns}
        dataSource={[
          { id: '1', name: 'John Doe', studentid: '2018-0001', password: 'password' },
          { id: '2', name: 'Jane Doe', studentid: '2018-0002', password: 'password' },
          { id: '3', name: 'John Doe', studentid: '2018-0003', password: 'password' }
        ]}
        rowKey='_id'
        bordered
      />
      <ModalForm mode={modalMode} open={modalOpen} onCancel={handleCancel} form={form} onFinish={onFinish} />
    </Space>
  );
};

export default Account;
2024-03-06 13:51:00.497000000
19,335
I found out that adding "homepage": "[LINK]" fixed the problem.
2024-03-11 08:34:33.193000000
128,672
You should be able to run either pip install scipy or conda install scipy to fix the error.
2024-02-09 00:35:53.313000000
77,591
I have created a bubble map with a geographical map at the base and points represented on it in the form of bubbles. The points have latitude longitude information with the help of which they get rendered on the map. I am using highcharts library for doing the same in a typescript project. My requirement is that the area in which bubbles are present should be zoomed into when the map gets loaded. The remaining map which is blank need not be shown. I tried using the mapView.fitToBounds() method but the zooming in is not working. Reference: [LINK]> Since, I have latitude and longitude information, I need to convert it to pixel values first because mapView.fitToBounds() expects pixel values as inputs. I tried using the mapView.lonLatToPixels() method for the same. Reference: [LINK]> I tried calling these methods on the chart instance but it doesn't have any effect on the output. I expect the area defined by the minimum and maximum latitude and longitude values to be zoomed into whenever the map loads. I need to use the above mentioned mentioned methods for my project setup. Could you please let me know what is it that I am missing while using them? Link to code sandbox: [LINK]>
2024-03-20 07:54:35.583000000
91,593
It is generally good practice to include a new start state as you mentioned, but for this particular problem it is unnecessary. A new start state is used to prevent the machine from producing unexpected strings when the initial start state is pointed to by arrows coming from non-accept states. But there is no such transition arrow in this particular example, so introducing a new start state is unnecessary.
2024-02-29 00:45:15.903000000
29,662
When you call CollectionReference#add() method it: Returns a DocumentReference with an auto-generated ID, after populating it with provided data . This means that it writes the data to Firestore with an auto-generated ID. If you need to have this ID before writing the data to Firestore, then you have to call DocumentReference#set() on a DocumentReference object. This also means that such a document contains an id property which contains: This document's given ID within the collection. Once you have this ID, you can then pass it to the CollectionReference#doc() function so you can write the data to Firestore.
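A short sketch with the Firebase JavaScript SDK (v8-style namespaced API assumed; the collection name and data are placeholders):

const docRef = db.collection('items').doc(); // the ID is generated locally; nothing is written yet
console.log(docRef.id);                      // available before any write
docRef.set({ name: 'example' });             // writes the data under that known ID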
2024-03-01 07:43:48.970000000
107,195
I'm trying to get our .NET project set up locally on a Mac using VS Code. I have been running into build issues and was finally able to narrow it down to not having access to our private Azure repo. We have a few packages that our project uses stored in there. Initially the issue was that it just couldn't find them. The fix was to create a nuget.config file and add the feed source to it. But I was still running into access errors. It turns out my account wasn't granted access to Azure, so I got that taken care of. I can now access the package file (JSON) directly in the browser no problem, but am still running into this error when building the project: error NU1301: Unable to load the service index for source.

What I've looked into:

- Double-checked that the account I'm logged into VS Code with is the same account I can log into Azure with. This supposedly should "just work".
- Created a PAT in Azure and added it to my nuget.config file as such:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <clear />
    <!-- (optional) this is for clear sources like default Offline repo -->
    <add key="nuget.org" value="[LINK]" protocolVersion="3" />
    <add key="[myKey]" value="[LINK]..." protocolVersion="3" />
  </packageSources>
  <packageSourceCredentials>
    <MyPrivateFeed>
      <add key="Username" value="[myUsernameForThePAT]" />
      <add key="ClearTextPassword" value="[myPAT]" />
    </MyPrivateFeed>
  </packageSourceCredentials>
</configuration>

- Deleted the NuGet cache under .local/share/nuget.
- Googled 'nuget' issues extensively and learned that Mac has a rough go at it, as VS Code doesn't get any of the NuGet management tools Visual Studio gets, and the CLI tool for NuGet is an .exe file.

And now I am out of ideas. Any suggestions as to what to look into next are appreciated!
2024-02-29 23:33:53.670000000
109,943
Try to run in your terminal:

python manage.py makemigrations powerlistapp

and then:

python manage.py migrate

If this solution does not help, could you please send your project structure (dir and file names)?
2024-03-15 12:12:47.517000000
53,101
The error seems to be happening because you're appending .id to each.value, which isn't needed. You can simply use each.value directly in your aws_instance resource like this:

subnet_id = each.value
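In context, a hedged sketch (the resource and variable names here are assumptions):

resource "aws_instance" "this" {
  for_each      = toset(var.subnet_ids)   # set of subnet ID strings
  ami           = var.ami_id
  instance_type = "t3.micro"
  subnet_id     = each.value              # already the ID string; no .id attribute needed
}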
2024-03-13 22:32:21.090000000
17,129
I am trying to set custom claims (something like username, id, etc.) in the JWT token that I obtain from [LINK], and later pass it to my service. When I decode the token I do not find the username, but the scope exists. Here is my request:

curl --location '[LINK]' \
--header 'Authorization: Basic Y19uYVlMWTVQTF9rYndvNmsyeVZzYWc1cWcwYToyU1RUMWlrQ2VwNUc5UHZZcExXRmtlOGs3SDRh' \
--header 'Content-Type: application/json' \
--data '{
    "username": "asdasd",
    "grant_type": "client_credentials",
    "scope": "test1"
}'

I got this:

{
    "sub": "admin",
    "aut": "APPLICATION",
    "aud": "cnaYLY5PLkbwo6k2yVsag5qg0a",
    "nbf": [HASH],
    "azp": "cnaYLY5PLkbwo6k2yVsag5qg0a",
    "scope": "default",
    "iss": "[LINK]",
    "exp": [HASH],
    "iat": [HASH],
    "jti": "[HASH]-03a6-4e1d-a7f5-[HASH]",
    "client_id": "cnaYLY5PLkbwo6k2yVsag5qg0a"
}

but I expected this:

{
    "username": "asdasd",
    "sub": "admin",
    "aut": "APPLICATION",
    "aud": "cnaYLY5PLkbwo6k2yVsag5qg0a",
    "nbf": [HASH],
    "azp": "cnaYLY5PLkbwo6k2yVsag5qg0a",
    "scope": "default",
    "iss": "[LINK]",
    "exp": [HASH],
    "iat": [HASH],
    "jti": "[HASH]-03a6-4e1d-a7f5-[HASH]",
    "client_id": "cnaYLY5PLkbwo6k2yVsag5qg0a"
}

Even if the result I'm after can't be achieved with this approach, I want to know, and I'd like to hear what you think of it.
2024-02-15 12:00:54.550000000
70,077
I have a SplideJS slider with a number of slides. I need to perform an action when the last slide is active. I checked the documentation but didn't find any reference to this kind of event, so I wrote a script to achieve that goal, but I need some help.

My code:

// Class for each slide
const splideSlide = document.querySelector('.splide__slide')

// Check if a next element sibling is available
if (splideSlide.nextElementSibling === undefined) {
    console.log("On last, I call another action")
} else {
    console.log("Not last")
}
2024-03-07 10:30:26.183000000
58,423
I want to enforce Row-Level Security (RLS) in PostgreSQL with row security policies for ALL users, including admins and table owners:

ALTER ROLE postgres WITH NOBYPASSRLS;        -- enforce for the superuser
ALTER TABLE items ENABLE ROW LEVEL SECURITY;
ALTER TABLE items FORCE ROW LEVEL SECURITY;  -- enforce RLS for table owners

CREATE POLICY neveranythingpolicy ON items
    FOR ALL          -- cannot do anything
    USING (false);   -- never true

Still, I can query ALL items as user postgres.
2024-02-20 15:01:22.507000000
82,437
The telnet input contains two characters at the end: '\r' and '\n'. The problem is solved by checking for those two values at the end of the string. Credit to: [USER]
2024-03-07 10:03:53.467000000
114,004
The problem is that your API JSON starts with a JSON object, and you are parsing it with the wrong data format, List<BookshelfBook>. The API service should change to this:

@GET("?q=jazz+history")
suspend fun getBooks(): BooksVolumes

Add a BooksVolumes data class to match the structure of the API JSON:

@Serializable
data class BooksVolumes(
    val kind: String,
    val totalItems: Int,
    val items: List<BookshelfBook>
)
2024-02-17 17:08:56.250000000
18,358
Accessing Nested Objects

As we can see, the object structure is rather complex: we need to go deep inside all the subobjects, i.e. into each day and then into each time. To do that, we can write:

this.jsonData = {
  "Monday": [
    {
      "morning": [ { "starttime": "02:00", "endtime": "07:30" } ],
      "afternoon": [ { "starttime": "02:00", "endtime": "05:00" } ],
      "evening": [ { "starttime": "02:30", "endtime": "07:00" } ]
    }
  ],
  "Tuesday": [
    {
      "morning": [ { "starttime": "02:00", "endtime": "07:30" } ],
      "afternoon": [ { "starttime": "02:00", "endtime": "05:00" } ],
      "evening": [ { "starttime": "02:30", "endtime": "07:00" } ]
    }
  ],
}

for (var day in this.jsonData) {
  console.log("Day: " + day);
  // Accessing the array of schedules for the current day
  var schedules = this.jsonData[day];
  // Looping through the array of schedules
  for (var i = 0; i < schedules.length; i++) {
    var schedule = schedules[i];
    // Looping through the keys (morning, afternoon, evening)
    for (var timeOfDay in schedule) {
      console.log("Time of day: " + timeOfDay);
      // Accessing the array of time slots for the current time of day
      var timeSlots = schedule[timeOfDay];
      // Looping through the array of time slots
      for (var j = 0; j < timeSlots.length; j++) {
        var timeSlot = timeSlots[j];
        console.log("Start time: " + timeSlot.starttime);
        console.log("End time: " + timeSlot.endtime);
      }
    }
  }
}

PS: Instead of logging everything, you can add a search key to find a particular day or time, and you can also save those values somewhere.
2024-02-17 08:10:30.937000000
84,600
Just to add my scenario in case it helps someone: I was having trouble connecting to a newly spun up EC2 Amazon Linux 2 instance, getting this exact error: No supported authentication methods available (server sent: gssapi-keyex,gssapi-with-mic). SSH was configured correctly and the keys were working on other instances; I could connect when I enabled password authentication. After much searching and testing, the issue was that we use a Directory-as-a-Service provider called JumpCloud, and their default registration of machines changed so that "Enable Public Key Authentication" was now unticked by default, despite it being enabled in ssh_config.
2024-03-14 23:04:45.707000000
7,151
I updated the maven-surefire-plugin version and it resolved both issues:

<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-surefire-plugin</artifactId>
<version>2.22.1</version>

Now my test cases run perfectly both locally and in Jenkins.
2024-03-27 10:57:58.227000000
154,667
I am attempting to:

1. Route some messages from an AWS IoT Topic to a Step Function using Rules
2. Choose a specific Alias to run for that Step Function

However, in the AWS Console (web interface) you can only select the State machine; there aren't any further options for versions or aliases. Is it possible to target a specific version or alias from the AWS IoT Rules actions?
2024-02-22 16:32:14.080000000
20,836
You have to add a test for whether or not column_a is zero, then check the appropriate param in each case:

SELECT *
FROM mytable
WHERE (column_a = param1 OR column_a = 0)
  AND (column_b = param2 OR column_a <> 0)

If a is zero, then the first clause is always true. If a is nonzero, then the second clause is always true.
2024-02-15 10:54:11.293000000
51,549
I am a FreeCodeCamp learner trying to use regular expressions in JavaScript to clean an input string by removing specific characters. I've declared a regex variable and assigned it a pattern, but I'm having trouble integrating it into my cleanInputString function. The existing code uses a loop to iterate through the string and create a new array, but I've been advised that using regular expressions might be more efficient. This is the original code I'm trying to replace:

function cleanInputString(str) {
  const strArray = str.split('');
  const cleanStrArray = [];
  for (let i = 0; i < strArray.length; i++) {
    if (!["+", "-", " "].includes(strArray[i])) {
      cleanStrArray.push(strArray[i]);
    }
  }
}

Here's my attempt using regular expressions:

function cleanInputString(str) {
  const regex = /hello/;
  for (let i = 0; i < regex.length; i++) {
    if (!["+", "-", " "].includes) {
      regex.push
    }
  }
}

I'm struggling with integrating the regex into the loop and properly replacing the existing logic. Can someone guide me on how to use regular expressions efficiently in this context?
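For what it's worth, a regex version that matches the loop's intent (stripping "+", "-", and spaces) typically needs no loop at all; a hedged sketch:

function cleanInputString(str) {
  // [+\s-] matches any one of "+", whitespace, or "-"; the g flag removes all occurrences.
  return str.replace(/[+\s-]/g, "");
}

console.log(cleanInputString("10 kg - 2kg + 3")); // "10kg2kg3"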
2024-03-08 11:27:45.100000000
80,357
Dataset.from_generator() needs the generator function itself as its first parameter, not an instantiated generator object like the one you provided with e.g. train_gen. The correct code would be (with the args parameter for the generator arguments): files = [os.path.join(folder_path, f) for f in os.listdir(folder_path) if f.endswith('.dat')] train_files, validation_files = train_test_split(files, test_size=validation_split) train_dataset = tf.data.Dataset.from_generator( data_generator, args=(tf.constant(train_files), tf.constant(batch_size), tf.constant(num_classes)), output_signature=( tf.TensorSpec(shape=(None, NUM_OF_FEATURES), dtype=tf.float32), tf.TensorSpec(shape=(None, NUM_OF_CLASSES), dtype=tf.float32) ) ) Also, I'd not batch the data inside the generator. Dataset has its own .batch(size) method for that, and also padded_batch for filling with e.g. zeros. Beware that I seldom work with generators, so I'm not exactly sure about that. Here is an example of a Dataset from a generator, which then gets batched and padded.
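To make the batching point concrete, here is a minimal self-contained sketch (all names are illustrative, not taken from the question):

import tensorflow as tf

def gen():
    # Yield one (features, label) pair at a time; no batching inside the generator
    for i in range(10):
        yield [float(i)], [i % 2]

ds = tf.data.Dataset.from_generator(
    gen,  # the function itself, not gen()
    output_signature=(
        tf.TensorSpec(shape=(1,), dtype=tf.float32),
        tf.TensorSpec(shape=(1,), dtype=tf.int32),
    ),
)

# Batching is handled by the Dataset pipeline instead
for features, labels in ds.batch(4):
    print(features.shape, labels.shape)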
2024-02-15 14:57:03.703000000
125,996
I've called the Azure consumption API in PowerBI to get up to date information on resource costs etc. However there is the issue of having long term up to date information, where instead of having just the previous month to this date, I would like the last year to this date. Ideally I want to get data from the current date back to the date the cloud infrastructure was started, but for now a year will do. The issue I'm having is a type issue, or in PowerBI it is called a DataFormat error. The error says We couldn't convert to date, however my code has converted the date to text via the Date.ToText() function. The body of the request looks like this: currentdate = Date.ToText(Date.From(DateTime.LocalNow)), body = { "type": "Usage", "timeframe": "Custom", "timeperiod": { "from": "2023-03-01T00:00:00+00:00" "to": "" & currentdate & "", }, "dataset": { "aggregation": { "totalCost": { "name": "Cost", "function": "Sum" } }, "granularity": "Daily", "grouping": [ { "type": "Dimension", "name": "ResourceGroup" }, { "type": "Dimension", "name": "ServiceName" } ] } } The code I'm focusing on is the timeframe key pair. The from is set to the 1st of March 2023, and the to is supposed to be set to whatever the current date is. Does anyone know how to get the script to recognise that currentdate has been converted to text?
2024-03-28 15:12:57.247000000
136,755
We are developing a Smart Home / IoT product that integrates with HomeKit. The integration is mostly done and interaction with the product works great. We have only one problem: naming inside the Home app. Each time we add one of our devices to the Home app, the service names are changed to the name of the accessory. Here is an example with the HomeKit tree: Accessory Service: Accessory Information Characteristics: name = my accessory Service: Light Bulb Characteristics: name = light 1 Service: Light Bulb Characteristics: name = light 2 Service: Light Bulb Characteristics: name = light 3 Service: Light Bulb Characteristics: name = light 4 Service: Temperature Sensor Characteristics: name = temperature sensor 1 When adding this device with 4 lights and 1 temperature sensor to HomeKit, all the services inside the app get renamed to "my accessory"; they do not keep their names. It is especially strange since HomeKit knows about the names of the services: if I modify the name of the first light, HomeKit will suggest (in the background of the text input, in gray) to rename it "light 1". I know that my characteristic declarations are correct because I verified them with the HomeKit Accessory Tester. What can I do so that the services offered by my device are not renamed to the name of the accessory itself? Thanks in advance.
2024-03-12 12:58:58.060000000
46,946
activate.bat is only compatible with cmd.exe. When you're using PowerShell, you want to use activate.ps1 instead -- or just invoke activate and let the shell select an appropriate implementation.
2024-03-12 17:48:19.650000000
169,213
I tried import 'chrome', which works, but unfortunately you won't be able to compile the TS since it's not an actual module: Error: Cannot find module 'chrome'. It turns out you can just reference the chrome namespace for types directly without importing in TS files, like const manifest: chrome.runtime.ManifestV3 = { manifest_version: 3, ... } or call a function like chrome.storage.local.get(["key"]).then((result) => { console.log("Value is " + result.key); });
2024-02-18 03:59:44.787000000
164,285
I have a dataframe of weather data that I am reading in from a csv file, and two of the columns, 'SeaLevelPressure' and 'WindSpeed', have numeric values that are suffixed with an 's' I would like to remove. However when I use: df['SeaLevelPressure'] = df['SeaLevelPressure'].str.replace('s','') df['WindSpeed'] = df['WindSpeed'].str.replace('s','') the result is that for the first half of the rows the 'SeaLevelPressure' values are replaced with null, and at the same row in the dataframe, in the last half of the rows, the values for 'WindSpeed' are replaced with null. The data type for both columns is object. Here is sample code that will download the csv from NOAA and print a csv before and after applying str.replace. The break in null values for the two columns happens at 2020-09-09 16:52, as you can see in the second csv file that is output. import pandas as pd url = '[LINK]' df = pd.read_csv(url) df = df[df.REPORT_TYPE == 'FM-15'] df = df[['DATE', 'HourlyDryBulbTemperature','HourlyRelativeHumidity','HourlySeaLevelPressure','HourlyWindSpeed','HourlyPrecipitation']] df.rename(columns={'HourlyDryBulbTemperature': 'TempF', 'HourlyRelativeHumidity':'RelHumidity', 'HourlySeaLevelPressure':'SeaLevelPressure','HourlyWindSpeed':'WindSpeed','HourlyPrecipitation':'Precip'}, inplace=True) df.to_csv('weather_bf_replace.csv', index=False) df['SeaLevelPressure'] = df['SeaLevelPressure'].str.replace('s','') df['WindSpeed'] = df['WindSpeed'].str.replace('s','') df.to_csv('weather_after_replace.csv',index=False) Interestingly, if I save the df to a temp csv prior to doing the str.replace and then read the temp csv back into a df and apply str.replace to that dataframe, it works fine. I tried adding the str.replace to the original dataframe right after reading the csv and I get the same behaviour, so the few lines of filtering and renaming the columns are not causing the issue. I also examined the original csv file around the datetime where the break occurs and there is nothing unusual in the data. Thanks in advance for the help. I am at my wits' end with this.
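One plausible explanation (worth verifying against the actual file): read_csv infers dtypes per chunk by default, so part of a column can come in as floats while the rest are strings, and .str.replace returns NaN for every non-string entry. A small sketch of the symptom and a fix, with made-up values:

import pandas as pd

# Mixed str/float entries, as can happen when read_csv infers types per chunk
s = pd.Series(["29.92s", 30.01, "29.85s"])

print(s.str.replace("s", ""))              # the float entry becomes NaN
print(s.astype(str).str.replace("s", ""))  # cast first and nothing is lost

# Reading with pd.read_csv(url, dtype=str) would avoid the mix entirely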
2024-02-28 14:48:05.913000000
156,474
For those of you who have found this issue when using React, you may want to take into consideration that this also occurs with paste events. Building on [USER]-garcia's response above, you need to add an onPaste handler to the component to intercept this: - onKeyDown={handleEmailKeyDown} onPaste={handleEmailPaste} Handlers: - const handleEmailKeyDown = (e: KeyboardEvent<HTMLInputElement>): void => { if (e.key === ' ') e.preventDefault() } const handleEmailPaste = (e: ClipboardEvent<HTMLInputElement>): void => { if (e.clipboardData.getData('text').trim() === '') e.preventDefault() } Note I used trim() in case of multiple whitespaces being pasted (very much an edge case all round).
2024-02-17 16:01:54.123000000
117,604
I have a list of numbers and I want to highlight duplicates. It's already working in its current state with conditional formatting and a custom formula: =COUNTIF(B:B;B1)>1 However, I would like to improve on that: I only want it to highlight the cell's value if the duplicate is within the last 150 rows of that column, or after the current cell. I tried the first part with going backwards, like =COUNTIF(B1000:B850;B1000)>1, but the range always gets re-ordered again.
2024-03-19 14:55:50.663000000
35,932
I am trying to use 'jsonwebtoken' to create a JWT with RS256. I have created the keys with this command: ssh-keygen -t rsa -b 4096 -m PEM -f filename The output for the private key looks like this: -----BEGIN RSA PRIVATE KEY----- Proc-Type: 4,ENCRYPTED DEK-Info: AES-128-CBC,[HASH] bQ4mTHOuQgGobjCKwfgOAml1BIa8Qs7VMuGTRYDyXFCNjx+5gdz687z1GdwEQlFu GYbD15... -----END RSA PRIVATE KEY----- I read in the private key with 'fs' and pass it to jsonwebtoken with: jwt.sign(myData, privateKey, options) However, I am getting this error every time: Error: secretOrPrivateKey must be an asymmetric key when using RS256 I have looked it up, and others with the same issue have solved it by putting the key in the format that I have. It seems to be in the correct format to me, but jsonwebtoken refuses to accept it. I tried removing the 2 header lines (Proc-Type and DEK-Info) but that doesn't help at all. Why is it claiming that my key is incorrect? How do I create the correct key if this is not it?
2024-02-26 15:31:26.763000000
177,198
I have a live chat support button on a Next.js 13 app that is added as follows in the component that renders the button: import Script from 'next/script'; ... <div id={`onlinehelp-button-${codePlan}`}> <Script id="chatjs" strategy="afterInteractive" dangerouslySetInnerHTML={{ __html: `var OnlineHelpAPI=OnlineHelpAPI||{};(function(t){function e(e){var a=document.createElement("script"),c=document.getElementsByTagName("script")[0];a.type="text/javascript",a.async=!0,a.src=e+t.siteid,c.parentNode.insertBefore(a,c)}t.chatbuttons=t.chatbuttons||[],t.chatbuttons.push({codeplan:"${codePlan}",divid:"onlinehelp-button-${codePlan}"}),t.siteid=[HASH],t.maincodeplan="${codePlan}",e("[LINK]="),setTimeout(function(){t.loaded||e("[LINK]=")},5e3)})(OnlineHelpAPI||{})`, }} /> </div> When the language changes, the codePlan changes, and then a different live support chat should be available in a different language. This works if I reload the page after changing the language, but it's a much better UX if there is no page reload, because a reload isn't necessary for the language change. I can't figure out how to re-add/re-initiate the script on language change. When the codePlan changes I'm just left with the empty <div id={`onlinehelp-button-${codePlan}`}> but no script is added to the page's HTML, and therefore no Live Chat button. Is there a way to achieve this?
2024-03-20 10:14:24.880000000
119,266
Thanks everyone, I just needed to disable the firewall: sudo ufw disable
2024-02-28 10:46:37.753000000
9,548
I'm trying to execute a SQL query in Power Automate flow using OData query language. Here's my SQL query: SELECT columnname1, columnname2, columnname3 FROM tablename WHERE CAST(TransactionDate AS DATE) >= DATEADD(day, -7, CAST(GETDATE() AS DATE)) AND CAST(AmountinUSD AS FLOAT) >= 10000 AND TYPEOFTRANSACTION = 'PURCHASE' AND TransactionCountry = 'US'; Note: The data in the TransactionDate column looks like '2024-03-21'. Below is how I translated it into OData: Filter query: TransactionDate ge datetime'2024-03-03T00:00:00Z' and AmountinUSD ge 10000 and TYPEOFTRANSACTION eq 'PURCHASE' and TransactionCountry eq 'US' Select query: columnname1, columnname2, columnname3 However, when I run this, I keep getting an error message: Unrecognized 'Edm.String' literal 'datetime'2024-03-03T00:00:00Z'' at '31' in 'TransactionDate ge datetime'2024-03-03T00:00:00Z' and AmountinUSD ge 10000 and TYPEOFTRANSACTION eq 'PURCHASE' and TransactionCountry eq 'US''. inner exception: Unrecognized 'Edm.String' literal 'datetime'2024-03-03T00:00:00Z'' at '31' in 'TransactionDate ge datetime'2024-03-03T00:00:00Z' and AmountinUSD ge 10000 and TYPEOFTRANSACTION eq 'PURCHASE' and TransactionCountry eq 'US''. clientRequestId: [HASH]-d0e8-4e45-b63e-[HASH] Can someone help me troubleshoot this issue? My flow looks like this: The details of the query look like this:
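As an aside, this kind of rolling date filter is easy to prototype outside the flow; here is a small Python sketch (purely illustrative, field names mirror the question) that composes the last-7-days filter string:

from datetime import datetime, timedelta, timezone

week_ago = (datetime.now(timezone.utc) - timedelta(days=7)).strftime("%Y-%m-%dT%H:%M:%SZ")
filter_query = (
    f"TransactionDate ge {week_ago} "
    "and AmountinUSD ge 10000 "
    "and TYPEOFTRANSACTION eq 'PURCHASE' "
    "and TransactionCountry eq 'US'"
)
print(filter_query)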
2024-03-21 04:25:08.507000000
18,162
The answers here didn't make much sense to me and it took me a while to figure things out. Let's say I have this string: return this.logSomething('text'); and I want to turn it into this: this.logSomething('text'); return; ...then this is what the find field should contain: return (this.logSomething\('[a-zA-Z0-9]*'\);) and this is what the replace field should contain: $1 return; The insight here is the additional parentheses within the find field. They delimit your groups, which you then reference with $1, $2, and so forth.
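The same capture-group trick, sketched in Python for comparison (re uses \1 in the replacement where the VS Code replace field uses $1):

import re

line = "return this.logSomething('text');"
# The parentheses capture group 1, which the replacement references as \1
fixed = re.sub(r"return (this\.logSomething\('[a-zA-Z0-9]*'\);)", r"\1 return;", line)
print(fixed)  # this.logSomething('text'); return;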
2024-02-15 06:01:28.737000000
115,617
You can follow the approach below: Use a Get Metadata activity to get the list of all files present within the folder via the Child items property. Use a Lookup activity to get the list of expected files. Use a Filter activity to get the list of matching files, and if that count equals the overall file count, proceed ahead. Somewhat similar thread for reference: [LINK] (how to check and compare the file names that are inside a folder (Data Lake) using ADF)
2024-02-27 11:22:57.987000000
24,892
After some back-and-forth on Parcel's GitHub, here's a solution. Essentially, I needed to create a custom resolver: In a .parcelrc file: { "extends": ["[USER]/config-default"], "resolvers": [".parcel-resolvers.mjs", "..."] } In a .parcel-resolvers.mjs file (or whatever you choose to name it): import {Resolver} from '[USER]/plugin'; import path from 'path'; export default new Resolver({ async resolve({ options, specifier }) { if (specifier.startsWith('[USER]') && !['[USER]/icons'].includes(specifier)) { const propertyName = specifier .substring(11) .toLowerCase() .replace(/(-\w)/g, (m) => m.toUpperCase().substring(1)); return { filePath: path.join(options.projectRoot, `wp-${propertyName}.js`), code: `module.exports = wp['${propertyName}'];`, }; } return null; }, });
2024-03-22 20:38:24.507000000
165,556
If I understood correctly, by updating the array you mean this part: private void SpawnGem(Gem gemToSpawn, Vector2Int pos) { Gem gem = Instantiate(gemToSpawn, new Vector3(pos.x, pos.y, 0), Quaternion.identity); gem.transform.parent = transform; gem.transform.name = "Gem " + pos.x + ", " + pos.y; //this one allGems[pos.x, pos.y] = gem; gem.SetupGem(pos, this); } I think you misunderstood Instantiate's functionality. When you call Instantiate, you should give it a reference to a prefab; it will then clone that prefab into a new GameObject in the scene. If you give it an already existing GameObject in the scene, it will do a similar thing (clone it into a new GameObject), which most likely is not the desired behavior. So I think that is the problem: the gem is freshly created in this method and no one else has a reference to it, so you need to assign it in the array. After instantiating, gem and gemToSpawn have nothing to do with each other. Or maybe I didn't understand your problem correctly; if so, please specify the part of the code you had a problem with, since those are very large portions of code to track.
2024-03-24 07:59:42.213000000
163,002
To avoid permission denied errors, the following code is OK for trashing the file instead of removing it: service.files().update(fileId=fileDelete, body={'trashed': True}).execute() And if you are in a shared drive, remember to use: service.files().update(fileId=fileDelete, body={'trashed': True}, supportsAllDrives=True).execute()
2024-03-29 09:12:42.460000000
174,425
The problem is that you're incrementing p only for the non-CSV files. But your whole approach is much more complicated than needed. Put all the CSV files into a list, which you can do easily with glob.glob(). Then print the list to create the menu, and use the user's input as an index into the list. import glob def seeProject(): csv_files = glob.glob("*.csv") for i, filename in enumerate(csv_files, 1): print(f'{i}. {filename}') while True: f = int(input("Which would you like to open?\n")) if 1 <= f <= len(csv_files): break print(f"Enter a number between 1 and {len(csv_files)}") filename = csv_files[f-1] with open(filename, "r") as file: for line in file: print(line)
2024-03-11 21:48:58.667000000
116,525
This script removes comments from code: use anyhow::{Context, Result}; use std::io::{self, Read}; fn main() -> Result<()> { let mut buffer = String::new(); io::stdin() .read_to_string(&mut buffer) .with_context(|| "Failed to read from stdin")?; let modified_buffer = buffer .lines() .map(|line| { if line.trim().starts_with("//") { // If the line is a comment, return an empty string String::new() } else if let Some(pos) = line.find("//") { // If there's an inline comment, return the line up to the comment line[..pos].trim().to_string() } else { // If there's no comment, return the line as is line.to_string() } }) .filter(|line| !line.is_empty()) // Remove empty lines resulting from full-line comments .collect::<Vec<String>>() .join("\n"); println!("{}", modified_buffer); Ok(()) } If I select all the lines (e.g. in a text editor and pipe the result): // This is a comment x = 1 // This is a comment And run the script, all the comments will be deleted and no undesired new lines will remain: x = 1 But if I only select the first line and run the script, an empty line will remain on top of the second line: [There is an undesirable new line here] x = 1 // This is a comment Why is this, and how can I fix it?
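A guess at the mechanism, illustrated in Python for brevity: when the whole selection is a comment, the filtered result is empty, yet the final print (like println!) still emits one newline, which the editor pastes back in; a guard avoids it (names here are illustrative):

lines = ["// This is a comment"]        # the selected text
kept = [l for l in lines if not l.strip().startswith("//")]

print("\n".join(kept))                  # empty result, but a newline is still emitted

if kept:                                # guarded version: print only when something survived
    print("\n".join(kept))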
2024-02-21 12:48:07.623000000
158,506
I have a macOS app project implementing the direct C++/Swift interop mechanism. The project has some cpp classes and a few Swift classes. I wanted to pass a cpp object to a swift function as a parameter, so I can use it in my swift code. But the documentation is not very clear on how to do this. Can someone please share a code snippet explaining how I can achieve this? I have tried the below code, but it is giving a linker error. Swift code: import StClass_modulemap public class SwiftClass { .. public static func getCppObj(pObj: inout StClass) -> Void { ... } } Cpp code: #include "Student.hpp" #include "StClass.hpp" #include "InteropLib-Swift.h" using namespace InteropLib; void Student::Introduce() { //creating cpp obj to be passed to swift // StClass StObj(50); // Teacher::getCppObj(StObj); std::cout << "header included in hpp" << std::endl; } Calling Student::Introduce is giving a linker error for StClass.
2024-02-26 12:06:35.237000000
127,680
Hi, I think you should add this code in your initialize function: raster = new ol.layer.Tile({ source: new ol.source.OSM() }); You didn't define any source for your map.
2024-02-20 11:23:07.333000000
103,098
I was wondering if anyone has used this before and has any information they could share about it? I've read through the documentation and it doesn't seem to say a lot about its implementation. I have gone and added it to an Apache server but it still seems to do nothing, so I think I might be doing something wrong.
2024-02-14 22:11:01.567000000
102,079
There is a spacer that acts like a placeholder for the right part's position: <span class="example-spacer"></span> You also need to add its CSS class, for example: .example-spacer { flex: 1 1 auto; } You can find similar implementations on the official website for Angular Material. [LINK]
2024-03-17 12:29:05.980000000
33,315
I was reading the draft of the third edition of Modern C by Jens Gustedt and came across a paragraph discussing the use of function notation for function pointer parameters. Specifically, it states that using function notation emphasizes, semantically, that a null function pointer as an argument is not allowed: /* This emphasizes that the "handler" argument cannot be null. */ int atexit(void handler(void)); /* Compatible declaration for the same function. */ int atexit(void (*handler)(void)); However, upon examining the n3220 draft, I couldn't find any mention of this specific notation or its implications. Compiling the code with the GCC trunk version doesn't yield any warnings either. Does the use of "semantically" in the paragraph imply that this notation is merely an esoteric developer convention, or is it actually defined somewhere in the C standard?
2024-03-02 10:27:32.640000000
120,977
I would like to show a corrgrapher object within Shiny, which allows a user to move the nodes. The package help says Feel free to pass it into plot, include it in knitr report or generate a simple HTML. but I am struggling with the right syntax. What am I missing? Object looks like this : Code See example below. I tried HTMLOutput/renderHTML as well. library(tidyverse) library(shiny) library(corrgrapher) if (interactive()) { ui - fluidPage( fluidRow(h1( corrgrapher plot ), plotOutput( corrgram )) ) server - function(input, output, session) { output$corrgram - renderPlot({ plotdata - corrgrapher(mtcars) plot(plotdata) }) } shinyApp(ui = ui, server = server) }
2024-02-27 04:56:11.467000000
5,634
I am new to Python and I am trying to copy the values of a 2D array to another array. I have the code R = [[0]*3]*3 p = [[1, 0.25, 0.14], [0.25, 1, 0.1], [0.14, 0.1, 1]] for i in range(0, 3, 1): for j in range(0, 3, 1): R[i][j] = p[i][j] print(R) But the output has all the rows of R as the last row of p: [[0.14, 0.1, 1], [0.14, 0.1, 1], [0.14, 0.1, 1]] Can anyone please explain why this is happening?
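For what it's worth, a short sketch of the usual explanation: the outer *3 repeats a reference to one inner list, so all three rows of R are the same object, and each assignment overwrites the single shared row. Building each row separately avoids it:

R = [[0]*3]*3
print(R[0] is R[1])              # True: every "row" is the same inner list

R = [[0]*3 for _ in range(3)]    # three independent rows
print(R[0] is R[1])              # False

p = [[1, 0.25, 0.14], [0.25, 1, 0.1], [0.14, 0.1, 1]]
R = [row[:] for row in p]        # or just shallow-copy each row of p
print(R)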
2024-02-29 09:28:48.823000000
154,867
The datasources were non-XA and I don't get this error in JBoss. I made one of my databases and driver XA, but I have a driver that I can't make XA. Is there a Glassfish setting that would stop this exception from occurring? Exception creating object: java.sql.SQLException: Error in allocating a connection. Cause: java.lang.IllegalStateException: Local transaction already has 1 non-XA Resource: cannot add more resources. at com.sun.gjc.spi.base.AbstractDataSource.getConnection(AbstractDataSource.java:119) at com.crlcorp.docqc.app.jdbc.DocQCDataConn. init (DocQCDataConn.java:27) at com.crlcorp.docqc.app.jdbc.DataLoader.getUnreviewedItems(DataLoader.java:111) at com.crlcorp.docqc.app.ejb.DocQCBean.getUnreviewedItems(DocQCBean.java:27)
2024-03-21 20:40:10.190000000
85,554