Columns: input (string, length 0 to 27.7k), CreationDate (string, length 29), __index_level_0__ (int64, 0 to 179k)
You can read the filenames in a specific directory using os. Just specify the path to the directory.

import os

filenames_list = []
for file in os.listdir('path'):
    filenames_list.append(file)

output = 'filenames.txt'
with open(output, 'w') as f:
    for filename in filenames_list:
        f.write(filename + ',\n')
2024-03-20 15:23:20.750000000
143,453
There are multiple approaches to solve this task. The best option depends on the context: Are you doing just on-the-fly JSON transformation? Or maybe you need to read different sources and deserialize them into objects to work with them in your application? How many different data sources do you have? What is the probability of changes to the format you receive data in? In the generic case, I would define some interface like:

interface IPeopleReader
{
    IEnumerable<Person> Read(string json);
}

And implement it for each of the data sources (e.g. StudentReader and EmployeeReader). While you have the same structure with just different property names, you can have a shared implementation like:

class PeopleReader : IPeopleReader
{
    private readonly string _collectionName;
    private readonly string _firstNameField;
    private readonly string _lastNameField;

    public PeopleReader(string collectionName, string firstNameField, string lastNameField)
    {
        _collectionName = collectionName;
        _firstNameField = firstNameField;
        _lastNameField = lastNameField;
    }

    public IEnumerable<Person> Read(string json)
    {
        var node = JsonNode.Parse(json);
        if (node?[_collectionName]?.AsArray() is { } array)
        {
            foreach (var item in array)
            {
                if (item?.AsObject() is { } obj)
                {
                    yield return new Person
                    {
                        FirstName = obj[_firstNameField]?.ToString(),
                        LastName = obj[_lastNameField]?.ToString()
                    };
                }
            }
        }
    }
}

And just reuse it for your cases:

class StudentReader() : PeopleReader("Students", "FirstName", "LastName") { }
class EmployeeReader() : PeopleReader("Employee", "FName", "LName") { }

This way you can be flexible about implementations: you don't duplicate much code while the structure is the same, and if one of the sources changes its format, you can just update the corresponding class to whatever code you want (e.g. define custom models within that class or do any other logic).
2024-03-23 15:03:18.547000000
15,359
I have a LitElement -based web component to assist with advance forms validation. I would like to allow extensible validators to be added later via dependency injection. In my testing the following code, IValidate classes registered before my web component is defined are available to the web component, but classes registered after the web component is defined are not: import { html, css, LitElement } from 'lit'; import { customElement, property } from 'lit/decorators.js'; import { container, injectable, InjectionToken } from 'tsyringe'; export interface IValidate { validate(input: HTMLElement): boolean; } export const ValidateToken: InjectionToken IValidate = 'ValidateToken'; [USER]() export class ZipCodeValidator implements IValidate { validate(input: HTMLElement): boolean { return true; / Implementation / }; } container.register(ValidateToken, { useClass: ZipCodeValidator }); [USER]('my-validate') export class MyValidate extends LitElement { validators: IValidate[] = []; async connectedCallback() { super.connectedCallback(); this.validators = container.resolveAll IValidate (ValidateToken); debugger; // this is hit first; this.validators.length == 1; I expected 2 } } [USER]() export class ExistsValidator implements IValidate { validate(input: HTMLElement): boolean { return true; / Implementation / }; } debugger; // this is hit 2nd container.register(ValidateToken, { useClass: ExistsValidator }); When I debug: The debugger breakpoint in connectedCallback() is hit before the 2nd debugger The ExistsValidator has not yet been registered. I am surprised that the connectedCallback is being executed before we ever hit the container.register(ValidateToken, { useClass: ExistsValidator }); I create a distinct test that works as I would expect: import 'reflect-metadata'; import { container, injectable, InjectionToken } from 'tsyringe'; import { expect } from '[USER]-bundle/chai'; export interface IFoo { validate(input: string): boolean; } export const FooToken: InjectionToken IFoo = 'FooToken'; [USER]() export class FooA implements IFoo { validate(input: string): boolean { return input === a }; } container.register(FooToken, { useClass: FooA }); [USER]() export class FooB implements IFoo { validate(input: string): boolean { return input === b }; } container.register(FooToken, { useClass: FooB }); describe('DI', () = { it('registers all validators', () = { const validators = container.resolveAll IFoo (FooToken); expect(validators.length).to.equal(4); console.log(validators.length); }); }) [USER]() export class FooC implements IFoo { validate(input: string): boolean { return input === c }; } container.register(FooToken, { useClass: FooC }); [USER]() export class FooD implements IFoo { validate(input: string): boolean { return input === d }; } container.register(FooToken, { useClass: FooD }); What am I missing? Is there a 'proper' way to mix DI ( tsyrige ) with Lit ?
2024-03-12 21:40:59.410000000
133,477
In removeOdd(), you are incrementing it on every loop iteration, even when you remove an item, which will cause you to skip the next item, or even increment out of bounds. You need to increment it only for items that you don't remove, e.g.:

void removeOdd(list<int>& li)
{
    for(list<int>::iterator it = li.begin(); it != li.end(); )
    {
        if (*it % 2)
            it = li.erase(it);
        else
            ++it;
    }
}

A better option is to use std::remove_if() with list::erase() instead of using a manual loop, e.g.:

void removeOdd(list<int>& li)
{
    li.erase(
        remove_if(li.begin(), li.end(),
            [](int i){ return (i % 2) != 0; }
        ),
        li.end()
    );
}

In C++20 and later, you can use std::erase_if() instead, e.g.:

void removeOdd(list<int>& li)
{
    erase_if(li, [](int i){ return (i % 2) != 0; });
}

UPDATE: In your test() example, your original code does not actually work:
it checks and removes 5 from the list
it skips checking 2, leaving it in the list!
it checks and keeps 8 in the list
it checks and removes 9 from the list
it skips checking 6, leaving it in the list!
it checks and removes 7 from the list
it skips checking 3, leaving it in the list!
it checks and keeps 4 in the list
it checks and removes 1 from the list
IT SHOULD STOP HERE, BUT IT DOESN'T!!!!
it skips checking 5, leaving it in the list!
it checks and keeps 2 in the list
it checks and keeps 8 in the list
it checks and keeps 6 in the list
it checks and removes 3 from the list
it skips checking 4, leaving it in the list!
Online Demo
You removed the last item from the list, and then incremented it (which is now equal to the end iterator), thus going out of bounds of the list because it != li.end() is no longer valid, and thus your code exhibits undefined behavior. So, simply by a fluke of UB, you ended up with a result of [2,8,6,4] (before sorting). You would have gotten [2,8,6,3,4] instead if the UB was not present. Fixing removeOdd() as I described above solves the UB problem, and gives the expected result: Online Demo
2024-03-06 04:52:14.420000000
34,678
I am training an image classification model for a Flutter application using Google ML Kit's custom model feature. I have tried using TensorFlow Lite Model Maker, but it didn't work due to the import and compatibility issues it is currently experiencing. I also tried using Keras to create a model and convert it to TensorFlow Lite, but I couldn't add metadata to it because of tflite-support dependency incompatibilities, both on Colab and in my personal workspace with different Python virtual environments. Most of the tutorials out there just don't seem to work anymore. Is there any way to do this?
2024-03-13 16:11:24.703000000
151,207
I fixed this error by reinstalling Android Studio with the standard setup. None of the given methods worked for me; I had tried everything available on the internet. Note that before reinstalling, you have to completely uninstall Android Studio.
2024-03-07 18:32:52.427000000
128,941
I get some data from the web browser and need to create a PDF document. The data I receive looks like the following: a rectangle for the text placement; font size; font name; text. The problem is I can't understand how to calculate the text position on the PDF document correctly. The text in the browser looks like that, but the text in my PDF looks like that. I use the X coordinate as I get it from the browser, but the text is shifted to the left. I read that each glyph has a bounding box and is placed by its center. I tried to calculate my x, y in the following way:

public Rectangle2D.Float calculatePosition(PDFont font, float fontSize, float x, float y, String text, float pageWidth, float pageHeight) {
    final PDFontDescriptor fontDescriptor = font.getFontDescriptor();
    final float ascent = fontDescriptor.getAscent();
    final float descent = fontDescriptor.getDescent();
    final float textHeight = (ascent - descent) * fontSize / 1000 + textRise;
    final float textWidth = font.getStringWidth(text) * fontSize / 1000;
    return new Rectangle2D.Float(x, y, textWidth, textHeight);
}

Can anyone help me with how to calculate the x, y position?
2024-02-23 23:07:44.543000000
130,648
I wanted to create an uber (fat) jar for my Spring Boot app, and for that I am using the maven-shade-plugin. After mvn clean install, I see there are two jars created: one is normal and the other is shaded. The normal jar works fine, but when I try to run the shaded jar it fails with the error below:

Error: Could not find or load main class com.walmart.SpringAppInitializer

Below is the plugin configuration in my pom.xml:

<properties>
    <start-class>com.walmart.SpringAppInitializer</start-class>
</properties>
<plugins>
    <plugin>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-maven-plugin</artifactId>
        <version>${spring-boot.version}</version>
        <configuration>
            <fork>true</fork>
            <mainClass>${start-class}</mainClass>
        </configuration>
        <executions>
            <execution>
                <phase>package</phase>
                <goals>
                    <goal>repackage</goal>
                </goals>
            </execution>
        </executions>
    </plugin>
    <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-shade-plugin</artifactId>
        <version>3.3.0</version>
        <executions>
            <execution>
                <phase>package</phase>
                <goals>
                    <goal>shade</goal>
                </goals>
                <configuration>
                    <transformers>
                        <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                            <mainClass>${start-class}</mainClass>
                        </transformer>
                    </transformers>
                    <filters>
                        <filter>
                            <artifact>*:*</artifact>
                            <excludes>
                                <exclude>META-INF/maven/</exclude>
                                <exclude>META-INF/*.SF</exclude>
                                <exclude>META-INF/*.DSA</exclude>
                                <exclude>META-INF/*.RSA</exclude>
                            </excludes>
                        </filter>
                    </filters>
                    <relocations>
                        <relocation>
                            <pattern>com.google.gson</pattern>
                            <shadedPattern>com.shaded.google.gson</shadedPattern>
                        </relocation>
                    </relocations>
                    <shadedArtifactAttached>true</shadedArtifactAttached>
                    <shadedClassifierName>shaded</shadedClassifierName>
                </configuration>
            </execution>
        </executions>
    </plugin>
</plugins>

Can anyone please suggest why the fat jar is not working?
2024-03-27 10:53:36.683000000
159,759
This is my code ListView.builder( itemCount: customerList.length, shrinkWrap: true, scrollDirection: Axis.vertical, itemBuilder: (BuildContext context, int index) { return Container( child: Expanded( child: ListTile( title: Text(customerList[index].customerName), subtitle: Column( children: [ Text(customerList[index].customerID), Text(customerList[index].customerAddress), ], ), ), ), ); }), I have Used Shrink Wrap and Scroll direction and I have tried Getting Rid of the expanded widget but i am still getting this error here is debug console The following assertion was thrown during performLayout(): Vertical viewport was given unbounded width. Viewports expand in the cross axis to fill their container and constrain their children to match their extent in the cross axis. In this case, a vertical shrinkwrapping viewport was given an unlimited amount of horizontal space in which to expand. RenderBox was not laid out: RenderShrinkWrappingViewport#90e43 relayoutBoundary=up23 NEEDS-PAINT NEEDS-COMPOSITING-BITS-UPDATE 'package:flutter/src/rendering/box.dart': Failed assertion: line 1972 pos 12: 'hasSize'
2024-03-05 08:10:35.030000000
23,035
still no data are being displayed Display data using Date Range, The user choose the Start Date and End Date The Problem is no data been displayed even though there is data in my Msql database please helps #My date range layout: el-row :gutter= 20 class= search-bar gutterWidth el-col :span= 6 Users | Operation Head /el-col el-col :span= 7 el-date-picker v-model= startDate type= datetime placeholder= Start Date clearable format= YYYY-MM-DD HH:mm:ss date-format= MMM DD, YYYY time-format= HH:mm / /el-col el-col :span= 7 el-date-picker v-model= endDate type= datetime placeholder= End Date clearable format= YYYY-MM-DD HH:mm:ss date-format= MMM DD, YYYY time-format= HH:mm / /el-col el-col :span= 4 el-button [USER]= searchVisit Search /el-button /el-col /el-row #My Table: div class= dataTableContent el-table :data= users el-table-column label= Name sortable template #default= { row } {{ row.firstname }} {{ row.lastname }} /template /el-table-column el-table-column prop= email label= Email sortable /el-table-column el-table-column prop= locationName label= Location sortable /el-table-column el-table-column label= Role sortable template #default= { row } {{ row.roleName }} {{ row.roleLevel }} /template /el-table-column el-table-column prop= contactNumber label= Contact Number sortable /el-table-column el-table-column prop= dateTimeCreated label= Date/Time Created sortable /el-table-column el-table-column label= Action template #default= scope section class= row d-flex flex-nowrap section class= col-4 el-button [USER]= viewUser(scope.row.id) type= primary Assign /el-button /section section class= col-4 el-button type= danger [USER]= deleteUser(scope.row.id) Delete /el-button /section section class= col-4 el-button plain [USER]= viewUserDetails( scope.row.id, scope.row.firstname, scope.row.lastname, scope.row.email, scope.row.contactNumber, ) Edit /el-button /section /section /template /el-table-column /el-table !--End table-- And i have made a method for it: computed: { filteredUsers() { const startDate = new Date(this.startDate); const endDate = new Date(this.endDate); return this.users.filter((user) = { const userDate = new Date(user.dateTimeCreated); return userDate = startDate && userDate = endDate; }); }, }, still no data are being displayed Display data using Date Range, The user choose the Start Date and End Date The Problem is no data been displayed even though there is data in my Msql database please helps
2024-03-07 00:22:17.563000000
14,190
I'm just starting with JavaFX, and I encountered 2 build errors when I hit Run on my project. I'm using NetBeans 21 and JDK 21. After I run my project, I click Run File on the Main file, and it displays this error:

Java 15 has removed Nashorn, you must provide an engine for running JavaScript yourself. GraalVM JavaScript currently is the preferred option.
BUILD FAILED
...\jfx-impl.xml:1251: The following error occurred while executing this line:
...\jfx-impl.xml:1259: Unable to create javax script engine for javascript

I've been looking for information to solve it, but I don't know what to search for.
2024-03-23 08:41:25.033000000
133,252
Folks, I have a problem where I am indexing some values in a dataframe and trying to set the values of another dataframe where those indexes match. For example, I have a timestamp: i = '2019-01-19 00:00:04'. I find the rows in the first dataframe where the timestamp matches: values = list(df1.loc[i]['qc'].values). Then, I look in a separate dataframe, where there should be a corresponding series with the same number of rows, and I try to set those rows to values: rawdata[rawdata.index == i]['qc'] = values. However, those values are not being set in rawdata to the corresponding values in df1. Why?
2024-02-13 16:32:08.500000000
84,901
Whenever you want to reference the value of another key in the schema, just use yup.ref('key'); this way you can build better conditions in your yup validation schema!
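For illustration, a minimal sketch of using yup.ref inside a schema (the field names here are hypothetical):

import * as yup from 'yup';

// "confirmPassword" must equal whatever value the sibling "password" key holds.
const schema = yup.object({
  password: yup.string().required(),
  confirmPassword: yup.string().oneOf([yup.ref('password')], 'Passwords must match'),
});

// Resolves for matching values, rejects otherwise.
schema.validate({ password: 'secret', confirmPassword: 'secret' });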
2024-02-22 15:27:18.230000000
60,730
As the title states, I have 6 string objects that I would like to update daily at midnight within Node.js. One is a regular string and the other is an array of strings. I know this isn't natively supported in Node.js, so I looked into Node-Scheduler and cron jobs, but they didn't look to do exactly what I needed. Let's say today, March 3rd, I want let answer = "Bill Gates", and tomorrow, March 4th, I want let answer = "Elon Musk". Is there a way to do this?
2024-03-04 02:01:07.007000000
154,576
If I am allowed 10MB/s bandwidth utilization, and I model that with a token bucket with this API:

class TokenBucket {
public:
    void setRefillRate(std::int64_t tokensPerMillisecond);
    bool tryConsume(std::int64_t tokens);
};

where a token means a single byte, and bandwidth is allowed to be consumed whenever tryConsume() returns true, then what refill rate should I set for this to work? If I set 10,000 tokens per millisecond (to model 10MB/s), then isn't the following possible?

1. At time t=0ms, the bucket has 10,000,000 tokens (for 10MB)
2. At time t=1ms, the client requests 10,000,000 tokens; this is allowed
3. At time t=999ms, the client requests an additional 9,980,000 tokens; this is allowed

At (3) the request is allowed because the token bucket has refilled 9,980,000 tokens (based on the refill policy of 10,000 tokens per millisecond, and 998ms having passed between points (2) and (3) above). Now I've violated the 10MB/s limitation and gone over by almost 2x the limit. How do typical implementations get around this problem?
2024-02-15 04:34:58.850000000
103,621
I'm trying to run the Get-WmiObject / Get-CimInstance cmdlets on every online computer in my domain. I thought of doing something like this:

Get-ADComputer -Filter * | ForEach-Object { #Do Something }

The problem is that if the computer isn't online I'll get an error, and as far as I'm aware I can't run that command for online computers only. So I was thinking about going on to the next computer when an error occurs, using -ErrorAction SilentlyContinue, but it would just execute the rest of the code inside the loop anyway, and I want to avoid that. Is there any way to get out of the loop in my situation when that occurs, or any other way I could do what I need? Thank you!
2024-03-27 09:23:20.710000000
142,637
This is not so much a question as a bug report, which I have now raised on GitHub. I am posting it here because lots of people use SO rather than GitHub to get help, and maybe some of you have found or will find a workaround for it. survminer::ggsurvplot doesn't seem to draw risk tables any longer. It must have been a recent update of something (I cannot say of what!), because it used to work until some time ago (I cannot say when it stopped working). This is a reprex of the example code in ?ggsurvplot; I just added risk.table = T.

library(survminer)
Loading required package: ggplot2
Loading required package: ggpubr
library(survival)
Attaching package: 'survival'
The following object is masked from 'package:survminer': myeloma
fit <- survfit(Surv(time, status) ~ sex, data = lung)
ggsurvplot(fit, data = lung, risk.table = T)
Error in as.character(call[[1]]): cannot coerce type 'closure' to vector of type 'character'

Created on 2024-03-14 with reprex v2.1.0

Session info
sessioninfo::session_info()
Session info
setting value
version R version 4.3.2 (2023-10-31)
os macOS Big Sur 11.7.6
system x86_64, darwin20
ui X11
Packages
package version date (UTC) lib source
ggplot2 3.4.4.9000 2023-11-19 [1] local
ggpubr 0.6.0 2023-02-10 [1] CRAN (R 4.3.0)
survival 3.5-7 2023-08-14 [1] CRAN (R 4.3.2)
survminer * 0.4.9 2021-03-09 [1] CRAN (R 4.3.0)
2024-03-14 12:48:55.917000000
104,969
I'm trying to understand if I can make this into CKEditor 5 widget so that it would retain the widgets add empty line before or after the widget buttons and the yellow border: div class= widget-tip p Tip: br lorem ipsum /p /div Only way I could make this work was if applying toWidget() to the widget-tip div, then also creating another div inside it and applying toWidgetEditable() to that. I'm trying to widgetize my existing templates, because there's no more [LINK]> in CKEditor 5 and it's a bit hard to add new lines after divs - after adding new template, you'd have to go to source, add some text after the new block, then come out of the source mode and continue edititing. Here's picture of the widget that has the nice yellow border and the next/prev empty line buttons. I could go with the new stucture, but our system already has the old templates in the db. I'd either have to alter them all in db or create something in the widget to add the missing inner container (I did try that, but failed). Here's my current work. Does anybody have any ideas? import { Plugin } from 'ckeditor5/src/core'; import { Widget, toWidget, toWidgetEditable } from 'ckeditor5/src/widget'; export default class WidgetTip extends Plugin { static get requires() { return [Widget]; } init() { this.defineSchema(); this.defineConverters(); } defineSchema() { const schema = this.editor.model.schema; schema.register('widgetTip', { // Behaves like a self-contained block object (e.g. a block image) // allowed in places where other blocks are allowed (e.g. directly in the root). inheritAllFrom: '$blockObject' }); schema.register('widgetTipInner', { // Cannot be split or left by the caret. isLimit: true, allowIn: 'widgetTip', // Allow content which is allowed in the root (e.g. paragraphs). allowContentOf: '$root' }); } defineConverters() { const conversion = this.editor.conversion; // widgetTip converters conversion.for('upcast').elementToElement({ model: 'widgetTip', view: { name: 'div', classes: 'widget-tip' } }); conversion.for('dataDowncast').elementToElement({ model: 'widgetTip', view: { name: 'div', classes: 'widget-tip' } }); conversion.for('editingDowncast').elementToElement({ model: 'widgetTip', view: (modelElement, { writer: viewWriter }) = { const div = viewWriter.createContainerElement('div', { class: 'widget-tip' }); return toWidget(div, viewWriter, { label: 'simple widgetTip widget' }); } }); // widgetTipInner converters conversion.for('upcast').elementToElement({ model: 'widgetTipInner', view: { name: 'div', classes: 'widget-tipinner' } }); conversion.for('dataDowncast').elementToElement({ model: 'widgetTipInner', view: { name: 'div', classes: 'widget-tipinner' } }); conversion.for('editingDowncast').elementToElement({ model: 'widgetTipInner', view: (modelElement, { writer: viewWriter }) = { // Note: You use a more specialized createEditableElement() method here. const div = viewWriter.createEditableElement('div', { class: 'widget-tip__inner' }); return toWidgetEditable(div, viewWriter); } }); } }
2024-03-21 13:59:16.013000000
165,080
Hello, for my website Fameways, when I add new CSS or a new blog post, the server gets slow for a few minutes (maybe 5-10 minutes) and then the website is fast again. It's like an ON/OFF button on the server... Does anyone know how to fix this? I have already disabled blog post autosave in WordPress, added WP Rocket and more, but it's not fixing it. Thanks for your answers. What I have tried: disable autosave in WordPress; add WP Rocket; disable the Heartbeat API in WordPress; disable all plugins not necessary in the new blog post editor.
2024-02-26 07:25:11.073000000
66,001
I've identified the issue in your code and provided a corrected version.

The Problem: The issue lies in how you're checking for valid directions in the if command in rooms[location] line. This line only checks if the direction exists as a key in the current room's dictionary, not if the corresponding value points to another room.

Code for the Solution:

rooms = {
    'Locker Room': {'go South': 'Storage Room', 'go West': 'Coaches Office', 'go North': 'Teachers Lounge', 'go East': 'Concession Stand', 'Item': 'gym clothes'},
    'Storage Room': {'go North': 'Locker Room', 'go East': 'Cafeteria', 'Item': 'basketballs'},
    'Cafeteria': {'go West': 'Storage Room', 'Item': 'sports drinks'},
    'Concession Stand': {'go West': 'Locker Room', 'go North': 'Gym', 'Item': 'nachos'},
    'Coaches Office': {'go East': 'Locker Room', 'Item': 'team films'},
    'Teachers Lounge': {'go South': 'Locker Room', 'go East': 'Math Class', 'Item': 'referees'},
    'Math Class': {'go West': 'Teachers Lounge', 'Item': 'teammates'},
    'Gym': {'Item': 'basketball team'}
}

directions: set[str] = {'go North', 'go South', 'go East', 'go West'}

def main():
    location = 'Locker Room'
    inventory = []

    def show_status():
        print('')
        print(f'You are currently in the {location}')
        print(f'You currently have: ', *inventory, sep=' ')
        if rooms[location]['Item'] in inventory:
            print('This room is empty.')
        else:
            print(f'You see ' + rooms[location]['Item'])

    while True:
        show_status()
        possible_moves = {key: value for key, value in rooms[location].items() if key in directions}  # Filter valid moves
        current_item = rooms[location]['Item']
        command = input('\nPlease enter a direction or take the available item: \n').strip().lower()
        if command in directions:
            if command not in possible_moves:  # Check if it's a valid move in the current room
                print("You can't go that way. Enter a new direction.")
            else:
                location = possible_moves[command]  # Update location based on valid move
                if location == 'Gym':
                    if len(inventory) != 7:
                        print("You've been beat! Game Over!")
                        break
                    else:
                        print('Congratulations! You have collected all the items and defeated the basketball team!')
                        break
        elif command == 'exit':
            print("Thank you for playing. Hope you enjoyed the game.")
            break
        elif command == 'get {}'.format(current_item):
            if current_item not in inventory:
                print('You got ' + current_item, 'in your inventory.')
                inventory.append(current_item)
            else:
                print('You already have this item.')
        else:
            print('Invalid input.')

if __name__ == '__main__':
    main()

Explanation:
1. Filtering Valid Moves: The possible_moves dictionary is created by iterating through the current room's items and filtering for keys that are also present in the directions set. This ensures you only consider valid directions from the current room.
2. Checking Valid Moves: The if command not in possible_moves check now verifies that the entered direction is actually a valid move from the current room, preventing invalid movements.
With this fix, your game should correctly handle user input for directions and allow players to navigate through the rooms as intended.
2024-02-23 03:27:45.613000000
26,384
I had to add ObjectType decorator to UploadedInvestmentDocumentResponse and remove InputType from OmitType(UploadedDocumentResponse, ['documentType']) , since I am creating an object that gets bind to a response field, not an input field ! opportunity-account.output.ts [USER]() class UploadedInvestmentDocumentResponse extends OmitType(UploadedDocumentResponse, ['documentType']) { [USER](ExpectedInvestmentSupportInfoType) [USER](() = ExpectedInvestmentSupportInfoType) documentType: ExpectedInvestmentSupportInfoType; } [USER]() export class OpportunityAccount extends AccountsCommonDetails { [USER]() [USER](() = String, { description: 'Customer id' }) customerId: string; // I believe issue is here ( when I commented out the field below, this error vanished !) [USER](UploadedInvestmentDocumentResponse) [USER]({ each: true }) [USER](() = UploadedInvestmentDocumentResponse) [USER](() = [UploadedInvestmentDocumentResponse], { description: 'Expected investment support info' }) expectedInvestmentSupportInfo: UploadedInvestmentDocumentResponse[]; }
2024-03-25 21:05:06.523000000
167,466
I have a huge multiindex dataframe in long format. There's only one value column. Some entries in value are of type pd.Timedelta. I got an error when trying to save that dataframe as a parquet file using to_parquet:

pyarrow.lib.ArrowInvalid: ("Could not convert Timedelta('2903 days 00:00:00') with type Timedelta: tried to convert to double", 'Conversion failed for column value with type object')

If I convert the particular value from the error message to numpy, I get the following: array([Timedelta('2903 days 00:00:00')], dtype=object)

I set up a toy example to check if it is possible at all to write pd.Timedelta to parquet. The following code works just fine:

import pandas as pd

idx = pd.MultiIndex.from_tuples(
    [("A", "x"), ("A", "y"), ("B", "x"), ("B", "y")],
    names=["idx1", "idx2"])
data = {"value": [
    pd.Timedelta(days=10), 2.5, pd.Timedelta(days=20), 5
]}
df = pd.DataFrame(data=data, index=idx)
df.to_parquet("test.parquet")
x = pd.read_parquet("test.parquet")

A simple df.iloc[0, :].to_numpy() delivers the exact same type as in my real dataframe: array([Timedelta('10 days 00:00:00')], dtype=object). I am wondering what might be the difference between my original dataframe and my toy example?
2024-03-20 08:37:23.163000000
70,381
The error message you're encountering suggests that Node.js is unable to find the main file (/usr/src/app/dist/main) when trying to start the application. This typically happens when the file specified in the CMD directive of your Dockerfile is not present or not located at the expected path within the Docker container. In your Dockerfile, you've specified CMD [ node , dist/main ], indicating that the main file of your application is expected to be located at /usr/src/app/dist/main within the container. However, it seems like either the build process or the copying process might not have worked as expected, resulting in the absence of this file. Here are a few things you can check: Build Process: Ensure that the build process (npm run build) in your Dockerfile completes successfully without any errors. This process is responsible for generating the dist folder and the main file within it. File Structure: Verify that the dist/main file is present in your project directory before building the Docker image. You can do this by navigating to the directory where your Dockerfile is located and checking if the dist/main file exists. File Copying: Double-check the COPY --from=development /usr/src/app/dist ./dist line in your Dockerfile. This line is responsible for copying the built files from the development stage to the production stage. Ensure that it's correctly copying the dist directory containing the main file. Volume Mounting: If you're using volume mounting during development or deployment, ensure that it's not interfering with the file structure within the container. Volume mounting can override the files within the container, so make sure it's not causing the main file to be inaccessible. By verifying these points, you should be able to identify the issue causing the main file to be missing within the Docker container. Once you've resolved the issue, rebuild your Docker image and try running it again.
2024-02-23 05:49:37.637000000
178,298
I have a query like this. WITH allproducts AS ( SELECT gtin, category, productname, productimage, brand, manufacturer FROM products ), clientproducts as ( SELECT * FROM allproducts WHERE clientid = usdemoaccount and isclientproduct = true ), competitor_products as ( SELECT * FROM allproducts WHERE clientid = usdemoaccount and isclientproduct = false ); Here computation for all_products happens twice because its reference twice in the below code. But we need not do it twice and save on compute if re-use the above results. The reason for this from BQ documentation mentions, non recursive CTE's are not materialized. BigQuery only materializes the results of recursive CTEs, but does not materialize the results of non-recursive CTEs inside the WITH clause. If a non-recursive CTE is referenced in multiple places in a query, then the CTE is executed once for each reference. I'm exploring alternatives to address this issue. While I understand that using temporary tables is one option, I'm concerned about potential drawbacks such as increased storage costs and concurrency issues, especially when the same API is used by multiple users with different parameters. What are some effective strategies or best practices for optimizing the performance of CTEs in BigQuery? Specifically, I'm interested in approaches that can help materialize non-recursive CTEs or improve query performance without resorting to temporary tables. Even with temporary tables, if there is a option to automatically clean up those tables to avoid storage costs and handle concurrently out of the box, that should also be preferable.
2024-03-22 11:51:52.930000000
74,511
If I enter the following code into the JavaScript console, it returns the number 1310, but why?

console.log(131+"".split(',').map(Number));

I'm adding "" to 131 to convert it to a string, then using the split function to convert that string into an array, and the map function to make sure the values in the array are Numbers. I'm not sure where the 0 is coming from after the 131; can anyone explain this?
2024-03-25 15:27:37.383000000
89,534
Currently, I use the following hardcoded approach to exclude URLs that have a response header Content-Disposition: attachment to prevent Puppeteer from downloading the files. Puppeteer version: 21.7.0 Node.js version: 20.11 NPM version: 2.4.0 await page.setRequestInterception(true); page.on( request , (interceptedRequest) = { if ( interceptedRequest.resourceType() === image || interceptedRequest.resourceType() === stylesheet || interceptedRequest.resourceType() === font || interceptedRequest.resourceType() === media || interceptedRequest.resourceType() === object || interceptedRequest.resourceType() === other || //Hardcoded workaround to prevent automatic downloads due to Content-Disposition: attachment interceptedRequest.url().includes( example.com/doc/ ) || interceptedRequest.url().includes( example.com/docs/ ) ) { interceptedRequest.abort(); } But there has to be an elegant approach. Through reading the docs and the internet, even the GitHub comments, it seems there is no such possibility yet, apart from an unsupported add-on coded by a random contributor that seems not to work as to a GitHub comment in there. So, I want to use swarm knowledge over here.
2024-03-12 14:37:15.870000000
5,005
I'm trying to get a list of rows in a table as IWebelEments to parse their contents for testing purposes I'm trying using the following piece of code (the variable has been declared globally for the class as public IList IWebElement Rows): public class SearchResults : Widget { public IList IWebElement rows; public SearchResults(IWebDriver driver) : base(driver) { rows = driver.FindElements(By.XPath( //table[[USER]='tblResults']//tr )); The strange thing is that when testing it on selectorshub this Xpath returns the correct value: But when debugging the way it's behaving it appears to be gathering 11 instances of the first row in the IList. Which is strange, since it shows that it's adding the correct number of members to the list, but for some reasons it only gets eleven instances of the first row in the table body. I have tried multiple versions of the Xpath. Making the variable local instead of global, and testing it live con the browser and with selectorshub. I wasn't able to get anything close to the wanted result. UPDATE: I have also tried making single element Xpaths using an iterator to build an Xpath for each row and i'm still finding the same problem, for some reason each of these Xpaths catch the same element on the script while on the browser they catch the proper elements: //table[[USER]='tblResults']//tbody//tr[1] //table[[USER]='tblResults']//tbody//tr[2] //table[[USER]='tblResults']//tbody//tr[3] //table[[USER]='tblResults']//tbody//tr[4] //table[[USER]='tblResults']//tbody//tr[5] and so on...
2024-03-22 21:11:12.690000000
56,491
Add this code to your robots.txt file:

User-agent: *
Disallow: /             # Disallow all by default
Allow: /$               # Allow the homepage
Allow: /privacy-policy  # Allow the privacy policy page

This configuration tells search engine crawlers to disallow everything by default but explicitly allows indexing for the homepage (/) and the privacy policy page (/privacy-policy). Make sure to adapt these directives according to your actual URLs and site structure. After making changes, it's advisable to test your robots.txt file using tools like Google's Robots Testing Tool to ensure it's working as expected.
2024-03-02 17:31:02.223000000
73,083
The dataset will be created whether there are errors or not, but you can easily delete it after the fact by checking if the number of observations is 0. You can create a macro function that does this for any given dataset.

%macro delete_data(data);
    %let dsid = %sysfunc(open(&data));
    %let nobs = %sysfunc(attrn(&dsid, nlobs));
    %let rc = %sysfunc(close(&dsid));

    %if(&nobs = 0) %then %do;
        %put NOTE: &data has no observations and will be deleted.;
        proc delete data=&data;
        run;
    %end;
%mend;

data good_data1 error_data1;
    set sashelp.cars;
    if drivetrain in('Front', 'Rear', 'All') then output good_data1;
    else output error_data1;
run;

%delete_data(error_data1);

NOTE: error_data1 has no observations and will be deleted.
NOTE: Deleting WORK.ERROR_DATA1 (memtype=DATA).
NOTE: PROCEDURE DELETE used (Total process time):
      real time 0.00 seconds
      cpu time 0.00 seconds
2024-02-16 16:38:27.507000000
97,094
Generally, all newly created objects are placed only in generation 0, not in the other generations. When generation 0 runs out of memory, the CLR performs an operation known as a collection. A collection operation has two phases: a marking phase and a compact phase. In the marking phase, the CLR marks which objects are in use and which objects are not in use. In the compact phase, objects which are in use are moved to the next generation by the CLR, and objects which are not in use are removed. At the initial stages of garbage collector development, Microsoft used 10 generations, but 10 generations were completely useless and a burden to the system because of the following reasons:
1. 94-96% of collection operations are done within generation 0 only, i.e. most of the time (90-95%) the collection operation is performed on generation 0.
2. 2-3% of collection operations are done within generation 0 and generation 1 only, i.e. at a time the collection operation is performed on both generation 0 and generation 1.
3. Only about 1% of collection operations involve generation 0, generation 1 and generation 2.
So 3 generations are enough to manage newly created objects, and this is the reason why the garbage collector has only 3 generations. Referred from CLR via C# by Jeffrey Richter, 4th edition, 2012 version.
2024-03-18 10:19:31.827000000
162,070
Explicitly flushing stdout can solve the issue: $ expect -c 'puts -nonewline \0\0\0\0 ' | xxd $ expect -c 'puts -nonewline \0\0\0\0 ; flush stdout' | xxd [HASH]: 0000 0000 .... Based on the observation stdout is line-buffered for both Tcl and Expect but seems like Expect does not flush stdout before exiting. NOTE: The issue is not specific for NULL chars. For example expect -c 'puts -nonewline foobar' can also reproduce the issue.
2024-02-06 08:26:57.897000000
47,230
When you run the program, it requests input. It does not print a prompt. The normal behavior of the program is for the terminal to sit, waiting for you to type something. After running the program, type three numbers, separated by spaces or by pressing Return or Enter . After entering all three numbers, press Return or Enter .
2024-02-19 15:50:32.157000000
65,566
To elaborate on my comment, you can pass a flag argument and check that argument during execution to determine whether the script ran at startup. Something like this:

import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--startup', action='store_true')
args = parser.parse_args()

if args.startup:
    input('The script executed during startup. Press key to continue...')
else:
    input('The script executed after startup. Press key to continue...')

This is an example using argparse from the Standard Library, but the same can be accomplished using any CLI framework like Click. Then, in the Startup folder, add a shortcut with the following target: <path to python executable> <path to python script> --startup. If using Task Scheduler, pass the --startup flag accordingly. That said, there is not enough information about what the difference is when the script runs during startup versus otherwise. It may be better to have 2 different scripts or take a completely different approach, depending on circumstances.
2024-02-12 12:15:22.797000000
80,692
What does that job do? Can you please share the source code of that job? I will update my answer accordingly. It's not a good idea to attempt a job too many times. Please specify a tries property to get a clearer error message.
2024-03-24 01:14:52.670000000
36,786
I am struggling to format my code on GitHub, can somebody help me understand the logic behind it? This QA was inspired by an actual discussion [LINK]>
2024-02-29 13:38:38.107000000
58,176
I'm using NextJS 12. I want to be able to import some JSON and csv assets in my src folder. I would prefer to be able to do this using readFileSync , but when I run next build it doesn't bundle those assets into the production bundle. I know this because I've checked the bundle for mentions of those file names. I'm not sure what to do. I could import JSON files with es6 syntax, but that's not convenient as there are a lot of different files and I would like to conditionally import one of them, not all of them at the beginning. But for csv files, I'm not sure what to do. Also, I can't use the public folder, as these assets are meant to be private and not able to be shared with the client Tldr: Need to add dynamically imported assets to production bundle in Next 12 Reproduction steps: Tried running next build . Expected to find those assets in the production bundle, but they weren't there, and wasn't able to load them in production.
2024-02-07 07:17:12.497000000
137,792
for the past few years, I've been developing only WPF applications and therefore I'm quite new to Blazor. I'm currently stuck with a problem, and even hours of searching through forums and YouTube haven't helped. I want to develop a .NET 6 Blazor Server App as an internal company intranet. Windows authentication seems to be a suitable option for this. The Windows authentication works quite well when I enable it on IIS, but only for the entire page. I've used the MS template for the Blazor Server App and I want, for example, the Index.razor to be accessible to everyone, and when someone clicks on the Counter, the login screen should appear. However, this isn't working for me at all. If I only enable Windows authentication on IIS, I have to log in when accessing the page, so already on the Index.razor. If I activate anonymous and Windows authentication together, I can access the page, but I don't see the login screen when I click on Counter . Instead, I only receive a message indicating that I am not authorized. There is no login-pupop like it is when i just use win-auth alone. Do I have some fundamental misunderstanding here? Is it even possible to do it like this? Here's some code: Program.cs ... // Add services to the container. builder.Services.AddAuthentication(NegotiateDefaults.AuthenticationScheme) .AddNegotiate(); builder.Services.AddAuthorization(options = { // By default, all incoming requests will be authorized according to the default policy. options.FallbackPolicy = options.DefaultPolicy; }); ... app.UseAuthentication(); app.UseAuthorization(); ... Tried this in Counter.razor: [USER] /counter [USER] [Authorize] PageTitle Counter /PageTitle h1 Counter /h1 p role= status Current count: [USER] /p button class= btn btn-primary [USER]= IncrementCount Click me /button [USER] { private int currentCount = 0; private void IncrementCount() { currentCount++; } } and this for testing: [USER] /counter [USER] [Authorize] PageTitle Counter /PageTitle h1 Counter /h1 hr class= my-4 / p role= status Current count: [USER] /p button class= btn btn-primary [USER]= IncrementCount Click me /button AuthorizeView Authorized p authorized! /p p logged in as [USER].User.Identity?.Name /p /Authorized NotAuthorized p not authorized! /p /NotAuthorized /AuthorizeView [USER] { p
2024-02-13 10:42:01.030000000
149,803
Imagine the following scenario: I have very large datasets hosted in an analytics data warehouse. The warehouse is very efficient at handling large analytic workloads and can scale arbitrarily. I need to process the data in a CPU-intensive way that requires loading much of the data into memory at once. I would like to use a DataFrame API (pandas-like or Spark-like). What should I consider when choosing between Ibis and Spark for such a task? It seems like the core difference is that with Ibis the compute happens in the data warehouse, whereas with Spark it happens on an external cluster. Spark seems to be the more popular choice. However, Ibis sounds like it would be cheaper and more convenient: I can use compute I am already paying for (the data warehouse itself) and avoid having to manage a Spark cluster. If this is true, I don't see why Ibis wouldn't be a more popular choice over Spark.
2024-03-28 21:34:26.947000000
77,889
Agreed! But don't you think it would be better if the installer checked that the system requirements were met before attempting an install?
2024-02-29 02:50:10.453000000
169,302
It might sound a bit vague, but I'd like to know if anyone has had issues using the Twig cache in Slim routes and managed to resolve them somehow. I passed the parameter to use the cache when loading the global Twig container, but it still seems not to use the cache in the routes that render directly in the Action.

$twig = new Environment($loader, ['debug' => true, 'cache' => sys_get_temp_dir()]);

The action renders like this:

/** [USER] Environment $twig */
$twig = $this->container->get('twig');
return $twig
    ->load('exemplo/resultado.twig')
    ->render([
        'dados' => array_map(
            static fn(Resultado $resultado) => $resultado->jsonSerialize(),
            $arrResultados
        ),
    ]);
2024-02-19 14:39:44.767000000
118,268
I would like to use checkpoints in snakemake to recover output files that are unknown before execution. I have followed the examples here [LINK]>. However, the function that is used to recover the checkpoint outputs throws and error if the output includes more than one wildcard in the path. I have tried to create a small simplified example below based on the example provided in the documentation. I am assuming I need to adjust the aggregateinput function to accept the sample wildcard too. Would anyone be able to advise? SAMPLE = [ A , B , C , D , E ] a target rule to define the desired final output rule all: input: aggregated.txt the checkpoint that shall trigger re-evaluation of the DAG an number of file is created in a defined directory checkpoint somestep: output: directory( results/{sample} ) shell: ''' mkdir -p results mkidr results/{wildcards.sample} cd results/{wildcards.sample} for i in 1 2 3; do touch $i.txt; done ''' input function for rule aggregate, return paths to all files produced by the checkpoint 'somestep' def aggregateinput(wildcards): checkpoint_output = checkpoints.somestep.get(**wildcards).output[0] return expand( results/{sample}/{i}.txt , sample=wildcards.sample, i=globwildcards(os.path.join(checkpointoutput, {i}.txt )).i) rule aggregate: input: aggregateinput output: aggregated.txt shell: cat {input} {output} This generates the error below: InputFunctionException in rule aggregate in file /gpfs/nhmfsa/bulk/share/data/mbl/share/workspaces/groups/clarkgroup/oliw/testsnakemakecheckpoint/Snakefile, line 26: Error: WorkflowError: Missing wildcard values for sample Wildcards: Traceback: File /gpfs/nhmfsa/bulk/share/data/mbl/share/workspaces/groups/clarkgroup/oliw/testsnakemakecheckpoint/Snakefile , line 23, in aggregateinput
2024-02-16 23:50:41.733000000
55,308
use env.json file instead, and while running the application with flutter run use the flag --dart-define-from-file=env.json In your case: replace your .env file with - touch env.json - echo '{ SUPABASEANONKEY : ${{ secrets.SUPABASEURL }} , SUPABASEURL : ${{ secrets.SUPABASEANONKEY }} }' env.json env.json will look something like { SUPABASEANONKEY : YOURSUPABASEANONKEY , SUPABASEURL : SUPABASEURL } In code use String.fromEnvironment('SUPABASEANONKEY') String.fromEnvironment('SUPABASE_URL') finally, add this to run the application. flutter run --dart-define-from-file=env.json
2024-03-08 09:36:33.707000000
148,081
The months unit 'M' is deprecated in recent versions and will raise the error 'Unit M is not supported'. So we can simply use the days 'D' difference divided by 30:

df['duration'] = (df['end'] - df['start'])/np.timedelta64(1, 'D')/30

Or, for precise calculations over longer periods, use to_period:

df['duration'] = (pd.to_datetime(df['end']).dt.to_period('M') - pd.to_datetime(df['start']).dt.to_period('M')).apply(lambda x: x.n)
2024-02-28 15:11:47.567000000
141,082
Both work when you want to read a data-something attribute in JS, or when the data-something was added in JS with dataset.something and you want to get it back. But in terms of performance, getAttribute is much faster than dataset (see the performance test reference):

getAttribute x 122,465 ops/sec ±4.62% (60 runs sampled)
dataset x 922 ops/sec ±0.58% (62 runs sampled)
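As an illustration (the element id and attribute name below are hypothetical), both approaches read the same data-* attribute:

// <div id="card" data-user-id="42"></div>
const el = document.getElementById('card');
if (el) {
  console.log(el.getAttribute('data-user-id')); // "42"
  console.log(el.dataset.userId);               // "42" (dataset camel-cases the attribute name)
}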
2024-02-24 13:47:17.590000000
136,537
It appears this is also resolved by the February 2024 update to 22621.1263, though they make no mention of it in the changelog .
2024-03-08 03:56:08.313000000
50,882
The documentation listed for Azure Devops Server 1.2 patch 10 lists the Configure TFX and Update tasks using TFS, but I'm unable to follow those steps for the on premise version of Azure Devops Server. When I click the link for Upload Tasks to the Project Collection, it switches to the documentation for Azure Devops Services, not Azure Devops Server, which makes me think it doesn't apply to Azure Devops Server; especially since I cannot run the command to install tfx-cli. Please clarify which steps are for Azure Devops server and which apply only to Azure Devops Services. When I clicked on the link to upload tasks to project collection, I need more details on the step to install tfx-cli. I'm able to install node.js, but the npm command doesn't work.
2024-02-20 20:54:08.630000000
52,865
I encountered this error because I used an old config from a previous Kubernetes installation or setup. Here's how I fixed it. The commands below remove the old config, copy the new config to the .kube directory, and set the correct permissions:

rm -rf $HOME/.kube || true
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Next, set up the networking and network policy:

CALICO_VERSION=3.27.2
kubectl create -f [LINK]
kubectl create -f [LINK]

Next, run the command below to get the generated token:

kubeadm token list

Next, join the worker node(s) to the cluster. Replace the placeholders with the actual values:

kubeadm join <master node IP address>:6443 --token <generated token> \
    --discovery-token-ca-cert-hash <kube config public keys>
2024-03-16 18:57:25.080000000
167,878
I am trying to make sound play once a message is sent from the service-worker. I have been following <a href="[LINK] guide</a> but it always provides the below errors, alternating: Uncaught (in promise) Error: Could not establish connection. Receiving end does not exist. Uncaught Error: Extension context invalidated. Could someone point me in the right direction? I assume I am missing something obvious. Thank you in advance! content.js // Function to check if the conditions are met and click the button function checkAndClick() { chrome.runtime.sendMessage({type: playSoundPlease }); } setInterval(checkAndClick, 5000); offscreen.html script src= offscreen.js /script offscreen.js // Listen for messages from the extension chrome.runtime.onMessage.addListener(msg = { if (message.type === playSoundPlease2 ) playAudio(msg.play); }); // Play sound with access to DOM APIs function playAudio({ source, volume }) { const audio = new Audio(source); audio.volume = volume; audio.play(); } background.js chrome.runtime.onMessage.addListener(function(message, sender) { if (message.type === playSoundPlease ) { playSound(); } }); async function playSound(source = 'notification.mp3', volume = 1) { await createOffscreen(); await chrome.runtime.sendMessage({ play: { source, volume }, type: playSoundPlease2 }); } // Create the offscreen document if it doesn't already exist async function createOffscreen() { if (await chrome.offscreen.hasDocument()) return; await chrome.offscreen.createDocument({ url: 'offscreen.html', reasons: ['AUDIOPLAYBACK'], justification: 'testing' // details for using the API }); } manifest.json { manifestversion : 3, name : Sound Test Manifest V3 , version : 1.0 , description : None , host_permissions : [ http:/// , https:/// ], action : { defaultpopup : popup.html }, background : { serviceworker : scripts/background.js }, content_scripts : [ { matches : [ https://* ], js : [ scripts/content.js ], runat : documentidle } ], webaccessibleresources : [ { resources : [ notification.mp3 ], matches : [ all_urls ] } ], permissions : [ offscreen ] } I tried adding Audio and various other permissions, async lambda functions but am not able to get this to work.
2024-03-05 21:08:18.517000000
69,673
The main problem was in the index.ts file. I had used it to export all services from the folder to simplify imports:

index.ts
export * from './api.service';
export * from './audio.service';
export * from './auth.service';

But this approach leads to circular dependencies. The best solution is to get rid of index.ts and import each service from its own file. But in my case I already had thousands of imports and I didn't want to fix all of them. So I found another solution that let my Jasmine and Cypress tests run: I added the "emitDecoratorMetadata": false property to tsconfig.json and the error disappeared!

tsconfig.json
{
  ...,
  "compilerOptions": {
    ...
    "emitDecoratorMetadata": false
  }
}
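As an illustration of the direct-import alternative mentioned above (the file and service names are hypothetical):

// Before: importing through the barrel file, which can create a cycle
// import { ApiService } from './services';

// After: import straight from the file that defines the service
import { ApiService } from './services/api.service';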
2024-03-08 10:28:37.630000000
52,438
Composable functions don't work in an exceptional way themselves, it's the same as unwrapping the function, i.e. evaluating ref('Home') in each of these components, this results in title state not being shared between them. If the intention is to share the state, it should be either global state : // titles.js export const title = ref('Home') Or if there can be multiple states per app, the state needs to be provided to component hierarchy with provide / inject : // Home.vue const title = ref('Home') provide('title', title) // Title.vue const title = inject('title')
2024-02-27 02:14:13.970000000
166,325
Present possible date-time formats: a solution that accepts any kind of date-time pattern, like yyyy-mm-ss'T'HH:mm:ss.SSSX, yyyy-mm-ss'T'HH:mm:ss.SSSZ and many more, with a single pattern that accepts all of these formats, which keeps the code compatible.
2024-03-13 13:06:21.183000000
56,408
For reading long files in Python, I recommend using Dask: import dask.dataframe as dd df = dd.read_csv('yourfile.csv') For more information, visit the Dask documentation page: [LINK]>
2024-03-04 10:38:53.527000000
99,032
As you may know, SwiftUI would run body to find the dependencies (e.g. [USER] , [USER] , [USER] ) of a view. When any of the dependencies change, body is run again to find out what has changed and SwiftUI updates the view according to that. The same applies to ViewModifier s too, with body(content:) . SwiftUI doesn't treat Date() as a dependency . It doesn't make sense for SwiftUI tried to update the view every time Date() changes, after all. When the chart is updated, SwiftUI sees that none of the dependencies of DomainModifer has changed (it literally has no dependencies!), so it doesn't call body(content:) . The first code snippet works because when the chart updates, MyChart.body is executed, and in there you call xdomain() , so xdomain() is called for every update to MyChart . The simplest way to fix this would be to not use a ViewModifier . Just put all the code in the View extension: extension View { // the View extension method and the method that generates a range // should have different names func xDomain() - some View { chartXScale(xdomain()) } } Now xdomain() will be called whenever you call xDomain() , which you do in body . You can also keep using the ViewModifier , but the ClosedRange Date should be passed from the View extension. extension View { // the View extension method and the method that generates a range // should have different names func xDomain() - some View { modifier(DomainModifer(domain: xdomain())) } }
2024-03-12 09:00:55.487000000
143,509
If I insert just the line home in the preview, then everything works. If I insert the line stringFromJNI from native-lib.cpp, then there is a render error, but if I run the app on a real device, then everything works great. I understand that the problem is that during the preview, Android Studio cannot load the JNI library:

companion object {
    init {
        System.loadLibrary("myapplication")
    }
}

How can I make the JNI library load during preview rendering?

// native-lib.cpp
#include <jni.h>
#include <string>

extern "C" JNIEXPORT jstring JNICALL
Java_com_example_myapplication_MainActivity2Kt_stringFromJNI(JNIEnv *env, jclass clazz) {
    std::string hello = "Andr";
    return env->NewStringUTF(hello.c_str());
}

In the first picture there is a Kotlin string; in the second picture there is a C++ string. I think it's about the library, since the preview works with regular strings and the strings from native-lib work on a real device.
2024-02-19 14:19:32.307000000
149,814
I'm writing code for a dropdown menu using Tailwind, and want to change the pointer-events class of the <a> elements, which are children of a peer to the button itself. I already use peer to alter the opacity of the parent of the <a> elements, but I can't figure out how to alter the properties of the <a> elements themselves based on the focus of the button element. Currently, pointer-events-none is applied to the <a> elements, and I want it to toggle when the button is focused.

<script src="[LINK]"></script>
<div class="w-full py-12 mt-3 border border-gray-200 dark:border-gray-700 rounded-xl flex flex-col justify-center items-center">
  <div class="w-48 flex flex-col text-left">
    <button id="basicDropdownButton" class="peer flex items-center h-10 px-4 font-semibold rounded-md transition-all shadow-md text-white bg-violet-500 hover:bg-violet-400 dark:hover:bg-violet-600 border border-violet-400">
      Dropdown
      <i class="fa-solid fa-chevron-down text-sm ml-auto"></i>
    </button>
    <div id="basicDropdownMenu" class="peer w-full my-1 flex flex-col justify-center items-center rounded-md transition-all bg-gray-50 dark:bg-gray-700 border border-gray-200 dark:border-gray-500 opacity-0 peer-focus:opacity-100">
      <a href="" class="py-2 w-full pointer-events-none text-black/[0.86] dark:text-white font-semibold px-4 hover:bg-gray-200 dark:hover:bg-gray-600">Link 1</a>
      <a href="" class="py-2 w-full pointer-events-none text-black/[0.86] dark:text-white font-semibold px-4 hover:bg-gray-200 dark:hover:bg-gray-600">Link 2</a>
      <a href="" class="py-2 w-full pointer-events-none text-black/[0.86] dark:text-white font-semibold px-4 hover:bg-gray-200 dark:hover:bg-gray-600">Link 3</a>
    </div>
  </div>
</div>
2024-03-03 21:09:55.727000000
64,641
Some skills are not supported in the latest version of [USER]/search-documents . Use the command below to install the beta version: npm i [USER]/search-documents[USER].1.0-beta.1 Then, execute your code. And in the skillset: You can read more about this in the documentation . Above, you can see the skills added in the preview version, but they are not present in the latest version.
2024-03-29 06:19:19.567000000
67,624
You could try constructing the request-body JSON from a hashtable like this - $fuelQuantityLiters = 676.8 $iftaFuelType = Diesel $transactionLocation = 350 Rhode Island St, San Francisco, CA 94103 $amount = 676.8 $currency = usd $transactionReference = [HASH] $transactionTime = 2024-02-22T10:20:50.52-06:00 $vehicleId = 570 Separate hashtable for the nested bit $tranHASH = @{} $tranHASH.add( amount , $amount) $tranHASH.add( currency , $currency) $restHASH = @{} $restHASH.add( fuelQuantityLiters , $fuelQuantityLiters) $restHASH.add( iftaFuelType , $iftaFuelType) $restHASH.add( transactionLocation , $transactionLocation) Add in the previous hashtable to create the nesting $restHASH.add( transactionPrice , $tranHASH) $restHASH.add( transactionReference ,$transactionReference) $restHASH.add( transactionTime , $transactionTime) $restHASH.add( vehicleId , $vehicleId) $JSON = $restHASH | ConvertTo-JSON -compress That yields the JSON you're after for the request body, then pass that to the API $response = Invoke-WebRequest -Uri '[LINK]' -Method POST -Headers $headers -ContentType 'application/json' -Body $JSON Now the -BODY parameter is passed a single argument instead of the command line being awash with quotes.
2024-02-22 18:20:33.377000000
112,763
I think a problem is in App.vue. So I recreated a little project with Vue 3 with your code and it works. So this is index.html : !DOCTYPE html html lang= en head meta charset= UTF-8 link rel= icon href= /favicon.ico meta name= viewport content= width=device-width, initial-scale=1.0 title Vite App /title /head body div id= app /div script type= module src= /src/main.js /script /body /html This is the default code I haven't changed it. ìndex.html is the mount point for your Vue application. The Vue instance attaches to this element and renders the components within it, starting with App.vue . In this file there is link to main.js : import './assets/main.css' import { createApp } from 'vue' import App from './App.vue' import router from './router' const app = createApp(App) app.use(router) app.mount('#app') In a Vue.js application, main.js serves as the entry point for the application. It is the first file that gets executed when you run your Vue app. Creating this application I decided to use a router so App.vue is like this: script setup import { RouterLink, RouterView } from 'vue-router' /script template header div class= wrapper nav RouterLink to= / Home /RouterLink RouterLink to= /about About /RouterLink /nav /div /header RouterView / /template In router folder there is index.js with routes: mport { createRouter, createWebHistory } from 'vue-router' import HomeView from '../views/HomeView.vue' const router = createRouter({ history: createWebHistory(import.meta.env.BASE_URL), routes: [ { path: '/', name: 'home', component: HomeView }, { path: '/about', name: 'about', // route level code-splitting // this generates a separate chunk (About.[hash].js) for this route // which is lazy-loaded when the route is visited. component: () = import('../views/AboutView.vue') } ] }) export default router App.vue serves as the root component from which all other components in the application are mounted or initiated. Essentially, it's the main entry point of a Vue 3 application. If you want to use your own App.vue , without router, like in your example you need to use a file home in your App.vue for example like this: script setup import HomeView from './views/HomeView.vue'; /script template div id= app HomeView/ /div /template
2024-02-29 23:18:48.267000000
137,953
Your XML has a default namespace. All XML elements are bound to it, even if we don't see it explicitly. It needs to be declared via the XMLNAMESPACES(...) clause and used in the XPath expressions. dbfiddle

with XMLtext(col) as (
  select '<?xml version="1.0" encoding="UTF-8"?>
    <purchasePlan xmlns:ns2="[LINK]" xmlns="[LINK]" xmlns:ns10="[LINK]" xmlns:ns11="[LINK]"
                  xmlns:xsi="[LINK]" xsi:schemaLocation="[LINK] [LINK]">
      <body>
        <item>
          <guid>[HASH]-d656-4441-9032-[HASH]</guid>
        </item>
      </body>
    </purchasePlan>'::xml
)
SELECT r.guid
FROM XMLtext as x,
     XMLTABLE(XMLNAMESPACES('[LINK]' AS ns1),
              '/ns1:purchasePlan/ns1:body/ns1:item'
              PASSING x.col
              COLUMNS guid varchar(50) path 'ns1:guid'
     ) as r;
2024-03-08 16:02:47.327000000
54,911
A non-boolean value can be considered falsy in JS, and the sample you've provided is a perfectly legitimate check for the presence of grid before continuing. [LINK]
2024-03-14 02:54:04.867000000
67,813
When using the new input system, some build targets such as andoid can shuffle the names of controller inputs, messing up the bindings. How can I listen to all inputs and print the name of the button pressed at runtime?
2024-02-22 15:22:46.290000000
159,614
I want to create a summary statistics table for my panel data with rMarkdown. I used sumtable() / st() from the package vtable in order to generate summary statistics by group (industries). The output is then handed over to kable for formatting. That is where my problem arises: I want to emphasize the group headings by formatting them bold and italic as well as with a lightgray background (see screenshot). Unfortunately, this only works for fewer than half of the selected rows. As you can see from the code, I feed a vector of the row numbers that I want to emphasize to rowspec . No matter what I do, I cannot get it to apply the formatting to all the rows selected in the vector. Only rows 109, 121, 133, 145 and 157 are formatted accordingly, while the other rows (1, 13, 25, 37, 49, 61, 73, 85, 97) are left completely unchanged. Interestingly, if I put 1:5 in the first slot of the vector, it works for rows 2 to 5, but not for 1. I don't get it and I am going mad about this... So, how can I get this to run, so that all the industry group name rows are emphasized? This is the code I am running. I am sure it is flawed in other ways as well (I am quite new to R). labs - c(NA,NA,NA,NA, H-LD , H-TS ,NA,NA,NA,NA,NA,NA,NA,NA, Capital Intensity ,NA,NA,NA,NA,NA) summstat - st(Datalog, summ = c('mean(x)', 'sd(x)', 'min(x)', 'max(x)'), group = 'Industry', vars = c( Hirings , Layoffs and Discharges , Total Separations , Investment in fixed assets , HLD , HTS , Fixed assets , Full-time equivalent employees , Full-time and part-time employees , Gross output , Gross capital stock , Value added , Real value added , Gross operating surplus , CL , Capacity utilization , Normal output , Productivity , SLTC , IROR ), labels = labs, group.long = TRUE, col.breaks = c(10), out = 'return') summstat$Variable - sub( Industry: , , summstat$Variable) kable(summstat, caption = Summary Statistics by Industry , format = latex , longtable = TRUE) % % rowspec(c(1,13,25,37,49,61,73,85,97,109,121,133,145,157), bold = T, color = black , background = lightgray , italic = T) % % kablestyling(position = left , htmltableclass = lightable-classic , latexoptions= holdposition ) % % kableclassic(fullwidth = F, htmlfont = Cambria ) Below are two screenshots of the rendered PDF. As you can see, the first selected rows (only first page of table shown) are not formatted like specified, whereas the last 5 selected rows are. First rows unformatted Latter rows are formatted My data looks like the df below, except that there are way more variables (see vars in code chunk above): Industries - c( Mining and logging , Construction , Durable goods manufacturing , Nondurable goods manufacturing , Wholesale trade , Retail trade , Information , Educational services , Health care and social assistance , Arts, entertainment, and recreation , Accommodation and food services , Professional and business services , Transportation, warehousing, and utilities , Other services ) set.seed(123) df - data.frame( Year = 2001:2021, Industry = rep(Industries, each = 21), Value1 = sample(300, 294), Value2 = sample(300, 294) )
2024-02-24 11:21:56.743000000
147,432
I'm trying to use the Grammerly extension on VS code on company issued Mac. The extension seem to have an issue trying to initialize, it give the following error in the OUTOPUT: Initializing... [Error - 10:09:08 AM] Server initialization failed. Message: Request initialize failed with message: fetch failed Code: -32603 [Error - 10:09:08 AM] Starting client failed Message: Request initialize failed with message: fetch failed Code: -32603 I might be wring put my guess is that is due to the proxy settings I'm forced to use by company rules. I tried adding http.proxy : [LINK] , in the settings.json file, but this didn't help. I also tried changing the proxy support from override to on and this also didn't help. I'm not entirely sure this's due to proxies, might this error be due to something else?
2024-03-13 16:36:02.253000000
131,982
YOLO requires fewer computational resources compared to some TensorFlow models, which can be beneficial for Raspberry Pi applications. And YOLO is also best for real time object detection as compared to tensorflow. The learning curve of YOLO may be more but it is worth it in the end
2024-03-23 05:08:42.593000000
137,495
Assuming that Playwright server is the solution to Run Playwright Tests on unsupported Linux distributions , I want to run my Java test against a Playwright server (which is running using an official Playwright Docker image). Unfortunately, this doesn't work when the Java test itself is running on an unsupported Linux distribution, because it always wants to create a local driver, for details see my comment on a feature request for a Playwright.connect() for remote connection Is there perhaps a solution to work around this in Java code? (Asking the same in [LINK]> )
2024-02-22 14:07:37.530000000
1,811
I am implementing my own mobile menu code, so:

<b:if cond='data:view.isSingleItem'>
  <a class='return_link' expr:href='data:blog.homepageUrl'>
    <b:include data='{ button: true, iconClass: &quot;back-button rtl-reversible-icon flat-icon-button ripple&quot; }' name='backArrowIcon'/>
  </a>
<b:else/>
  <b:include data='{ button: true, iconClass: &quot;hamburger-menu flat-icon-button ripple&quot; }' name='menuIcon'/>
</b:if>

Can I remove this code from the Blogger Contempo theme? If I remove it, is there any effect on SEO or AdSense?
2024-03-12 06:50:27.983000000
27,259
This is a good case for INDEX and MATCH, or VLOOKUP; since the 'bounds' columns are ordered, match will return the correct lower- or upper- bound row, and you can then use INDEX to get the correct gradient and intercept from the same row; VLOOKUP works similarly, though the bounds would have to be ascending instead of descending. Suppose the score you are checking (say 89.5) is in cell A1, and the data in your table is in B1:E10 then: =MATCH(A1,B1:B10,1) in A2 will give you the the row number of the largest value in B1:B10 that is less than 89.5 (in this case, it returns '2' corresponding to lower bound 89). Now in A3 you can use =INDEX(C1:C10,A2) to get the corresponding gradient, and similarly =INDEX(D1:D10,A2) to get the intercept. You can also combine them. =INDEX(gradient_range,MATCH(grade,lower-bound-range,1)) would return the gradient for a given target grade.
2024-02-14 07:58:14.520000000
139,603
The issue is that you're comparing different dtypes.

df.dtypes
# Name    object
# Age      int64
# dtype: object

So the correct comparison should be:

df.dtypes.astype(str).to_dict() == {'Name': 'object', 'Age': 'int64'}
# True

Or:

import numpy as np
df.dtypes.to_dict() == {'Name': np.dtype('O'), 'Age': np.dtype('int64')}
# True

Note that a more robust option would be to use is_string_dtype / is_integer_dtype:

assert (pd.api.types.is_string_dtype(df['Name'])
        and pd.api.types.is_integer_dtype(df['Age']))
2024-03-11 12:11:59.480000000
130,202
Starting from the FluxCD two cluster example at [LINK]>, I'm having difficulty applying a patch to the infrastructure GitRepo. When applying the patch as below, I am getting the error: kustomize build failed: no resource matches strategic merge patch HelmRelease.v2beta2.helm.toolkit.fluxcd.io/ingress-nginx.ingress-nginx : no matches for Id HelmRelease.v2beta2.helm.toolkit.fluxcd.io/ingress-nginx.ingress-nginx; failed to find unique target for patch HelmRelease.v2beta2.helm.toolkit.fluxcd.io/ingress-nginx.ingress-nginx Moving the content of ingress-nginx.yaml into infrastructure/controllers/kustomization.yaml yields the same error If I move the ingress-nginx.yaml to clusters/dev , and update the resources declaration to reflect the change, the patch applies correctly. It seems that I can't apply a patch to a resource that's been defined in a GitRepo? Relevant files below.. clusters/dev/kustomization.yaml apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization resources: - ./flux-system - ../base/infrastructure.yaml patches: - patch: |- apiVersion: helm.toolkit.fluxcd.io/v2beta2 kind: HelmRelease metadata: name: ingress-nginx namespace: ingress-nginx spec: values: controller: service: type: LoadBalancer ../base/infrastructure.yaml --- apiVersion: kustomize.toolkit.fluxcd.io/v1 kind: Kustomization metadata: name: infra-controllers namespace: flux-system spec: interval: 1h retryInterval: 1m timeout: 5m sourceRef: kind: GitRepository name: flux-system path: ./infrastructure/controllers prune: true wait: true ./infrastructure/controllers apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization resources: - ingress-nginx.yaml and finally, ingress-nginx.yaml --- apiVersion: v1 kind: Namespace metadata: name: ingress-nginx --- apiVersion: source.toolkit.fluxcd.io/v1beta2 kind: HelmRepository metadata: name: ingress-nginx namespace: ingress-nginx spec: interval: 24h url: [LINK] --- apiVersion: helm.toolkit.fluxcd.io/v2beta2 kind: HelmRelease metadata: name: ingress-nginx namespace: ingress-nginx spec: interval: 30m chart: spec: chart: ingress-nginx version: * sourceRef: kind: HelmRepository name: ingress-nginx namespace: ingress-nginx interval: 12h values: controller: service: type: NodePort admissionWebhooks: enabled: false
2024-02-07 21:59:29.137000000
74,874
If you just set the schedule_interval parameter to None, you will prevent the next scheduled run. This does not affect a DAG run that is already executing.
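A rough sketch of where that parameter lives (the DAG id, dates and operator are placeholders; on Airflow 2.4+ the argument is also available simply as schedule):

from datetime import datetime
from airflow import DAG
from airflow.operators.empty import EmptyOperator

with DAG(
    dag_id="example_dag",
    start_date=datetime(2024, 1, 1),
    schedule_interval=None,   # no further runs get scheduled; manual triggers still work
    catchup=False,
) as dag:
    EmptyOperator(task_id="noop")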
2024-03-06 15:24:08.657000000
178,165
The provided code snippet addresses a potential cause of missing language resources in an app published to the Play Store as an Android App Bundle (AAB). By default, the Play Store splits resources by language and delivers only the device's locale, so locales the user switches to at runtime can be missing. Setting enableSplit = false for language keeps all language resources in the base APK. In Gradle:

android {
    // ... removed for brevity
    bundle {
        language {
            enableSplit = false
        }
    }
}
2024-03-11 08:50:16.517000000
31,768
Go to [LINK]> Security tab And disable Use Private Key Recommended This worked for me. And just use the Public key in emailjs.init('PUBLIC KEY');
2024-02-14 06:25:16.570000000
150,662
Considering Gradle version 9.0 hasn't even been released yet, I wouldn't worry about it too much. This is a warning, not an error, but if you want to dig in further, run your build with --warning-mode all to see where the deprecation is coming from.
2024-02-15 18:51:56.833000000
103,823
sbcl's argv is in *posix-argv*, and its first element is "sbcl". Modify your clisp program into an sbcl-runnable script like this:

#!/opt/homebrew/bin/sbcl --script
(defparameter ArgOne (read-from-string (cadr *posix-argv*)))
(defparameter ArgTwo (read-from-string (caddr *posix-argv*)))
(format t "Argument One = ~a~%" ArgOne)
(format t "Argument Two = ~a~%" ArgTwo)
2024-02-26 03:50:41.287000000
25,632
You can use a vectorized approach for this:

import pandas as pd
reg_df.iloc[:, 1:] = reg_df.iloc[:, 1:].sub(reg_df['Intercept'], axis=0)
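A small worked example (the column names are made up for illustration) of what that broadcasted subtraction does:

import pandas as pd

reg_df = pd.DataFrame({
    "Intercept": [1.0, 2.0],
    "x1": [5.0, 7.0],
    "x2": [10.0, 20.0],
})

# Subtract the Intercept column from every other column, row by row.
reg_df.iloc[:, 1:] = reg_df.iloc[:, 1:].sub(reg_df["Intercept"], axis=0)
print(reg_df)
#    Intercept   x1    x2
# 0        1.0  4.0   9.0
# 1        2.0  5.0  18.0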
2024-02-25 09:51:28.370000000
74,209
you can expose your deployment by multiple ways, I'll mention some: 1- by creating service with type loadbalancer apiVersion: v1 kind: Service metadata: name: myapp spec: type: LoadBalancer selector: app: myapp ports: - port: Port targetPort: Target Port but note that this will create a load-balancer with every service you create, this is not efficient as well as it's more expensive. this can be useful in case you want to test something very fast then change type of service later. just make sure to update the security group of eks to allow access to your IP after the change, but yet i don't recommend that So, let's go to the second option. 2- expose your service via ingress create clusterIP service then expose it using ingress resource apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: name annotations: some annotations kubernetes.io/tls-acme: true spec: rules: - host: host.com http: paths: - path: / pathType: Prefix backend: service: name: the-name-of-service-you-created port: number: your-port tls: - hosts: - host.com secretName: secretname of certificate 3- you can use other service types: NodePort - Exposes the Service on each Node's IP at a static port (the NodePort). To make the node port available, Kubernetes sets up a cluster IP address, the same as if you had requested a Service of type: ClusterIP. ExternalName - Maps the Service to the contents of the externalName field (e.g. foo.bar.example.com), by returning a CNAME record with its value. No proxying of any kind is set up. This type requires v1.7 or higher of kube-dns, or CoreDNS version 0.0.8 or higher. you can check the documentation of k8s service resource for more details to decide which one suits you more.
2024-02-21 14:10:11.447000000
152,369
The issue is that you are concatenating the audio chunks when your intent was to mix (join) them. Let me illustrate. Mark the chunks from the input stream as I0, I1, ..., the chunks from the output stream as O0, O1, O2, ..., and the resulting audio as S0, S1, .... What happens now in your application is that chunks from the two streams arrive in arbitrary order, so you get something like S = [I0, I1, O0, I2, I3, O1, O2, O3, I4, O4, ...]. Because your chunks are short, you perceive the interleaved input and output streams as slowed-down audio (distorted at the boundaries between chunks from different sources). If you set the chunk duration to several seconds, you would hear that phenomenon clearly. Your shared audio should be JOINED sample-wise: S0 = (I0 + O0)/2, S1 = (I1 + O1)/2, ... for each audio sample in the chunk, assuming the chunk durations of both streams are the same. You can use something like shared += audioop.add(input_chunk, output_chunk, sample_width), just make sure the chunks are time-aligned.
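If you prefer to mix the chunks yourself, here is a minimal NumPy sketch of the joining idea (it assumes both chunks have the same length and are 16-bit PCM; adjust the dtype to your stream's sample format):

import numpy as np

def mix_chunks(input_chunk: bytes, output_chunk: bytes) -> bytes:
    # Interpret the raw PCM bytes as signed 16-bit samples.
    a = np.frombuffer(input_chunk, dtype=np.int16).astype(np.int32)
    b = np.frombuffer(output_chunk, dtype=np.int16).astype(np.int32)
    # Average sample-wise: S = (I + O) / 2, then clip back into int16 range.
    mixed = np.clip((a + b) // 2, -32768, 32767).astype(np.int16)
    return mixed.tobytes()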
2024-02-28 08:10:22.283000000
24,309
[USER]
public void onNewIntent(Intent intent) {
    super.onNewIntent(intent);
    this.setIntent(intent);
}

Use this in MainActivity.java to handle opening the app while it is in the foreground or background.
2024-03-01 08:16:49.927000000
131,203
I am using SerialLibrary in Robot Framework like this: Library    SerialLibrary. If my serial port is already opened in minicom, Robot Framework is unable to read from it, which is expected, but the library does not throw any error or exception to notify me. My serial port keywords look like this:

Open Serial Port
    [Arguments]    ${EXPECTEDPORT}
    Add Port    ${EXPECTEDPORT}
    ...    baudrate=115200
    ...    bytesize=8
    ...    parity=N
    ...    stopbits=1
    ...    timeout=999

Close Serial Port
    Delete All Ports

Am I missing something I need to add?
2024-02-15 16:21:05.923000000
81,950
I've noticed that a video hosted on my server doesn't play on iOS mobile devices when it is larger than 19 MB (I ran tests with video sizes ranging from 10 to 30 MB to figure this out). What is the maximum size limit (in MB) for the video HTML tag on iOS mobile devices?
2024-03-18 09:18:30.807000000
96,062
In my case, I set the GitHub token as an environment variable, $GITHUB_TOKEN, in the Dockerfile, so there is no need to run gh auth login.
2024-03-20 17:29:48.797000000
131,151
In my case, passing device_map='auto' wasn't reliable and caused the same error. Solution: dynamically check if there's a GPU.

def get_device_map() -> str:
    return 'cuda' if torch.cuda.is_available() else 'cpu'

device = get_device_map()  # 'cpu'

base_model = LlamaForCausalLM.from_pretrained(
    pretrained_model_name_or_path=huggingface_params.pretrained_model_name,
    load_in_8bit=False,
    device_map=device,   # <-- HERE
    offload_folder=huggingface_params.offload_folder
)
base_model.config.save_pretrained(huggingface_params.offload_folder)
base_model.save_pretrained(huggingface_params.offload_folder)

# Loading checkpoint shards: 100%|| 3/3 [00:00<00:00, 3.56it/s]

This allowed me to successfully instantiate and offload the model:

base_model = LlamaForCausalLM.from_pretrained(pretrained_model_name_or_path=huggingface_params.offload_folder)

# Loading checkpoint shards: 100%|| 6/6 [01:30<00:00, 15.08s/it]
2024-02-29 14:31:10.823000000
40,294
You have a typo in body validator, so body validator doesn't run, and genre.save() fails mongoose schema validation: body( name , Genre name must contain at least 3 characters ) .trim() .isLength({ mid: 3 }) .escape(), isLength should be min instead of mid : .isLength({ min: 3 })
2024-03-11 18:35:39.163000000
66,320
I am fairly new to PowerShell scripting and mostly work with C#. I understand that there are three types of scopes - global, local, and script level. I know that variables declared inside a function cannot be accessed outside, unless the scope modifiers are used. However, I am having confusion when it comes to variables that are declared inside a loop. To understand the issue, please have a look at this example: foreach ($i in 1..3) { $innerVariable = $i } Accessing $innerVariable outside the loop Write-Host Value of innerVariable: $innerVariable I assumed that it would give me $null output or some kind of error message. However, in this case, it gives me 3 , which is the correct output. That means variables declared inside loops can be accessed outside. I have noticed the same behaviour in Do-While loop: $retryCount = 0 do { $response = New-InvokeSomeHttpRequest -ErrorVariable reqError $retryCount++ } while(($reqError.Count -gt 0) -and ($retryCount -lt 3)) With the above, I can access $reqError outside the loop, when the request has failed 3 times. So I have the following questions: Am I missing something, or is this the default behaviour? Is there any advice on this, i.e., best practices? If this is the default behaviour, would it be better to declare the variable outside the loop, e.g., $myVar = $null , and then overwrite it inside the loop? If you want to use that variable outside. Thanks
2024-03-03 15:43:04.600000000
36,416
You can convert the days to datetime then use window='30D' . The days will be converted to datetime, with a date starting from the first possible date in Pandas (1970-01-01). You can optionally specify the starting date with an origin attribute in todatetime , but for your example that wouldn't matter. df = df.sortvalues([ client , day ]).resetindex(drop=True) df[ daydatetime ] = pd.todatetime(df[ day ], unit= D ) df[ foo30day ] = ( df.groupby( client ) .rolling( 30D , on= daydatetime , minperiods=1)[ foo ] .sum() .values ) client day foo daydatetime foo30day 0 A 319 5.0 1970-11-16 5.0 1 A 323 2.0 1970-11-20 7.0 2 A 336 NaN 1970-12-03 7.0 3 A 352 NaN 1970-12-19 2.0 4 A 379 NaN 1971-01-15 NaN 5 A 424 NaN 1971-03-01 NaN 6 A 461 NaN 1971-04-07 NaN 7 A 486 7.0 1971-05-02 7.0 8 A 496 NaN 1971-05-12 7.0 9 A 499 NaN 1971-05-15 7.0 10 B 303 8.0 1970-10-31 8.0 11 B 334 7.0 1970-12-01 7.0 12 B 346 22.0 1970-12-13 29.0 13 B 373 NaN 1971-01-09 22.0 14 B 374 13.0 1971-01-10 35.0 15 B 395 NaN 1971-01-31 13.0 16 B 401 NaN 1971-02-06 13.0 17 B 408 5.0 1971-02-13 5.0 18 B 458 11.0 1971-04-04 11.0 19 B 492 NaN 1971-05-08 NaN
2024-03-22 19:30:57.387000000
173,798
I'll explain how I figured this out to help your understanding: I used the 'Network' tab in my browser's (Chrome) devtools to find the successful request that is sent to the server by my browser. This is the request that allows you to successfully access the file, as opposed to the request sent from the python script which doesn't effectively download the file, and rather returns http error code 403 - Forbidden (Note: as far as I'm concerned, this is obviously the website indicating that downloading the file this way is forbidden). I looked into the request headers and added one to the headers argument of request.get() . This gave me a successful http status code of 200 - OK . I continued to remove headers until finding the minimum set of headers needed, which are Cookie and User-Agent . If you copy those headers into the headers (headers is a dictionary) argument of request.get() it will work That said, you will need to copy the values for the User-Agent and Cookies out of your browser's devtools. The cookies also change over time, so you'll need to use the browser anyway to get up-to-date values for your request. Either that, or you'll need to really dig deep into how the site works and reverse engineer their cookie generation, if that's even possible. I'd advise against pursuing it too much further, as in my opinion the site is indicating that they don't want you downloading files programmatically like this. Where to go from here Apparently every paper in the Heliyon collection, which is the collection your example pdf is in, is also hosted on ScienceDirect : Every published article will be immediately available on both cell.com/heliyon and ScienceDirect and will be indexed by PubMed, Scopus, Web of Science™ and Science Citation Index Expanded™ (SCIE), ensuring that it reaches the widest possible audience. Heliyon's impact factor is 4.0 as of June 2023. Elsevier, which runs ScienceDirect, has a free-to-use API with which you can easily download articles, including the one you are trying to download, with just the doi and your API key. I've put together an extremely simple script below that you can use once you've registered for the Elsevier API and created your own api key: apitemplate = '[LINK]}' doi = '10.1016/j.heliyon.2018.e00938' apikey = ' yourapikey ' httpaccept = 'application/pdf' uri = apitemplate.format(doi=doi, apikey=apikey, httpaccept=httpaccept) res = requests.get(uri) with open('out.pdf', 'wb') as f: f.write(res.content) I've not included things like error handling, or even wrapping this in a function, but this should get you started. Best of luck!
2024-02-27 08:06:18.840000000
100,878
I need to understand the logic of the method with which GraphQL operates a query. In a react application, I have the following code: export const fetchPatientsChatMessages = async ( partitionKey: string, sortKey: string ) = { const client = generateClient(); const response = await client.graphql({ query: getPatientsChatMessages, variables: { limit: 163, filter: { and: [ { PK: { eq: partitionKey } }, { SK: { beginsWith: generateSortKey( CASE , sortKey, APP ) } }, { SK: { contains: CHAT } }, ], }, }, }); return (response as GraphQlResponseList ChatMessage ).data.listMains.items; }; ...and getPatientsChatMessages has the following structure: export const getPatientsChatMessages = / GraphQL / query ListMains( $PK: ID $SK: ModelStringKeyConditionInput $filter: ModelMainFilterInput $limit: Int $nextToken: String $sortDirection: ModelSortDirection ) { listMains( PK: $PK SK: $SK filter: $filter limit: $limit nextToken: $nextToken sortDirection: $sortDirection ) { items { PK SK DateTime Sender Message } nextToken __typename } } ; The situation in the database table, I am accessing, is as follows: Total number of records: 195 , number of records with the partitionkey I am using in the fetch above: 24. I would expect that this instruction (since it is a query and not a scan), will limit its fetch process to the scope of the 24 records with the the partitionkey. But it obviously does not because: When the limit which I have set for test purpose above is below 163, no records are delivered. It appears that GraphQL scans over the entire table. In case the limit parameter is omitted, there is also no result since the system default value is even smaller. Is this behaviour of GraphQL by design or is it more likely that something in my database (DynamoDB) is fundamentally wrong? If it is by design , I do not understand this concept - perhaps someone can explain. What should be the merit of a partitionkey when the system scans all records anyways when the data are fetched with a query? I am using Amplify to work with the underlying DynamoDB. The relevant resolver codes are generated and not modified by me. Here is the resolver code I find for ListMains: [Start] List Request. ** #set( $args = $util.defaultIfNull($ctx.stash.transformedArgs, $ctx.args) ) #set( $limit = $util.defaultIfNull($args.limit, 100) ) #set( $ListRequest = { version : 2018-05-29 , limit : $limit } ) #if( $args.nextToken ) #set( $ListRequest.nextToken = $args.nextToken ) #end #if( !$util.isNullOrEmpty($ctx.stash.authFilter) ) #set( $filter = $ctx.stash.authFilter ) #if( !$util.isNullOrEmpty($args.filter) ) #set( $filter = { and : [$filter, $args.filter] } ) #end #else #if( !$util.isNullOrEmpty($args.filter) ) #set( $filter = $args.filter ) #end #end #if( !$util.isNullOrEmpty($filter) ) #set( $filterExpression = $util.parseJson($util.transform.toDynamoDBFilterExpression ($filter)) ) #if( $util.isNullOrEmpty($filterExpression) ) $util.error( Unable to process the filter expression , Unrecognized Filter ) #end #if( !$util.isNullOrBlank($filterExpression.expression) ) #if( $filterExpression.expressionValues.size() == 0 ) $util.qr($filterExpression.remove( expressionValues )) #end #set( $ListRequest.filter = $filterExpression ) #end #end #if( !$util.isNull($ctx.stash.modelQueryExpression) && !$util.isNullOrEmpty($ctx.stash. 
modelQueryExpression.expression) ) $util.qr($ListRequest.put( operation , Query )) $util.qr($ListRequest.put( query , $ctx.stash.modelQueryExpression)) #if( !$util.isNull($args.sortDirection) && $args.sortDirection == DESC ) #set( $ListRequest.scanIndexForward = false ) #else #set( $ListRequest.scanIndexForward = true ) #end #else $util.qr($ListRequest.put( operation , Scan )) #end #if( !$util.isNull($ctx.stash.metadata.index) ) #set( $ListRequest.IndexName = $ctx.stash.metadata.index ) #end $util.toJson($ListRequest) [End] We can see that there is default limit of 100 but based on my understanding I can overwrite it to the limit parameter in my query definition above (here 163 for test purpose). But I do not understand the VTL code well enough to derive an answer to my principle question about the scope of records which are considered for the query and whether the partitionkey plays any role here.
2024-03-07 10:36:09.620000000
49,617
The doc is not very clear but there's an example in the file manualtests.R . In fact, the editable argument must be the name of a TRUE/FALSE column of the data which indicates whether the polygon defined at each row is editable or not. So, firstly, your dataframe vertices is not appropriate. You firstly have to construct a polyline which represents this polygon, with the help of the googlePolylines package. Then, make a one-row dataframe with this polyline in a column and with another column containing TRUE to indicate that the polygon represented by this polyline is editable: polygon - data.frame( polyline = googlePolylines::encodeCoordinates(vertices$lng, vertices$lat), editable = TRUE ) then: output$map - renderGooglemap({ googlemap(key = apikey, maptype = satellite , location = c(21.92, 85.43), zoom = 14 ) % % add_polygons( polygon, polyline = polyline , editable = editable ) }) I didn't test because I don't have an API key.
2024-02-16 21:17:01.140000000
94,245
I am attempting to pull data from three specific tables on Survivor wiki pages. Mostly the conestant, season summary, and voting history tables. I can get it to work just fine for the contestant table, but it tells me it cannot find a table for the season summary or voting history tables. My end goal is to combine all of them into one dataframe for cleaning and manipulating. My code that works for the contestant table but not the others looks like this: import pandas as pd listofseasons = ['41', '42', '43', '44', '45', '46'] seasonstart = 41 contestants = {} seasonsummary = {} votinghistory = {} for i in listofseasons : contestants[i] = pd.readhtml('[LINK]' + str(seasonstart), match='contestants') seasonsummary[i] = pd.readhtml('[LINK]' + str(seasonstart), match='season summary') votinghistory[i] = pd.readhtml('[LINK]' + str(seasonstart), match='voting history') seasonstart = seasonstart + 1 print(contestants['45']) print(seasonsummary['45']) print(votinghistory['45']) And the error i get is: Traceback (most recent call last): File c:\Users\bsjes\Documents\Code\Personal Projects\Survivor Data Grabber\SurvivorWikiRipper0.2.py , line 13, in module seasonsummary[i] = pd.readhtml('[LINK]' + str(seasonstart), match='season summary') ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File C:\Users\bsjes\AppData\Local\Programs\Python\Python312\Lib\site-packages\pandas\io\html.py , line 1246, in readhtml return parse( ^^^^^^^ File C:\Users\bsjes\AppData\Local\Programs\Python\Python312\Lib\site-packages\pandas\io\html.py , line 1009, in parse raise retained File C:\Users\bsjes\AppData\Local\Programs\Python\Python312\Lib\site-packages\pandas\io\html.py , line 989, in parse tables = p.parsetables() ^^^^^^^^^^^^^^^^ File C:\Users\bsjes\AppData\Local\Programs\Python\Python312\Lib\site-packages\pandas\io\html.py , line 249, in parsetables tables = self.parsetables(self.builddoc(), self.match, self.attrs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File C:\Users\bsjes\AppData\Local\Programs\Python\Python312\Lib\site-packages\pandas\io\html.py , line 622, in parse_tables raise ValueError(f No tables found matching pattern {repr(match.pattern)} ) ValueError: No tables found matching pattern 'season summary' What should I be doing differently? Do i need to learn a different package instead?
2024-03-10 23:13:15.790000000
49,240
Sometimes your editor or notebook uses a different Python interpreter than the one you installed the package into. Try installing with the interpreter that actually runs your code: !python -m pip install transformers
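A quick way to check which interpreter your editor or notebook is actually using (run it in the same environment where the import fails):

import sys
print(sys.executable)   # path of the interpreter currently running your code

# In a Jupyter cell, installing with that exact interpreter avoids the mismatch:
# !{sys.executable} -m pip install transformers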
2024-02-19 13:02:32.043000000
72,796
I have a 7 GB .parquet file with a 128M rows of accounting data (which I cannot share) and split in to 53 row groups. My task is to 'clean' the data by retaining in each cell, only specific words from a dictionary. The reading of the file is trouble free. The processing of the file ends in a segmentation fault on a 20-core, 128GB RAM Ubuntu O/S desktop. Using Python and the Polars library, I convert the data in to a Polars data frame with the following columns: ['rowid', 'txid', 'debit', 'credit', 'effectivedate', 'entereddate', 'userid', 'transaction', 'memo', 'type', 'account', 'totalamt'] The columns to be cleaned in this file are memo , type , and account . My approach is to take each of those columns and apply a filterfield method to them and the problem occurs in the loop I made to do that: two-step cleaning: first memo/account fields for memofield in memocolumns+accountcolumns: print('memofield:', memofield) data = data.withcolumns( (pl.col(memofield).mapelements(lambda x: self.filterfield(text=x, worddict=worddict))).alias('clean' + memofield) baseline ) data = data.drop(memo_field) #drop cleaned column In each loop, I'm actually creating a new clean column and then dropping the original. I know the filtering is sound b/c everything runs fine on near identical files of up to 78M rows. When I run this larger file I see the memory consumption climb steadily until the seg fault. Just before the seg fault I noticed in htop that about a dozen processes seemingly identical to the main python process are spawned but it happened too fast for me to see what exactly happened there. What I would like to know: is there a better approach than the looping/mapping I'm using; are there improvements I could make for memory management; or is this simply a case of needing more resources Update: Changed the code so that instead of creating a new column and dropping the old, I just change the column (like Pandas 'inplace'?). This allowed processing of the entire file. Each column takes ~1500 sec to be cleaned. HOWEVER, a write error now appears: Parquet cannot store strings with size 2GB or more . This does not make sense to me as the data can only shrink in size through the cleaning process.
2024-02-16 16:41:03.243000000
178,476
I'm trying to implement a search function in SwiftUI with SwiftData. I'm struggling to to query for the recognizedText in my DocumentImage . On the list where I want to filter I have several Document that hold n DocumentImage . I found out that it might not work with the #Predicate macro. But how could I achieve that? init(searchTerm: String) { if !searchTerm.isEmpty { let predicate = #Predicate Document { $0.title.localizedStandardContains(searchTerm) } _documents = Query(filter: predicate) } } [USER] final class Document { var timestamp = Date.now var title = Date.now.dateString + - + String(localized: Untitled ) [USER](deleteRule: .cascade, inverse: \DocumentImage.document) var documentImagesStorage: [DocumentImage]? = [] [USER](deleteRule: .cascade, inverse: \Tag.document) var tagsStorage: [Tag]? = [] init(documentImages: [DocumentImage]) { self.documentImagesStorage = documentImages } } [USER] final class DocumentImage { var document: Document? [USER](.externalStorage) var imageData: Data = Data() var recognizedText: [String]? var order: Int? init(imageData: Data, recognizedText: [String], order: Int = 0) { self.imageData = imageData self.recognizedText = recognizedText self.order = order } }
2024-03-21 21:04:20.020000000
31,331
Yes you can if a derived class satisfies you. For example, if you have class MyClass {}; and you cannot change its declaration, then you can do in any other file: class MyClass2 : public MyClass { public: MyClass2( MyType arg){ // any initialization of MyClass members } }; By the way, the main reason why you MAY WANT to define your constructor outside the class is because you constructor is too large to be kept in .h file and be inline.
2024-02-07 21:57:13.217000000
109,964
There are no solution for this situation. Problem is compiler doesn't know which function to use when you call func with default input parameter: func foo( s: String = ) { print( foo with s ) } func foo( f: Float = 0) { print( foo with f ) } foo() // Ambiguous use of 'foo' Both of those functions are have default value and possibly can be called foo() and compiler doesn't know which one you need to call. All you can do is to remove default input parameter for at least one of those functions. func foo( s: String) { // removed default value here print( foo with s ) } func foo( f: Float = 0) { print( foo with f ) } foo() // will call foo(0.0) Or you can rename one of those functions and you will be able to set default input parameters to both of them. func foo( s: String = ) { print( foo with s ) } func fooF( f: Float = 0) { // renamed here print( foo with f ) } fooF() // will call fooF(0.0) foo() // will call foo( )
2024-02-28 15:55:08.500000000
81,018
there are time is funcation a uesd application to run a work. then work is define a theroy to uesd a line a layer in define application are value this is called a work and time managment a soure are accept in variable and work/time solulation in work method in soures.
2024-02-10 14:06:47.890000000
100,939
Use simplify_cells=True in loadmat; that will load the data as a nested dictionary structure, which is quite handy for extracting the correct data stream.

from scipy.io import loadmat
ecg_dic = loadmat('228m (10).mat', simplify_cells=True)
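A hedged follow-up sketch: the 'val' key below is an assumption based on typical PhysioNet .mat exports, so inspect your own file's keys first.

from scipy.io import loadmat

ecg_dic = loadmat("228m (10).mat", simplify_cells=True)
print(ecg_dic.keys())      # see what the file actually contains

# With simplify_cells=True, MATLAB structs come back as plain dicts/arrays,
# so a signal stored under 'val' (if present) is simply:
signal = ecg_dic.get("val")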
2024-03-08 17:41:59.643000000
28,042
from google.cloud import bigquery
from math import floor, log

def format_bytes(size):
    power = 0 if size <= 0 else floor(log(size, 1024))
    return f"{round(size / 1024 ** power, 2)} {['B', 'KB', 'MB', 'GB', 'TB'][int(power)]}"

def run_bigquery(query=''):
    client = bigquery.Client()
    job_config = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)
    if not query:
        return []
    query_job = client.query(query, job_config=job_config)
    bytes_scanned = format_bytes(query_job.total_bytes_processed)
    if 'GB' in bytes_scanned or 'TB' in bytes_scanned:
        print(query)
        print(f'Alert: Query data scan costs might be higher: {bytes_scanned}')
        return []
    df = client.query(query).to_dataframe()
    df['bytes_scanned'] = bytes_scanned
    return df.to_dict('records')

This is the simple logic I generally use in my code to prevent surprise BigQuery costs. It doesn't answer your question exactly, though.
2024-03-15 08:13:32.143000000
9,975
There is no scalable way to make a specific SomeType that works. It would need to be a union that covers every possible set of flags separately. Such a union is only manageable if your list of types is very small, because number of sets of flags grows exponentially with the number of possible flags. Instead, you can make a generic type that takes the set of flags as a type parameter and computes the intended type, which would be the intersection of all the pieces. It could look like this: interface Mapping { flagNameA: { propertyA1: string; propertyA2: string }; flagNameB: { propertyB1: number } } type SomeType K extends keyof Mapping = { flags: K[] } & ({ [P in K]: (x: Mapping[P]) = void } extends Record K, (x: infer MK) = void ? MK : never); This uses a technique like that in Playground link to code
2024-02-28 20:01:17.950000000
448
TransWithoutContext.js from the source code of react-i18next:

const reactI18nextOptions = { ...getDefaults(), ...(i18n.options && i18n.options.react) };
const useAsParent = parent !== undefined ? parent : reactI18nextOptions.defaultTransParent;
return useAsParent ? createElement(useAsParent, additionalProps, content) : content;

My code:

i18n.use(initReactI18next).init({
  resources: {
    [SupportLanguage.zhCN]: ch,
    [SupportLanguage.enUS]: en,
  },
  lng: LocalStorageWrapper.getOrDefault(LanguageStorageKey, SupportLanguage.zhCN),
  fallbackLng: SupportLanguage.zhCN,
  interpolation: {
    escapeValue: false,
  },
});

When initializing the i18n instance, I do not set the react option, but the docs say these options have default values if we do not set them. My question is: in this case, is the value of i18n.options.react undefined, or does it take the defaults?
2024-03-01 08:44:41.227000000
46,333
In the VHDL world, it is common in a clocked process to initialize a signal to a default value and then override it later (via IF or CASE statements) as necessary. I want to use the same technique in Verilog, but have been told it could be problematic. Consider this example code:

always_ff @(posedge sysrefclk or negedge rst_n)
  if (~rst_n) begin
    mstate <= STATE1;
    mysig  <= '0;
  end
  else begin
    mysig <= '0;
    case (mstate)
      STATE1 : begin
        if (go) begin
          mstate <= STATE2;
        end
      end
      STATE2 : begin
        mysig  <= '1;
        mstate <= STATE3;
      end
      STATE3 : begin
        mstate <= STATE1;
      end
    endcase
  end

Here, I want STATE2 to cause mysig to become 1, and all other states to cause mysig to become 0. I do not want to have to type mysig <= '0; in STATE1 and STATE3, so I just set a default and only override it in STATE2. In VHDL, the above is a technique I used successfully for many years. But upon switching to Verilog and to a new linter, I get lint errors such as the following: [211] OverridenAssignment error /path/to/file/myfile.sv 14 5 Flipflop 'mysig' is assigned over the same signal in an always construct for sequential circuits (at line '6') I believe this is an overzealous lint error, but I admittedly don't know Verilog as well as I know VHDL. When I asked the person in charge of our lint tool about it, I was told that in Verilog this might be dangerous, specifically because non-blocking assignments can lead to inefficient or incorrect hardware structures, especially when used for initialization, and also that non-blocking assignments may be interpreted differently between synthesis and simulation tools. Is it really the case that the above initialization of mysig is dangerous Verilog? It simulates just fine. What other way could a synthesis tool interpret this besides the (obvious, IMO) way it was intended?
2024-02-21 19:01:20.767000000
6,249