Dataset schema: source (stringclasses, 1 value) · text (stringlengths, 152 to 659k) · filtering_features (stringlengths, 402 to 437) · source_other (stringlengths, 440 to 819k)
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Provide an API to generate wire buffers without compressed domain names username_0: I'm using ldns in a project, but need some specific requests to be generated without domain name compression. Right now there's no easy way to do that, as the compression logic in `ldns_pkt2wire` is unconditional. Adding an `ldns_pkt2buffer_wire_compress` function that I can explicitly call with a NULL rbtree seems like the easiest, least invasive way to add this functionality, and it fits with the existing API. What are your thoughts? Is this functionality you'd like to support in ldns, and if so, does this approach seem okay? <issue_comment>username_1: Absolutely @username_0, this is completely in line with the current API, so thanks!
{'fraction_non_alphanumeric': 0.03631647211413749, 'fraction_numerical': 0.00648508430609598, 'mean_word_length': 5.327868852459017, 'pattern_counts': {'":': 0, '<': 3, '<?xml version=': 0, '>': 3, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '26656925', 'n_tokens_mistral': 194, 'n_tokens_neox': 184, 'n_words': 116}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Devs SBAT develop dapps username_0:
### Expected Behavior
Please describe the behavior you are expecting
### Current Behavior
What is the current behavior?
<issue_comment>username_0:
- [ ] Modularize deeplink behavior
- [ ] Modularize sign tx behavior with ids
- [ ] Provide contact search functionality<issue_closed>
{'fraction_non_alphanumeric': 0.0743801652892562, 'fraction_numerical': 0.005509641873278237, 'mean_word_length': 4.967213114754099, 'pattern_counts': {'":': 0, '<': 4, '<?xml version=': 0, '>': 4, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '9646934', 'n_tokens_mistral': 107, 'n_tokens_neox': 99, 'n_words': 36}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Document function param required for `build` post 1.0.0. username_0: Makes readme examples work with Metalsmith 1.0.0+ so that new folks aren't confused by silent build failures.
## Details
Adds the (now required) error handler param to the `build` call. Fixes #92 (also mentioned in #87). Note that if accepted, I suggest the same change be made to the examples on metalsmith.io.
<issue_comment>username_1: On 1.0.1 this does nothing for me / noop:
```js
Metalsmith(__dirname)
  .destination('./build')
  .build(function(err) {
    if (err) throw (err);
  });
```
<issue_comment>username_2: thanks!
{'fraction_non_alphanumeric': 0.10819165378670788, 'fraction_numerical': 0.02472952086553323, 'mean_word_length': 4.184, 'pattern_counts': {'":': 0, '<': 4, '<?xml version=': 0, '>': 4, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '4856712', 'n_tokens_mistral': 210, 'n_tokens_neox': 196, 'n_words': 80}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: implement dd username_0: I've been working on a clone of `dd` for fun! It's pretty well developed, though nowhere near done; there are a lot of corner cases and tests that still need to be fleshed out. Links: [repository](https://gitlab.com/username_0/dd) [crate](https://crates.io/crates/dd)<issue_closed> <issue_comment>username_1: Implemented in #2474
{'fraction_non_alphanumeric': 0.10687022900763359, 'fraction_numerical': 0.017811704834605598, 'mean_word_length': 5.566666666666666, 'pattern_counts': {'":': 0, '<': 4, '<?xml version=': 0, '>': 4, 'https://': 2, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '15067942', 'n_tokens_mistral': 128, 'n_tokens_neox': 122, 'n_words': 44}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: [antlr] Left Recursion username_0: When reading the book Language Implementation Patterns (ANTLR3), I found an example of a list grammar which seems to have indirect left recursion. As I remember, ANTLR4 handles left recursion better but still can't handle indirect left recursion (it is mentioned in the ANTLR4 book somewhere). From the ANTLR3 book, p. 26; example text is `[a, b, c]`, `[a, [b, c], d]`
````antlr
grammar NestedNameList;
list : '[' elements ']' ;
elements : element (',' element)* ;
element: NAME | list ;
NAME : [a-zA-Z]+;
````
<issue_comment>username_0:
````
r: r X ;
````
results in a function
````
void r() {
    r();
    match(X);
}
````
<issue_comment>username_0: ANTLR4 book, p. 249, Chapter 14: Removing Direct Left Recursion
````
expr: expr '*' expr
    | expr '+' expr
    | INT
    | ID
    ;
````<issue_closed>
{'fraction_non_alphanumeric': 0.14186851211072665, 'fraction_numerical': 0.01730103806228374, 'mean_word_length': 3.5925925925925926, 'pattern_counts': {'":': 0, '<': 5, '<?xml version=': 0, '>': 5, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '28359341', 'n_tokens_mistral': 314, 'n_tokens_neox': 284, 'n_words': 104}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Getting Flow invariant is violated in version above 1.4.2 username_0: I'm executing this code on `Android`:
```kotlin
return flow {
    .
    .
    .
    val flowContext = currentCoroutineContext()
    val loading: Job = coroutineScope {
        launch(flowContext) {
            databaseQuery().map {
                if (it != null) {
                    Resource.Success<T>(it, true)
                } else {
                    Resource.Loading()
                }
            }.collect {
                withContext(flowContext) {
                    emit(it)
                }
            }
        }
    }
    .
    .
    .
}
```
When I use a coroutines version above `1.4.2` and a work-runtime version above `2.5.0`, I get this exception:

java.lang.IllegalStateException: Flow invariant is violated: Flow was collected in [StandaloneCoroutine{Active}@d2397d6, Dispatchers.IO], but emission happened in [kotlinx.coroutines.UndispatchedMarker@5ef0a57, UndispatchedCoroutine{Active}@2381944, Dispatchers.IO]. Please refer to 'flow' documentation or use 'flowOn' instead

Is this a bug or normal behaviour since the update? The code works using these library versions:

org.jetbrains.kotlin:kotlin-reflect:1.5.31
org.jetbrains.kotlin:kotlin-stdlib-jdk8:1.5.31
org.jetbrains.kotlinx:kotlinx-coroutines-android:1.4.2
org.jetbrains.kotlinx:kotlinx-coroutines-core:1.4.2
androidx.work:work-runtime-ktx:2.5.0

<issue_comment>username_1: The section you need is "Context preservation" <issue_comment>username_0: @username_1 I would appreciate it if you could give me an example of how that code could work in the new version of coroutines, thanks! <issue_comment>username_1: `.collect { emit(it) } ` <issue_comment>username_0: @username_1 I've already tried that and it throws this error:

java.lang.IllegalStateException: Flow invariant is violated: Emission from another coroutine is detected. Child of StandaloneCoroutine{Active}@ef4b7c2, expected child of StandaloneCoroutine{Active}@3b89d3. FlowCollector is not thread-safe and concurrent emissions are prohibited. To mitigate this restriction please use 'channelFlow' builder instead of 'flow'

<issue_comment>username_1: Oh, sorry, I misread the snippet. The simplest option is to use `channelFlow`:
```
return channelFlow {
    .
    .
    .
    val flowContext = currentCoroutineContext()
    val loading: Job = coroutineScope {
        launch(flowContext) {
            databaseQuery().map {
                if (it != null) {
                    Resource.Success<T>(it, true)
                } else {
                    Resource.Loading()
                }
            }.collect {
                send(it)
            }
        }
    }
    .
    .
    .
}
```
{'fraction_non_alphanumeric': 0.08555968887385865, 'fraction_numerical': 0.019614474129184985, 'mean_word_length': 2.0183673469387755, 'pattern_counts': {'":': 0, '<': 9, '<?xml version=': 0, '>': 9, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '12107869', 'n_tokens_mistral': 918, 'n_tokens_neox': 829, 'n_words': 225}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Implement event-driven input username_0: <issue_comment>username_1: Howdy! Any news on implementing the event-driven input? <issue_comment>username_2: I am working on this in my spare time 😄 So far I nearly have the Python equivalent of `add_event_detect`. If anyone else is working on this, or wants to, feel free to reach out (if I haven't already submitted a PR) and we can sync up. <issue_comment>username_0: Thanks for helping out, @username_2. I've put this off for so long now... <issue_comment>username_3: dead? <issue_comment>username_2: @username_3 I have this about 90% there (got stuck on some GIL issues). Hoping to pick this back up in the next couple of weeks. I'll push up my branch shortly if you'd like to see where I'm at. <issue_comment>username_3: Yes, I will test it when you are ready! <issue_comment>username_4: @username_2 is your event code on a branch? I couldn't seem to find it on your fork. I would love to start looking at this, as I want to port my home automation code over from the Python library to Ruby. :) <issue_comment>username_5: I would love to test this feature too, please. I have a JuiceBox that uses BCM 16. Plus I'd like to detect when a button is pushed, etc. <issue_comment>username_2: Hey y'all! Sorry about the silence. I pushed up [a WIP of where I got with this](https://github.com/username_0/rpi_gpio/compare/master...username_2:event-detect). I hit a GIL wall by not correctly executing the Ruby blocks during the event setup, but it's getting there. My next move was to just duplicate functionality from the epoll gem. If anyone cares to help out, feel free to ping me. <issue_comment>username_6: Any reason not to just add a dependency on the epoll gem and then write the functionality in Ruby? The main advantage is that we don't need to copy/paste from the epoll gem or fight with GVL issues. I was able to get evented input working with just the epoll gem and a Ruby script. Here is the code I used:
```ruby
require 'epoll'

def watch pin, on:
  # Export the pin we want to watch
  File.binwrite "/sys/class/gpio/export", pin.to_s

  # It takes time for the pin support files to appear, so retry a few times
  retries = 0
  begin
    # `on` should be "none", "rising", "falling", or "both"
    File.binwrite "/sys/class/gpio/gpio#{pin}/edge", on
  rescue
    raise if retries > 3
    sleep 0.1
    retries += 1
    retry
  end

  # Read the initial pin value and yield it to the block
  fd = File.open "/sys/class/gpio/gpio#{pin}/value", 'r'
  yield fd.read.chomp

  epoll = Epoll.create
  epoll.add fd, Epoll::PRI

  loop do
    fd.seek 0, IO::SEEK_SET
    epoll.wait # put the program to sleep until the status changes
    yield fd.read.chomp
  end
ensure
  # Unexport the pin when we're done
  File.binwrite "/sys/class/gpio/unexport", pin.to_s
end

pin = 5
watch pin, on: 'both' do |value|
  p value
end
```
The motion sensor is mounted on top of the Raspberry Pi, and it's connected to pin 5. Here is a video demo: ![event](https://user-images.githubusercontent.com/3124/83680162-73d0df00-a595-11ea-8aba-6ba795e0ceff.gif) (Better quality version [here](https://www.youtube.com/watch?v=Gi8hMl6NOCM&feature=youtu.be)). <issue_comment>username_2: At first, event detection appeared to be a relatively straightforward port from the Python library. I think it's safe to say it is not 😆 Also, there are some internal pin registration and validation mechanisms in place that would be easier to tie into from the C side 🌊. Ultimately it's up to @username_0 on adding the `epoll` dependency.
I'm all for it 👍 Your solution is _way_ more elegant than my poorly written C 😄 <issue_comment>username_6: Can you point me to that code? Maybe we can expose it to Ruby, then use more pure-Ruby solutions. I could turn my code into a patch for this library, but a) I don't know how the Python version behaves (like what the API is like) and b) I'm not sure of the goals of this library (is it supposed to be a 1:1 port of the Python version, or a more Ruby-centric version, or..?) <issue_comment>username_2: I'm no maintainer of this (although I wouldn't mind 😄 joining), but as a beginner it was much easier to follow along with tutorials where this library's signature matched up with the Python one. When I first dove into this, I went along with that mentality, trying to provide as much in parallel as possible while also providing more Ruby-centric aliases. <issue_comment>username_0: @username_2 @username_6 Finally working on adding this 😃 I'll try going the `epoll` gem route and leverage @username_6's code. <issue_comment>username_0: I believe I'm all done implementing this on the develop branch. I've done a little bit of testing so far and I'll do some more, but everyone else feel free to test it out and let me know if you find any issues! <issue_comment>username_4: Awesome work @username_0. I'll check it out, thanks for all your work. <issue_comment>username_2: I only did a small test (hooked a button up to an LED), but this worked really well! The bounce rate adjustment worked perfectly and I was able to verify both falling and rising edge detection 😄 I'll test this more heavily over the coming weeks. Thank you for your excellent work on this 🥇 <issue_comment>username_0: Merged my implementation into master and included it in gem version 0.5.0. Took me long enough!<issue_closed>
{'fraction_non_alphanumeric': 0.0657080451401529, 'fraction_numerical': 0.01383327266108482, 'mean_word_length': 3.910634495084897, 'pattern_counts': {'":': 0, '<': 21, '<?xml version=': 0, '>': 22, 'https://': 3, 'lorem ipsum': 0, 'www.': 1, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '38391', 'n_tokens_mistral': 1686, 'n_tokens_neox': 1566, 'n_words': 834}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Automatic SV username_0: This is the idea pretty much: ![photoshop_2018-03-13_09-49-40](https://user-images.githubusercontent.com/35473621/37329230-95e2d808-26a5-11e8-9de7-e4dd8e9d87f7.png) Quick slider velocity changes: A small panel will be shown, just like in the picture. It will allow for quick manual SV changes. Open lock: The user is able to change the size of the slider to what they want, and SV would be automatically applied to that slider based on its size in the timeline and length in the editor. Closed lock: For consistent sliders, the mapper is able to close the lock to make the sliders the same size (basically the same as how you create sliders in the current version). Every time the mapper creates a slider while the lock is open, a new inherited point would be created. Once the mapper changes the SV in the panel, an inherited point would be created **on a new slider**. I hope you understand; I'm not that good at explaining. (And English, sorry if I made any mistakes!) <issue_comment>username_1: The UI would not be as you proposed, but this will definitely be possible. <issue_comment>username_0: Yeah, I'm not a designer. But you get what I meant, which is good. Every time I tried to map, changing SV was really tiring, especially when I wanted custom SV (0.2x, 4x, etc.) <issue_comment>username_1: Going to close this in favour of https://github.com/ppy/osu/issues/7882, which is covering this flow.<issue_closed>
{'fraction_non_alphanumeric': 0.056932350971198926, 'fraction_numerical': 0.04085733422638982, 'mean_word_length': 4.279151943462898, 'pattern_counts': {'":': 0, '<': 6, '<?xml version=': 0, '>': 6, 'https://': 2, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '9124294', 'n_tokens_mistral': 460, 'n_tokens_neox': 416, 'n_words': 222}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Backup the existing configuration file username_0: Can be useful to keep a backup of the file in case of rollback <issue_comment>username_1: Since this is a community submitted pull request, a Jenkins build has not been kicked off automatically. Can an Elastic organization member please verify the contents of this patch and then kick off a build manually? <issue_comment>username_2: jenkins test this please <issue_comment>username_3: Since ansible doesn't restrict the number of backup files at all, it might make sense to be able to turn this feature off.
{'fraction_non_alphanumeric': 0.03535353535353535, 'fraction_numerical': 0.006734006734006734, 'mean_word_length': 5.685393258426966, 'pattern_counts': {'":': 0, '<': 5, '<?xml version=': 0, '>': 5, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '19490097', 'n_tokens_mistral': 143, 'n_tokens_neox': 140, 'n_words': 87}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Tab does not get persistent automatically username_0: You know Atom's [pending pane items](http://blog.atom.io/2016/03/17/atom-1-6-and-1-7-beta.html) thingy. Well... after editing in a tablr tab, it is still kept as pending, and if you open a different file, it replaces it. It should get pinned, as is the normal behavior for normal tabs. Workaround: double-click on the tab. <issue_comment>username_1: @username_0 thanks for the report, I may have missed something important regarding pending pane items. I'll take a look at that.<issue_closed> <issue_comment>username_0: With the last update, the tab is kept when you press "Open with table editor", not when you edit something inside. <issue_comment>username_1: You mean editing a setting in the CSV opening form? As it is, since nothing is saved until you pick a choice in this form, I thought it made more sense to terminate the pending state when making a choice, but yeah, I can also terminate the pending state once the user has changed something in the form. <issue_comment>username_0: I mean inside the CSV itself. Atom works that way: pending until you write anything in the tab. Sometimes I just open a CSV to see its contents, and expect it to be non-persistent because I did not edit it. But to see its contents, I press the "Open with table editor" button. <issue_comment>username_1: Oh, right! I see, let's do that then, that's OK for me.
{'fraction_non_alphanumeric': 0.06118143459915612, 'fraction_numerical': 0.013361462728551337, 'mean_word_length': 4.494208494208494, 'pattern_counts': {'":': 0, '<': 8, '<?xml version=': 0, '>': 8, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '15081267', 'n_tokens_mistral': 392, 'n_tokens_neox': 371, 'n_words': 220}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Support for PDF417 username_0: I know you probably get asked this quite frequently, but is there any possibility of PDF417 support in iOS soon? If not, care to explain why? Maybe I could give it a shot and offer a pull request. Thanks! <issue_comment>username_1: Hey @username_0 did you ever give this a shot? <issue_comment>username_0: Unfortunately I did not find the time. Fortunately, I did find a port to JS for ZXING PDF417 scanning. This is really perfect because I was planning to build my app with Cordova, so now I just have to understand how to use the thing, as the docs are non-existent. For the curious: https://github.com/PeculiarVentures/js-zxing-pdf417 <issue_comment>username_2: hey @username_0 did you ever find a way to get the ZXING PDF417 port to work properly? By looking at the demo and such, it looks like you're just supposed to upload a file for it to decode, not use your camera to detect the barcode... <issue_comment>username_3: @username_0 is js-zxing live-stream recognition, or do we have to take a picture and then run the recognition? <issue_comment>username_4: PDF_417 is supported in 6.0.6.<issue_closed>
{'fraction_non_alphanumeric': 0.0500848896434635, 'fraction_numerical': 0.025466893039049237, 'mean_word_length': 4.751219512195122, 'pattern_counts': {'":': 0, '<': 8, '<?xml version=': 0, '>': 8, 'https://': 1, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 1, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '10044254', 'n_tokens_mistral': 333, 'n_tokens_neox': 317, 'n_words': 183}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Add artifact creation test handler username_0: Depends on #1878, so review/merge that one first. This adds a new endpoint to the apitest REST API with which plugin developers can create new artifacts. <issue_comment>username_1: could you pull from master? <issue_comment>username_0: Done! <issue_comment>username_1: Looks good. Do you think it's worth adding a test for when the artifact can't be created? <issue_comment>username_0: Good call! There was a bug in oauth which prevented sending a useful error to the consumer. That may be why we were sometimes not seeing the error messages on the plugin.
{'fraction_non_alphanumeric': 0.04559748427672956, 'fraction_numerical': 0.014150943396226415, 'mean_word_length': 5.434343434343434, 'pattern_counts': {'":': 0, '<': 6, '<?xml version=': 0, '>': 6, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '20747812', 'n_tokens_mistral': 173, 'n_tokens_neox': 162, 'n_words': 92}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Failed to decode vars: invalid character '\'' looking for beginning of value username_0: Copied and pasted from the example using changelog with the --vars option.
```
C:\Users\jerry\git-go\go-bin-rpm>changelog md --out=CHANGELOG.md --vars='{"name":"changelog"}'
Failed to decode vars: invalid character '\'' looking for beginning of value
```
<issue_comment>username_1:
```sh
[username_1@pc2 changelog] $ changelog md --out=CHANGELOG.md --vars='{"name":"changelog2"}'
[username_1@pc2 changelog] $ changelog md --out=CHANGELOG.md --vars='{"name":"changelog"}'
```
So weird. Can you give me more details? `go env` / `systeminfo | findstr /B /C:"OS Name" /C:"OS Version"` <issue_comment>username_0: I am grateful to you for publishing your wonderful tools, but you never responded to this: https://github.com/username_1/go-github-release/issues/20 I think the tools are amazing, but I can't promote them to the community if you don't have time or interest in supporting them. Please let me know. <issue_comment>username_1:
```
Microsoft Windows [Version 6.3.9600]
(c) 2013 Microsoft Corporation. All rights reserved.

C:\Users\vagrant>changelog md --out=CHANGELOG.md --vars='{"name":"changelog"}'
Failed to decode vars: invalid character '\'' looking for beginning of value

C:\Users\vagrant>changelog md --out=CHANGELOG.md --vars='{"name":"changelog"}'
Failed to decode vars: invalid character '\'' looking for beginning of value

C:\Users\vagrant>changelog md --out=CHANGELOG.md --vars='{"name":"changelog"}
Failed to decode vars: invalid character '\'' looking for beginning of value

C:\Users\vagrant>changelog md --out=CHANGELOG.md --vars="{\"name\":\"changelog\" }"
Changelog file does not exist.

C:\Users\vagrant>
```
Looks like an issue with Windows.<issue_closed>
{'fraction_non_alphanumeric': 0.14022739577693558, 'fraction_numerical': 0.011911207363291824, 'mean_word_length': 4.451327433628318, 'pattern_counts': {'":': 7, '<': 6, '<?xml version=': 0, '>': 12, 'https://': 1, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '25989086', 'n_tokens_mistral': 607, 'n_tokens_neox': 595, 'n_words': 189}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: update @react-native-community/picker to @react-native-picker/picker username_0: <issue_comment>username_1: Eager to get this merged. Causing a bunch of annoying warnings on my tests. PR looks good to me 👍 <issue_comment>username_2: didn't realize they moved the repo - any idea why? let's just stick with 1.6.0 for now <issue_comment>username_0: Here is the long discussion in which they decided to move all the packages to their individual repos: https://github.com/react-native-community/discussions-and-proposals/blob/master/partners/0001-organization-repository-policy.md <issue_comment>username_2: thanks - can you change it to 1.6.0 so we don't introduce any unnecessary bugs in this small PR? <issue_comment>username_0: @username_2 Done. <issue_comment>username_3: Actually ```@react-native-picker/picker``` doesn't have a 1.6.0 version ![image](https://user-images.githubusercontent.com/13150168/98379026-92943f80-2025-11eb-8528-3adb5a06e904.png) <issue_comment>username_1: I see a 1.6.0 ![image](https://user-images.githubusercontent.com/59175439/98527713-43434e80-2273-11eb-91da-c0d18f0f7190.png) https://github.com/react-native-picker/picker/tags?after=v1.6.6 <issue_comment>username_3: Looking into the 1.6.x versions, package.json refers to the old package name ```@react-native-community/picker```: https://github.com/react-native-picker/picker/commit/8fd46b69eaf3089f584233fb40ce01abc0506610#diff-7ae45ad102eab3b6d7e7896acd08c427a9b25b346470d7bc6507b6481575d519R2 The CI test fails because a 1.6.x version of ```@react-native-picker/picker``` doesn't exist; it starts at 1.8.2. <issue_comment>username_2: @username_1 those are GitHub release tags, not the same as versions published to npm. <issue_comment>username_2: thanks for the heads-up on this @username_0 - handled in 8.0.4
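For reference, the consumer-side change under discussion is just the import path; a minimal sketch follows (the wrapper component is hypothetical, and per the thread the new scope is only published to npm from 1.8.2 onward):
```tsx
// Before the repo move: import { Picker } from '@react-native-community/picker'
// After the move (update package.json to @react-native-picker/picker >= 1.8.2):
import React from 'react';
import { Picker } from '@react-native-picker/picker';

// Hypothetical wrapper showing that the component API is unchanged;
// only the package scope differs.
export function FruitPicker(props: { value: string; onChange: (v: string) => void }) {
  return (
    <Picker selectedValue={props.value} onValueChange={(v) => props.onChange(String(v))}>
      <Picker.Item label="Apple" value="apple" />
      <Picker.Item label="Banana" value="banana" />
    </Picker>
  );
}
```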
{'fraction_non_alphanumeric': 0.10583742498636116, 'fraction_numerical': 0.10420076377523187, 'mean_word_length': 5.457746478873239, 'pattern_counts': {'":': 0, '<': 12, '<?xml version=': 0, '>': 12, 'https://': 5, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '29437327', 'n_tokens_mistral': 728, 'n_tokens_neox': 624, 'n_words': 156}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>username_0: From the release notes: "Possibly breaking for TypeScript users: If you use TypeScript and don't already include the dom library in your tsconfig, you need to do this now." See https://github.com/sindresorhus/ky/pull/295 <issue_comment>username_1: I'm wondering if there is a way to add a shim via the @nuxt/http package? /cc @username_3 @username_2 <issue_comment>username_2: I think all Nuxt TS users will already have 'dom' in their `tsconfig.json` - or if not, then they should... https://typescript.nuxtjs.org/guide/setup.html#configuration <issue_comment>username_3: I have exactly the same POV as @username_2 and would have answered exactly the same :)
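If a shim were wanted anyway, one possible sketch (the file name is hypothetical, and this assumes the declaration file is picked up by the consumer's compilation) is a triple-slash lib reference, which pulls in the DOM types just like listing `dom` under `compilerOptions.lib` in `tsconfig.json`:
```ts
// shim-dom.d.ts (hypothetical file name)
// Pulls the DOM lib types into the compilation so ky's types resolve,
// equivalent to adding "dom" to compilerOptions.lib in tsconfig.json.
/// <reference lib="dom" />
```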
{'fraction_non_alphanumeric': 0.08092485549132948, 'fraction_numerical': 0.014450867052023121, 'mean_word_length': 5.078947368421052, 'pattern_counts': {'":': 0, '<': 5, '<?xml version=': 0, '>': 5, 'https://': 2, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '7266866', 'n_tokens_mistral': 212, 'n_tokens_neox': 200, 'n_words': 84}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Missing required key 'Events' username_0: Hi, I get `Missing required key 'Events' ...` when I try to run `sls s3deploy...` This is my configuration:
```
provider:
  name: aws
  runtime: nodejs8.10
```
The plugin is correctly added to `plugins`, and in `customs` I have:
```
iamRoleStatements:
  - Effect: Allow
    Action:
      - s3:GetBucketNotification
      - s3:PutBucketNotification
    Resource: arn:aws:s3:::${self:provider.environment.S3_BUCKET_ASSETS_NAME}
```
My function is:
```
assetsLog:
  handler: src/functions/assets/log.handler
  name: ${self:service}-assets-log
  events:
    - existingS3:
        bucket: ${self:provider.environment.S3_BUCKET_ASSETS_NAME}
        event: s3:ObjectCreated:*
        rules:
          - prefix: src/
          - suffix: .txt
```
No problems with `sls deploy`... I need to run `s3deploy` with the `--aws-profile` param, like this: `sls s3deploy --aws-profile xxxxx --verbose` So the error is:
```
$ sls s3deploy --aws-profile xxxxx --verbose
Serverless: beforeFunctions --> building ...
Serverless: beforeFunctions <-- Complete, built 1 events.
Serverless: functions --> prepare to be executed by s3 buckets ...
policyId exS3-v2-aicreo-dashboard-sls-assets-log-aicreo-dashboard-sls-assets-dev
Serverless: functions <-- built 0 events across 1 buckets.
Serverless: beforeS3 -->
Serverless: beforeS3 <--
Serverless: s3 --> initiate requests ...
Unhandled rejection Error: aicreo-dashboard-sls-assets-dev Missing required key 'Events' in params.NotificationConfiguration.LambdaFunctionConfigurations[0]
    at module.exports.logError (/usr/local/lib/node_modules/serverless/lib/classes/Error.js:92:11)
    at initializeErrorReporter.then.catch.e (/usr/local/lib/node_modules/serverless/bin/serverless:64:3)
    at tryCatcher (/usr/local/lib/node_modules/serverless/node_modules/bluebird/js/release/util.js:16:23)
    at Promise._settlePromiseFromHandler (/usr/local/lib/node_modules/serverless/node_modules/bluebird/js/release/promise.js:512:31)
    at Promise._settlePromise (/usr/local/lib/node_modules/serverless/node_modules/bluebird/js/release/promise.js:569:18)
    at Promise._settlePromise0 (/usr/local/lib/node_modules/serverless/node_modules/bluebird/js/release/promise.js:614:10)
    at Promise._settlePromises (/usr/local/lib/node_modules/serverless/node_modules/bluebird/js/release/promise.js:690:18)
    at _drainQueueStep (/usr/local/lib/node_modules/serverless/node_modules/bluebird/js/release/async.js:138:12)
    at _drainQueue (/usr/local/lib/node_modules/serverless/node_modules/bluebird/js/release/async.js:131:9)
    at Async._drainQueues (/usr/local/lib/node_modules/serverless/node_modules/bluebird/js/release/async.js:147:5)
    at Immediate.Async.drainQueues [as _onImmediate] (/usr/local/lib/node_modules/serverless/node_modules/bluebird/js/release/async.js:17:14)
    at processImmediate (timers.js:632:19)
    at process.topLevelDomainCallback (domain.js:120:23)
```
Thank you in advance! <issue_comment>username_1: Similar issue. Here are more details on the stack. Notice Events: undefined below.
C:\work\dev\aws\serverless\aws-node-upload-s3-postprocess>sls s3deploy
Serverless: Load command config
Serverless: Load command config:credentials
Serverless: Load command create
Serverless: Load command install
Serverless: Load command package
Serverless: Load command deploy
Serverless: Load command deploy:function
Serverless: Load command deploy:list
Serverless: Load command deploy:list:functions
Serverless: Load command invoke
Serverless: Load command invoke:local
Serverless: Load command info
Serverless: Load command logs
Serverless: Load command metrics
Serverless: Load command print
Serverless: Load command remove
Serverless: Load command rollback
Serverless: Load command rollback:function
Serverless: Load command slstats
Serverless: Load command plugin
Serverless: Load command plugin
Serverless: Load command plugin:install
Serverless: Load command plugin
Serverless: Load command plugin:uninstall
Serverless: Load command plugin
Serverless: Load command plugin:list
Serverless: Load command plugin
Serverless: Load command plugin:search
Serverless: Load command config
Serverless: Load command config:credentials
Serverless: Load command rollback
Serverless: Load command rollback:function
Serverless: Load command s3deploy
Serverless: Load command s3eventremove
Serverless: Invoke s3deploy
Serverless: beforeFunctions --> building ...
Serverless: beforeFunctions <-- Complete, built 1 events.
Serverless: functions --> prepare to be executed by s3 buckets ...
Serverless: [AWS lambda 200 0.289s 0 retries] getPolicy({ FunctionName: 'dev-aws-node-upload-s3-postprocess-postprocess' })
policyId exS3-v2-dev-aws-node-upload-s3-postprocess-postprocess-signed-uploads-downloads-target
Serverless: functions <-- built 0 events across 1 buckets.
Serverless: beforeS3 -->
Serverless: [AWS s3 200 0.329s 0 retries] getBucketNotificationConfiguration({ Bucket: 'signed-uploads-downloads-target' })
Serverless: beforeS3 <--
Serverless: s3 --> initiate requests ...
Serverless: [AWS s3 undefined 0.004s 0 retries] putBucketNotificationConfiguration({
  Bucket: 'signed-uploads-downloads-target',
  NotificationConfiguration: {
    TopicConfigurations: [ [length]: 0 ],
    QueueConfigurations: [ [length]: 0 ],
    LambdaFunctionConfigurations: [
      { Id: '3ce0aeab-dff0-4bbd-8bb3-223e34bc0d3e',
        LambdaFunctionArn: 'arn:aws:lambda:us-east-1:984554769236:function:dev-aws-node-upload-s3-postprocess-postprocess',
        Events: [ 's3:ObjectCreated:*', [length]: 1 ],
        Filter: { Key: { FilterRules: [ { Name: 'Suffix', Value: '.json' }, [length]: 1 ] } } },
      { Id: 'exS3-v2--b509b5b332cfefe55f264d0a69339edc',
        LambdaFunctionArn: 'arn:aws:lambda:us-east-1:984554769236:function:dev-aws-node-upload-s3-postprocess-postprocess',
        Events: undefined,
        Filter: { Key: { FilterRules: [ { Name: 'suffix', Value: '.json' }, [length]: 1 ] } } },
      [length]: 2 ] } })

Error --------------------------------------------------

signed-uploads-downloads-target Missing required key 'Events' in params.NotificationConfiguration.LambdaFunctionConfigurations[1]

For debugging logs, run again after setting the "SLS_DEBUG=*" environment variable.
Stack Trace --------------------------------------------

Error: signed-uploads-downloads-target Missing required key 'Events' in params.NotificationConfiguration.LambdaFunctionConfigurations[1]
    at module.exports.logError (C:\Users\monst\AppData\Roaming\npm\node_modules\serverless\lib\classes\Error.js:92:11)
    at initializeErrorReporter.then.catch.e (C:\Users\monst\AppData\Roaming\npm\node_modules\serverless\bin\serverless:64:3)
    at runCallback (timers.js:705:18)
    at tryOnImmediate (timers.js:676:5)
    at processImmediate (timers.js:658:5)
    at process.topLevelDomainCallback (domain.js:120:23)
From previous event:
    at C:\Users\monst\AppData\Roaming\npm\node_modules\serverless\bin\serverless:62:9
    at Object.<anonymous> (C:\Users\monst\AppData\Roaming\npm\node_modules\serverless\bin\serverless:65:4)
    at Module._compile (internal/modules/cjs/loader.js:689:30)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:700:10)
    at Module.load (internal/modules/cjs/loader.js:599:32)
    at tryModuleLoad (internal/modules/cjs/loader.js:538:12)
    at Function.Module._load (internal/modules/cjs/loader.js:530:3)
    at Function.Module.runMain (internal/modules/cjs/loader.js:742:12)
    at startup (internal/bootstrap/node.js:283:19)
    at bootstrapNodeJSCore (internal/bootstrap/node.js:743:3)
<issue_comment>username_1: I found the issue with my deployment. It was spaces in the YAML.<issue_closed> <issue_comment>username_0: Finally, I found the issue with my deployment... The mistake in my function was on this line:
```
event: s3:ObjectCreated:*
```
The right way:
```
events:
  - s3:ObjectCreated:*
```
I hope this can help someone!
{'fraction_non_alphanumeric': 0.12004312926800048, 'fraction_numerical': 0.03198754043368875, 'mean_word_length': 3.6045228902371758, 'pattern_counts': {'":': 0, '<': 13, '<?xml version=': 0, '>': 16, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 7, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '22777318', 'n_tokens_mistral': 2785, 'n_tokens_neox': 2559, 'n_words': 573}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Specify AMI Name when creating image by ImportImage. username_0: Is it possible to specify the AMI name when creating an image using ImportImage? ImportImageRequest does not have a "Name" parameter, while CreateImageRequest has it.
```
CreateImageRequest createImageRequest = new CreateImageRequest();
createImageRequest.withInstanceId("i-xxxxxxxxxxxxxxxxx")
                  .withName("myaminame")
                  .withDescription("this is my ami");
```
Or is it possible to rename it later? Thanks. <issue_comment>username_1: Howdy! Would you be able to ask this question on StackOverflow ([See: Getting Help](https://github.com/aws/aws-sdk-java#getting-help))? Unfortunately we're only arbiters of the SDK itself. We're not so familiar with the details of each service.<issue_closed> <issue_comment>username_0: Hello username_1! Thanks for your advice. I opened [a question on StackOverflow](https://stackoverflow.com/questions/45788383/how-to-specify-ami-name-when-creating-image-using-importimage). Now I'll close this one. Thanks.
{'fraction_non_alphanumeric': 0.09108159392789374, 'fraction_numerical': 0.011385199240986717, 'mean_word_length': 5.169590643274854, 'pattern_counts': {'":': 0, '<': 5, '<?xml version=': 0, '>': 5, 'https://': 2, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 1, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '24764265', 'n_tokens_mistral': 311, 'n_tokens_neox': 288, 'n_words': 103}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: [Easy] code reformat and minor reorganization username_0:
### Summary
This pull request applies an auto-reformat, some manual reorganization, and renaming of modules and filenames in the code to the Google C++ style (as Arrow also follows the same style guide): https://google.github.io/styleguide/cppguide.html No functional changes. These changes depend on pull request #39
### Change Details
* Auto-reformat of code to Google C++ style.
* Renaming of files according to the style guide (.cc, .h, and lower case).
* Added the license header (Apache License, Version 2.0) to all the source files.
* Added lint checker files
* Some minor module reorganization
### Tests
Since there are no functional changes, I ran the existing unit tests: ctest --output-on-failure
<issue_comment>username_1: Love it and pulling the trigger on merging this. Apologies to @NicholasCorrado as it may require retouching #39, but hopefully it is not too onerous. <issue_comment>username_2: CircleCI failed after merging this. @NicholasCorrado @username_0 can you take a look? Were some of the previous changes overwritten? <issue_comment>username_0: Hi Yannis, yes, I am looking into it! It seems we did not overwrite the CircleCI config. However, I could see some problems with the Arrow libs based on the build logs. Let me check. <issue_comment>username_0: @username_2 We changed the Arrow version in the file install_arrow.sh as part of pull request #39, so I think we can try refreshing the Arrow cache again and see: https://github.com/UWHustle/hustle/blob/master/install_arrow.sh#L5 Let me try to refresh it. <issue_comment>username_0: @username_2 Created a PR to get the new Arrow cache to hold the stable version: #43
{'fraction_non_alphanumeric': 0.0633763320246775, 'fraction_numerical': 0.011217049915872126, 'mean_word_length': 4.309523809523809, 'pattern_counts': {'":': 0, '<': 7, '<?xml version=': 0, '>': 7, 'https://': 2, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '30419922', 'n_tokens_mistral': 523, 'n_tokens_neox': 491, 'n_words': 246}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: [Feature Request] Use data attribute over classes username_0: Is there any way to use `data-ph-no-capture` instead? Using CSS classes does the job, but brings a suboptimal experience in a sizable application and a diverse skill-set environment. Far too often, people make the mistake of removing the classes because they are "unused" from a styling perspective, without taking into consideration usage like the one today; after all, styling is what classes are meant to be used for. So I would like to use the `data-` attribute to prevent mistakes like those of the past, if that is possible.
{'fraction_non_alphanumeric': 0.04297520661157025, 'fraction_numerical': 0.001652892561983471, 'mean_word_length': 4.45945945945946, 'pattern_counts': {'":': 0, '<': 2, '<?xml version=': 0, '>': 2, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '9506708', 'n_tokens_mistral': 148, 'n_tokens_neox': 140, 'n_words': 94}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Markdown Meta + Head username_0: Hello, I'm trying to get the "meta" key of the frontmatter in markdown files hoisted up to the route level. Currently, a `meta` yaml keyword in a `.md` file is ignored, and a
```
<route>
meta:
  description: "Description"
</route>
```
is required elsewhere in the markdown file; perhaps I'm missing something? How could you control the layout that generates the markdown without a separate `<route>` component? One could provide a wrapper .vue for vite-plugin-md that hoists the meta up, but I have a feeling I'm missing something. Thanks. <issue_comment>username_1: You need to set `routeBlockLang: 'yaml'` in `vite.config.ts`'s Pages() plugin options, or use the lang attr like:
```
<route lang="yaml">
meta:
  customMeta: "value"
</route>
```
<issue_comment>username_0: Yes, sorry, I do have `routeBlockLang: 'yaml'` configured; what I'm after is for the `meta` attribute in the frontmatter of the markdown file to get hoisted up into the route _instead_ of having to both define frontmatter and use the `<route>` component: I'm after all configuration in the markdown frontmatter. <issue_comment>username_1: @username_2 should I add a markdown frontmatter parser to vite-plugin-pages? or we can have a resolver from vite-plugin-md? WDYT <issue_comment>username_2: For me, I am doing it the other way around: I define meta in the frontmatter, where `vite-plugin-md` can infer it into the head with `enabledHead: true`. Then I apply the frontmatter to the route's meta for other route-based logic. You can see how I do it on my site here: https://github.com/username_2/username_2.me/blob/9f0434ea0c06699f1e9f2b31c2a107dc6d3b50e9/vite.config.ts#L58-L60 https://github.com/username_2/username_2.me/blob/9f0434ea0c06699f1e9f2b31c2a107dc6d3b50e9/vite.config.ts#L70 Hope that works for you <issue_comment>username_0: Thank you, that fits exactly where I thought I was missing something.<issue_closed>
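A minimal sketch of the approach described in the last comment, for reference. It assumes vite-plugin-pages' `extendRoute` hook and the `gray-matter` package for frontmatter parsing; the exact shape in the linked config may differ:
```ts
// vite.config.ts (sketch): hoist markdown frontmatter into route meta,
// so no <route> block is needed inside the .md files themselves.
import { resolve } from 'path'
import { readFileSync } from 'fs'
import { defineConfig } from 'vite'
import Pages from 'vite-plugin-pages'
import matter from 'gray-matter'

export default defineConfig({
  plugins: [
    Pages({
      extensions: ['vue', 'md'],
      extendRoute(route) {
        // route.component is a root-relative path like '/pages/post.md'
        const path = resolve(__dirname, route.component.slice(1))
        if (path.endsWith('.md')) {
          // Parse the YAML frontmatter and merge it into the route's meta.
          const { data } = matter(readFileSync(path, 'utf-8'))
          route.meta = Object.assign(route.meta || {}, { frontmatter: data })
        }
        return route
      },
    }),
  ],
})
```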
{'fraction_non_alphanumeric': 0.0818273092369478, 'fraction_numerical': 0.033634538152610444, 'mean_word_length': 4.694285714285714, 'pattern_counts': {'":': 0, '<': 14, '<?xml version=': 0, '>': 14, 'https://': 2, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '8870845', 'n_tokens_mistral': 673, 'n_tokens_neox': 624, 'n_words': 253}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: README: update 'tl cat' example to work again username_0: The mutable 'master' reference has mutated since it was added to the readme, so unsurprisingly the example prints an error now. Update the example to something that is expected to verify correctly, and hopefully provide a better example of usage. The example project chosen doesn't tag releases, so we're stuck with an opaque commit hash. <issue_comment>username_1: Thanks!
{'fraction_non_alphanumeric': 0.04842105263157895, 'fraction_numerical': 0.004210526315789474, 'mean_word_length': 5.025316455696203, 'pattern_counts': {'":': 0, '<': 3, '<?xml version=': 0, '>': 3, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '20873152', 'n_tokens_mistral': 127, 'n_tokens_neox': 120, 'n_words': 67}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: SemanticHighlighting not shown in Markdown preview in TypeScript code username_0: Issue Type: <b>Bug</b> When I use `"editor.semanticHighlighting.enabled": true,` TypeScript looks great, but now the Markdown code looks bad in comparison. Please see the screenshot. ![Screenshot from 2020-02-24 14-31-46](https://user-images.githubusercontent.com/12832280/75160748-7f00ad00-5712-11ea-963d-d53655328848.png) VS Code version: Code 1.42.1 (c47d83b293181d9be64f27ff093689e8e7aed054, 2020-02-11T14:50:36.977Z) OS version: Linux x64 4.18.0-147.5.1.el8_1.x86_64
<details>
<summary>System Info</summary>

|Item|Value|
|---|---|
|CPUs|Intel(R) Core(TM) i7-4770S CPU @ 3.10GHz (8 x 3681)|
|GPU Status|2d_canvas: enabled<br>flash_3d: enabled<br>flash_stage3d: enabled<br>flash_stage3d_baseline: enabled<br>gpu_compositing: enabled<br>metal: disabled_off<br>multiple_raster_threads: enabled_on<br>oop_rasterization: disabled_off<br>protected_video_decode: unavailable_off<br>rasterization: disabled_software<br>skia_renderer: disabled_off<br>surface_control: disabled_off<br>surface_synchronization: enabled_on<br>video_decode: unavailable_off<br>viz_display_compositor: enabled_on<br>viz_hit_test_surface_layer: disabled_off<br>webgl: enabled<br>webgl2: enabled|
|Load (avg)|1, 1, 0|
|Memory (System)|7.50GB (2.44GB free)|
|Process Argv|--no-sandbox --unity-launch /home/david/sites/Link to typing objects cheat-sheet.md|
|Screen Reader|no|
|VM|0%|
</details>
<details><summary>Extensions (9)</summary>

Extension|Author (truncated)|Version
---|---|---
vscode-deno|axe|2.0.4
spellright|ban|3.0.50
vscode-eslint|dba|2.1.1
prettier-vscode|esb|3.20.0
shell-format|fox|7.0.1
debugger-for-chrome|msj|4.12.6
LiveServer|rit|5.6.1
shellcheck|tim|0.9.0
quokka-vscode|Wal|1.0.279
</details>
<!-- generated by issue reporter -->
<issue_comment>username_1: @username_2 The markdown extension needs a command or API to get the semantic highlighting colors in order to support this <issue_comment>username_2: @username_1 Assigning back to you as it's a feature request for markdown. I created https://github.com/microsoft/vscode/issues/91375 for the missing API to programmatically get semantic tokens. <issue_comment>username_1: See #56356 for the API feature request to get colors as they appear in the editor. Try this extension to get closer to VS Code's highlighting (but still without semantic highlighting): https://marketplace.visualstudio.com/items?itemName=bierner.markdown-shiki <issue_comment>username_0: @username_1 I have been looking at bierner.markdown-shiki and it is great. Is there a way you know of to convert my MD to a PDF with this highlighting? Cheers.
{'fraction_non_alphanumeric': 0.11555392516507704, 'fraction_numerical': 0.07263389581804842, 'mean_word_length': 5.800498753117207, 'pattern_counts': {'":': 1, '<': 34, '<?xml version=': 0, '>': 34, 'https://': 3, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '15954466', 'n_tokens_mistral': 1092, 'n_tokens_neox': 964, 'n_words': 228}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Index page keeps scrolling when the mobile menu is open username_0:
## Description
When the mobile menu is open, the page will keep scrolling if you scroll
### Expected Behavior
The page should be locked to the scroll position where it was when the mobile menu was opened
### Actual Behavior
The page keeps scrolling, leading to bad UX
### Steps to Reproduce
1. Decrease screen width so that mobile view is visible
2. Click on the hamburger menu in the top right corner and scroll in any direction
3. Exit from the menu and note that you are not in the same position that you were when the menu opened
### Environment
Seen on `dev.qhacks.io`
- Version:
- Platform: Chrome, MacOS<issue_closed>
{'fraction_non_alphanumeric': 0.04774535809018567, 'fraction_numerical': 0.005305039787798408, 'mean_word_length': 3.8089171974522293, 'pattern_counts': {'":': 0, '<': 3, '<?xml version=': 0, '>': 3, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '8406003', 'n_tokens_mistral': 217, 'n_tokens_neox': 194, 'n_words': 116}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: demo version doesn't work after build on android 10 username_0:
**Godot version:**
**AdMob Plugin version:**
**Issue description:**
<!-- What happened and what was expected. -->
I downloaded the demo version, built it according to the instructions, and installed it; after clicking on the icon, a black window flashes and that's it.
<issue_comment>username_0: Found a solution: I used the wrong version of Godot.<issue_closed>
{'fraction_non_alphanumeric': 0.08779443254817987, 'fraction_numerical': 0.008565310492505354, 'mean_word_length': 5.077922077922078, 'pattern_counts': {'":': 0, '<': 5, '<?xml version=': 0, '>': 5, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 1, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '11561717', 'n_tokens_mistral': 131, 'n_tokens_neox': 116, 'n_words': 60}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: [SQLLINE-281] Make tmp dir removed after build finished username_0: The PR improves the tests so that tmp dirs are removed after the test/build finishes. Fixes #281 <issue_comment>username_1: Merged as 36025e4; fixes #281. I also force-pushed several older commits, adding author names to the commit messages of 3561eaf and 0a228fd.
{'fraction_non_alphanumeric': 0.056179775280898875, 'fraction_numerical': 0.0702247191011236, 'mean_word_length': 5.375, 'pattern_counts': {'":': 0, '<': 3, '<?xml version=': 0, '>': 3, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '4314383', 'n_tokens_mistral': 125, 'n_tokens_neox': 104, 'n_words': 47}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: VS Code, Flutter: New Project, always uses com.example.project_name username_0: Is there a way to set the com.example.project_name when the Flutter project is initially created with the command Flutter: New Project? This is used in several different files and I need it to be something like com.my_website.project_name instead. Is it correct to update all instances of com.example.project_name? ![image](https://user-images.githubusercontent.com/50309714/68521046-137df700-026b-11ea-92f4-ae1f69a52a3e.png) <issue_comment>username_1: There are some settings that begin with `dart.flutterCreate...` that let you control this; see: https://dartcode.org/docs/settings/#dartfluttercreateandroidlanguage Let me know if anything isn't clear or this doesn't work as expected.<issue_closed> <issue_comment>username_0: Thanks. This will help next time I create a new project. To fix the project I have already created, is it correct to update all the instances of com.example.crystal in the image? <issue_comment>username_1: Honestly, I'm not sure - I would expect so, but it's not something I'm totally familiar with. This [StackOverflow question](https://stackoverflow.com/questions/51534616/how-to-change-package-name-in-flutter) seems to have conflicting answers about it. If it doesn't work, the easiest way may be to create a new project and then copy your Dart files over to it (assuming you haven't customised the native mobile parts).
{'fraction_non_alphanumeric': 0.06983050847457627, 'fraction_numerical': 0.03254237288135593, 'mean_word_length': 5.0, 'pattern_counts': {'":': 0, '<': 6, '<?xml version=': 0, '>': 6, 'https://': 3, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '24517195', 'n_tokens_mistral': 447, 'n_tokens_neox': 400, 'n_words': 178}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: 'Etag' should be 'ETag' in Response.php username_0: Hi everybody, as in the subject. Please take a look: https://github.com/symfony/symfony/blob/master/src/Symfony/Component/HttpFoundation/Response.php#L859<issue_closed> <issue_comment>username_1: Well, header names are case-insensitive in HTTP (and Symfony handles them case-insensitively), so there is no bug there.
{'fraction_non_alphanumeric': 0.0960591133004926, 'fraction_numerical': 0.012315270935960592, 'mean_word_length': 5.4603174603174605, 'pattern_counts': {'":': 0, '<': 4, '<?xml version=': 0, '>': 4, 'https://': 1, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '21557744', 'n_tokens_mistral': 129, 'n_tokens_neox': 122, 'n_words': 38}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Additional chat client notices username_0:
## Feature Request
I'd like to add a few new message notices to listen for in the `twitch-chat-client` package:
- `bad_vip_max_vips_reached`
- `bad_vip_achievement_incomplete`
- `msg_channel_blocked`
- `msg_duplicate`
- `msg_emotonly`
- `msg_followersonly`
- `msg_followersonly_followed`
- `msg_followersonly_zero`
- `msg_followersonly`
- `msg_verified_email`
- `msg_timedout`
<!-- For feature requests, we don't impose any style. But please leave the headline! Please allow for at least 24 hours until someone can comment your issue before you send a pull request. -->
<issue_comment>username_1: Sure, I don't see why they should be left out. If you want to implement this, please make sure that for notices that can be responses to user-initiated commands, they should be resolving/rejecting the command's promise (see [this](https://github.com/twurple/twurple/blob/0f98ae6cb10fe60d4b3a9d08b027c05d80c06be9/packages/chat/src/ChatClient.ts#L1204-L1207) for an example). Additionally, 4.x is now feature frozen and will only receive bug fixes. Please implement new features only on the `main` branch.<issue_closed>
{'fraction_non_alphanumeric': 0.08216926869350863, 'fraction_numerical': 0.029580936729663106, 'mean_word_length': 4.855769230769231, 'pattern_counts': {'":': 0, '<': 5, '<?xml version=': 0, '>': 5, 'https://': 1, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '28870608', 'n_tokens_mistral': 425, 'n_tokens_neox': 392, 'n_words': 130}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Comprehensions string construction support
username_0: @username_1 Would you folks be open to extending the spec of comprehensions to support the construction of strings? Given that a string is effectively an array of char bytes, it seems like it fits the existing spec with regard to the construction of an array. Obviously the `import "strings"` built-in package is able to achieve this functionality, but it seems natural to include it in the language core. Proposed syntax follows:
```
strAry = ["bar", "buz"]

joinedStr: "\( str for str in strAry )"
```
Which would evaluate to:
```
joinedStr: "barbuz"
```
I would expect the following to be valid:
```
import "strings"

strAry = ["bar", "buz"]

joinedStr: "\( str for str in strAry )"
joinedStr: strings.Join(strAry, "")
```
<issue_comment>username_1: Personally I would expect there to be spaces between the strings. Is there precedent for this interpretation in other languages?

It seems to be very specific functionality to make it into the spec. Most of the recent features were neutral to the size of the spec, or even shrunk it a bit. This will have a fairly large impact with, seemingly, very little benefit. Note also that using `strings.Join` in your example is shorter, and in my view, also clearer. I would be more inclined to implement a feature that automatically inserts import statements (like goimports).

That said, we have been thinking about a reduce functionality. This cannot be done conveniently with a builtin, as CUE doesn't have lambdas (although they exist internally). Picking some arbitrary syntax for now, you would be able to write
```
joinedStr: for y in strAry with x="" reduce x+y
```
But also here, using `strings.Join` would be shorter and clearer.
<issue_comment>username_0: I think the introduction of `reduce` would satisfy my need. It could also be used to eliminate the existing comprehension syntax, since arrays and structs could be constructed in this manner too.

### detailed benefits and use case
I haven't composed a formal proof, but my intuition is that cuelang is largely a context-free grammar and could be modeled as a pushdown automaton, with the exception of function calls such as `len()` or `strings.Join()`. A function call such as `len()` or `strings.Join()` can be viewed as the definitive boundary where Turing-complete functionality may occur. If indeed cuelang can be considered context-free (again, proof is needed), then the time and space complexity are both theoretically bounded by n^2, where n is the length of the input. [Relevant paper](https://reader.elsevier.com/reader/sd/pii/S0019995868910875?token=F50F3180BFB955D89686C4C04639C094349F99413D22910FFC231C8952243F820BB9A680224B3BEC43A742C3793DED72)

Perhaps comprehensions don't fall into the context-free category either, which would throw a wrench in the benefits described in the following, but provided they do and my intuition is correct, the language could be applied in a variety of different ways very effectively and efficiently. For example, capturing IoT time series data across a diverse set of devices recording data in a variety of different formats, but ultimately being able to map data from one format to another and ultimately having a deterministic way of building a consistent data set to use as input for a function.
This is especially powerful when you consider data expressed as cuelang being captured and processed across a distributed system, since the values can be evaluated in any order, even evaluated twice, and ultimately yield the same result. For something as common as joining strings, it would be nice if crossing the Turing-complete function-call boundary wasn't necessary to represent it. <issue_comment>username_1: Builtin functions are typically not worse than `O(n^2)`, but certainly not exponential, and will not introduce Turing completeness by themselves. But, there are references, tuples, and conditionals (via comprehensions), which together make the language TC if unchecked, although it is pretty impractical in its usage. I haven't done so yet, but my plan is to prohibit primitive recursion. For instance, the test cases include an implementation of Fibonacci. This would then no longer be possible.

Either way, pertaining to this particular request, I suggest labeling it MaybeLater and reconsidering it after query extensions and reduce are implemented. <issue_comment>username_0: That sounds reasonable.<issue_closed> <issue_comment>username_0: @username_1 I don't believe I have permission to add labels. <issue_comment>username_2: This issue has been migrated to https://github.com/cue-lang/cue/issues/113. For more details about CUE's migration to a new home, please see https://github.com/cue-lang/cue/issues/1078.
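The `reduce` proposed in this thread is essentially a left fold. A rough Python analogue of `joinedStr: for y in strAry with x="" reduce x+y` (illustrative only — the CUE syntax above is explicitly provisional, and CUE itself has no lambdas):

```python
from functools import reduce

str_ary = ["bar", "buz"]

# Left fold with an empty-string accumulator, mirroring `with x="" reduce x+y`.
joined_str = reduce(lambda x, y: x + y, str_ary, "")
assert joined_str == "barbuz"
```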
{'fraction_non_alphanumeric': 0.05213074058750517, 'fraction_numerical': 0.021307405875051717, 'mean_word_length': 4.708382526564344, 'pattern_counts': {'":': 0, '<': 9, '<?xml version=': 0, '>': 9, 'https://': 3, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '533639', 'n_tokens_mistral': 1311, 'n_tokens_neox': 1177, 'n_words': 685}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Set external node ip username_0: This is a followup to #609, which provides a routable IP address: ```console lima-rancher-desktop:~# ip addr show eth0 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 52:55:55:de:ac:46 brd ff:ff:ff:ff:ff:ff inet 192.168.5.15/24 scope global eth0 valid_lft forever preferred_lft forever inet6 fec0::5055:55ff:fede:ac46/64 scope site dynamic valid_lft 84211sec preferred_lft 12211sec inet6 fe80::5055:55ff:fede:ac46/64 scope link valid_lft forever preferred_lft forever lima-rancher-desktop:~# ip addr show lima0 3: lima0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 52:55:55:06:42:0a brd ff:ff:ff:ff:ff:ff inet 192.168.205.2/24 scope global lima0 valid_lft forever preferred_lft forever inet6 fe80::5055:55ff:fe06:420a/64 scope link valid_lft forever preferred_lft forever ``` The problem is that `k3s` is using the `eth0` address for external IP addresses: ```console $ kubectl get svc -n kube-system traefik NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE traefik LoadBalancer 10.43.146.174 192.168.5.15 80:30206/TCP,443:32104/TCP 23h ``` We can select the interface to use when we start up: ```console $ grep flannel /etc/init.d/k3s command_args="server --https-listen-port 6443 --flannel-iface lima0 >>/var/log/k3s 2>&1" ``` I think we should maybe not rely on the default `lima0` name, but set it explicitly in our `lima.yaml` to e.g. `rd0`: ```yaml networks: - lima: shared interface: rd0 ``` Adding `--flannel-iface` to an already initialized node doesn't work; it has to be specified during initial startup. So we will need to reset the node during upgrades (I tested with just `rm -rf /var/lib/rancher/k3s/server` before I restarted the server). Another open question is: how far back is `--flannel-iface` supported? Does it work with `v1.16.7`? We need to test, and if it doesn't, determine how we deal with it. <issue_comment>username_0: @dweomer confirmed that it works all the way back to 1.16.7: https://github.com/k3s-io/k3s/blob/v1.16.7+k3s1/pkg/cli/cmds/agent.go#L93-L97
{'fraction_non_alphanumeric': 0.09776055124892334, 'fraction_numerical': 0.08311800172265288, 'mean_word_length': 3.391304347826087, 'pattern_counts': {'":': 0, '<': 5, '<?xml version=': 0, '>': 8, 'https://': 1, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 4, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '25291253', 'n_tokens_mistral': 992, 'n_tokens_neox': 837, 'n_words': 266}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: VitaSDK issues (✖) and psvsdk fixes (✔) username_0: ✖ No SDK versioning => no possible evolution for newer firmware/toolchain ✔ Freeze the while vita build environment (tools+headers) using a **tagged** docker image ✖ No functional tests => possible regression on code changes/fix ✔ Enforce checking at CI level against "golden" files ✖ Require you to compile a vanilla arm-gcc just to define some default flags(`-D__vita__`) ✔ Wrap the official ARM GCC to add the required flags (see `psv-gcc`) ✖ Required dependencies (libelf) may conflict with your host ✔ Build in a docker to get an isolated build environment ✖ Can only be (officially) usable with a CMake wrapper ✔ Keep you free to use any build process (shell, makefile, cmake...) ✖ Heterogeneous toolchain naming (`vita-pack-vpk`, `vita-elf-create` ...)? ✔ All binaries follow the `psv-$type` format (`psv-sfo`, `psv-velf` ...) ✖ No offline documentation ✔ All tools and formats have a manual (see `man psv-sfo` or `man sfo`) ✖ Heterogeneous source (return code, indentation, ...) ✔ Enforce formatting rule at CI level using clang-format ✖ Heterogeneous naming (Modules, Libraries) ✔ Sources and documentations are *mostly* made from scratch to be unified ✖ Platform specific issues (OSX, Windows) ✔ Use Linux (via docker) as common platform (Windows can use Subsystem for Linux) Any ideas are welcomes
{'fraction_non_alphanumeric': 0.08728522336769759, 'fraction_numerical': 0.0006872852233676976, 'mean_word_length': 3.869565217391304, 'pattern_counts': {'":': 0, '<': 2, '<?xml version=': 0, '>': 4, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '6327956', 'n_tokens_mistral': 479, 'n_tokens_neox': 450, 'n_words': 208}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Persistence (client-side) username_0: Sorry if this should be obvious, but is there a way to make the session persist (client-side)? Currently if the user closes their browser, the session is gone. I can’t find anywhere that the `Expires` option is added to the session cookie.
{'fraction_non_alphanumeric': 0.06389776357827476, 'fraction_numerical': 0.003194888178913738, 'mean_word_length': 5.1568627450980395, 'pattern_counts': {'":': 0, '<': 2, '<?xml version=': 0, '>': 2, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '2259911', 'n_tokens_mistral': 83, 'n_tokens_neox': 80, 'n_words': 46}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Allow to push to multiple sources username_0: **Feature request** We have dev project and live addon project. The code source is same. We need an option to push somehow to dev project while developing, but there is no `clasp push` option to change destination. _Possible_ API (pseudocode): `clasp push --destination_clasp_config=.devproject.clasp.json` <issue_comment>username_1: Hello @username_0 , maybe this command can be useful for you: `clasp setting scriptId new-id` https://github.com/google/clasp#setting Or if you want to push multiple sources together, you can try this library that uses clasp: https://github.com/username_1/multi-clasp2 <issue_comment>username_0: Thanks, I'll try it and post any results here.
{'fraction_non_alphanumeric': 0.07474226804123711, 'fraction_numerical': 0.007731958762886598, 'mean_word_length': 4.713235294117647, 'pattern_counts': {'":': 0, '<': 4, '<?xml version=': 0, '>': 4, 'https://': 2, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '7955613', 'n_tokens_mistral': 231, 'n_tokens_neox': 220, 'n_words': 95}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: problems to run niftynet username_0: Hello, I am trying to test NiftyNet for the first time but I am unable to do it. I have configured the instalation according to this site (source code repository): https://niftynet.readthedocs.io/en/dev/installation.html I have sicessfuly downloaded the model, however, once I execute te command "python net_segment.py inference -c ~/niftynet/extensions/dense_vnet_abdominal_ct/config.ini" I get the follwing errors: .... -> physical GPU (device: 0, name: GeForce RTX 2070, pci bus id: 0000:01:00.0, compute capability: 7.5) INFO:niftynet: Initialising Dataset from 1 subjects... 2019-10-01 13:53:56.311601: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2019-10-01 13:53:56.312103: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 0 with properties: name: GeForce RTX 2070 major: 7 minor: 5 memoryClockRate(GHz): 1.725 pciBusID: 0000:01:00.0 2019-10-01 13:53:56.312156: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudart.so.10.0 2019-10-01 13:53:56.312167: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcublas.so.10.0 2019-10-01 13:53:56.312177: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcufft.so.10.0 2019-10-01 13:53:56.312186: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcurand.so.10.0 2019-10-01 13:53:56.312195: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcusolver.so.10.0 2019-10-01 13:53:56.312205: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcusparse.so.10.0 2019-10-01 13:53:56.312215: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudnn.so.7 2019-10-01 13:53:56.312256: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2019-10-01 13:53:56.312735: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2019-10-01 13:53:56.313199: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1763] Adding visible gpu devices: 0 2019-10-01 13:53:56.313229: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1181] Device interconnect StreamExecutor with strength 1 edge matrix: 2019-10-01 13:53:56.313233: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1187] 0 2019-10-01 13:53:56.313240: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1200] 0: N 2019-10-01 13:53:56.313345: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2019-10-01 13:53:56.313816: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2019-10-01 13:53:56.314271: **I tensorflow/core/common_runtime/gpu/gpu_device.cc:1326] Created TensorFlow device 
(/job:localhost/replica:0/task:0/device:GPU:0 with 6821 MB memory) -> physical GPU (device: 0, name: GeForce RTX 2070, pci bus id: 0000:01:00.0, compute capability: 7.5)** INFO:niftynet: Restoring parameters from /home/yunior/niftynet/models/dense_vnet_abdominal_ct/models/model.ckpt-3000 **2019-10-01 13:53:56.630423: W tensorflow/core/common_runtime/colocation_graph.cc:1016] Failed to place the graph without changing the devices of some resources. Some of the operations (that had to be colocated with resource generating operations) are not supported on the resources' devices.** Current candidate devices are [ /job:localhost/replica:0/task:0/device:CPU:0]. See below for details of this colocation group: Colocation Debug Info: Colocation group had the following types and supported devices: Root Member(assigned_device_name_index_=-1 requested_device_name_='/device:GPU:0' assigned_device_name_='' resource_device_name_='/device:GPU:0' supported_device_types_=[CPU] possible_devices_=[] IteratorGetNext: CPU GPU XLA_CPU XLA_GPU OneShotIterator: CPU IteratorToStringHandle: CPU GPU XLA_CPU XLA_GPU Colocation members, user-requested devices, and framework assigned devices, if any: worker_0/validation/OneShotIterator (OneShotIterator) /device:GPU:0 worker_0/validation/IteratorToStringHandle (IteratorToStringHandle) /device:GPU:0 worker_0/validation/IteratorGetNext (IteratorGetNext) /device:GPU:0 2019-10-01 13:53:57.360882: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudnn.so.7 2019-10-01 13:53:57.991115: E tensorflow/stream_executor/cuda/cuda_dnn.cc:329] Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR 2019-10-01 13:53:57.998596: E tensorflow/stream_executor/cuda/cuda_dnn.cc:329] Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR 2019-10-01 13:53:58.001047: E tensorflow/stream_executor/cuda/cuda_dnn.cc:329] Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR 2019-10-01 13:53:58.001075: W ./tensorflow/stream_executor/stream.h:1995] attempting to perform DNN operation using StreamExecutor without DNN support INFO:niftynet: cleaning up... INFO:niftynet: stopping sampling threads ...... my configuration is as follows CPU conf. intel I7 (8 cores) and 64GB RAM GPU conf. GeForce RTX 2070, 8GB, 2304 cores In addition I have installed the gpu-version of tensorflow to use de GPU por calculations I can imaging that errors are related to memory issues in the GPU. I wonder whether is there a way to use the memory on the CPU as well. Could you please give me a feedback. Note I am not an expert using python thanks in advance <issue_comment>username_1: As per Tensorflow issue [#24496](https://github.com/tensorflow/tensorflow/issues/24496) it seems to be a tensorflow problem. Could you please try and run [this](https://github.com/tensorflow/docs/blob/master/site/en/tutorials/keras/classification.ipynb) tensorflow example and let us know if the same error appears there. <issue_comment>username_0: Thank you very much for the reply. I have tried this and I got no errors at all. The script did the predictions and this are the final message: ... Test accuracy: 0.8813 (28, 28) (1, 28, 28) [[3.5098125e-04 1.3001217e-15 9.9916017e-01 4.8920496e-11 4.2484555e-04 5.2356322e-12 6.4001571e-05 5.9205704e-17 5.7315066e-11 2.8146843e-15]] I have an additional comment that might help to figure out the problem with NiftyNet. I faced problems with tf at the beguining. 
The thing is that I have 1.14.0 version of tf and apparently NiftyNet have troubles with this version. As a simple solution the program suggested to use tf.compat.v1.Session in several subscripts of the software. Therefore I used: import tensorflow.compat.v1 as tf tf.disable_v2_behavior() instead of import tensorflow as tf Then errors were with tensorflow session were fixed Could it be the source of the current problem? Thank you in advance <issue_comment>username_0: Hi, i did some progress, i think. I have upgraded nvidia drivers and cuda toolkit. At leas I do not see the previous errors anymore. Now I have nvidia-418, cuda-10.1 and tf 1.14. However I have a new error (see below) ...... Traceback (most recent call last): File "net_segment.py", line 5, in <module> from niftynet import main File "/home/yunior/NiftyNet/niftynet/__init__.py", line 62, in <module> import niftynet.utilities.user_parameters_parser as user_parameters_parser File "/home/yunior/NiftyNet/niftynet/utilities/user_parameters_parser.py", line 22, in <module> from niftynet.utilities.user_parameters_default import \ File "/home/yunior/NiftyNet/niftynet/utilities/user_parameters_default.py", line 10, in <module> from niftynet.engine.image_window_dataset import SMALLER_FINAL_BATCH_MODE File "/home/yunior/NiftyNet/niftynet/engine/image_window_dataset.py", line 18, in <module> from niftynet.layer.base_layer import Layer File "/home/yunior/NiftyNet/niftynet/layer/base_layer.py", line 11, in <module> from niftynet.engine.application_variables import RESTORABLE File "/home/yunior/NiftyNet/niftynet/engine/application_variables.py", line 10, in <module> from tensorflow.contrib.framework import list_variables File "/home/yunior/.conda/envs/my_env/lib/python3.7/site-packages/tensorflow/contrib/__init__.py", line 37, in <module> from tensorflow.contrib import cudnn_rnn File "/home/yunior/.conda/envs/my_env/lib/python3.7/site-packages/tensorflow/contrib/cudnn_rnn/__init__.py", line 38, in <module> from tensorflow.contrib.cudnn_rnn.python.layers import * File "/home/yunior/.conda/envs/my_env/lib/python3.7/site-packages/tensorflow/contrib/cudnn_rnn/python/layers/__init__.py", line 23, in <module> from tensorflow.contrib.cudnn_rnn.python.layers.cudnn_rnn import * File "/home/yunior/.conda/envs/my_env/lib/python3.7/site-packages/tensorflow/contrib/cudnn_rnn/python/layers/cudnn_rnn.py", line 20, in <module> from tensorflow.contrib.cudnn_rnn.python.ops import cudnn_rnn_ops File "/home/yunior/.conda/envs/my_env/lib/python3.7/site-packages/tensorflow/contrib/cudnn_rnn/python/ops/cudnn_rnn_ops.py", line 22, in <module> from tensorflow.contrib.rnn.python.ops import lstm_ops File "/home/yunior/.conda/envs/my_env/lib/python3.7/site-packages/tensorflow/contrib/rnn/__init__.py", line 91, in <module> from tensorflow.contrib.rnn.python.ops.lstm_ops import * File "/home/yunior/.conda/envs/my_env/lib/python3.7/site-packages/tensorflow/contrib/rnn/python/ops/lstm_ops.py", line 298, in <module> @ops.RegisterGradient("BlockLSTM") File "/home/yunior/.conda/envs/my_env/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 2489, in __call__ _gradient_registry.register(f, self._op_type) File "/home/yunior/.conda/envs/my_env/lib/python3.7/site-packages/tensorflow_core/python/framework/registry.py", line 61, in register (self._name, name, function_name, filename, line_number)) **KeyError: "Registering two gradient with name 'BlockLSTM'! 
(Previous registration was in register /home/yunior/.conda/envs/my_env/lib/python3.7/site-packages/tensorflow_core/python/framework/registry.py:66)"** Please, could anybody suggest a tentative solution? Thanks <issue_comment>username_0: Hi guys, I really need NiftyNet running in my PC. However after more than a week I am not able to do it. Could somebody guiveme a feedback please? I have been trying to run the example posted [here](https://niftynet.readthedocs.io/en/dev/) with no success. I have tried several configuration of nvidia drivers, cuda versions, cudnn and tensorflow but no progress at all. I currently have Ubuntu 18.04, Nvidia 4.18, cuda 10.0, cudnn 7.3.0. I see the following messages in the terminal when executed the program. NiftyNet version 0.5.0+185.gb5f3ba1e.dirty [CUSTOM] -- num_classes: 9 -- output_prob: False -- label_normalisation: False -- softmax: True -- min_sampling_ratio: 0 -- compulsory_labels: (0, 1) -- rand_samples: 0 -- min_numb_labels: 1 -- proba_connect: True -- evaluation_units: foreground -- do_mixup: False -- mixup_alpha: 0.2 -- mix_match: False -- weight: () -- inferred: () -- sampler: () -- label: ('label',) -- image: ('ct',) -- name: net_segment [CONFIG_FILE] -- path: /home/yunior/niftynet/extensions/dense_vnet_abdominal_ct/config.ini [CT] -- csv_file: -- path_to_search: ./data/dense_vnet_abdominal_ct/ -- filename_contains: ('CT',) -- filename_not_contains: () -- filename_removefromid: -- interp_order: 1 -- loader: None -- pixdim: () -- axcodes: ('A', 'R', 'S') -- spatial_window_size: (144, 144, 144) [LABEL] -- csv_file: -- path_to_search: ./data/dense_vnet_abdominal_ct/ -- filename_contains: ('Label',) -- filename_not_contains: () -- filename_removefromid: -- interp_order: 0 -- loader: None -- pixdim: () -- axcodes: ('A', 'R', 'S') -- spatial_window_size: (144, 144, 144) [SYSTEM] -- cuda_devices: 0 -- num_threads: 1 -- num_gpus: 1 -- model_dir: /home/yunior/niftynet/models/dense_vnet_abdominal_ct -- dataset_split_file: ./dataset_split.csv -- event_handler: ('model_saver', 'model_restorer', 'sampler_threading', 'apply_gradients', 'output_interpreter', 'console_logger', 'tensorboard_logger', 'performance_logger') -- iteration_generator: iteration_generator -- queue_length: 36 -- action: inference [NETWORK] -- name: dense_vnet -- activation_function: relu -- batch_size: 1 -- smaller_final_batch_mode: pad -- decay: 0.0 -- reg_type: L2 -- volume_padding_size: (0, 0, 0) -- volume_padding_mode: minimum -- volume_padding_to_size: (0,) -- window_sampling: resize -- force_output_identity_resizing: False -- queue_length: 5 -- multimod_foreground_type: and -- histogram_ref_file: ./histogram_ref_file.txt -- norm_type: percentile -- cutoff: (0.01, 0.99) -- foreground_type: otsu_plus -- normalisation: False -- rgb_normalisation: False -- whitening: False [Truncated] File "/home/yunior/NN_env/lib/python3.7/site-packages/tensorflow/python/ops/nn_ops.py", line 591, in __call__ return self.call(inp, filter) File "/home/yunior/NN_env/lib/python3.7/site-packages/tensorflow/python/ops/nn_ops.py", line 208, in __call__ name=self.name) File "/home/yunior/NN_env/lib/python3.7/site-packages/tensorflow/python/ops/gen_nn_ops.py", line 1440, in conv3d dilations=dilations, name=name) File "/home/yunior/NN_env/lib/python3.7/site-packages/tensorflow/python/framework/op_def_library.py", line 788, in _apply_op_helper op_def=op_def) File "/home/yunior/NN_env/lib/python3.7/site-packages/tensorflow/python/util/deprecation.py", line 507, in new_func return func(*args, **kwargs) File 
"/home/yunior/NN_env/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 3300, in create_op op_def=op_def) File "/home/yunior/NN_env/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 1801, in __init__ self._traceback = tf_stack.extract_stack() UnknownError (see above for traceback): Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above. [[node worker_0/DenseVNet/conv_bn/conv_/conv (defined at /home/yunior/NiftyNet/niftynet/layer/convolution.py:100) ]] [[node worker_0/post_processing/ExpandDims (defined at /home/yunior/NiftyNet/niftynet/layer/post_processing.py:36) ]] Thank you in advance <issue_comment>username_2: i have niftynet 0.6, CUDA 10.0, tensorflow-gpu 1.13.2 and numpy 1.16 using geforce RTX 2060 6GB vram with nvidia driver 440.33.01 tensorflow tries to allocate 5 GB spatial_window_size = (64, 64, 512) with dense_vnet network i've tried config.gpu_options.allow_growth = True but it doesn't seem to work. I get the same "Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR" any solution so far? I am not sure if legacy drivers will work better, maybe the v390 nvidia driver is compatible? I wonder if this memcpy and CUDNN internal error is related to the newer drivers/cards I bought a GTX 1080 Ti w/ 11GB ram, will see if this one supports niftynet <issue_comment>username_0: Hello, I am not an expert in Python programming and therefore I don't know the pretty way to do it. As in your case I also tried to use "config.gpu_options.allow_growth = True" but for whatever reason it did'n work for me neither. However, because it is not that problematic for me, I type the following command before running niftynet: export TF_FORCE_GPU_ALLOW_GROWTH=true This solved my problem Hope this help youPlease in case some one want to share the easy and permanet way to do it please share it. Best
{'fraction_non_alphanumeric': 0.10273304050756467, 'fraction_numerical': 0.06314055636896047, 'mean_word_length': 4.464333333333333, 'pattern_counts': {'":': 0, '<': 22, '<?xml version=': 0, '>': 24, 'https://': 4, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 1, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '26830222', 'n_tokens_mistral': 6352, 'n_tokens_neox': 5610, 'n_words': 1538}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: How do I install torch and other libs in my submission? username_0: I have built a code submission similar to yours. However, my code does need to use PyTorch. But I am not sure how to do that in CodaLab environment. Could you give more info on how to modify the metadata file to install torch and other libs before running the program? Thank you <issue_comment>username_1: @username_0 I'll post my answer to [codelab forum thread](https://competitions.codalab.org/forums/17190/4643/) you opened. <issue_comment>username_0: My issue has been resolved. I am closing this now. Thanks for your help.<issue_closed>
{'fraction_non_alphanumeric': 0.06018518518518518, 'fraction_numerical': 0.020061728395061727, 'mean_word_length': 4.846846846846847, 'pattern_counts': {'":': 0, '<': 5, '<?xml version=': 0, '>': 5, 'https://': 1, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '28407933', 'n_tokens_mistral': 185, 'n_tokens_neox': 176, 'n_words': 93}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Question: Is convolution with signed 8bit source supported? username_0: I want to do convolution based on the following format: source(s8), weight(s8), bias(s32) ---> dest(s32) However, it seems source only supports unsigned 8bit right now. I am following the example of simple_net_int8.cpp, but when the memory format of conv_src_md from u8 to s8 does not give me correct output. Is signed 8 bit source convolution supported? If yes, can you support a sample code? <issue_comment>username_1: Hi @username_0, Can you please try using the latest (master) version of Intel MKL-DNN? We recently extended support with `s8 (*) s8 --> s32` convolutions. Here are few implementations: https://github.com/intel/mkl-dnn/blob/master/src/cpu/cpu_engine.cpp#L152 https://github.com/intel/mkl-dnn/blob/master/src/cpu/cpu_engine.cpp#L169 <issue_comment>username_0: #include <iostream> #include <cstdint> #include <numeric> #include "mkldnn.hpp" using namespace mkldnn; using namespace std; std::vector<int32_t> run_int8_conv(memory::data_type src_data_type) { auto cpu_engine = engine(engine::cpu, 0); std::vector<primitive> net; std::vector<primitive> net_weight; memory::dims conv_src_tz = { 1, 3, 64, 64 }; memory::dims conv_weights_tz = { 8, 3, 5, 5 }; memory::dims conv_bias_tz = { 8 }; memory::dims conv_dst_tz = { 1, 8, 60, 60 }; memory::dims conv_strides = { 1, 1 }; auto conv_padding = { 0, 0 }; auto conv_src_md = memory::desc( { conv_src_tz }, src_data_type, memory::format::any); auto conv_weights_md = memory::desc( { conv_weights_tz }, memory::data_type::s8, memory::format::any); auto conv_bias_md = memory::desc( { conv_bias_tz }, memory::data_type::s32, memory::format::any); auto conv_dst_md = memory::desc( { conv_dst_tz }, memory::data_type::s32, memory::format::any); std::vector<int8_t> conv_src(std::accumulate(conv_src_tz.begin(), conv_src_tz.end(), 1, std::multiplies<uint32_t>())); std::vector<int8_t> conv_weights(std::accumulate(conv_weights_tz.begin(), conv_weights_tz.end(), 1, std::multiplies<uint32_t>())); std::vector<int32_t> conv_bias(std::accumulate(conv_bias_tz.begin(), conv_bias_tz.end(), 1, std::multiplies<uint32_t>())); std::vector<int32_t> conv_dst(std::accumulate(conv_dst_tz.begin(), conv_dst_tz.end(), 1, std::multiplies<uint32_t>())); // Fill src, weights and bias with constant values. 
std::fill(conv_src.begin(), conv_src.end(), 1); std::fill(conv_weights.begin(), conv_weights.end(), 1); std::fill(conv_bias.begin(), conv_bias.end(), 0); auto conv_desc = convolution_forward::desc(prop_kind::forward, convolution_direct, conv_src_md, conv_weights_md, conv_bias_md, conv_dst_md, conv_strides, conv_padding, conv_padding, padding_kind::zero); auto conv_prim_desc = convolution_forward::primitive_desc(conv_desc, cpu_engine); auto user_src_memory = memory({ { { conv_src_tz }, src_data_type, memory::format::nhwc }, cpu_engine }, conv_src.data()); auto user_weights_memory = memory({ { { conv_weights_tz }, memory::data_type::s8, memory::format::oihw }, cpu_engine }, conv_weights.data()); auto conv_bias_memory = memory( { { { conv_bias_tz }, memory::data_type::s32, memory::format::x }, cpu_engine }, conv_bias.data()); auto user_dst_memory = memory( { { { conv_dst_tz }, memory::data_type::s32, memory::format::nhwc }, cpu_engine }, conv_dst.data()); auto conv_src_memory = user_src_memory; if (memory::primitive_desc(conv_prim_desc.src_primitive_desc()) != user_src_memory.get_primitive_desc()) { conv_src_memory = memory(conv_prim_desc.src_primitive_desc()); net.push_back(reorder(user_src_memory, conv_src_memory)); } auto conv_weights_memory = user_weights_memory; if (memory::primitive_desc(conv_prim_desc.weights_primitive_desc()) [Truncated] } stream(stream::kind::eager).submit(net_weight).wait(); stream(stream::kind::eager).submit(net).wait(); return conv_dst; } int main(int argc, char **argv) { try { auto conv_dst_u8 = run_int8_conv(memory::data_type::u8); cout << "result[0][0][0][0] " << conv_dst_u8[0] << std::endl; auto conv_dst_s8 = run_int8_conv(memory::data_type::s8); cout << "result[0][0][0][0] " << conv_dst_s8[0] << std::endl; } catch (error &e) { std::cerr << "status: " << e.status << std::endl; std::cerr << "message: " << e.message << std::endl; } return 0; } <issue_comment>username_0: https://gist.github.com/username_0/d5a105041e88ba47b47bf2990d26ebc2 In the Gist above, I run signed int8 convolution on: src : 1x3x64x64, all values filled with 1. weight: 8x3x5x5, all values filled with 1. bias: 8, all values filled with 0. I expect the first value of the output (conv_dst@(0,0,0,0)) should be 75, which is true when I set the memory of src to be memory::data_type::s8. However, I get 0 when I set the memory of src to be memory::date_type::u8. I am using the latest version on the master branch right now (commit hash 1687299). My CPU is Xeon E5-2667. My compiler is gcc 7.3.0 My OS is Linux CentOS 7.3 I compiled MKL-DNN with default configuration with small MKL library https://github.com/intel/mkl-dnn/releases/download/v0.17.2/mklml_lnx_2019.0.1.20181227.tgz <issue_comment>username_1: Thanks for a small reproducer! I was able to reproduce the issue. Let me take a look into this and come to you... <issue_comment>username_1: Oh, here is the thing... This is intentional. If you change weights fill from `1` to `2` you would get the same (expected) results: 150. The reason is that during weights reorder we divide all the data by two to overcome possible overflows. In the real world examples (e.g. inference using Intel Caffe that brings minor accuracy drop, but still acceptable). If we would not do that, the accuracy might be much bigger. Executive summary. 
The int8 convolutions (on <= Skylake) are defined as: ``` u8 (*) s8 case: dst_s32 <- (s32_with_interim_s16) src_u8 (*) wei_s8 s8 (*) s8 case: dst_s32 <- 2 * (s32) ((src_s8 + 128) (*) (wei_s8 / 2)) - 128 (*) wei_s8 ``` In your example: `wei_s8 = {1, ..., 1}`, hence `wei_s8 / 2 = {0, ..., 0}`, hence the result is 0. --- Let me try to elaborate why Intel MKL-DNN has such weird behavior (be ready, the explanation is long). For `u8 (*) s8 -> s32` convolution the real chain of operations on Skylake is: ``` L01 dst_s32 = 0; L02 for (k = 0 .. dst_size / 2) { L03 dst_s32 += (s32)( L04 (s16)src_u8[2k + 0] * wei_s8[2k + 0] L05 + L06 (s16)src_u8[2k + 1] * wei_s8[2k + 1] L07 ); L08 } ``` The intermediate casting to (s16) is caused by ISA. See VPMADDUBSW instruction (some comments can be found in [1](https://en.wikichip.org/wiki/x86/avx512vnni#Motivation)). Note, that in addition in line (L05) might overflow. For instance `(s16)255 * 127 + (s16)255 * 127 = -766 (= (s16)64770)`. Intel MKL-DNN does protect users from this potential overflow. To avoid it either the data should be *good* or input should actually be u7, not u8. Our experience with Intel Caffe int8 inference showed that data distribution is typically good enough and an overflow doesn't happen (speculation: that's because mean(src_u8) << 127). That why we didn't do any changes to this sequence. // BTW, with VPDPBUSD the problem will be gone [1]. To support a broader set of topologies we had to extend convolution with `s8 (*) s8 -> s32` case. The problem here is that there is no such instructions in ISA. We had to emulate s8 * s8 via u8 * s8. For that instead of computing: ``` dst_s32 <- src_s8 (*) wei_s8 ``` we actual compute: ``` dst_s32 <- (src_s8 + 128) (*) wei_s8 - 128 (*) wei_s8 = new_src_u8 (*) wei_s8 - compensation_s32 ``` Note, that `new_src_u8 (*) wei_s8` follows the same accumulation chain as mentioned above and has the same problem with a potential overflow. But it turned out that this overflow is no more just a potential but typically happens. That makes accuracy go down to almost zero. That's most likely because the data distribution for `new_src_u8` is shifted away from 0 (mean(new_src_u8) = mean(src_s8) + 128 ~= 128). To make the accuracy good on Skylake again we decided to shrink the weights by factor of two. So the real compute is: ``` dst_s32 <- 2 * (s32) ((src_s8 + 128) (*) (wei_s8 / 2)) - 128 (*) wei_s8 ``` That helped. Note that for HW that supports `VPDPBUSD` there is no need in this magic. Phew... So your HW is SandyBridge. Alas, we don't have an optimized int8 support for CPUs below Skylake server. The implementation would emulate s8 * s8 operations using double precision gemm (dgemm). That should be slower than even f32 convolution. But that should be good enough to make tests with int8. The sequence of operations repeat Skylake's ones. Whether this good or not -- I don't know (we've never thought about thoroughly). But we definitely need to document the behavior, to avoid the confusion. Also we don't have plans to optimize int8 operations for <Skylake (at least for now). Even though this is possible. Thanks for rising this! And sorry for a long explanation. Please let me know if you need further details or if the explanation is not clear. --- [1]. About VNNI on Wiki-chip: https://en.wikichip.org/wiki/x86/avx512vnni [2]. About INT8 inference on modern Intel HW: https://ai.intel.com/lowering-numerical-precision-increase-deep-learning-performance/ <issue_comment>username_0: Thank for your reply. 
The behavior on int8 conv definitely should be documented. I was planning to use mkl-dnn to accelerate our signed int8 convolution. But the acceleration code must be effectively the same as the naive implementation, so I cannot use mkl-dnn due to the precision lost in int8 convolution. I will write intrinsic to accelerate my implementation instead. Last question, do you have any possible recommend way to optimize s8*s8->s32 on Skylake? <issue_comment>username_1: If your implementation cannot accept the potential overflow in int16 (as described above), that the only way to handle that would be: ``` src_s16 <-- up-cast src_s8 wei_s16 <-- up-cast wei_s8 tmp_s32 <-- VPMADDWD(src_s16, wei_s16) dst_s32 <-- VPADD(dst_s32, tmp_s32) ``` But in theory, this sequence would be a little slower than using `vfmadd231ps` in case of f32. <issue_comment>username_1: BTW, this sequence should also work for SandyBridge+ (with XMM registers / _m128i) and Haswell+ (with YMM registers / _m256i).<issue_closed> <issue_comment>username_0: Appreciate for your answer. I will try to implement this.
{'fraction_non_alphanumeric': 0.10831032332769217, 'fraction_numerical': 0.03785517057094504, 'mean_word_length': 3.362082362082362, 'pattern_counts': {'":': 0, '<': 63, '<?xml version=': 0, '>': 31, 'https://': 7, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 1, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 15, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '14130674', 'n_tokens_mistral': 3933, 'n_tokens_neox': 3597, 'n_words': 1210}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Changing volume via slider causes 100% CPU
username_0: Changing the volume via the slider causes 100% CPU. This just happens on a tablet of mine, not on my desktop.
<issue_comment>username_1: @username_0 Can you provide details on the tablet? Make/model/etc.
<issue_comment>username_0: It is a Dell 7140 tablet. It kinda just happens: when Chrome is open and plays audio, for example on twitch.tv, and I then try to change Chrome's slider in EarTrumpet, it creates 100% CPU load and everything freezes, including EarTrumpet. Restarting EarTrumpet resolves the issue, and it works normally for a while, until it happens again.
{'fraction_non_alphanumeric': 0.0513595166163142, 'fraction_numerical': 0.0256797583081571, 'mean_word_length': 5.138888888888889, 'pattern_counts': {'":': 0, '<': 4, '<?xml version=': 0, '>': 4, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '22596355', 'n_tokens_mistral': 197, 'n_tokens_neox': 172, 'n_words': 99}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Events in this case act slowly
username_0: Hi

![image](https://user-images.githubusercontent.com/6462878/131444754-05a70ca1-b08d-4f9b-b408-cd1ba8b31782.png)

```php
<?php
use parallel\{Runtime,Channel};

$ch1 = Channel::Make('a',Channel::Infinite);
$r1 = new Runtime;
$r1->Run(static function()use($ch1){
    $r2 = new \parallel\Runtime;
    $ch2 = \parallel\Channel::Make('b',Channel::Infinite);
    $r2->Run(static function()use($ch2){
        while(true){
            $i = 100;
            while($i--)
                $ch2->Send(rand(1,999999999));
            sleep(1);
        }
    });

    $events = new \parallel\Events;
    $events->SetBlocking(false);
    $events->AddChannel($ch1);
    $events->AddChannel($ch2);

    while(true){
        $fetch = 0;
        while($event = $events->Poll()){ // <--------- Very slow
            $events->AddChannel($event->object);
            $fetch++;
        }
        echo "Fetch(".date('i:s').") > $fetch\n";
        sleep(1);
    }
});

while(true){
    $ch1->Send(1);
    usleep(500000);
}
```
<issue_comment>username_1: There is `Channel::open()`, which you can use to connect to a channel created globally with `Channel::make()`.
{'fraction_non_alphanumeric': 0.18994413407821228, 'fraction_numerical': 0.05746209098164405, 'mean_word_length': 2.4545454545454546, 'pattern_counts': {'":': 0, '<': 5, '<?xml version=': 0, '>': 14, 'https://': 1, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 1, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '16055284', 'n_tokens_mistral': 464, 'n_tokens_neox': 421, 'n_words': 73}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Should the statement in models.py be `raise Exception("Unknown aggregator: ", aggregator_type)`?
username_0: As `__init__` in line 210 of models.py is defined:

```python3
super(SampleAndAggregate, self).__init__(**kwargs)
if aggregator_type == "mean":
    self.aggregator_cls = MeanAggregator
elif aggregator_type == "seq":
    self.aggregator_cls = SeqAggregator
elif aggregator_type == "maxpool":
    self.aggregator_cls = MaxPoolingAggregator
elif aggregator_type == "meanpool":
    self.aggregator_cls = MeanPoolingAggregator
elif aggregator_type == "gcn":
    self.aggregator_cls = GCNAggregator
else:
    raise Exception("Unknown aggregator: ", self.aggregator_cls)
```

Should the last statement be `raise Exception("Unknown aggregator: ", aggregator_type)`?
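The suggested fix looks right: on the `else` branch `self.aggregator_cls` was never assigned, so the original line would most likely itself raise an `AttributeError` instead of reporting the offending name. For what it's worth, the same dispatch can be written with a mapping, which makes the error path obvious — a sketch with stand-in values, not the project's actual code:

```python3
# Stand-in names; in models.py these would be the real aggregator classes.
AGGREGATORS = {
    "mean": "MeanAggregator",
    "seq": "SeqAggregator",
    "gcn": "GCNAggregator",
}

def resolve_aggregator(aggregator_type):
    try:
        return AGGREGATORS[aggregator_type]
    except KeyError:
        # Report the unresolved user input, not an attribute that was
        # never set on this code path.
        raise Exception("Unknown aggregator: ", aggregator_type)
```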
{'fraction_non_alphanumeric': 0.09271523178807947, 'fraction_numerical': 0.005518763796909493, 'mean_word_length': 3.1040723981900453, 'pattern_counts': {'":': 5, '<': 2, '<?xml version=': 0, '>': 2, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '19550428', 'n_tokens_mistral': 267, 'n_tokens_neox': 253, 'n_words': 62}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Version clowder.yaml schema
username_0: In order to be able to make non-backwards-compatible changes to the clowder.yaml schema, but still be able to read old saved versions, files need to be versioned to allow for migrations.<issue_closed>
<issue_comment>username_0: Punting on this for now
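A minimal sketch of what versioned loading could look like — all names here are hypothetical illustrations, not clowder's actual API:

```python
import yaml

CURRENT_VERSION = 2

# Each entry upgrades a document one schema version forward.
MIGRATIONS = {
    1: lambda doc: {**doc, "version": 2},  # placeholder v1 -> v2 step
}

def load_clowder_yaml(path):
    with open(path) as f:
        doc = yaml.safe_load(f)
    version = doc.get("version", 1)  # treat unversioned files as v1
    while version < CURRENT_VERSION:
        doc = MIGRATIONS[version](doc)
        version = doc["version"]
    return doc
```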
{'fraction_non_alphanumeric': 0.04923076923076923, 'fraction_numerical': 0.006153846153846154, 'mean_word_length': 6.086956521739131, 'pattern_counts': {'":': 0, '<': 4, '<?xml version=': 0, '>': 4, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '15258317', 'n_tokens_mistral': 88, 'n_tokens_neox': 88, 'n_words': 43}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: [BUG] Suggestions shown without error
username_0: # Steps to reproduce

Suggestions are shown although the command runs fine. This seems to occur only on Linux - not Windows or macOS.

```powershell
pwsh --version
PowerShell 6.2.0-preview.4
Suggestion [4,General]: The most similar commands are: popd, sp, spps, ps, pip2, pip3, pip, pppd, apg, ps2ps.
```

# Expected behavior

```powershell
pwsh --version
PowerShell 6.2.0-preview.4
```

# Actual behavior

```powershell
pwsh --version
PowerShell 6.2.0-preview.4
Suggestion [4,General]: The most similar commands are: popd, sp, spps, ps, pip2, pip3, pip, pppd, apg, ps2ps.
```

# Environment data

<!-- provide the output of $PSVersionTable -->

```none
Name Value
---- -----
PSVersion 6.2.0-preview.4
PSEdition Core
GitCommitId 6.2.0-preview.4
OS Linux 4.18.0-13-generic #14-Ubuntu SMP Wed Dec 5 09:04:24 UTC 2018
Platform Unix
PSCompatibleVersions {1.0, 2.0, 3.0, 4.0…}
PSRemotingProtocolVersion 2.3
SerializationVersion 1.1.0.1
WSManStackVersion 3.0
```
<issue_comment>username_1: Perhaps this happens for all native commands.
<issue_comment>username_2: @username_0: Does it indeed happen for all external programs in `$env:PATH`, as @username_1 suggests? I can't reproduce the problem on Ubuntu 18.04. What specific version are you on?
<issue_comment>username_1: @username_3 Please look at this feedback.
<issue_comment>username_0: I figured it out. It's caused by the following entry in my profile:
```powershell
Get-Command -Name pspg -CommandType Application -ErrorAction SilentlyContinue
```
On my Linux machine the command `pspg` is not available. The suggestions (popd, sp, spps, ps, pip2, pip3, pip, pppd, apg, ps2ps) actually do refer to `pspg` - not the command I entered manually.
<issue_comment>username_1: Can we close the issue?
<issue_comment>username_0: This is obviously a bug. Suggestions are only meaningful in relation to manually entered commands - not commands in a script or profile file. In my case I enter any command and in response I get a suggestion even though the command was successful. And the suggestion does not even relate to the command I entered.
<issue_comment>username_2: @username_0: Let me try to summarize the bugs / undesired behavior:

* Suggestions are triggered not only in an _interactive_ session, but unexpectedly also from _scripts_.
* Use of `Get-Command` with a nonexistent command unexpectedly triggers suggestions too; arguably, only _direct invocation_ should do that (yes: `nosuch`; no: `Get-Command nosuch`).
* Additionally, the suggestion appears even with `-ErrorAction SilentlyContinue`, because the suggestion mechanism is apparently based on errors recorded in `$Error`; therefore, only `-ErrorAction Ignore`, which suppresses adding to `$Error`, is currently effective in silencing the suggestion.
* A specific bug occurs when a failed lookup occurs anywhere in `$PROFILE` _and_ no unrelated errors are added to `$Error` _after that_ during the execution of `$PROFILE`: in an _interactive session_, display of the suggestion is then _delayed until after `$PROFILE` has finished loading_, and is only printed whenever the _next command is executed_, whatever it is, and whether it is an external program or not (e.g., `Get-Date` would trigger it too).
<issue_comment>username_0: Perfect summary, well done.
<issue_comment>username_3: @username_2 is correct that the current "suggestions framework" (which existed before my fuzzy matching suggestion feature) has a trigger based on an ErrorRecord being produced.
So in that sense, this is "by-design". `-ErrorAction Ignore` is the correct way to suppress this. Currently, the console host only knows if the session is interactive or not based on a command line switch and doesn't differentiate execution of `$profile` from when the user can start typing. Since profile execution and suggestions are both in the console host, it seems that it should be easy to pass some data to suppress suggestions while profile is executing. I'll take a look. <issue_comment>username_3: @username_0 I'm actually not able to get this to repro. I have this in my `$profile`: ```powershell gcm fsdf -erroraction silentlycontinue ``` I start pwsh-preview --version: ```output PowerShell 6.2.0-preview.4 ``` If I execute that script directly `. $profile`, I do get the suggestion shown. <issue_comment>username_4: @username_3 is the behaviour the same if you first open `pwsh` and then at the prompt enter `pwsh --version`? <issue_comment>username_3: @username_4 I don't see the suggestion output that way either. Does it repro for you? <issue_comment>username_4: Not on Windows, at least. <issue_comment>username_2: Here's how you can reproduce the bug reliably on all platforms: * Add the following, intentionally nonexistent command _at the end_ of your `$PROFILE`: `nosuch` * Start a new interactive session - you'll see NO suggestion at that point (though you'll see the error). * Run _any_ command (e.g., `whoami`), at which the `nosuch`-related suggestion finally appears. <issue_comment>username_3: @username_2 thanks, that does repro it! <issue_comment>username_3: Looking at the code, this isn't so straight forward to fix and isn't just profile. Simple repro: ```powershell "gcm asldfj -erroraction silentlycontinue" > test.ps1 ./test.ps1 ``` I would not expect the suggestion to be there, but the way the code currently works is that the console checks if the last command had any output and if so, it calls to evaluate suggestions. Suggestions first checks of `$?` is `$false` otherwise no suggestion. If something failed, it goes through the suggestion filters where one of them looks at the last ErrorRecord which is what the fuzzy match command uses. Ideally, script invocation shouldn't show suggestions, but the console host doesn't know anything about what it is executing and just sends it to PowerShell to run. <issue_comment>username_3: After discussing with @JamesWTruher and @PaulHigin it seems the best approach currently is to put the fuzzy suggestion behind an experimental flag because the proper fix is too complicated: 1. Currently, suggestions are initiated by the host (consolehost in this case) and should be part of the ErrorRecord so that it works correctly remotely and in other hosts 2. ErrorRecord has a member called RecommendedAction that should contain the suggestion 3. The code to invoke getting suggestions needs to be moved out of consolehost and closer to where the ErrorRecord is created 4. Formatter for ErrorRecord needs to be updated to show the RecommendedAction member
{'fraction_non_alphanumeric': 0.06731325998841922, 'fraction_numerical': 0.015489287782281412, 'mean_word_length': 3.8792372881355934, 'pattern_counts': {'":': 0, '<': 20, '<?xml version=': 0, '>': 21, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 1, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '3420996', 'n_tokens_mistral': 1971, 'n_tokens_neox': 1814, 'n_words': 920}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Column metadata is not propagated for single collection types.
username_0: Column metadata for collections should be read from the element metadata, but is currently read from the field column metadata. This causes single-element collection-based types, e.g. Optional, to ignore the `@Column` annotation.<issue_closed>
<issue_comment>username_1: FWIW This change is incomplete and breaks the JDO TCK tests for Optional, fixed in #165 and #166
{'fraction_non_alphanumeric': 0.04793028322440087, 'fraction_numerical': 0.017429193899782137, 'mean_word_length': 5.96969696969697, 'pattern_counts': {'":': 0, '<': 4, '<?xml version=': 0, '>': 4, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '28162232', 'n_tokens_mistral': 112, 'n_tokens_neox': 104, 'n_words': 61}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: rewrite quantify and dequantify
username_0: Since we now have the utils from #11, `quantify` and `dequantify` can be extended to also support coordinates. I'm also hoping to fix #9, either by changing the check in `Variable.data` (I need to think more about the implications of that) or by stating that this will load non-dask arrays into memory.
<issue_comment>username_0: I'll still have to update the docstrings, but other than that this should be ready for review. The failing tests are due to a design question: should we raise for `quantify` if no unit was given (neither by args nor by attrs)? And what if `dequantify` is called on an object without quantities (maybe `dequantify` was already called)? I have been answering both questions with "no", but I didn't think too much before I wrote the code in `conversion`. I could be convinced either way: we might want to make sure people know the operation didn't do anything, but we might also want to allow something like
```python
obj = ds.pint.dequantify()
... # more code
ds.pint.dequantify()  # or always dequantify in a function, e.g. before saving to a file
```
<issue_comment>username_1: Similarly I think yes. The user thought there were units on the data, but was mistaken, so should be informed. If they want to cover both possibilities then perhaps they should use a try/except?

I am imagining the end-goal use of this package as always assuming the user is following this pattern:
```
# Load data using xarray
# Quantify using pint-xarray
# Go about your xarray analysis business all as normal, invoking pint-xarray ideally as little as possible
# Perhaps dequantify if you need to save the data
```
This approach would also make round-tripping simpler.
<issue_comment>username_2: Again, I would say no... that instead of erroring, `dequantify` should do nothing on an object without quantities. I'd imagine a not-too-uncommon situation is for a user to end up with a "mixed Dataset" with some Quantity and some non-Quantity variables. We'd want a safe way for them to get to a Dataset that can be written out, and the easiest way to do so is to just dequantify the Quantity variables and leave the others alone, without the user having to go through and only dequantify the Quantity ones.
<issue_comment>username_1: That's a good point. Quirks with CF conventions should be handled by CF-specific code instead of pre-empted. Perhaps the better way to think about this (and what you seem to be proposing) is that `.Quantify` should map directly onto the `__init__` method of `ureg.Quantity`, and follow similar behaviour.
<issue_comment>username_2: Indeed! That's a much clearer way of phrasing it.
<issue_comment>username_0: the update of the docstrings is done and I fixed / removed the failing tests. Something else we should probably discuss is whether or not to remove the "units" attribute in `quantify`. If we decide to do so, this isn't difficult: we just have to toggle the `delete` parameter to `extract_unit_attributes`. I'm leaning towards removing, since that way the attributes cannot be out of sync, but we could also add a keyword argument to control that.
<issue_comment>username_3: Yes :+1: The units are now on the array itself so I think this is sensible. And as you point out, it won't be out of sync. Also `DataArray.units == DataArray.attrs["units"]` until we change it in xarray, so keeping attrs["units"] would be really confusing.
<issue_comment>username_0: @username_2, does that address your concerns in #9?
<issue_comment>username_2: Yes it does! Thanks for adding that. While it would be nice to not have to worry about the issue in the first place, making `MemoryCachedArray` wrappable by Pint with xarray still able to handle that properly could create a big mess. I'd much rather just recommend using Dask. <issue_comment>username_0: then this should be ready for review and merge <issue_comment>username_0: same here, I'll update the documentation <issue_comment>username_0: done, I think? I updated the docstrings and modified the attribute extraction function to never change `attrs` in-place. Instead, there's a new attribute strip function that returns a copy without the attributes. <issue_comment>username_0: that's a simple fix, so no problem. We have lots of these in `xarray`'s tests, too (especially `Dataset` tests).
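A minimal sketch of the round-trip behaviour settled on above, assuming pint-xarray's `.pint` accessor API; this illustrates the agreed semantics, not the library's actual code, and the exact unit string written back to `attrs` depends on pint's formatting:

```python
import numpy as np
import xarray as xr
import pint_xarray  # noqa: F401 -- registers the .pint accessor

ds = xr.Dataset({"a": ("x", np.arange(4))})
ds.a.attrs["units"] = "m"

# quantify reads the "units" attribute and, per the discussion above,
# removes it so the attribute and the array's units cannot drift apart
quantified = ds.pint.quantify()
assert "units" not in quantified.a.attrs

# dequantify writes the units back into attrs; repeating it on an
# already-plain dataset is a no-op instead of an error, so "always
# dequantify before saving" is a safe pattern
plain = quantified.pint.dequantify()
plain = plain.pint.dequantify()
assert "units" in plain.a.attrs
```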
{'fraction_non_alphanumeric': 0.05607264472190692, 'fraction_numerical': 0.004540295119182747, 'mean_word_length': 4.612738853503185, 'pattern_counts': {'":': 0, '<': 15, '<?xml version=': 0, '>': 15, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '24381876', 'n_tokens_mistral': 1166, 'n_tokens_neox': 1120, 'n_words': 674}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Added Cake.Watch in addins.xml username_0: Added Cake.Watch <issue_comment>username_1: @username_0 I have just rebased your branch, and I will merge once the CI builds complete. One thing I noticed was this: ![image](https://cloud.githubusercontent.com/assets/1271146/20248782/199dfa92-a9e3-11e6-86be-7a7ced4ce090.png) You are missing the `code` element within your `example` tag: https://github.com/cake-addin/cake-watch/blob/master/Cake.Watch/WatchAlias.cs#L17 Have a look here as an example: https://github.com/cake-contrib/Cake.Gem/blob/develop/Source/Cake.Gem/GemAliases.cs#L28 If you can fix that up, and push a new version of your nuget package, it will flow through onto the site. Thanks for submitting this PR!
{'fraction_non_alphanumeric': 0.09768637532133675, 'fraction_numerical': 0.05141388174807198, 'mean_word_length': 4.299319727891157, 'pattern_counts': {'":': 0, '<': 3, '<?xml version=': 0, '>': 3, 'https://': 3, 'lorem ipsum': 0, 'www.': 0, 'xml': 1}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '16133005', 'n_tokens_mistral': 295, 'n_tokens_neox': 258, 'n_words': 78}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Nuclear installed capacity for region Sweden too high.
username_0: The installed capacity of nuclear (9.1 GW) in Sweden is too high. Reactor Ringhals 2 was permanently shut down around New Year's, and the remaining reactors' capacities are:
Ringhals 1: 881 MW
Ringhals 3: 1 063 MW
Ringhals 4: 1 130 MW
(source: https://group.vattenfall.com/se/var-verksamhet/ringhals/produktion )
Oskarshamn 3: 1 450 MW or 1 400 MW ( https://www.okg.se/sv/Produktionsinformation/ and https://www.okg.se/sv/Om-OKG/ or https://sv.wikipedia.org/wiki/Oskarshamns_kärnkraftverk#Oskarshamn_3 )
Forsmark 1: 990 MW
Forsmark 2: 1 120 MW
Forsmark 3: 1 167 MW
( https://group.vattenfall.com/se/var-verksamhet/forsmark/produktion )
That totals 7 751 MW or 7 801 MW, depending on which number for O3 is correct. A fair bit less than 9.1 GW.
<issue_comment>username_1: I found more up-to-date numbers for Sweden's capacities on the source page (nuclear = 8586 MW from the data source) and more up-to-date wind data; I will push a fix to the file for it.
<issue_comment>username_1: I included your updated sources / data in my update of Sweden's capacities.<issue_closed>
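For reference, the arithmetic checks out (figures in MW as listed above):

```python
capacities = {
    "Ringhals 1": 881,
    "Ringhals 3": 1063,
    "Ringhals 4": 1130,
    "Oskarshamn 3": 1450,  # or 1400, depending on the source
    "Forsmark 1": 990,
    "Forsmark 2": 1120,
    "Forsmark 3": 1167,
}
total = sum(capacities.values())
print(total)         # 7801 MW with O3 at 1450; 7751 MW with O3 at 1400
print(9100 - total)  # roughly a 1.3 GW gap vs. the 9.1 GW listed
```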
{'fraction_non_alphanumeric': 0.07705334462320068, 'fraction_numerical': 0.04995766299745978, 'mean_word_length': 4.094827586206897, 'pattern_counts': {'":': 0, '<': 5, '<?xml version=': 0, '>': 5, 'https://': 5, 'lorem ipsum': 0, 'www.': 2, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '29405993', 'n_tokens_mistral': 464, 'n_tokens_neox': 386, 'n_words': 158}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Related fund links aren't working
username_0: When I click on any of the related fund links, the URL populates like this: https://giving.cu.eduhttps//giving.cu.edu/fund/gynecologic-oncology-fellowship-fund but the CMS also doesn't allow me to just input: /fund/gynecologic-oncology-fellowship-fund
<img width="874" alt="Screen Shot 2021-10-29 at 11 45 26 AM" src="https://user-images.githubusercontent.com/91230278/139479839-670bb4c9-9f7a-4597-8e76-b297e4e19170.png">
<issue_comment>username_1: Yep, this is related to an issue of keeping fund names/links synced between the CMS and other data sources. That's why we now ask for the whole URL instead of the autocomplete lookup field used before. I updated the code on staging and now they point to the right place, in my testing.<issue_closed>
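The doubled prefix above is what you get when a base URL is blindly prepended to a value that is already absolute. A sketch of the usual guard, with a hypothetical `resolve` helper rather than the site's actual code:

```python
from urllib.parse import urljoin

BASE = "https://giving.cu.edu"

def resolve(link: str) -> str:
    # urljoin leaves absolute URLs alone and resolves relative paths
    # against the base, so neither input produces a doubled prefix
    return urljoin(BASE + "/", link)

print(resolve("https://giving.cu.edu/fund/gynecologic-oncology-fellowship-fund"))
print(resolve("/fund/gynecologic-oncology-fellowship-fund"))
# both print https://giving.cu.edu/fund/gynecologic-oncology-fellowship-fund
```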
{'fraction_non_alphanumeric': 0.08383233532934131, 'fraction_numerical': 0.07065868263473053, 'mean_word_length': 4.76551724137931, 'pattern_counts': {'":': 0, '<': 5, '<?xml version=': 0, '>': 5, 'https://': 2, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '15949455', 'n_tokens_mistral': 308, 'n_tokens_neox': 250, 'n_words': 98}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Installation error on Windows
username_0: Environment: Windows 10
Python version: 3.7.3
Error screenshot below:
![](https://i.loli.net/2019/04/10/5cacdf38b2376.png)
<issue_comment>username_0: Solved. It turned out to be a problem with the blist dependency; installation succeeded after switching to the precompiled blist provided at https://www.lfd.uci.edu/~gohlke/pythonlibs/#blist.
<issue_comment>username_1: Thanks, blist should be a dependency of panwid.
```
➜  [pingtop] pingtop git:(master) pip show panwid
Name: panwid
Version: 0.2.5
Summary: Useful widgets for urwid
Home-page: https://github.com/tonycpsu/panwid
Author: <NAME>
Author-email: <EMAIL>
License: UNKNOWN
Location: /Users/username_1/.virtualenvs/pingtop/lib/python3.7/site-packages
Requires: six, blist, urwid, urwid-utils, raccoon, orderedattrdict
Required-by: pingtop
```
I'm closing this for now; please reopen if there are further problems.<issue_closed>
{'fraction_non_alphanumeric': 0.13962765957446807, 'fraction_numerical': 0.03856382978723404, 'mean_word_length': 5.547826086956522, 'pattern_counts': {'":': 0, '<': 7, '<?xml version=': 0, '>': 7, 'https://': 3, 'lorem ipsum': 0, 'www.': 1, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '29487945', 'n_tokens_mistral': 369, 'n_tokens_neox': 371, 'n_words': 52}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: set FinBit in last StreamFrame
username_0: Right now, we're sending out an empty StreamFrame just to set the FinBit.
<issue_comment>username_0: It probably makes sense to do this when refactoring the `Stream.Read` for #84.
<issue_comment>username_0: So we probably shouldn't fix it.
<issue_comment>username_0: So we're actually doing it right.<issue_closed>
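For illustration only, a language-neutral sketch of the idea being weighed above (quic-go is Go; the types and logic here are simplified assumptions, not the library's implementation): fold the FIN bit into the final data-carrying frame, and only emit an empty FIN-only frame when the stream closes with nothing left to send.

```python
from dataclasses import dataclass

@dataclass
class StreamFrame:
    stream_id: int
    data: bytes
    fin: bool = False

def pop_frame(stream_id: int, buffer: bytearray, closed: bool, max_len: int):
    """Return the next frame for a stream, folding FIN into the last data frame."""
    chunk = bytes(buffer[:max_len])
    del buffer[:max_len]
    drained = len(buffer) == 0
    if not chunk and not (closed and drained):
        return None  # nothing to send yet
    # FIN rides along with the final chunk; an empty FIN-only frame is
    # only needed when the stream closes with no data left to flush
    return StreamFrame(stream_id, chunk, fin=closed and drained)

buf = bytearray(b"hello world")
print(pop_frame(1, buf, closed=True, max_len=11))  # data frame with fin=True, no extra frame
```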
{'fraction_non_alphanumeric': 0.0741687979539642, 'fraction_numerical': 0.015345268542199489, 'mean_word_length': 6.538461538461538, 'pattern_counts': {'":': 0, '<': 6, '<?xml version=': 0, '>': 6, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '3576683', 'n_tokens_mistral': 117, 'n_tokens_neox': 112, 'n_words': 47}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: `volta list --default` doesn't show defaults username_0: # Expected ``` $ volta install node success: installed and set [email protected] as default $ volta list --default node v14.15.4 ``` # Actual ``` $ volta install node success: installed and set [email protected] as default $ volta list --default node ⚡️ No Node runtimes installed! You can install a runtime by running `volta install node`. See `volta help install` for details and more options. ``` <issue_comment>username_1: Hi @username_0, thanks for reporting this! I was already planning to take a look at the `volta list` output, so I'll make sure to include that in my investigation. <issue_comment>username_1: Just confirmed this is fixed in #778 (which I had left outstanding for some time 🤦 ). Merged that so it should be in the next release!<issue_closed>
{'fraction_non_alphanumeric': 0.08380520951302378, 'fraction_numerical': 0.02491506228765572, 'mean_word_length': 3.857142857142857, 'pattern_counts': {'":': 0, '<': 5, '<?xml version=': 0, '>': 5, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '28121361', 'n_tokens_mistral': 277, 'n_tokens_neox': 262, 'n_words': 120}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: fix: handle empty union types. username_0: Union types can end up having no legal branch, e.g. with `null=`. Emit an `any` type for those cases - `any` is TypeScript's closest representation for a Null Type / Bottom Type. Fixes #141. <issue_comment>username_1: LGTM.
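A sketch of the emit rule described above, with a hypothetical `emit_union` helper; the real emitter is TypeScript-generating code, and only the empty-union fallback is illustrated here:

```python
def emit_union(branches: list[str]) -> str:
    # An empty union has no legal value; TypeScript has no dedicated
    # bottom/null type name in this position, so fall back to "any"
    if not branches:
        return "any"
    return "|".join(branches)

assert emit_union([]) == "any"
assert emit_union(["string", "number"]) == "string|number"
```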
{'fraction_non_alphanumeric': 0.09477124183006536, 'fraction_numerical': 0.016339869281045753, 'mean_word_length': 4.385964912280702, 'pattern_counts': {'":': 0, '<': 3, '<?xml version=': 0, '>': 3, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '10423151', 'n_tokens_mistral': 101, 'n_tokens_neox': 95, 'n_words': 41}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Latest code: java.lang.ClassNotFoundException: android.content.Context
username_0: Environment: Windows 10, Python... no, rather: I downloaded the latest unidbg code. Since the .so calls Java classes, I created the corresponding Java classes myself and then added:
`vm.setDvmClassFactory(new ProxyClassFactory());`
The following error appears:
```
[20:04:31 497] WARN [com.github.unidbg.linux.android.dvm.jni.ProxyJni] (ProxyJni:296) - callObjectMethod
java.lang.ClassNotFoundException: android.content.Context
    at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    at com.github.unidbg.linux.android.dvm.jni.ProxyClassLoader.loadClass(ProxyClassLoader.java:23)
    at com.github.unidbg.linux.android.dvm.jni.ProxyJni.callObjectMethod(ProxyJni.java:287)
    at com.github.unidbg.linux.android.dvm.DvmMethod.callObjectMethod(DvmMethod.java:70)
    at com.github.unidbg.linux.android.dvm.DalvikVM$20.handle(DalvikVM.java:372)
    at com.github.unidbg.linux.ARM32SyscallHandler.hook(ARM32SyscallHandler.java:103)
    at com.github.unidbg.arm.backend.UnicornBackend$6.hook(UnicornBackend.java:271)
    at unicorn.Unicorn$NewHook.onInterrupt(Unicorn.java:128)
    at unicorn.Unicorn.emu_start(Native Method)
    at com.github.unidbg.arm.backend.UnicornBackend.emu_start(UnicornBackend.java:296)
    at com.github.unidbg.AbstractEmulator.emulate(AbstractEmulator.java:382)
    at com.github.unidbg.AbstractEmulator.eFunc(AbstractEmulator.java:471)
    at com.github.unidbg.arm.AbstractARMEmulator.eFunc(AbstractARMEmulator.java:215)
    at com.github.unidbg.Module.emulateFunction(Module.java:154)
    at com.github.unidbg.linux.android.dvm.DvmObject.callJniMethod(DvmObject.java:128)
    at com.github.unidbg.linux.android.dvm.DvmClass.callStaticJniMethodObject(DvmClass.java:277)
    at com.bytedance.frameworks.core.encrypt.Candy.getCandyDataWithKeyForJava(Candy.java:92)
    at com.bytedance.frameworks.core.encrypt.Candy.main(Candy.java:108)
[20:04:31 500] WARN [com.github.unidbg.linux.ARM32SyscallHandler] (ARM32SyscallHandler:446) - handleInterrupt intno=2, NR=-1073744072, svcNumber=0x113, PC=unidbg@0xfffe01c4, syscall=null
java.lang.UnsupportedOperationException: android/content/Context->getPackageManager()Landroid/content/pm/PackageManager;
    at com.github.unidbg.linux.android.dvm.JniFunction.callObjectMethod(JniFunction.java:192)
    at com.github.unidbg.linux.android.dvm.jni.ProxyJni.callObjectMethod(ProxyJni.java:298)
    at com.github.unidbg.linux.android.dvm.DvmMethod.callObjectMethod(DvmMethod.java:70)
    at com.github.unidbg.linux.android.dvm.DalvikVM$20.handle(DalvikVM.java:372)
    at com.github.unidbg.linux.ARM32SyscallHandler.hook(ARM32SyscallHandler.java:103)
    at com.github.unidbg.arm.backend.UnicornBackend$6.hook(UnicornBackend.java:271)
    at unicorn.Unicorn$NewHook.onInterrupt(Unicorn.java:128)
    at unicorn.Unicorn.emu_start(Native Method)
    at com.github.unidbg.arm.backend.UnicornBackend.emu_start(UnicornBackend.java:296)
    at com.github.unidbg.AbstractEmulator.emulate(AbstractEmulator.java:382)
    at com.github.unidbg.AbstractEmulator.eFunc(AbstractEmulator.java:471)
    at com.github.unidbg.arm.AbstractARMEmulator.eFunc(AbstractARMEmulator.java:215)
    at com.github.unidbg.Module.emulateFunction(Module.java:154)
    at com.github.unidbg.linux.android.dvm.DvmObject.callJniMethod(DvmObject.java:128)
    at com.github.unidbg.linux.android.dvm.DvmClass.callStaticJniMethodObject(DvmClass.java:277)
```
How can I solve this problem? I'm a bit lost. @username_1
<issue_comment>username_1: ProxyClassFactory calls the Java classes automatically via reflection; for any class it reports as missing, create a class with the same package name.
<issue_comment>username_0: I've seen this mentioned in other issues, but I don't know how to write android.content.Context myself, and copying over the entire Android Context source doesn't seem realistic...
<issue_comment>username_0: @username_1 Could you provide an example of creating android.content.Context yourself?
<issue_comment>username_1: Create the simplest possible empty class, then add whatever is missing as it comes up in later runs.
<issue_comment>username_0: @username_1 I created android.content.Context with a getPackageManager method, so I also created an empty android.content.pm.PackageManager class. The code is as follows:
```java
package android.content;

import android.content.pm.PackageManager;

public class Context {
    public Object getPackageManager(){
        return new PackageManager();
    }
}
```
```java
package android.content.pm;

public class PackageManager {
}
```
Now it reports this error:
```
[22:02:19 747] WARN [com.github.unidbg.linux.ARM32SyscallHandler] (ARM32SyscallHandler:446) - handleInterrupt intno=2, NR=-1073744072, svcNumber=0x113, PC=unidbg@0xfffe01c4, syscall=null
java.lang.IllegalStateException: obj is null: android.content.Context@55e4af
    at com.github.unidbg.linux.android.dvm.jni.ProxyJni.callObjectMethod(ProxyJni.java:291)
    at com.github.unidbg.linux.android.dvm.DvmMethod.callObjectMethod(DvmMethod.java:70)
    at com.github.unidbg.linux.android.dvm.DalvikVM$20.handle(DalvikVM.java:372)
    at com.github.unidbg.linux.ARM32SyscallHandler.hook(ARM32SyscallHandler.java:103)
    at com.github.unidbg.arm.backend.UnicornBackend$6.hook(UnicornBackend.java:271)
    at unicorn.Unicorn$NewHook.onInterrupt(Unicorn.java:128)
    at unicorn.Unicorn.emu_start(Native Method)
    at com.github.unidbg.arm.backend.UnicornBackend.emu_start(UnicornBackend.java:296)
    at com.github.unidbg.AbstractEmulator.emulate(AbstractEmulator.java:382)
    at com.github.unidbg.AbstractEmulator.eFunc(AbstractEmulator.java:471)
    at com.github.unidbg.arm.AbstractARMEmulator.eFunc(AbstractARMEmulator.java:215)
    at com.github.unidbg.Module.emulateFunction(Module.java:154)
    at com.github.unidbg.linux.android.dvm.DvmObject.callJniMethod(DvmObject.java:128)
    at com.github.unidbg.linux.android.dvm.DvmClass.callStaticJniMethodObject(DvmClass.java:277)
    at com.bytedance.frameworks.core.encrypt.Candy.getCandyDataWithKeyForJava(Candy.java:94)
    at com.bytedance.frameworks.core.encrypt.Candy.main(Candy.java:110)
[22:02:19 751] WARN [com.github.unidbg.AbstractEmulator] (AbstractEmulator:401) - emulate RX@0x400071c5[libmtguard.so]0x71c5 exception sp=unidbg@0xbffff668, msg=obj is null: android.content.Context@55e4af, offset=22ms
```
I don't know how to proceed from here.
<issue_comment>username_1: The context is null.
<issue_comment>username_0: @username_1 I've uploaded the project; could you show me specifically how to write it, or help build the missing classes directly? I've been fumbling with this for a whole day and can't find a way.
`Baidu Netdisk share link: https://pan.baidu.com/s/1a-uTuuKTqyXGqbbUeQn6hA extraction code: 8d91`
<issue_comment>username_2: Hello, did you solve this? I'm running into the same problem; any guidance appreciated.
<issue_comment>username_1: Send the calling code at com.bytedance.frameworks.core.encrypt.Candy.getCandyDataWithKeyForJava(Candy.java:94) so I can take a look.
<issue_comment>username_0: @username_1 I uploaded it to Baidu Cloud, address:
`https://pan.baidu.com/s/1kppFBZFM6-Nvk1bwqv7Kag extraction code: 8o9c`
Thanks for helping take a look!
<issue_comment>username_3: @username_0 OP, I'm looking at the same .so as you. I passed in DvmObject contexts = vm.resolveClass("android/content/Context").newObject(null) directly, but the call keeps returning null and I don't know what's missing. Could you leave some contact info so we can discuss? Many thanks!!!
{'fraction_non_alphanumeric': 0.1169045830202855, 'fraction_numerical': 0.046731780616078133, 'mean_word_length': 7.1568627450980395, 'pattern_counts': {'":': 0, '<': 13, '<?xml version=': 0, '>': 14, 'https://': 2, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '6829010', 'n_tokens_mistral': 2976, 'n_tokens_neox': 2835, 'n_words': 223}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Build improvements
username_0: - Build everything inside Docker. Do not require protoc and npm to be installed for building an image.
- Configure container to run node as an unprivileged user.
<issue_comment>username_1: This looks clean.
- Can I use that for the Chat application too ([ChatUI](https://github.com/cloudstateio/samples-ui-chat) and [Chat nonUI services](https://github.com/cloudstateio/samples-js-chat))?
- "Do not require protoc and npm to be installed for building an image": so where is the protogen happening?
<issue_comment>username_0: `protoc` from `grpc-tools` is used.
<issue_comment>username_2: LGTM Thanks!
{'fraction_non_alphanumeric': 0.09242424242424242, 'fraction_numerical': 0.006060606060606061, 'mean_word_length': 5.235849056603773, 'pattern_counts': {'":': 1, '<': 5, '<?xml version=': 0, '>': 5, 'https://': 2, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '2264951', 'n_tokens_mistral': 192, 'n_tokens_neox': 188, 'n_words': 73}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Automatic power off after around 10 minutes of inactivity
username_0: Hello, is this normal behaviour? Is it the same on the latest firmware? Thanks, Nik
<issue_comment>username_1: I don't have this happening to mine, and I leave it on for hours
<issue_comment>username_2: do you maybe mean that the screen turns off? not the whole unit, right? I think it's just to prevent burn-in of the screen or something; doesn't bother me!
<issue_comment>username_0: @username_1 Thanks! That's very good to know. @username_2 I'm not sure (it's from the other person's words) but it requires a power reset to turn back on. I'm asking because I'd like to buy one and the owner has told me he has this weird issue. He has presented it as something normal, a quirk of the older OS. He hasn't upgraded to the latest firmware. But I highly doubt it is normal behaviour.
<issue_comment>username_2: I can't imagine that's normal behaviour; I have the latest firmware, do not have this problem at all, and can leave it on for as long as I want. The screen goes to sleep after some time, but no issue there.
<issue_comment>username_0: It looks like a defect. I'll look for another seller. Thanks!<issue_closed>
{'fraction_non_alphanumeric': 0.04979919678714859, 'fraction_numerical': 0.008032128514056224, 'mean_word_length': 4.441048034934497, 'pattern_counts': {'":': 0, '<': 8, '<?xml version=': 0, '>': 8, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '3535436', 'n_tokens_mistral': 347, 'n_tokens_neox': 326, 'n_words': 203}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Try to fix bundle building.
username_0: It might be that Ubuntu has shifted the location of some packages in the remote repository, as most of the 404 files are in `pool/...`, not in `dist/<distname>`. The error message says to try to update or use --fix-missing. So let's try `update`.
<issue_comment>username_0: I'm expecting most of the tests to fail, as #3397 will/did take care of them. This should only fix "Create Bundle / Bundle ubuntu-18.04 (pull_request)", and unless it creates more errors, it should be enough for the above one to pass in order to merge/squash.
<issue_comment>username_0: Failures are the same as the ones that #3397 fixed while the tests were running. Merging, as this should finish fixing the test suite.
<issue_comment>username_0: oh, actually I can't merge, I need at least 1 review.
{'fraction_non_alphanumeric': 0.0665083135391924, 'fraction_numerical': 0.023752969121140142, 'mean_word_length': 4.30188679245283, 'pattern_counts': {'":': 0, '<': 6, '<?xml version=': 0, '>': 6, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '19815530', 'n_tokens_mistral': 254, 'n_tokens_neox': 231, 'n_words': 130}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Responsive nav
username_0: Fixes #1 Hi there! I came across this issue and had some suggestions on how to adapt the site to be responsive. (Sorry, I only noticed after working on the PR that I was supposed to ask for assignment before working on an issue... I don't see any discussion on it yet, so hopefully this is OK.) Some screenshots of the responsive layout:
**MOBILE** --- <img width="413" alt="Screen Shot 2022-01-09 at 1 27 04 PM" src="https://user-images.githubusercontent.com/84106309/148693670-85c184d9-9bef-488d-bf5e-986f8aecfc81.png"> <img width="413" alt="Screen Shot 2022-01-09 at 1 27 12 PM" src="https://user-images.githubusercontent.com/84106309/148693672-2a68626f-29d0-48c5-9859-bc4f84b821be.png">
**TABLET** --- <img width="876" alt="Screen Shot 2022-01-09 at 1 27 32 PM" src="https://user-images.githubusercontent.com/84106309/148693678-04aeeef3-2928-4f23-89a1-2338e6639329.png">
<issue_comment>username_1: Hey @username_0, thanks for implementing your suggestions. There are mainly 3 things:
1) **RightSideNav**: contains links to the various sections within the component
2) **LeftSideNav**: contains the main links to the various components
3) **The main section**: the section where the content (the component) is rendered
So these are the things I need you to work on:
1) **`For widths below 1000px down to 768px`**: Currently the RightSideNav isn't visible, so I want it to be visible above the main section, like a dropdown, initially holding the value of the very first section of the component loaded on the screen.
2) **`For widths below 768px`**: I want the LeftSideNav to render as a [Drawer](https://mui.com/components/drawers/). The RightSideNav could remain the same as above, shown above the loaded page component; it could be a dropdown, or you could add a navbar with a hamburger icon for it.
3) Also, for mobile widths I found that the component designs were not rendering appropriately, so fix these wherever you encounter them in any of the components.
![Screenshot from 2022-01-10 19-39-07](https://user-images.githubusercontent.com/40212568/148779596-8519ca85-b5a7-4cc3-ba56-8b3a7018267a.png)
Also, handle the text content alignment and layout while making it responsive :smile:; it shouldn't break.
<issue_comment>username_0: Hi, thanks very much for the feedback. Unfortunately, something has come up and I won't have the time available to implement your instructions. I wish you the best of luck with this project!
{'fraction_non_alphanumeric': 0.08361839604713037, 'fraction_numerical': 0.08969973394146712, 'mean_word_length': 3.716845878136201, 'pattern_counts': {'":': 0, '<': 7, '<?xml version=': 0, '>': 7, 'https://': 5, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 2, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '19657871', 'n_tokens_mistral': 935, 'n_tokens_neox': 766, 'n_words': 326}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: `janet build` shows `error: could not read file`
username_0: I installed Janet a few days ago. Everything worked fine for a couple of days, but since today I can't build any projects, my own or examples such as [Little Server](https://github.com/bakpakin/littleserver). I see the following when I run `janet build`:
```
littleserver|master > janet build
error: could not read file
  in file/read
  in chunks [boot.janet] on line 2378, column 24
  in run-context [boot.janet] on line 2147, column 12
  in dofile [boot.janet] on line 2390, column 5
  in cli-main [boot.janet] on line 2850, column 9
```
The REPL is working fine:
```
littleserver|master > janet
Janet 1.11.3-9afcec77 linux/x64
repl:1:> (+ 1 2)
3
repl:2:>
```
I'm using Pop!_OS, which is based on Ubuntu:
```
littleserver|master > lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 18.04.5 LTS
Release:        18.04
Codename:       bionic
```
Do you know what the problem could be? How can I provide more information? Thank you.
<issue_comment>username_1: You want `jpm build`, not `janet build`<issue_closed>
<issue_comment>username_0: Well... I feel stupid now, but perhaps it's good that this issue exists as a reference for other like-minded people. :smiley: Thank you!
<issue_comment>username_1: Don't feel bad, I've used Janet for a while and have definitely made the same mistake.
{'fraction_non_alphanumeric': 0.0910344827586207, 'fraction_numerical': 0.03379310344827586, 'mean_word_length': 3.969178082191781, 'pattern_counts': {'":': 0, '<': 6, '<?xml version=': 0, '>': 11, 'https://': 1, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '29868558', 'n_tokens_mistral': 550, 'n_tokens_neox': 477, 'n_words': 199}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Unmatched event overload for "change"
username_0:
```
    |                ^
172 |     callback("foo");
173 |   });
174 | }
```
Any ideas what I'm doing wrong? The above three types all define the signature:
```ts
// e.g. in card-expiry.d.ts:
on(
  eventType: 'change',
  handler: (event: StripeCardExpiryElementChangeEvent) => any
): StripeCardExpiryElement;
```
<issue_comment>username_0: Thanks for getting back to me. Any suggestions regarding the type casting solution? I'll have to `ts-ignore` it in the meantime.
{'fraction_non_alphanumeric': 0.10981697171381032, 'fraction_numerical': 0.018302828618968387, 'mean_word_length': 2.7625, 'pattern_counts': {'":': 0, '<': 3, '<?xml version=': 0, '>': 4, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '16871253', 'n_tokens_mistral': 197, 'n_tokens_neox': 176, 'n_words': 60}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: HPCC-25093 Document "Preserve File Parts" and DFUPlus "Wrap" username_0: <issue_comment>username_1: Process of PR-14592, label: wrap starts now. The reason of this test is: Forced to re-test. Commit ID: 499f6811e938d42eb57edcb12efb9fa7d6bf545d Estimated completion time is ~0.19 hour(s) centos 7.6.1810 (Linux 3.10.0-957.1.3.el7.x86_64) GCC: gcc (GCC) 7.3.1 20180303 (Red Hat 7.3.1-5) Host: ip-10-20-0-63.ca-central-1.compute.internal <issue_comment>username_1: Automated Smoketest: :white_check_mark: OS: centos 7.6.1810 (Linux 3.10.0-957.1.3.el7.x86_64) GCC: gcc (GCC) 7.3.1 20180303 (Red Hat 7.3.1-5) Host: ip-10-20-0-63.ca-central-1.compute.internal Sha: 499f6811e938d42eb57edcb12efb9fa7d6bf545d Build: success Milestone:Install hpccsystems-platform-community_7.12.29-closedown0.el7.x86_64.rpm HPCC Start: OK HPCC Stop: OK HPCC Uninstall: OK Time stats: | Prep time | Build time | Package time | Install time | Start time | Test time | Stop time | Summary | |---|---|---|---|---|---|---|---| | 9 sec (00:00:09) | 670 sec (00:11:10) | 230 sec (00:03:50) | 20 sec (00:00:20) | 16 sec (00:00:16) | 0 sec (00:00:00) | 15 sec (00:00:15) | 960 sec (00:16:00) |
{'fraction_non_alphanumeric': 0.1588477366255144, 'fraction_numerical': 0.18271604938271604, 'mean_word_length': 3.864, 'pattern_counts': {'":': 0, '<': 4, '<?xml version=': 0, '>': 4, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 1, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '19027665', 'n_tokens_mistral': 638, 'n_tokens_neox': 516, 'n_words': 125}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Unable to use Debugger for Python
username_0: Hi, I just installed this package for ST3 and I am running into an error. What I have done so far:
- Debugger: Open in project
- Debugger: Install Adapters
and in the integrated console of ST3 I got this:
```
An exception occured in the main_loop
Traceback (most recent call last):
  File "C:\Users\Responsable\AppData\Roaming\Sublime Text 3\Packages\Debugger\modules\core\core.py", line 25, in _exception_handler
    raise context['exception']
  File "C:\Users\Responsable\AppData\Roaming\Sublime Text 3\Packages\Debugger\modules\libs\asyncio\events.py", line 127, in _run
    self._callback(*self._args)
  File "C:\Users\Responsable\AppData\Roaming\Sublime Text 3\Packages\Debugger\modules\commands\debugger.py", line 31, in run_main
    main.show()
  File "C:\Users\Responsable\AppData\Roaming\Sublime Text 3\Packages\Debugger\modules\debugger\debugger_interface.py", line 306, in show
    self.panel.show()
AttributeError: 'DebuggerInterface' object has no attribute 'panel'
```
#### So, do you know what I did wrong? Or what's going on?
<issue_comment>username_1: You probably need to open it in a Sublime Text project. If you ignore the message, it doesn't appear to say it again when you run Open in project a second time. This should be cleaned up in the next version
<issue_comment>username_2: I am having a similar issue too. This time it says `console_panel`
![image](https://user-images.githubusercontent.com/485799/65571256-aa7e4080-df96-11e9-83d3-ea7d2d50fcf5.png)
<issue_comment>username_1: This should be fixed in the latest<issue_closed>
{'fraction_non_alphanumeric': 0.08931804465902234, 'fraction_numerical': 0.03138201569100785, 'mean_word_length': 4.5083056478405314, 'pattern_counts': {'":': 0, '<': 6, '<?xml version=': 0, '>': 6, 'https://': 1, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '25666637', 'n_tokens_mistral': 557, 'n_tokens_neox': 522, 'n_words': 177}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Remove unnecessary dependency on the IHttpContextAccessor
username_0: Because the IClientResolveContributor and IIpResolveContributor are both called in a method RateLimitMiddleware.ResolveIdentityAsync(HttpContext httpContext) that has access to the current HttpContext, the need can be fulfilled by passing it as a parameter to the IClientResolveContributor.ResolveClientAsync() and IIpResolveContributor.ResolveIp() methods. This approach also prevents potential performance issues or usage mistakes, from IHttpContextAccessor misuse or from caching HttpContext, finding their way into the code base in later iterations. Since HttpContext will never be used in any other capacity in ClientHeaderResolveContributor or IpConnectionResolveContributor (or subsequent implementations), this implementation limits the misuse of HttpContext.
<issue_comment>username_0: Who normally handles the wiki changes? I would have changed those documents too, but was unsure how to do it.
<issue_comment>username_1: I've changed the wiki as well. Thanks @username_0
<issue_comment>username_2: When upgrading from 3.2.2 to 4.0.1 and removing the ContextAccessor, I receive the following error during startup.
```
crit: Microsoft.AspNetCore.Hosting.Diagnostics[6]
      Application startup exception
System.InvalidOperationException: Unable to resolve service for type 'AspNetCoreRateLimit.IProcessingStrategy' while attempting to activate 'AspNetCoreRateLimit.ClientRateLimitMiddleware'.
   at Microsoft.Extensions.Internal.ActivatorUtilities.ConstructorMatcher.CreateInstance(IServiceProvider provider)
   at Microsoft.Extensions.Internal.ActivatorUtilities.CreateInstance(IServiceProvider provider, Type instanceType, Object[] parameters)
   at Microsoft.AspNetCore.Builder.UseMiddlewareExtensions.<>c__DisplayClass5_0.<UseMiddleware>b__0(RequestDelegate next)
   at Microsoft.AspNetCore.Builder.ApplicationBuilder.Build()
   at Microsoft.AspNetCore.Hosting.GenericWebHostService.StartAsync(CancellationToken cancellationToken)
```
Just using the code as documented in the docs:
```
using (var scope = webHost.Services.CreateScope())
{
    // get the ClientPolicyStore instance
    var clientPolicyStore = scope.ServiceProvider.GetRequiredService<IClientPolicyStore>();

    // seed client data from appsettings
    await clientPolicyStore.SeedAsync();
}

await webHost.RunAsync();
```
ref: https://github.com/stefanprodan/AspNetCoreRateLimit/wiki/ClientRateLimitMiddleware#setup
I needed to roll back to version 3.2.2. Any idea how to avoid this critical error during startup?
{'fraction_non_alphanumeric': 0.05557573556120596, 'fraction_numerical': 0.006538321830730113, 'mean_word_length': 4.508, 'pattern_counts': {'":': 0, '<': 8, '<?xml version=': 0, '>': 8, 'https://': 1, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '3181157', 'n_tokens_mistral': 658, 'n_tokens_neox': 646, 'n_words': 230}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: ci-operator/step-registry/openshift/e2e/aws/proxy: Skip failing tests username_0: [A recent proxy job][1] failed the following test-cases: ``` [sig-arch] Managed cluster should should expose cluster services outside the cluster [Suite:openshift/conformance/parallel] [sig-builds][Feature:Builds] build have source revision metadata started build should contain source revision information [Suite:openshift/conformance/parallel] [sig-cli] oc adm must-gather runs successfully for audit logs [Suite:openshift/conformance/parallel] [sig-cluster-lifecycle][Feature:Machines] Managed cluster should have machine resources [Suite:openshift/conformance/parallel] [sig-imageregistry][Feature:ImageAppend] Image append should create images by appending them [Suite:openshift/conformance/parallel] [sig-imageregistry][Feature:ImageInfo] Image info should display information about images [Suite:openshift/conformance/parallel] [sig-network] Internal connectivity for TCP and UDP on ports 9000-9999 is allowed [Suite:openshift/conformance/parallel] [sig-network] Networking should provide Internet connection for containers [Feature:Networking-IPv4] [Skipped:azure] [Suite:openshift/conformance/parallel] [Suite:k8s] [sig-network][Feature:Router] The HAProxy router should respond with 503 to unrecognized hosts [Suite:openshift/conformance/parallel] [sig-network][Feature:Router] The HAProxy router should serve routes that were created from an ingress [Suite:openshift/conformance/parallel] [sig-network][Feature:Router] The HAProxy router should set Forwarded headers appropriately [Suite:openshift/conformance/parallel] [sig-network][Feature:Router] The HAProxy router should support reencrypt to services backed by a serving certificate automatically [Suite:openshift/conformance/parallel] ``` This commit adds a `TEST_SKIPS` framework, taking advantage of: ```console $ openshift-tests run --help | grep 'dry-run\|--file' If you specify the --dry-run argument, the names of each individual test that is part of the suite will be printed, one per line. You may filter this list and pass it back to the run command with the --file argument. You may also pipe a list of test names, one per line, on standard input by passing "-f -". --dry-run Print the tests to run without executing them. -f, --file string Create a suite from the newline-delimited test names in this file. ``` `grep` [uses basic regular expressions by default][2]. The YAML `>-` is [trimmed line folding][3], so we get one long line with no trailing newline: ```console $ yaml2json <ci-operator/step-registry/openshift/e2e/aws/proxy/openshift-e2e-aws-proxy-workflow.yaml | jq -r '.workflow.env[0].default' Image append should create images by appending them\| Image info ...should contain source revision information\| oc adm must-gather runs successfully for audit logs ``` The spaces from line unfolding are unfortunate, but because the original lines have the `sig-*`, etc. prefixes followed by a space before the test-case title, it's not a problem. 
[1]: https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/periodic-ci-openshift-release-master-ocp-4.6-e2e-aws-proxy/1308949438701506560 [2]: https://pubs.opengroup.org/onlinepubs/9699919799/utilities/grep.html [3]: https://yaml.org/spec/1.2/spec.html#id2779048 <issue_comment>username_0: Just vanilla flakes in [the proxy rehearsal][1]: ``` Failing tests: [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source [Suite:openshift/conformance/parallel] [Suite:k8s] [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly] [Suite:openshift/conformance/parallel] [Suite:k8s] ``` [1]: https://prow.ci.openshift.org/view/gs/origin-ci-test/pr-logs/pull/openshift_release/12233/rehearse-12233-pull-ci-openshift-installer-master-e2e-aws-proxy/1309509672574652416 <issue_comment>username_1: Changes look good. Skips are known to be working. /lgtm
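For anyone reading along, the same filter can be expressed in Python (an illustration, not part of the step registry; note that `grep`'s basic regular expressions spell alternation `\|`, which Python's `re` writes as a bare `|`, and the unfolding spaces are kept because every test name follows a space in the full line):

```python
import re

# the folded YAML value as the workflow would receive it
test_skips = (
    r"Image append should create images by appending them\| Image info "
    r"should display information about images\| oc adm must-gather runs "
    r"successfully for audit logs"
)

# convert grep BRE alternation to Python regex alternation
pattern = re.compile(test_skips.replace(r"\|", "|"))

tests = [
    "[sig-imageregistry][Feature:ImageAppend] Image append should create images by appending them [Suite:openshift/conformance/parallel]",
    "[sig-cli] oc adm must-gather runs successfully for audit logs [Suite:openshift/conformance/parallel]",
    "[sig-storage] a test that should still run [Suite:openshift/conformance/parallel]",
]
kept = [t for t in tests if not pattern.search(t)]
print(kept)  # mirrors: openshift-tests run --dry-run ... | grep -v "${TEST_SKIPS}"
```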
{'fraction_non_alphanumeric': 0.10151551599711331, 'fraction_numerical': 0.024536925667548712, 'mean_word_length': 5.348091603053435, 'pattern_counts': {'":': 0, '<': 5, '<?xml version=': 0, '>': 5, 'https://': 4, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '7146520', 'n_tokens_mistral': 1286, 'n_tokens_neox': 1177, 'n_words': 402}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Update README based on First evaluation feedback username_0: Documentation Feedback from evaluation * The model card is still a bit sparse on details so it would have been nice to include more details about the dataset and the application of such model in this domain. * Social impact: CLIP has proved to be effective in a lot of different tasks so it would be interesting to see how such models can be used in a given demo and what applications it could enable. * It would be nice to include more details about the dataset and about how these different applications can be useful. <issue_comment>username_0: merged and updated<issue_closed>
{'fraction_non_alphanumeric': 0.02643171806167401, 'fraction_numerical': 0.002936857562408223, 'mean_word_length': 4.829059829059829, 'pattern_counts': {'":': 0, '<': 4, '<?xml version=': 0, '>': 4, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '14663992', 'n_tokens_mistral': 154, 'n_tokens_neox': 151, 'n_words': 106}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: reply VASurfaceAttribMemoryType to vaQuerySurfaceAttributes()
username_0: Other drivers, such as intel-vaapi-driver and amdgpu gallium, according to [vadumpcaps](https://github.com/fhvwy/vadumpcaps), reply with the available `VASurfaceAttribMemoryType` for all profile/entrypoints when calling `vaQuerySurfaceAttributes()`, but media-driver only replies with it for the postprocessor profile, not for the rest of the encoders and decoders.
<issue_comment>username_1: @username_0 I will check it. Thanks.
<issue_comment>username_1: @username_0 I have submitted #699 for fixing this issue. Thanks.<issue_closed>
{'fraction_non_alphanumeric': 0.0770440251572327, 'fraction_numerical': 0.012578616352201259, 'mean_word_length': 7.063291139240507, 'pattern_counts': {'":': 0, '<': 5, '<?xml version=': 0, '>': 5, 'https://': 1, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '21401595', 'n_tokens_mistral': 185, 'n_tokens_neox': 179, 'n_words': 64}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Which version should I use if my PHP version is < 7.1 and I need overload?
username_0: I need to use the overload method to overwrite vars from the .env file instead of the vars in the OS env, so I upgraded symfony/dotenv from ^3.4 to ^4.2 and deployed to the test env, but our PHP version there is 7.0.30:
```php
Fatal error: Uncaught TypeError: Return value of Symfony\Component\Dotenv\Dotenv::populate() must be an instance of Symfony\Component\Dotenv\void, none returned in /var/deploy/oem-sdk/vendor/symfony/dotenv/Dotenv.php:145
```
Then I found that the void return type is only supported from PHP 7.1, so is upgrading my PHP the only way to solve this? Or is there a version that supports overload with a PHP version below 7.1?
<issue_comment>username_1: So first of all, I would suggest you use the same PHP version locally, on testing, and on production :blush: Otherwise there can always be interesting surprises like the error you mentioned above. And yes, Symfony 4.2 requires PHP >= 7.1.3 https://github.com/symfony/symfony/blob/4.2/src/Symfony/Component/Dotenv/composer.json#L19 So you need to downgrade the symfony packages or upgrade your PHP version.
<issue_comment>username_2: You either need to upgrade to PHP 7.1.3+ (which I'd highly recommend) or implement the method in your own code. I am going to close here as there is nothing to change on our side. Thank you for understanding.<issue_closed>
<issue_comment>username_0: Thanks for your advice. In the end, I upgraded the PHP version on testing to match my local one.
{'fraction_non_alphanumeric': 0.06727037516170763, 'fraction_numerical': 0.021345407503234153, 'mean_word_length': 4.226351351351352, 'pattern_counts': {'":': 0, '<': 7, '<?xml version=': 0, '>': 7, 'https://': 1, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '9435508', 'n_tokens_mistral': 471, 'n_tokens_neox': 437, 'n_words': 223}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Problem with the new Include keyword in dnscrypt-proxy 1.9.2
username_0: Why are issues closed so quickly? I understand that keeping 256 issues on the list once they've been resolved is a bother, except that resolved issues may not be as resolved as they seem: https://github.com/username_1/dnscrypt-proxy/issues/568 I added a new pertinent comment but the issue remains closed. Thanks for having a look. As you like it, of course.
<issue_comment>username_1: Can you clarify your question/issue/improvement request?
<issue_comment>username_1: I am confused. Is your question about the cache plugin and its `min-ttl` option? Or the `Include` keyword in the config file?
<issue_comment>username_0: You are confused? Less than I am. I've never handled problems here but with you, @username_1, yet I've pointed out the context for clarity. I'm sorry, but I don't speak Chinese.
<issue_comment>username_1: I guess you are pointing out that the release notes had a pasto: `--with-ttl` instead of `--min-ttl`. This has been fixed, thanks!<issue_closed>
{'fraction_non_alphanumeric': 0.06788990825688074, 'fraction_numerical': 0.014678899082568808, 'mean_word_length': 4.712041884816754, 'pattern_counts': {'":': 0, '<': 7, '<?xml version=': 0, '>': 7, 'https://': 1, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '5251353', 'n_tokens_mistral': 314, 'n_tokens_neox': 293, 'n_words': 154}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Twitter authorization does not work
username_0: While the signatures are correct, the authorization fails right at the beginning. It might be due to the use of HTTP/2 (h2).
<issue_comment>username_0: @username_1 Could you confirm that Twitter authorization is working with the current release?
<issue_comment>username_1: Will do.
<issue_comment>username_1: Tested with 2016.02.22-RC and Android 5.0.1 (Samsung S4): I have to admit I had not done this for a long time... however, it does not seem to work. I can start the process, Chrome opens and I am directed to the Twitter API login page, I can successfully log in and can read on the webpage "You are directed back to the application. This might take some time" (free translation from German). But Chrome stays open and does not return to c:geo (even after waiting for several minutes). Manually closing the browser, I get back to c:geo but am not authorized. I guess this is a different problem from what you see with HTTP/2? I will have a look into the logs later today.
<issue_comment>username_0: Yes, in my case the request seems to fail without even opening the browser and returns a `401 Unauthorized` prematurely.
<issue_comment>username_1: Tried with the default browser instead of Chrome and this works normally. I get authorized and posting works. So I would say: no problem on `release`, but a problem on `master` or related to Android 6
<issue_comment>username_0: When you say "problem on master", have you also tried on `master` and gotten it to fail?<issue_closed>
<issue_comment>username_1: @username_0 I am now back on `master` for my daily usage and did not have any problem with Twitter so far.
{'fraction_non_alphanumeric': 0.051176470588235295, 'fraction_numerical': 0.017058823529411765, 'mean_word_length': 4.632450331125828, 'pattern_counts': {'":': 0, '<': 10, '<?xml version=': 0, '>': 10, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '22562569', 'n_tokens_mistral': 461, 'n_tokens_neox': 438, 'n_words': 265}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Improve offset calculation for scale.offset option
username_0: The offsets of time scales are currently calculated based on tick intervals, but this causes too much margin, or too little, depending on the scale configuration and the datasets. The proposed code calculates the offsets based on the timestamps of labels and data. It also fixes the offset calculation in the case where a chart has a single data point. Current version: https://jsfiddle.net/q4fo65aa/ Proposed version: https://jsfiddle.net/sfhwd1by/
<issue_comment>username_0: Removed the check for label values; it now only uses `scale._timestamps.data` for offset calculation.
<issue_comment>username_1: @username_0 the fiddles in the description don't seem to be working for me
<issue_comment>username_1: @username_0 could you fix the fiddles so that we can review this PR? They're giving me the error below: Refused to execute script from 'https://raw.githubusercontent.com/chartjs/chartjs.github.io/master/dist/master/Chart.js' because its MIME type ('text/plain') is not executable, and strict MIME type checking is enabled.
<issue_comment>username_1: @username_0 just a reminder to fix the fiddles when you get a chance. I'd love to get this change merged
<issue_comment>username_1: @username_0 have you abandoned this PR? I would really like to see it get checked in, assuming it fixes an issue, but I can't tell what the issue is since the fiddles are broken. I may close this PR due to inactivity if you're no longer interested in seeing it merged
<issue_comment>username_2: Closing due to inactivity
<issue_comment>username_0: @username_1 @username_2 @simonbrunel Sorry for not responding earlier. I fixed the fiddles and confirmed that the problems still remain. Can you reopen this if possible?
<issue_comment>username_0: As equally sized bars were introduced in 2.7.2, I made some changes to support them.
- Offsets are calculated based on the intervals between the first two data points and the last two data points (`barThickness: 'flex'`) or the minimum interval of all data (`barThickness: undefined`).
- Added tests for both cases of `barThickness: 'flex'` and `undefined`.
@username_1 @username_2 @simonbrunel Can you review this again?
<issue_comment>username_1: Documentation for these features added in https://github.com/chartjs/Chart.js/commit/119a86f3995e373865ce33a1687cb834ecef6eb3 and awaiting deployment
<issue_comment>username_1: @username_0 this PR will need to be rebased. There are also some outstanding comments
<issue_comment>username_0: Closing this for now because accessing `barThickness` from the time scale code is not a good design, as @simonbrunel pointed out. Note that the improvement of the offset calculation for a single data point has been added in #5933.
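A rough reading of the rule summarized in the bullet list above, sketched in plain Python; this is an interpretation of the PR description, not the actual Chart.js source. Offsets are expressed as fractions of the displayed range, and the single-data-point case is assumed to simply center the point.

```python
def compute_offsets(timestamps, flex=True):
    """Return (start, end) offsets as fractions of the displayed range."""
    if len(timestamps) <= 1:
        # a single data point gets centered: half the scale on each side
        return 0.5, 0.5
    span = timestamps[-1] - timestamps[0]
    if flex:  # barThickness: 'flex' -- first/last intervals drive the margins
        first = timestamps[1] - timestamps[0]
        last = timestamps[-1] - timestamps[-2]
    else:     # barThickness: undefined -- the minimum interval drives both
        first = last = min(b - a for a, b in zip(timestamps, timestamps[1:]))
    return (first / 2) / span, (last / 2) / span

print(compute_offsets([0, 10, 30]))         # uneven spacing, flex bars
print(compute_offsets([0, 10, 30], False))  # min-interval variant
print(compute_offsets([42]))                # single data point
```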
{'fraction_non_alphanumeric': 0.056252239340738086, 'fraction_numerical': 0.020781082049444642, 'mean_word_length': 5.260089686098655, 'pattern_counts': {'":': 0, '<': 13, '<?xml version=': 0, '>': 13, 'https://': 4, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '11248249', 'n_tokens_mistral': 789, 'n_tokens_neox': 736, 'n_words': 373}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: I2C and UART support in gpiozero.
username_0: Need support for I2C and UART for reading data from other sensors like gyroscopes (which generally use I2C and SPI communication).<issue_closed>
<issue_comment>username_1: Too vague. There's an issue template for feature requests, which you haven't used, and plenty of issues discussing the same thing.
{'fraction_non_alphanumeric': 0.05570291777188329, 'fraction_numerical': 0.013262599469496022, 'mean_word_length': 5.872727272727273, 'pattern_counts': {'":': 0, '<': 4, '<?xml version=': 0, '>': 4, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '2279132', 'n_tokens_mistral': 104, 'n_tokens_neox': 100, 'n_words': 50}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Release conflict
username_0: Hi again, I'm having troubles again. I am using the FailOpen feature. If I try to do a release at the same time as a heartbeat, it crashes. I think this happens because the lock tries to release with a different guid value (the heartbeat reaches the database and the release reaches the database right after). I was able to reproduce this error because my middleware has some delay. We are doing a workaround by checking the time: if it's close to a heartbeat, we wait a bit before releasing.
<issue_comment>username_1: Again, I'm just guessing; a stack trace would help orient me to where the problem is occurring.
<issue_comment>username_0: I can't show you the error stack right now, so let me try to re-explain the problem. It's not a crash, it's a 'malfunction'.
- Context: I can't use the aws.Dynamodb.DocumentClient in my app because I'm creating a front-end app (and I'm worried about exposing apiKeys and accessTokens). So the workaround is recreating every request you need from the package in a REST way (maybe there is a better solution, but it's the one we are using right now), and there is a delay in every request.
- Problem: If I try to do a release (activated by a user) after the heartbeat request launches and before it returns, the release crashes because it thinks another user has caught this lock. This error is not catchable in the way that you have said, simply because it's not a heartbeat error, it's a release error.
- Possible solutions:
  - If the release function is aware of the heartbeat, it can wait while a heartbeat request is in flight
  - Don't do a release if the heartbeat hasn't returned
Thanks for your time, and if you couldn't understand, I will try to reproduce the error (it's really hard to reproduce right now).
<issue_comment>username_1: Ok. I think I'm maybe understanding a bit more. It should not matter which request is in progress (heartbeat or release) because DynamoDB behaves as if the request is atomic. As such, there is one other place, inside the fail open lock release code, where a request can fail. This isn't apparent because this error is expected and accounted for within the module. However, if you are replacing the DocumentClient, then perhaps you're not providing the expected error? [Lock.prototype._releaseFailOpen](https://github.com/username_1/dynamodb-lock-client/blob/master/index.js#L476) can observe `ConditionalCheckFailedException`, which would also be part of normal operation. I would expect `ConditionalCheckFailedException` if the heartbeat updated the `guid` before `_releaseFailOpen` reached DynamoDB. If the `guid` is not what is expected when `_releaseFailOpen` is called, then the correct behavior when observing `ConditionalCheckFailedException` is to succeed (the heartbeat should have been stopped as part of lock release logic already). Eventually, the lock will be released when the `leaseDurationMs` elapses.
<issue_comment>username_0: When I'm referring to a request, it's an HTTP request (I think that maybe some operations can happen in an unexpected order; am I wrong?). I understand what you are saying and it's correct for most cases. When I try to release, the heartbeat is stopped, but what stays in DynamoDB is an open lock. This is not an error in the code, it is a malfunction because of the delays. I will try to reproduce the error later today; I'm having trouble explaining this.
<issue_comment>username_0: We were having this malfunction with only one user, and were getting this error when releasing. ConditionalCheckFailedException should only be for when another user gets the lock, right? Even if it's a single user using the lock, will it stay open? If that's the expected behavior, we don't have a problem. But with a bigger lease time, the next user that tries to get the lock will wait too long without need. Thanks for your help
<issue_comment>username_1: You could observe this even with a single user. This is because [heartbeat changes the `guid` every time](https://github.com/username_1/dynamodb-lock-client/blob/master/index.js#L353). You're correct that a bigger lease time leads to the next user having to wait longer. I think that once #22 is addressed, you'll be able to retry the release without having to wait the full lease time. However, currently, due to #22, retrying release is unsafe.
<issue_comment>username_1: With the release of [v0.6.2](https://github.com/username_1/dynamodb-lock-client/releases/tag/v0.6.2) it should now be safe for you to retry `lock.release()` if you do not wish to wait for the full lease duration.
<issue_comment>username_1: My previous statement that it was unsafe to retry `lock.release()` for a fail open lock was incorrect. The unsafe condition only occurred if `heartbeatTimeoutMs` was not provided for a `FailOpen` lock, in which case `lock.release()` would execute the `FailClosed` release protocol. If `heartbeatTimeoutMs` was provided for a `FailOpen` lock, retrying `lock.release()` was safe all along.
<issue_comment>username_0: A suggestion, semi-related to this issue: have you considered the TTL feature from DynamoDB to help deal with the failing release? TTL being 2x the lease time, for example.
<issue_comment>username_1: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/howitworks-ttl.html<issue_closed>
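To make the race concrete: the fail-open release is a conditional write keyed on the lock's `guid`, and a heartbeat that lands first rotates that `guid`. A hedged boto3 sketch of the protocol described above; the table name, key schema and attribute names are assumptions for illustration, not the module's actual schema:

```python
import boto3
from botocore.exceptions import ClientError

table = boto3.resource("dynamodb").Table("locks")  # hypothetical table name

def release_fail_open(lock_id: str, guid: str) -> None:
    try:
        table.delete_item(
            Key={"id": lock_id},
            ConditionExpression="guid = :g",
            ExpressionAttributeValues={":g": guid},
        )
    except ClientError as err:
        if err.response["Error"]["Code"] == "ConditionalCheckFailedException":
            # a heartbeat rotated the guid after we read it; the lock will
            # still expire after leaseDurationMs, so treat this as success
            return
        raise  # anything else is a real failure worth surfacing or retrying
```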
{'fraction_non_alphanumeric': 0.05150214592274678, 'fraction_numerical': 0.005784661317409965, 'mean_word_length': 4.5949895615866385, 'pattern_counts': {'":': 0, '<': 13, '<?xml version=': 0, '>': 13, 'https://': 4, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '14895493', 'n_tokens_mistral': 1445, 'n_tokens_neox': 1358, 'n_words': 795}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: How to view 3D model/preview of G-code like in screenshots? username_0: **What doesn't work?** In this screenshot, the benchy G-code can be previewed with a 3D model: https://raw.githubusercontent.com/TimonGaebelein/OctoprintDash/master/screenshots/file_details.png However, I cannot seem to produce the same effect on my installation, and I cannot find any setting to enable this. How do I do this? Or is it just a fake part of the screenshot? Latest version of OctoDash (v2.1.2) and OctoPrint, on an RPi 4. <issue_comment>username_1: You need the Cura thumbnail plugin in OctoPrint: https://plugins.octoprint.org/plugins/UltimakerFormatPackage/ And using Cura as a slicer, you just use the "Print with octoprint" button and it happens automatically. <issue_comment>username_2: If you use PrusaSlicer or SuperSlicer as a slicer, you can use https://plugins.octoprint.org/plugins/prusaslicerthumbnails/. See the plugin documentation to learn how to enable the thumbnail feature. <issue_comment>username_0: Thank you, I got it working with PrusaSlicer after compiling 2.3.0.<issue_closed>
{'fraction_non_alphanumeric': 0.0709849157054126, 'fraction_numerical': 0.011535048802129548, 'mean_word_length': 4.968253968253968, 'pattern_counts': {'":': 0, '<': 6, '<?xml version=': 0, '>': 6, 'https://': 3, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '20313884', 'n_tokens_mistral': 363, 'n_tokens_neox': 343, 'n_words': 142}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Doesn't work with NVM username_0: Even though node is in the path it is not working with NVM. Could also be related to my node version which is `5.6.0`. ```console Dylans-MacBook-Pro:~ dylanpiercey$ which node /Users/dylanpiercey/.nvm/versions/node/v5.6.0/bin/node Dylans-MacBook-Pro:~ dylanpiercey$ node -v v5.6.0 ``` ![image](https://cloud.githubusercontent.com/assets/4985201/14368104/ee0be298-fcd8-11e5-99bb-e0a8ea8cc111.png) <issue_comment>username_1: fm.. we can run stylefmt if we have the node environment. I think, it occurs in sublime-stylefmt or sublime text itself. <issue_comment>username_0: @username_1 other sublime plugins that rely on nodejs work fine with my current configuration if that helps.<issue_closed> <issue_comment>username_0: @username_1 I just realized that I didn't post this wrong place (thought it was in sublime-stylefmt). Thank you for creating an issue in the correct place and for your patience!
{'fraction_non_alphanumeric': 0.08979591836734693, 'fraction_numerical': 0.04693877551020408, 'mean_word_length': 4.770588235294118, 'pattern_counts': {'":': 0, '<': 6, '<?xml version=': 0, '>': 6, 'https://': 1, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '27538854', 'n_tokens_mistral': 355, 'n_tokens_neox': 325, 'n_words': 113}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: TypeDoc -v or --version does not output the version of TypeDoc username_0: If I specify the -v or --version option I expect to see the version of TypeDoc, instead I see TypeDoc {{ VERSION }} My npm installed version of TypeDoc is 0.5.9 <issue_comment>username_1: Related to #402 <issue_comment>username_2: I'm going to close it as a duplicate, it looks like it's the same issue. Will be fixed in the next release.<issue_closed>
{'fraction_non_alphanumeric': 0.07051282051282051, 'fraction_numerical': 0.019230769230769232, 'mean_word_length': 4.390804597701149, 'pattern_counts': {'":': 0, '<': 5, '<?xml version=': 0, '>': 5, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '15413610', 'n_tokens_mistral': 143, 'n_tokens_neox': 133, 'n_words': 69}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: bls + mcl - remove bls-go-binary from go.mod username_0: # Request For Comment I propose the removal of `bls-go-binary` listing inside `go.mod`. To be replaced with `Makefile` logic that fetches and builds specific releases of `bls` + `mcl` from their upstream `github.com/herumi/*` repos. # Why? - In blockchain, code is law. - Never depend on something you didn't compile yourself in your CI/CD pipeline. Even if you trust who compiled it, you are compromised as soon as they are. - Avoid potential auditability issues. - Prebuilt binaries shrink the spectrum of Embeddable possobilities. <issue_comment>username_1: yeah that's not going to work because bls-go-binary is a go API to the C++ builds of bls+mcl so gosdk must integrate bls-go-binary. I have a branch where we replace bls+mcl with MIRACL, so this is a moot issue anyway. We can close this issue out unless you have more questions <issue_comment>username_0: ok `MIRACL` sounds like a good solution, thanks for pointing that out closing.<issue_closed>
{'fraction_non_alphanumeric': 0.06809701492537314, 'fraction_numerical': 0.002798507462686567, 'mean_word_length': 4.061320754716981, 'pattern_counts': {'":': 0, '<': 5, '<?xml version=': 0, '>': 5, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '4453484', 'n_tokens_mistral': 328, 'n_tokens_neox': 311, 'n_words': 151}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Possible missing documentation for "Used by" dependents on library.json manifiest username_0: Hi, First, thanks for awesome job on the last version of PlatformIO registry. It is a so good improvement. One question, maybe is missing the documentation for the field "used by" on the [current documentation](https://docs.platformio.org/en/latest/librarymanager/config.html#dependencies)? Or how I can fill this field: ![screenshot20220106_014218](https://user-images.githubusercontent.com/423856/148310016-9cffd209-39d9-462a-9de9-05f264358f13.jpg)<issue_closed> <issue_comment>username_1: Hi, Thanks for the kind words! 😊 "Used by" is automatically calculated based on https://docs.platformio.org/en/latest/librarymanager/config.html#dependencies So, if someone publishes package A that depends on package B, we track this.
{'fraction_non_alphanumeric': 0.08665906499429875, 'fraction_numerical': 0.06043329532497149, 'mean_word_length': 5.183098591549296, 'pattern_counts': {'":': 0, '<': 4, '<?xml version=': 0, '>': 4, 'https://': 3, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '6240915', 'n_tokens_mistral': 295, 'n_tokens_neox': 251, 'n_words': 85}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: vosk-api python module build error username_0: Hi. While trying python3 setup.py install [ 40%] Building CXX object CMakeFiles/_vosk.dir/voskPYTHON_wrap.cxx.o /usr/bin/c++ -D_vosk_EXPORTS -I/opt/vosk/vosk-api/python/../src -I/opt/vosk/kaldi/src -I/opt/vosk/kaldi/tools/openfst/include -I/usr/include/python3.5m -O3 -DFST_NO_DYNAMIC_LINKING -std=c++11 -O3 -DNDEBUG -fPIC -o CMakeFiles/_vosk.dir/voskPYTHON_wrap.cxx.o -c /opt/vosk/vosk-api/python/build/temp.linux-x86_64-3.5/voskPYTHON_wrap.cxx In file included from /opt/vosk/vosk-api/python/build/temp.linux-x86_64-3.5/voskPYTHON_wrap.cxx:3119:0: /opt/vosk/vosk-api/python/../src/kaldi_recognizer.h:35:14: error: ‘LookaheadFst’ in namespace ‘fst’ does not name a template type fst::LookaheadFst<fst::StdArc, int32> *decode_fst_; ^~~~~~~~~~~~ CMakeFiles/_vosk.dir/build.make:62: recipe for target 'CMakeFiles/_vosk.dir/voskPYTHON_wrap.cxx.o' failed make[3]: *** [CMakeFiles/_vosk.dir/voskPYTHON_wrap.cxx.o] Error 1 make[3]: Leaving directory '/opt/vosk/vosk-api/python/build/temp.linux-x86_64-3.5' CMakeFiles/Makefile2:104: recipe for target 'CMakeFiles/_vosk.dir/all' failed make[2]: *** [CMakeFiles/_vosk.dir/all] Error 2 make[2]: Leaving directory '/opt/vosk/vosk-api/python/build/temp.linux-x86_64-3.5' CMakeFiles/Makefile2:116: recipe for target 'CMakeFiles/_vosk.dir/rule' failed make[1]: *** [CMakeFiles/_vosk.dir/rule] Error 2 make[1]: Leaving directory '/opt/vosk/vosk-api/python/build/temp.linux-x86_64-3.5' Makefile:131: recipe for target '_vosk' failed make: *** [_vosk] Error 2 <issue_comment>username_1: It doesn't seem you have Kaldi build from our branch. Follow the instructions in README if you want to build from source: https://github.com/alphacep/vosk-api#python-module-build <issue_comment>username_0: Thank's for answer. The same error. I clone git repo to /opt/vosk/kaldi and make tools and src with succes. Then i clone vosk-api to /opt/vosk/vosk-api (directory python does not exist in kaldi repo and "cd python" is impossible), set the KALDI_ROOT to /opt/vosk/kaldi, change dir to /opt/vosk/vosk-api/python and try to run "pyton3 setup.py install". As result i have error. /opt/vosk/vosk-api/python/../src/kaldi_recognizer.h:35:14: error: ‘LookaheadFst’ in namespace ‘fst’ does not name a template type fst::LookaheadFst<fst::StdArc, int32> *decode_fst; ^~~~~~~~~~~~ CMakeFiles/_vosk.dir/build.make:62: recipe for target 'CMakeFiles/_vosk.dir/voskPYTHON_wrap.cxx.o' failed Debian GNU/Linux 9 Python 3.5.3 <issue_comment>username_1: You can type `git status && git remote -v` inside kaldi folder and paste here. <issue_comment>username_0: git status && git remote -v On branch master Your branch is up-to-date with 'origin/master'. Untracked files: (use "git add <file>..." to include in what will be committed) tools/OpenBLAS-0.3.7.tar.gz nothing added to commit but untracked files present (use "git add" to track) origin https://github.com/alphacep/kaldi.git (fetch) origin https://github.com/alphacep/kaldi.git (push) <issue_comment>username_1: You need to checkout lookahead branch with `git checkout lookahead` as instruction says. <issue_comment>username_0: Great! Now that error did goes away. Thank's alot.<issue_closed> <issue_comment>username_1: Ok, it is the same as #7
{'fraction_non_alphanumeric': 0.13057607090103399, 'fraction_numerical': 0.02895125553914328, 'mean_word_length': 4.217257318952234, 'pattern_counts': {'":': 0, '<': 13, '<?xml version=': 0, '>': 13, 'https://': 3, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '6479920', 'n_tokens_mistral': 1386, 'n_tokens_neox': 1317, 'n_words': 311}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Load DICOM images - logic username_0: The question ".. Would you like to load it?" is still confusing. I click no and then it is doing something anyway. I would like to do it like this: -> Rename "Load DICOM images" to "Open DICOM folder" -> It does this: In an XML already exists, just open it. If no XML exists, create one first and then open it. Do not ask the user anything. Then: -> Create a second menu item called "Refresh DICOM folder" -> It does this: create a new XML (even if one exists already) and open it. I think we also need a menu item called "Close DICOM folder". <issue_comment>username_1: I am currently working on this issue. <issue_comment>username_1: Created the three File menu items mentioned above<issue_closed> <issue_comment>username_0: The question ".. Would you like to load it?" is still confusing. I click no and then it is doing something anyway. I would like to do it like this: -> Rename "Load DICOM images" to "Open DICOM folder" -> It does this: In an XML already exists, just open it. If no XML exists, create one first and then open it. Do not ask the user anything. Then: -> Create a second menu item called "Refresh DICOM folder" -> It does this: create a new XML (even if one exists already) and open it. I think we also need a menu item called "Close DICOM folder". This also writes the XML to disk. <issue_comment>username_0: Reopened this as there are a few issues left. **1** Upon clicking "refresh DICOM" do not ask the user to select a directory again. This should just recreate the same XML for the same directory. Refresh should be unclickable when no DICOM folder is open. **2** I get some error reports as I open, close, refresh Error in WriteXMLfromDICOM.get_study_series: 'FileDataset' object has no attribute 'SeriesDescription' Error in WriteXMLfromDICOM.build_dictionary: cannot unpack non-iterable NoneType object Error in WriteXMLfromDICOM.open_dicom_to_xml: 'NoneType' object is not iterable Error in WriteXMLfromDICOM.create_XML_file: 'NoneType' object has no attribute 'iter' XML file creation time = 9.939924478530884 Error in TreeView.makeDICOMStudiesTreeView at line 141: stat: path should be string, bytes, os.PathLike or integer, not NoneType XML file creation time = 2.408388376235962 Error in WeaselXMLReader.setSeriesExpandedState: 'NoneType' object has no attribute 'set' <issue_comment>username_2: 1 issue reported on the 23rd March: - When user presses "Close DICOM Folder", the "Refresh DICOM Folder" and "Close DICOM Folder" button should be greyed out<issue_closed>
{'fraction_non_alphanumeric': 0.06126331811263318, 'fraction_numerical': 0.01750380517503805, 'mean_word_length': 4.268537074148297, 'pattern_counts': {'":': 0, '<': 9, '<?xml version=': 0, '>': 17, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 1}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '8946724', 'n_tokens_mistral': 789, 'n_tokens_neox': 754, 'n_words': 390}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: FIX_md_parser: v2 username_0: closes #283, #350 This PR summarize suggested logics mentioned in the close PR #344. It includes fixes on two parts: 1. fundamental fix for md_parser logic so that we won't encounter the same issue currently mentioned in issue #283. Therefore our example sheet doesn't need to be updated. 2. add validation option to ``import_sample_info`` function. User can use it to inspect metadata entered inside the spreadsheet. 3. add mutation logic when instantiating ``Sample`` so if user is not aware of invalid ``.`` in key fields, program shows warning(but not error) and mutates the metadata automatically to make experiments running. This PR is ready for review, thanks. <issue_comment>username_0: unknown error appears. close it for now and work on it tomorrow <issue_comment>username_0: error comes from git confusion happened in rebase. Solved and reopen PR <issue_comment>username_0: close based on the discussion with @CJ-Wright and @sbillinge. Replace with new PR
{'fraction_non_alphanumeric': 0.05459770114942529, 'fraction_numerical': 0.019157088122605363, 'mean_word_length': 4.870786516853933, 'pattern_counts': {'":': 0, '<': 5, '<?xml version=': 0, '>': 5, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '3426502', 'n_tokens_mistral': 280, 'n_tokens_neox': 263, 'n_words': 149}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Generates unreadble .csv files for scene detection username_0: Using the command "scenedetect -i goldeneye.mp4 -o scenes_list.csv -d content -si -df 4" generates unreadable .csv files. <issue_comment>username_1: I was wondering about the same thing. There's a typo in the tutorial. It shoud be ”-co" for csv files and "-o" for mkv files. <issue_comment>username_2: See issue [#29](https://github.com/Breakthrough/PySceneDetect/issues/29) for the same concern and response <issue_comment>username_0: There is another issue with the split that is done with the use of mkvmerge --split. The timecodes supplied are not used at all and can be verified from (https://github.com/mbunkus/mkvtoolnix/wiki/Splitting-imprecise ).<issue_closed>
{'fraction_non_alphanumeric': 0.08712613784135241, 'fraction_numerical': 0.013003901170351105, 'mean_word_length': 4.923076923076923, 'pattern_counts': {'":': 0, '<': 6, '<?xml version=': 0, '>': 6, 'https://': 2, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '27213923', 'n_tokens_mistral': 239, 'n_tokens_neox': 230, 'n_words': 94}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Change the default of --packages-dir to false username_0: It's currently `true`. We'd like to get this change in SDK 1.23 <issue_comment>username_1: @username_0 You've set milestone to 1.20 in https://github.com/dart-lang/sdk/issues/27399, which number is correct? ;) <issue_comment>username_0: @username_1 our ever vigilant community 😉 <issue_comment>username_2: On it.<issue_closed>
{'fraction_non_alphanumeric': 0.10739856801909307, 'fraction_numerical': 0.0405727923627685, 'mean_word_length': 6.0, 'pattern_counts': {'":': 0, '<': 6, '<?xml version=': 0, '>': 6, 'https://': 1, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '23273912', 'n_tokens_mistral': 144, 'n_tokens_neox': 129, 'n_words': 44}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: CI dying from types-requests username_0: I cannot reproduce it locally. Can you @Imipenem ? <issue_comment>username_0: ``` name: Run augurpy Tests on: - push - pull_request jobs: tests: name: ${{ matrix.session }} ${{ matrix.python-version }} / ${{ matrix.os }} runs-on: ${{ matrix.os }} strategy: fail-fast: false matrix: include: - { python-version: 3.8, os: ubuntu-latest, session: "pre-commit", } - { python-version: 3.8, os: ubuntu-latest, session: "safety", } - { python-version: 3.8, os: ubuntu-latest, session: "mypy", } - { python-version: 3.8, os: ubuntu-latest, session: "tests", } - { python-version: 3.8, os: windows-latest, session: "tests", } - { python-version: 3.8, os: macos-latest, session: "tests", } - { python-version: 3.8, os: ubuntu-latest, session: "typeguard", } - { python-version: 3.8, os: ubuntu-latest, session: "xdoctest", } - { python-version: 3.8, os: ubuntu-latest, session: "docs-build", } env: NOXSESSION: ${{ matrix.session }} steps: - name: Check out the repository uses: actions/[email protected] - name: Set up Python ${{ matrix.python-version }} uses: actions/[email protected] with: python-version: ${{ matrix.python-version }} - name: Install Poetry run: | pipx install poetry poetry --version - name: Install nox nox-poetry rich run: | [Truncated] pipx inject nox nox-poetry pipx inject nox rich nox --version - name: Download coverage data uses: actions/[email protected] with: name: coverage-data - name: Combine coverage data and display human readable report run: nox --force-color --session=coverage - name: Create coverage report run: nox --force-color --session=coverage -- xml -i - name: Upload coverage report uses: codecov/[email protected] ``` Working configuration with pipx <issue_comment>username_0: Regression introduced by pyparsing 3.0.5 We use pipx to isolate the runtime environments <issue_comment>username_0: https://github.com/theislab/augurpy/commit/e36d2e0e2adaeb1c1a7a1a9e3a42c211a7755cf3<issue_closed>
{'fraction_non_alphanumeric': 0.08206521739130435, 'fraction_numerical': 0.01576086956521739, 'mean_word_length': 0.819574888779041, 'pattern_counts': {'":': 0, '<': 6, '<?xml version=': 0, '>': 6, 'https://': 1, 'lorem ipsum': 0, 'www.': 0, 'xml': 1}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 2, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '23891832', 'n_tokens_mistral': 974, 'n_tokens_neox': 848, 'n_words': 195}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Cleanup from prealpha feedback username_0: - Add an upper limit of 100 to the number of Gateways that will be stored in RouteGatewayStatus. - Clarify support level of filters defined in ForwardTo. - Increase max weight to 1 billion. - Clarify how 1 backend with a weight of 0 should be handled. - Replace "undefined" with "unspecified" - resource: fooroutes -> kind: FooRoute Fixes https://github.com/kubernetes-sigs/service-apis/issues/435 /cc @username_1 @hbagdi @jpeach <issue_comment>username_1: /lgtm
{'fraction_non_alphanumeric': 0.07622504537205081, 'fraction_numerical': 0.021778584392014518, 'mean_word_length': 4.257142857142857, 'pattern_counts': {'":': 0, '<': 3, '<?xml version=': 0, '>': 4, 'https://': 1, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '3611596', 'n_tokens_mistral': 176, 'n_tokens_neox': 164, 'n_words': 65}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: assertAttributeEquals username_0: how to check assertAttributeEquals in codeception? ```php $offersCompositeMock = Stub::make(OffersComposite::class); $responseComposite = new ResponseComposite(); $responseComposite->add($offersCompositeMock); $I->assert ????? //PhpUnit $this->assertAttributeEquals($responseComposite->children, $offersCompositeMock); ```<issue_closed>
{'fraction_non_alphanumeric': 0.12311015118790497, 'fraction_numerical': 0.0021598272138228943, 'mean_word_length': 4.2727272727272725, 'pattern_counts': {'":': 0, '<': 3, '<?xml version=': 0, '>': 7, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '10721885', 'n_tokens_mistral': 133, 'n_tokens_neox': 125, 'n_words': 21}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: DateOnly and TimeOnly are displayed as complex objects in open-api schemas rather simple string username_0: <!-- More information on our issue management policies can be found here: https://aka.ms/aspnet/issue-policies Please keep in mind that the GitHub issue tracker is not intended as a general support forum, but for reporting **non-security** bugs and feature requests. If you believe you have an issue that affects the SECURITY of the platform, please do NOT create an issue and instead email your issue details to <EMAIL>. Your report may be eligible for our [bug bounty](https://www.microsoft.com/en-us/msrc/bounty-dot-net-core) but ONLY if it is reported through email. For other types of questions, consider using [StackOverflow](https://stackoverflow.com). --> ### Describe the bug `DateOnly` and `TimeOnly` are being displayed as complex objects in open api schemas for Minimal API endpoints rather than being treated as simple strings. It will make sense for open api to treat them as `string -$date-only` and `string -$time-only` `app.MapGet("/events", (DateOnly dateOnly, TimeOnly timeonly) => $" date and time values ... ")` ### Exceptions (if any) <!-- None --> ``` csharp ### Further technical details .NET SDK (reflecting any global.json): Version: 6.0.100-rc.2.21465.13 Commit: 0d1cdfa6a0 Runtime Environment: OS Name: Windows OS Version: 10.0.19043 OS Platform: Windows RID: win10-x64 Base Path: C:\Program Files\dotnet\sdk\6.0.100-rc.2.21465.13\ ``` <issue_comment>username_0: This may need a change in Swashbuckle and N-Swag for these new primitives.
{'fraction_non_alphanumeric': 0.09272300469483569, 'fraction_numerical': 0.025821596244131457, 'mean_word_length': 3.451697127937337, 'pattern_counts': {'":': 0, '<': 6, '<?xml version=': 0, '>': 7, 'https://': 3, 'lorem ipsum': 0, 'www.': 1, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '23476084', 'n_tokens_mistral': 541, 'n_tokens_neox': 487, 'n_words': 206}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Different format for listing suggested packages on install username_0: The format for listing suggested packages features a lot of repetition and is a tad verbose. I find it hard to read, so I'd like to suggest a different format. For example, here's the tail of the output when starting a new [Craft 3 Beta] project (https://github.com/craftcms/cms): ``` $ composer create-project craftcms/craft . -s beta ... zendframework/zend-feed suggests installing zendframework/zend-cache (Zend\Cache component, for optionally caching feeds between requests) zendframework/zend-feed suggests installing zendframework/zend-db (Zend\Db component, for use with PubSubHubbub) zendframework/zend-feed suggests installing zendframework/zend-http (Zend\Http for PubSubHubbub, and optionally for use with Zend\Feed\Reader) zendframework/zend-feed suggests installing zendframework/zend-servicemanager (Zend\ServiceManager component, for easily extending ExtensionManager implementations) zendframework/zend-feed suggests installing zendframework/zend-validator (Zend\Validator component, for validating email addresses used in Atom feeds and entries ehen using the Writer subcomponent) league/flysystem suggests installing league/flysystem-aws-s3-v2 (Allows you to use S3 storage with AWS SDK v2) league/flysystem suggests installing league/flysystem-aws-s3-v3 (Allows you to use S3 storage with AWS SDK v3) league/flysystem suggests installing league/flysystem-azure (Allows you to use Windows Azure Blob storage) league/flysystem suggests installing league/flysystem-cached-adapter (Flysystem adapter decorator for metadata caching) league/flysystem suggests installing league/flysystem-copy (Allows you to use Copy.com storage) league/flysystem suggests installing league/flysystem-dropbox (Allows you to use Dropbox storage) league/flysystem suggests installing league/flysystem-eventable-filesystem (Allows you to use EventableFilesystem) league/flysystem suggests installing league/flysystem-rackspace (Allows you to use Rackspace Cloud Files) league/flysystem suggests installing league/flysystem-sftp (Allows you to use SFTP server storage via phpseclib) league/flysystem suggests installing league/flysystem-webdav (Allows you to use WebDAV storage) league/flysystem suggests installing league/flysystem-ziparchive (Allows you to use ZipArchive adapter) pixelandtonic/imagine suggests installing ext-imagick (to use the Imagick implementation) pixelandtonic/imagine suggests installing ext-gmagick (to use the Gmagick implementation) craftcms/cms suggests installing ext-imagick (Adds support for more image processing formats and options.) craftcms/cms suggests installing ext-intl (Adds rich internationalization support.) 
``` If these suggestions were instead organised by the suggesting package, and left out the package descriptions (perhaps unless a verbose flag is set), we'd end up with something like this: ``` zendframework/zend-feed suggests installing: - zendframework/zend-cache - zendframework/zend-db - zendframework/zend-http - zendframework/zend-servicemanager - zendframework/zend-validator league/flysystem suggests installing: - league/flysystem-aws-s3-v2 - league/flysystem-aws-s3-v3 - league/flysystem-azure - league/flysystem-cached-adapter - league/flysystem-copy - league/flysystem-dropbox - league/flysystem-eventable-filesystem - league/flysystem-rackspace - league/flysystem-sftp - league/flysystem-webdav - league/flysystem-ziparchive pixelandtonic/imagine suggests installing: - ext-imagick - ext-gmagick craftcms/cms suggests installing: - ext-imagick - ext-intl ``` We could even condense this even further using a space-delimited list (similar to apt): ``` zendframework/zend-feed suggests installing: zendframework/zend-cache zendframework/zend-db zendframework/zend-http zendframework/zend-servicemanager zendframework/zend-validator league/flysystem suggests installing: league/flysystem-aws-s3-v2 league/flysystem-aws-s3-v3 league/flysystem-azure league/flysystem-cached-adapter league/flysystem-copy league/flysystem-dropbox league/flysystem-eventable-filesystem league/flysystem-rackspace league/flysystem-sftp league/flysystem-webdav league/flysystem-ziparchive pixelandtonic/imagine suggests installing: ext-imagick ext-gmagick craftcms/cms suggests installing: ext-imagick ext-intl ``` <issue_comment>username_1: The problem is not in the format but in the usage. Packages should only suggest packages if that *improves their functionality*, like performance or reliability. It's meant to be a "I can work without this package but SO MUCH BETTER WITH". Flysystem suggesting 20 adapters of which you'll likely not *want* to include 18 or 19 is the cause of the spam, not the suggest markup. Also, you definitely do not want to hide the reason by default. Without the reason everyone will just ignore it, just like with apt, because noone tells you why you should bother. <issue_comment>username_0: Perhaps the distinction should be made between *recommendation* and *suggestion* then - again like apt? If these lists were far shorter, I'd agree that including the descriptions would be a good thing. <issue_comment>username_2: Considering it is both possible to omit this list from being output during install/update, or to call it as a stand-alone command, I see no reason to really change anything here. <issue_comment>username_3: I think the output is fine but yeah on initial install it's kinda useless as it spams with suggestions of everything.. maybe we should only output if you had already one package installed, or like during install output max 5-10 rows and then a line saying for more use the suggests command. Something to consider. <issue_comment>username_1: Looking at https://packagist.org/packages/league/flysystem as a prime example of my point of 'suggestions' being abused: it would be a good idea perhaps to just list suggested **platform** packages. What you frequently see on that end is packages that simply have more, faster or better functionality if Imagick, BCmath, native encryption etc. are available. For Flysystem actual functionality is added to the *package itself* if `ext-fileinfo` is present. For regular packages, in my experience, suggestions usually boil down to listing reverse dependencies. 
I just don't give a damn that there's a Flysystem module for Azure when I'm using S3. <issue_comment>username_3: Fixed by https://github.com/composer/composer/commit/44d1e15294058129b4b27008d3d0ffb25a896c15<issue_closed>
{'fraction_non_alphanumeric': 0.060460939838371745, 'fraction_numerical': 0.009428314875785692, 'mean_word_length': 4.801215277777778, 'pattern_counts': {'":': 0, '<': 9, '<?xml version=': 0, '>': 9, 'https://': 3, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 1, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '28835024', 'n_tokens_mistral': 1940, 'n_tokens_neox': 1775, 'n_words': 720}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: [BUG] ui shows healthy, but logs indicate otherwise username_0: ## Describe the bug The UI shows an encrypted volume healthy. However, the following is in the logs every few minutes: ![image](https://user-images.githubusercontent.com/1883296/148711160-77213454-6e1b-420b-8050-216ac5dc1e01.png) ``` Jan 10 03:09:13 capital k3s[406909]: E0110 03:09:13.678937 406909 csi_attacher.go:340] kubernetes.io/csi: attacher.MountDevice failed: rpc error: code = Internal desc = failed to run cryptsetup args: [luksOpen /dev/longhorn/pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1 pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1 -d /dev/stdin] output: error: exit status 5 Jan 10 03:09:13 capital k3s[406909]: E0110 03:09:13.678999 406909 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/csi/driver.longhorn.io^pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1 podName: nodeName:}" failed. No retries permitted until 2022-01-10 03:11:15.67898719 +0100 CET m=+4044.397200917 (durationBeforeRetry 2m2s). ``` ## To Reproduce No idea. ## Expected behavior The UI to reflect an underlying error. ## Environment - Longhorn version: 1.2.3 - Installation method (e.g. Rancher Catalog App/Helm/Kubectl): helm - Kubernetes distro (e.g. RKE/K3s/EKS/OpenShift) and version: k3s - Number of management node in the cluster: 3 - Number of worker node in the cluster: 0 - Node config - OS type and version: ubuntu - CPU per node: 12-16 - Memory per node: 32-64 - Disk type(e.g. SSD/NVMe): NVMe - Network bandwidth between the nodes: 1Gps - Underlying Infrastructure (e.g. on AWS/GCE, EKS/GKE, VMWare/KVM, Baremetal): Baremetal - Number of Longhorn volumes in the cluster: 7 ## Additional context I'm not seeing this on other nodes in the cluster. <issue_comment>username_1: cc @username_4 <issue_comment>username_2: Can you check to see if there is any related error event in the PVC `pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1` ? <issue_comment>username_3: @username_0 did you receive a event for this pod like `Multi-Attach error for volume "pvc-5a54c8d9-a3df-4d74-8389-c1dfe4d1d95f" Volume is already exclusively attached to one node and can't be attached to another`? I'm running into this issue as well using version 1.2.3. 
Full Events of this pod: ``` Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 2m55s default-scheduler Successfully assigned 6dd3dbdb/xxxxx-6dd3dbdb-20220119-000000-b5rrr to k3s-cx-worker-3 Warning FailedAttachVolume 2m56s attachdetach-controller Multi-Attach error for volume "pvc-5a54c8d9-a3df-4d74-8389-c1dfe4d1d95f" Volume is already exclusively attached to one node and can't be attached to another Normal SuccessfulAttachVolume 2m24s attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-5a54c8d9-a3df-4d74-8389-c1dfe4d1d95f" Warning FailedMount 53s kubelet Unable to attach or mount volumes: unmounted volumes=[util-volume], unattached volumes=[local mysql-6dd3dbdb-snapshot-token-hwsc7 util-volume osmconfig]: timed out waiting for the condition Warning FailedMount 47s (x8 over 112s) kubelet MountVolume.MountDevice failed for volume "pvc-5a54c8d9-a3df-4d74-8389-c1dfe4d1d95f" : rpc error: code = Internal desc = failed to run cryptsetup args: [luksOpen /dev/longhorn/pvc-5a54c8d9-a3df-4d74-8389-c1dfe4d1d95f pvc-5a54c8d9-a3df-4d74-8389-c1dfe4d1d95f -d /dev/stdin] output: error: exit status 1 ``` <issue_comment>username_0: @username_2 here's the full logs leading up to the situation: ``` Jan 09 23:10:33 capital k3s[34772]: I0109 23:10:33.214547 34772 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e9e66430-4241-4a43-8461-1cf9beacfc6d\" (UniqueName: \"kubernetes.io/csi/driver.longhorn.io^pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1\") pod \"harbor-registry-7647b557f8-s4cgp\" (UID: \"19dc98bd-85e6-4091-916b-989aae41b16e\") " Jan 09 23:10:33 capital k3s[34772]: E0109 23:10:33.214638 34772 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/csi/driver.longhorn.io^pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1 podName: nodeName:}" failed. No retries permitted until 2022-01-09 23:10:33.714620732 +0100 CET m=+12072.168609503 (durationBeforeRetry 500ms). Error: Volume has not been added to the list of VolumesInUse in the node's volume status for volume "pvc-e9e66430-4241-4a43-8461-1cf9beacfc6d" (UniqueName: "kubernetes.io/csi/driver.longhorn.io^pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1") pod "harbor-registry-7647b557f8-s4cgp" (UID: "19dc98bd-85e6-4091-916b-989aae41b16e") Jan 09 23:10:33 capital k3s[34772]: I0109 23:10:33.717305 34772 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e9e66430-4241-4a43-8461-1cf9beacfc6d\" (UniqueName: \"kubernetes.io/csi/driver.longhorn.io^pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1\") pod \"harbor-registry-7647b557f8-s4cgp\" (UID: \"19dc98bd-85e6-4091-916b-989aae41b16e\") " Jan 09 23:10:33 capital k3s[34772]: E0109 23:10:33.717379 34772 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/csi/driver.longhorn.io^pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1 podName: nodeName:}" failed. No retries permitted until 2022-01-09 23:10:34.71736332 +0100 CET m=+12073.171352091 (durationBeforeRetry 1s). 
Error: Volume has not been added to the list of VolumesInUse in the node's volume status for volume "pvc-e9e66430-4241-4a43-8461-1cf9beacfc6d" (UniqueName: "kubernetes.io/csi/driver.longhorn.io^pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1") pod "harbor-registry-7647b557f8-s4cgp" (UID: "19dc98bd-85e6-4091-916b-989aae41b16e") Jan 09 23:10:34 capital k3s[34772]: I0109 23:10:34.721581 34772 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e9e66430-4241-4a43-8461-1cf9beacfc6d\" (UniqueName: \"kubernetes.io/csi/driver.longhorn.io^pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1\") pod \"harbor-registry-7647b557f8-s4cgp\" (UID: \"19dc98bd-85e6-4091-916b-989aae41b16e\") " Jan 09 23:10:34 capital k3s[34772]: E0109 23:10:34.725198 34772 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/csi/driver.longhorn.io^pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1 podName: nodeName:}" failed. No retries permitted until 2022-01-09 23:10:36.725177842 +0100 CET m=+12075.179166613 (durationBeforeRetry 2s). Error: Volume not attached according to node status for volume "pvc-e9e66430-4241-4a43-8461-1cf9beacfc6d" (UniqueName: "kubernetes.io/csi/driver.longhorn.io^pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1") pod "harbor-registry-7647b557f8-s4cgp" (UID: "19dc98bd-85e6-4091-916b-989aae41b16e") Jan 09 23:10:36 capital iscsid[112865]: Connection6:0 to [target: iqn.2019-10.io.longhorn:pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1, portal: 10.42.0.42,3260] through [iface: default] is operational now Jan 09 23:10:36 capital k3s[34772]: I0109 23:10:36.737052 34772 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e9e66430-4241-4a43-8461-1cf9beacfc6d\" (UniqueName: \"kubernetes.io/csi/driver.longhorn.io^pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1\") pod \"harbor-registry-7647b557f8-s4cgp\" (UID: \"19dc98bd-85e6-4091-916b-989aae41b16e\") " Jan 09 23:10:36 capital k3s[34772]: E0109 23:10:36.743932 34772 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/csi/driver.longhorn.io^pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1 podName: nodeName:}" failed. No retries permitted until 2022-01-09 23:10:40.743916846 +0100 CET m=+12079.197905597 (durationBeforeRetry 4s). Error: Volume not attached according to node status for volume "pvc-e9e66430-4241-4a43-8461-1cf9beacfc6d" (UniqueName: "kubernetes.io/csi/driver.longhorn.io^pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1") pod "harbor-registry-7647b557f8-s4cgp" (UID: "19dc98bd-85e6-4091-916b-989aae41b16e") Jan 09 23:10:40 capital k3s[34772]: I0109 23:10:40.763522 34772 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e9e66430-4241-4a43-8461-1cf9beacfc6d\" (UniqueName: \"kubernetes.io/csi/driver.longhorn.io^pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1\") pod \"harbor-registry-7647b557f8-s4cgp\" (UID: \"19dc98bd-85e6-4091-916b-989aae41b16e\") " Jan 09 23:10:40 capital k3s[34772]: E0109 23:10:40.772943 34772 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/csi/driver.longhorn.io^pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1 podName: nodeName:}" failed. No retries permitted until 2022-01-09 23:10:48.772911646 +0100 CET m=+12087.226900438 (durationBeforeRetry 8s). 
Error: Volume not attached according to node status for volume "pvc-e9e66430-4241-4a43-8461-1cf9beacfc6d" (UniqueName: "kubernetes.io/csi/driver.longhorn.io^pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1") pod "harbor-registry-7647b557f8-s4cgp" (UID: "19dc98bd-85e6-4091-916b-989aae41b16e") Jan 09 23:10:48 capital k3s[34772]: I0109 23:10:48.812008 34772 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e9e66430-4241-4a43-8461-1cf9beacfc6d\" (UniqueName: \"kubernetes.io/csi/driver.longhorn.io^pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1\") pod \"harbor-registry-7647b557f8-s4cgp\" (UID: \"19dc98bd-85e6-4091-916b-989aae41b16e\") " Jan 09 23:10:48 capital k3s[34772]: I0109 23:10:48.824881 34772 operation_generator.go:1524] Controller attach succeeded for volume "pvc-e9e66430-4241-4a43-8461-1cf9beacfc6d" (UniqueName: "kubernetes.io/csi/driver.longhorn.io^pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1") pod "harbor-registry-7647b557f8-s4cgp" (UID: "19dc98bd-85e6-4091-916b-989aae41b16e") device path: "" Jan 09 23:10:48 capital k3s[34772]: I0109 23:10:48.912759 34772 operation_generator.go:588] MountVolume.WaitForAttach entering for volume "pvc-e9e66430-4241-4a43-8461-1cf9beacfc6d" (UniqueName: "kubernetes.io/csi/driver.longhorn.io^pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1") pod "harbor-registry-7647b557f8-s4cgp" (UID: "19dc98bd-85e6-4091-916b-989aae41b16e") DevicePath "" Jan 09 23:10:48 capital k3s[34772]: I0109 23:10:48.917357 34772 operation_generator.go:598] MountVolume.WaitForAttach succeeded for volume "pvc-e9e66430-4241-4a43-8461-1cf9beacfc6d" (UniqueName: "kubernetes.io/csi/driver.longhorn.io^pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1") pod "harbor-registry-7647b557f8-s4cgp" (UID: "19dc98bd-85e6-4091-916b-989aae41b16e") DevicePath "csi-ec48a418683e2ae8d8f4335c48bf4c9b0661e4a4cf33fc2f7d8309b9919f680c" Jan 09 23:10:50 capital k3s[34772]: I0109 23:10:50.137956 34772 operation_generator.go:631] MountVolume.MountDevice succeeded for volume "pvc-e9e66430-4241-4a43-8461-1cf9beacfc6d" (UniqueName: "kubernetes.io/csi/driver.longhorn.io^pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1") pod "harbor-registry-7647b557f8-s4cgp" (UID: "19dc98bd-85e6-4091-916b-989aae41b16e") device mount path "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-e9e66430-4241-4a43-8461-1cf9beacfc6d/globalmount" Jan 10 02:03:30 capital k3s[34772]: I0110 02:03:30.606551 34772 trace.go:205] Trace[2116936646]: "Update" url:/apis/longhorn.io/v1beta1/namespaces/longhorn-system/engines/pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1-e-cf7a3719/status,user-agent:longhorn-manager/v0.0.0 (linux/amd64) kubernetes/$Format,audit-id:7d4b45cf-810f-4ab1-98cd-d9ed5457aedf,client:10.42.0.24,accept:application/json, */*,protocol:HTTP/1.1 (10-Jan-2022 02:03:15.603) (total time: 15002ms): Jan 10 02:03:45 capital k3s[34772]: I0110 02:03:45.629449 34772 trace.go:205] Trace[1339574357]: "Update" url:/apis/longhorn.io/v1beta1/namespaces/longhorn-system/engines/pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1-e-cf7a3719/status,user-agent:longhorn-manager/v0.0.0 (linux/amd64) kubernetes/$Format,audit-id:d6b02907-7c99-4953-8dc6-b1b875e5a83f,client:10.42.0.24,accept:application/json, */*,protocol:HTTP/1.1 (10-Jan-2022 02:03:30.627) (total time: 15002ms): Jan 10 02:05:09 capital k3s[406909]: I0110 02:05:09.161124 406909 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e9e66430-4241-4a43-8461-1cf9beacfc6d\" (UniqueName: \"kubernetes.io/csi/driver.longhorn.io^pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1\") pod 
\"harbor-registry-7647b557f8-s4cgp\" (UID: \"19dc98bd-85e6-4091-916b-989aae41b16e\") " Jan 10 02:05:09 capital k3s[406909]: E0110 02:05:09.161236 406909 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/csi/driver.longhorn.io^pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1 podName: nodeName:}" failed. No retries permitted until 2022-01-10 02:05:09.66121046 +0100 CET m=+78.379424207 (durationBeforeRetry 500ms). Error: Volume has not been added to the list of VolumesInUse in the node's volume status for volume "pvc-e9e66430-4241-4a43-8461-1cf9beacfc6d" (UniqueName: "kubernetes.io/csi/driver.longhorn.io^pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1") pod "harbor-registry-7647b557f8-s4cgp" (UID: "19dc98bd-85e6-4091-916b-989aae41b16e") Jan 10 02:05:09 capital k3s[406909]: I0110 02:05:09.667492 406909 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e9e66430-4241-4a43-8461-1cf9beacfc6d\" (UniqueName: \"kubernetes.io/csi/driver.longhorn.io^pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1\") pod \"harbor-registry-7647b557f8-s4cgp\" (UID: \"19dc98bd-85e6-4091-916b-989aae41b16e\") " Jan 10 02:05:10 capital k3s[406909]: I0110 02:05:10.888027 406909 operation_generator.go:1524] Controller attach succeeded for volume "pvc-e9e66430-4241-4a43-8461-1cf9beacfc6d" (UniqueName: "kubernetes.io/csi/driver.longhorn.io^pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1") pod "harbor-registry-7647b557f8-s4cgp" (UID: "19dc98bd-85e6-4091-916b-989aae41b16e") device path: "" Jan 10 02:05:10 capital k3s[406909]: I0110 02:05:10.976595 406909 operation_generator.go:588] MountVolume.WaitForAttach entering for volume "pvc-e9e66430-4241-4a43-8461-1cf9beacfc6d" (UniqueName: "kubernetes.io/csi/driver.longhorn.io^pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1") pod "harbor-registry-7647b557f8-s4cgp" (UID: "19dc98bd-85e6-4091-916b-989aae41b16e") DevicePath "" Jan 10 02:05:11 capital k3s[406909]: I0110 02:05:11.483060 406909 operation_generator.go:598] MountVolume.WaitForAttach succeeded for volume "pvc-e9e66430-4241-4a43-8461-1cf9beacfc6d" (UniqueName: "kubernetes.io/csi/driver.longhorn.io^pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1") pod "harbor-registry-7647b557f8-s4cgp" (UID: "19dc98bd-85e6-4091-916b-989aae41b16e") DevicePath "csi-ec48a418683e2ae8d8f4335c48bf4c9b0661e4a4cf33fc2f7d8309b9919f680c" Jan 10 02:05:22 capital k3s[406909]: E0110 02:05:22.686673 406909 csi_attacher.go:340] kubernetes.io/csi: attacher.MountDevice failed: rpc error: code = Internal desc = Get "http://longhorn-backend:9500/v1/volumes/pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 10 02:05:22 capital k3s[406909]: E0110 02:05:22.686760 406909 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/csi/driver.longhorn.io^pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1 podName: nodeName:}" failed. No retries permitted until 2022-01-10 02:05:23.186737705 +0100 CET m=+91.904951452 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-e9e66430-4241-4a43-8461-1cf9beacfc6d" (UniqueName: "kubernetes.io/csi/driver.longhorn.io^pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1") pod "harbor-registry-7647b557f8-s4cgp" (UID: "19dc98bd-85e6-4091-916b-989aae41b16e") : rpc error: code = Internal desc = Get "http://longhorn-backend:9500/v1/volumes/pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 10 02:05:23 capital k3s[406909]: I0110 02:05:23.273620 406909 operation_generator.go:588] MountVolume.WaitForAttach entering for volume "pvc-e9e66430-4241-4a43-8461-1cf9beacfc6d" (UniqueName: "kubernetes.io/csi/driver.longhorn.io^pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1") pod "harbor-registry-7647b557f8-s4cgp" (UID: "19dc98bd-85e6-4091-916b-989aae41b16e") DevicePath "" Jan 10 02:05:23 capital k3s[406909]: I0110 02:05:23.276841 406909 operation_generator.go:598] MountVolume.WaitForAttach succeeded for volume "pvc-e9e66430-4241-4a43-8461-1cf9beacfc6d" (UniqueName: "kubernetes.io/csi/driver.longhorn.io^pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1") pod "harbor-registry-7647b557f8-s4cgp" (UID: "19dc98bd-85e6-4091-916b-989aae41b16e") DevicePath "csi-ec48a418683e2ae8d8f4335c48bf4c9b0661e4a4cf33fc2f7d8309b9919f680c" Jan 10 02:05:33 capital k3s[406909]: E0110 02:05:33.289679 406909 csi_attacher.go:340] kubernetes.io/csi: attacher.MountDevice failed: rpc error: code = Internal desc = Get "http://longhorn-backend:9500/v1/volumes/pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 10 02:05:33 capital k3s[406909]: E0110 02:05:33.289777 406909 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/csi/driver.longhorn.io^pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1 podName: nodeName:}" failed. No retries permitted until 2022-01-10 02:05:34.289752357 +0100 CET m=+103.007966115 (durationBeforeRetry 1s). 
Error: MountVolume.MountDevice failed for volume "pvc-e9e66430-4241-4a43-8461-1cf9beacfc6d" (UniqueName: "kubernetes.io/csi/driver.longhorn.io^pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1") pod "harbor-registry-7647b557f8-s4cgp" (UID: "19dc98bd-85e6-4091-916b-989aae41b16e") : rpc error: code = Internal desc = Get "http://longhorn-backend:9500/v1/volumes/pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 10 02:05:34 capital k3s[406909]: I0110 02:05:34.333909 406909 operation_generator.go:588] MountVolume.WaitForAttach entering for volume "pvc-e9e66430-4241-4a43-8461-1cf9beacfc6d" (UniqueName: "kubernetes.io/csi/driver.longhorn.io^pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1") pod "harbor-registry-7647b557f8-s4cgp" (UID: "19dc98bd-85e6-4091-916b-989aae41b16e") DevicePath "" Jan 10 02:05:34 capital k3s[406909]: I0110 02:05:34.338003 406909 operation_generator.go:598] MountVolume.WaitForAttach succeeded for volume "pvc-e9e66430-4241-4a43-8461-1cf9beacfc6d" (UniqueName: "kubernetes.io/csi/driver.longhorn.io^pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1") pod "harbor-registry-7647b557f8-s4cgp" (UID: "19dc98bd-85e6-4091-916b-989aae41b16e") DevicePath "csi-ec48a418683e2ae8d8f4335c48bf4c9b0661e4a4cf33fc2f7d8309b9919f680c" Jan 10 02:05:44 capital k3s[406909]: E0110 02:05:44.345187 406909 csi_attacher.go:340] kubernetes.io/csi: attacher.MountDevice failed: rpc error: code = Internal desc = Get "http://longhorn-backend:9500/v1/volumes/pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 10 02:05:44 capital k3s[406909]: E0110 02:05:44.345270 406909 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/csi/driver.longhorn.io^pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1 podName: nodeName:}" failed. No retries permitted until 2022-01-10 02:05:46.34525212 +0100 CET m=+115.063465857 (durationBeforeRetry 2s). 
Error: MountVolume.MountDevice failed for volume "pvc-e9e66430-4241-4a43-8461-1cf9beacfc6d" (UniqueName: "kubernetes.io/csi/driver.longhorn.io^pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1") pod "harbor-registry-7647b557f8-s4cgp" (UID: "19dc98bd-85e6-4091-916b-989aae41b16e") : rpc error: code = Internal desc = Get "http://longhorn-backend:9500/v1/volumes/pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 10 02:05:46 capital k3s[406909]: I0110 02:05:46.418683 406909 operation_generator.go:588] MountVolume.WaitForAttach entering for volume "pvc-e9e66430-4241-4a43-8461-1cf9beacfc6d" (UniqueName: "kubernetes.io/csi/driver.longhorn.io^pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1") pod "harbor-registry-7647b557f8-s4cgp" (UID: "19dc98bd-85e6-4091-916b-989aae41b16e") DevicePath "" Jan 10 02:05:46 capital k3s[406909]: I0110 02:05:46.421428 406909 operation_generator.go:598] MountVolume.WaitForAttach succeeded for volume "pvc-e9e66430-4241-4a43-8461-1cf9beacfc6d" (UniqueName: "kubernetes.io/csi/driver.longhorn.io^pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1") pod "harbor-registry-7647b557f8-s4cgp" (UID: "19dc98bd-85e6-4091-916b-989aae41b16e") DevicePath "csi-ec48a418683e2ae8d8f4335c48bf4c9b0661e4a4cf33fc2f7d8309b9919f680c" Jan 10 02:05:56 capital k3s[406909]: E0110 02:05:56.428462 406909 csi_attacher.go:340] kubernetes.io/csi: attacher.MountDevice failed: rpc error: code = Internal desc = Get "http://longhorn-backend:9500/v1/volumes/pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 10 02:05:56 capital k3s[406909]: E0110 02:05:56.428547 406909 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/csi/driver.longhorn.io^pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1 podName: nodeName:}" failed. No retries permitted until 2022-01-10 02:06:00.428528434 +0100 CET m=+129.146742181 (durationBeforeRetry 4s). 
Error: MountVolume.MountDevice failed for volume "pvc-e9e66430-4241-4a43-8461-1cf9beacfc6d" (UniqueName: "kubernetes.io/csi/driver.longhorn.io^pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1") pod "harbor-registry-7647b557f8-s4cgp" (UID: "19dc98bd-85e6-4091-916b-989aae41b16e") : rpc error: code = Internal desc = Get "http://longhorn-backend:9500/v1/volumes/pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 10 02:06:00 capital k3s[406909]: I0110 02:06:00.512542 406909 operation_generator.go:588] MountVolume.WaitForAttach entering for volume "pvc-e9e66430-4241-4a43-8461-1cf9beacfc6d" (UniqueName: "kubernetes.io/csi/driver.longhorn.io^pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1") pod "harbor-registry-7647b557f8-s4cgp" (UID: "19dc98bd-85e6-4091-916b-989aae41b16e") DevicePath "" Jan 10 02:06:00 capital k3s[406909]: I0110 02:06:00.515708 406909 operation_generator.go:598] MountVolume.WaitForAttach succeeded for volume "pvc-e9e66430-4241-4a43-8461-1cf9beacfc6d" (UniqueName: "kubernetes.io/csi/driver.longhorn.io^pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1") pod "harbor-registry-7647b557f8-s4cgp" (UID: "19dc98bd-85e6-4091-916b-989aae41b16e") DevicePath "csi-ec48a418683e2ae8d8f4335c48bf4c9b0661e4a4cf33fc2f7d8309b9919f680c" Jan 10 02:06:10 capital k3s[406909]: E0110 02:06:10.529842 406909 csi_attacher.go:340] kubernetes.io/csi: attacher.MountDevice failed: rpc error: code = Internal desc = Get "http://longhorn-backend:9500/v1/volumes/pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 10 02:06:10 capital k3s[406909]: E0110 02:06:10.529936 406909 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/csi/driver.longhorn.io^pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1 podName: nodeName:}" failed. No retries permitted until 2022-01-10 02:06:18.529917737 +0100 CET m=+147.248131484 (durationBeforeRetry 8s). 
Error: MountVolume.MountDevice failed for volume "pvc-e9e66430-4241-4a43-8461-1cf9beacfc6d" (UniqueName: "kubernetes.io/csi/driver.longhorn.io^pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1") pod "harbor-registry-7647b557f8-s4cgp" (UID: "19dc98bd-85e6-4091-916b-989aae41b16e") : rpc error: code = Internal desc = Get "http://longhorn-backend:9500/v1/volumes/pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 10 02:06:18 capital k3s[406909]: I0110 02:06:18.624814 406909 operation_generator.go:588] MountVolume.WaitForAttach entering for volume "pvc-e9e66430-4241-4a43-8461-1cf9beacfc6d" (UniqueName: "kubernetes.io/csi/driver.longhorn.io^pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1") pod "harbor-registry-7647b557f8-s4cgp" (UID: "19dc98bd-85e6-4091-916b-989aae41b16e") DevicePath "" Jan 10 02:06:18 capital k3s[406909]: I0110 02:06:18.629747 406909 operation_generator.go:598] MountVolume.WaitForAttach succeeded for volume "pvc-e9e66430-4241-4a43-8461-1cf9beacfc6d" (UniqueName: "kubernetes.io/csi/driver.longhorn.io^pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1") pod "harbor-registry-7647b557f8-s4cgp" (UID: "19dc98bd-85e6-4091-916b-989aae41b16e") DevicePath "csi-ec48a418683e2ae8d8f4335c48bf4c9b0661e4a4cf33fc2f7d8309b9919f680c" Jan 10 02:06:18 capital k3s[406909]: E0110 02:06:18.655730 406909 csi_attacher.go:340] kubernetes.io/csi: attacher.MountDevice failed: rpc error: code = Internal desc = failed to run cryptsetup args: [luksOpen /dev/longhorn/pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1 pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1 -d /dev/stdin] output: error: exit status 5 Jan 10 02:06:18 capital k3s[406909]: E0110 02:06:18.655819 406909 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/csi/driver.longhorn.io^pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1 podName: nodeName:}" failed. No retries permitted until 2022-01-10 02:06:34.655800079 +0100 CET m=+163.374013826 (durationBeforeRetry 16s). 
Error: MountVolume.MountDevice failed for volume "pvc-e9e66430-4241-4a43-8461-1cf9beacfc6d" (UniqueName: "kubernetes.io/csi/driver.longhorn.io^pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1") pod "harbor-registry-7647b557f8-s4cgp" (UID: "19dc98bd-85e6-4091-916b-989aae41b16e") : rpc error: code = Internal desc = failed to run cryptsetup args: [luksOpen /dev/longhorn/pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1 pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1 -d /dev/stdin] output: error: exit status 5
Jan 10 02:06:34 capital k3s[406909]: I0110 02:06:34.759308 406909 operation_generator.go:588] MountVolume.WaitForAttach entering for volume "pvc-e9e66430-4241-4a43-8461-1cf9beacfc6d" (UniqueName: "kubernetes.io/csi/driver.longhorn.io^pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1") pod "harbor-registry-7647b557f8-s4cgp" (UID: "19dc98bd-85e6-4091-916b-989aae41b16e") DevicePath ""
Jan 10 02:06:34 capital k3s[406909]: I0110 02:06:34.770145 406909 operation_generator.go:598] MountVolume.WaitForAttach succeeded for volume "pvc-e9e66430-4241-4a43-8461-1cf9beacfc6d" (UniqueName: "kubernetes.io/csi/driver.longhorn.io^pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1") pod "harbor-registry-7647b557f8-s4cgp" (UID: "19dc98bd-85e6-4091-916b-989aae41b16e") DevicePath "csi-ec48a418683e2ae8d8f4335c48bf4c9b0661e4a4cf33fc2f7d8309b9919f680c"
Jan 10 02:06:34 capital k3s[406909]: E0110 02:06:34.790516 406909 csi_attacher.go:340] kubernetes.io/csi: attacher.MountDevice failed: rpc error: code = Internal desc = failed to run cryptsetup args: [luksOpen /dev/longhorn/pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1 pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1 -d /dev/stdin] output: error: exit status 5
```
@username_3 I don't think that happened here. <issue_comment>username_2: @username_0 I mean, can you check whether the PVC has any related events? I.e., by checking `kubectl describe pvc pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1`. Currently, the Longhorn UI only shows the status of the Longhorn volume. The error here seems to come from the CSI and Kubelet levels. Users should look into the PVC's events for the error message. We are discussing whether we should show errors related to the PVC/PV. <issue_comment>username_2: @username_3 It seems that there are multiple pods trying to use the volume. Can you check how many pods are trying to use the PVC? A workaround would be to scale down the workload pods, wait, then scale them back up. <issue_comment>username_0: @username_2 That PVC is long gone now, so Kubernetes doesn't have any associated events. All I have are server logs at this point. <issue_comment>username_2: @username_0 Please ping us next time it happens. <issue_comment>username_4: @username_3 Your issue is different; check the list below for the error codes. Check that you have dm_crypt, and that cryptsetup works with the default configuration on your host OS.
```
// cryptsetup returns 0 on success and a non-zero value on error.
// 1 wrong parameters, 2 no permission (bad passphrase),
// 3 out of memory, 4 wrong device specified,
// 5 device already exists or device is busy.
```
@username_0 From your logs I am guessing there was some kind of intermittent issue on the raw block dev (a network or host issue), after which the raw block dev is no longer valid while it's still open via the device mapper. But it's just a guess at this point. You also seem to be having an issue with the HTTP client constantly timing out, so I further suspect a network issue. Please let us know if you encounter this down the line.
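Exit status 5 ("device already exists or device is busy") usually points to a stale device-mapper mapping left behind. A shell sketch for checking and clearing it on the node; the volume name is taken from the logs above, and these are standard cryptsetup/dmsetup invocations, not commands from this thread:

```bash
VOL=pvc-8e99ddf3-6968-40de-bb6b-ff5b792491c1

# Is there already a device-mapper mapping with this name?
sudo dmsetup ls | grep "$VOL"
sudo cryptsetup status "$VOL"

# If a stale mapping is found, close it before letting the CSI plugin retry:
sudo cryptsetup luksClose "$VOL"
```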
{'fraction_non_alphanumeric': 0.11952507506989266, 'fraction_numerical': 0.23052497152521312, 'mean_word_length': 5.454444196925818, 'pattern_counts': {'":': 10, '<': 11, '<?xml version=': 0, '>': 11, 'https://': 1, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 3, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 4, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '20663331', 'n_tokens_mistral': 15690, 'n_tokens_neox': 12351, 'n_words': 2103}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Slickgrid check box username_0: If I check one row in SlickGrid on the first page, the row remains checked on the second page. Please guide. <issue_comment>username_1:
```js
$('.ui-icon-seek-first').on('click', function () { // the first-page button
    if (this.className.indexOf('ui-state-disabled') == -1) { // only if the button is not disabled
        grid.setSelectedRows([]); // clear the array of selected rows
    }
});
$('.ui-icon-seek-prev').on('click', function () { // the previous-page button
    if (this.className.indexOf('ui-state-disabled') == -1) {
        grid.setSelectedRows([]);
    }
});
$('.ui-icon-seek-next').on('click', function () { // the next-page button
    if (this.className.indexOf('ui-state-disabled') == -1) {
        grid.setSelectedRows([]);
    }
});
$('.ui-icon-seek-end').on('click', function () { // the last-page button
    if (this.className.indexOf('ui-state-disabled') == -1) {
        grid.setSelectedRows([]);
    }
});
```
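A note on the snippet above: binding to the pager's jQuery UI icon classes works, but it is brittle. If the grid is backed by a `Slick.Data.DataView` (which SlickGrid's pager requires), the paging event can be subscribed to instead. This is an illustrative sketch, not code from the thread; `dataView` is assumed to be the DataView instance feeding `grid`, and note the event also fires when row counts change, which is usually fine for clearing a selection:

```js
// Clear the selection whenever the DataView reports a paging change,
// no matter which pager control triggered it.
dataView.onPagingInfoChanged.subscribe(function (e, pagingInfo) {
    grid.setSelectedRows([]);
});
```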
{'fraction_non_alphanumeric': 0.18146718146718147, 'fraction_numerical': 0.005791505791505791, 'mean_word_length': 3.05078125, 'pattern_counts': {'":': 0, '<': 3, '<?xml version=': 0, '>': 3, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '6157433', 'n_tokens_mistral': 325, 'n_tokens_neox': 312, 'n_words': 82}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: uart rx buffer length error username_0: On ESP32.
```rust
use esp_idf_hal::serial;

let peripherals = Peripherals::take().unwrap();
let pins = peripherals.pins;

let config = serial::config::Config::default().baudrate(Hertz(115_200));

match serial::Serial::new(
    peripherals.uart1,
    serial::Pins {
        tx: pins.gpio27.into_output()?,
        rx: pins.gpio26.into_input()?,
        cts: Option::<gpio::Gpio21<gpio::Unknown>>::None,
        rts: Option::<gpio::Gpio22<gpio::Unknown>>::None,
    },
    config,
) {
    Ok(serial) => {}
    Err(e) => error!("uart {:?}", e),
}
```
```
E (658) uart: uart_driver_install(1323): uart rx buffer length error
E (658) esp32rogue: uart EspError(-1)
```<issue_closed>
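The first log line comes from ESP-IDF's `uart_driver_install()`, which rejects receive buffers that are not strictly larger than the hardware FIFO (`UART_FIFO_LEN`, 128 bytes on the ESP32); the Rust wrapper is presumably passing a buffer size of 128 or less. A minimal C sketch of a call that passes that check, with illustrative buffer sizes:

```c
#include "driver/uart.h"

void install_uart(void)
{
    // rx_buffer_size must be > UART_FIFO_LEN (128), otherwise the driver
    // logs "uart rx buffer length error" and the install fails.
    ESP_ERROR_CHECK(uart_driver_install(UART_NUM_1,
                                        256,   // rx ring buffer, must exceed 128
                                        0,     // tx ring buffer (0 = blocking writes)
                                        0,     // event queue depth (unused here)
                                        NULL,  // no event queue
                                        0));   // interrupt allocation flags
}
```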
{'fraction_non_alphanumeric': 0.18181818181818182, 'fraction_numerical': 0.03969270166453265, 'mean_word_length': 3.710843373493976, 'pattern_counts': {'":': 0, '<': 7, '<?xml version=': 0, '>': 9, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '8084645', 'n_tokens_mistral': 307, 'n_tokens_neox': 284, 'n_words': 53}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Errors CS0107 when compiling the project username_0: First, excuse my poor English; it's my first post here. Because I need a new date on my DocumentFormat.OpenXml.dll file (instead of the initial 01/06/2018), I tried to compile the project to get a new dll, but I get compiler error CS0107 ("More than one protection modifier"). Thanks for your help. ![image](https://user-images.githubusercontent.com/19156977/36789697-baf06b32-1c92-11e8-938e-eabe06fc28f1.png) <issue_comment>username_1: @username_0 "private protected" is valid per [C# 7.2 in this blog](https://blogs.msdn.microsoft.com/mazhou/2017/10/05/c-7-series-part-5-private-protected/). What version of VS are you using? <issue_comment>username_1: @username_0 we state in the [Build Instructions](https://github.com/OfficeDev/Open-XML-SDK#build-instructions) that VS2017 is required and this uses C# 7.2. You can download VS2017 Community for free. <issue_comment>username_2: Yes, you need VS 2017 with Update 5 to build the project to take advantage of the new C# features. Can you try updating and let us know if that works? <issue_comment>username_0: I use VS2017 with the latest updates (I think), but I'll check that and get back to you. Thank you. <issue_comment>username_0: @username_2, @username_1, I thought wrong: I was not up to date. With the new update it works and I have my new dll. Youpiii! 1000 thanks for your help.<issue_closed> <issue_comment>username_2: @username_0 Out of curiosity, what do you mean by a "new dll"? <issue_comment>username_0: I simply mean that I recompiled DocumentFormat.OpenXml.dll, so now I have a new date on it (instead of the initial date of 6 January 2018). I absolutely needed that for my application. <issue_comment>username_2: What's the need for a different date? Is something in the SDK out of date? <issue_comment>username_0: No, the SDK is fantastically fantastic. OK, I'll try to explain. In fact, we have an application for which we send updates to our users. Our update procedure sends them only the files modified since the date of their last update. For example, if a user was last updated on February 20, 2018 and we need to send him a new update today, March 1, we will only send the files modified between these two dates. The 6 January 2018 version of DocumentFormat.OpenXml.dll would unfortunately never be sent to them... <issue_comment>username_2: Ah, gotcha. I'd recommend not using dates but instead versions - otherwise you'll have problems with any 3rd-party library in the future (or even internal builds that you ingest into your application that happen to have a timestamp from before the previous update). <issue_comment>username_0: What you say is the holy truth, but unfortunately I have no power to change this. <issue_comment>username_0: I have a new question, if you allow me. <issue_comment>username_0: The NuGet package created by default under my VS2017 targets only "netstandard1.3". The only way I found to target "net40" is to edit the following lines in the .csproj. I guess it's not the right method. Am I wrong? ![image](https://user-images.githubusercontent.com/19156977/36872958-55a25536-1da7-11e8-8694-132ae9e7a0fa.png) <issue_comment>username_2: Yeah, this is a workaround for how large the solution gets if we try to edit all the targets in a single instance of VS. You have three options:

1. Do what you're doing
2. Start VS with an environment variable ProjectLoadStyle=All
3. Add a ProjectLoadStyle=DevFramework40 (with an analogous update in the test)

If you go with (3), please submit a PR with that as it may come in handy for other people. <issue_comment>username_2: Also, if you're creating your own build, make sure to check out a released version (i.e. `git checkout v2.8.1`) so that you are using the same code.
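For readers who cannot see the screenshot above: the workaround being described is editing the multi-targeting list in the project file. A hypothetical illustration (`TargetFrameworks` is the real MSBuild property; the exact framework list in the actual Open-XML-SDK csproj may differ):

```xml
<PropertyGroup>
  <!-- Hypothetical illustration, not the actual csproj contents:
       trim or extend the list of target frameworks to build. -->
  <TargetFrameworks>netstandard1.3;net40</TargetFrameworks>
</PropertyGroup>
```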
{'fraction_non_alphanumeric': 0.06837606837606838, 'fraction_numerical': 0.04636104636104636, 'mean_word_length': 4.501424501424501, 'pattern_counts': {'":': 0, '<': 18, '<?xml version=': 0, '>': 18, 'https://': 4, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 1, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '8942859', 'n_tokens_mistral': 1217, 'n_tokens_neox': 1088, 'n_words': 553}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Enabling Application Insights writes JavaScript to function app host.json username_0:

- VSCode Version: 1.41.1
- OS Version: macOS Mojave 10.14.6

Steps to Reproduce:

1. Confirm the "Application Insights" notification question
2. Step through and enable it for your account

This added the following to the function app's `host.json` file:

```json
// Enable telemetry collection with Application Insights
var ai = require('applicationinsights');
ai.setup(process.env.APPLICATIONINSIGHTSKEY || 'XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX').start();{
    "version": "2.0"
}
```

_The `X`s are placeholders._ This broke deployments. <issue_comment>username_1: @username_0 I'm not sure where you see the Application Insights notification question; is this shown after installing an extension? <issue_comment>username_0: @username_1 You are right. It's [this extension here](https://marketplace.visualstudio.com/items?itemName=VisualStudioOnlineApplicationInsights.application-insights) by Microsoft. Sorry if I added this to the wrong repo. I am new to Visual Studio Code; normally I work with IntelliJ products and only use VSC because I am forced into it, so I haven't wrapped my head around all the details.<issue_closed> <issue_comment>username_1: @username_0 No worries, it's often difficult to tell what the source of an issue is. In this case, this extension does not ship with VSCode. Can you please open this issue at the repo [here](https://github.com/Microsoft/applicationinsights-vscode/issues) instead?
{'fraction_non_alphanumeric': 0.08059316569954868, 'fraction_numerical': 0.012894906511927788, 'mean_word_length': 5.110236220472441, 'pattern_counts': {'":': 1, '<': 6, '<?xml version=': 0, '>': 6, 'https://': 2, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 2, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '10449523', 'n_tokens_mistral': 446, 'n_tokens_neox': 425, 'n_words': 179}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: CAN node is being detected only once username_0: Hi, I am using the code mentioned below to detect my CAN node. It detects the node only once; after that, when I run the command again, the node is not detected. I have to restart my computer to get the node detected.

```python
#!/usr/bin/python3
import canopen
import time

network = canopen.Network()
node = network.add_node(0x32, '0x19_0x29C_0x127.eds')
network.connect(channel='PCAN_USBBUS1', bustype='pcan', bitrate=50000)
# print("Error code: ", node.sdo[0x603F].phys)

network.scanner.search()
time.sleep(1)
print("Found %d node(s)" % len(network.scanner.nodes))
for node_id in network.scanner.nodes:
    print("Found node %d!" % node_id)
```

<issue_comment>username_1: Sounds like something related to PCAN or the node in question. Not sure I can help you much there. Does the rest of the code work, like reading the error code? <issue_comment>username_0: Hi, I am using the same hardware on the Windows side and am not getting any error message there, but on the Ubuntu side the node is detected only once. Best regards, <NAME> <issue_comment>username_0: I have to power off the CANopen node and restart it in order to get it detected. Best regards, <NAME> <issue_comment>username_1: You probably need to log the CAN bus to see what messages are actually being sent from the computer and which messages are sent from the node. That might give you a hint on where to start looking.<issue_closed>
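Following up on the suggestion above to log the CAN bus: a minimal sketch doing that from Python with python-can (same channel and bitrate as the snippet above; this is an illustration, not code from the thread):

```python
import can

# Open the PCAN channel directly and print every frame on the wire, so the
# scanner's SDO requests can be compared against whatever the node answers.
bus = can.interface.Bus(channel='PCAN_USBBUS1', bustype='pcan', bitrate=50000)
try:
    for msg in bus:  # a Bus is iterable and yields received messages
        print(msg)
except KeyboardInterrupt:
    pass
finally:
    bus.shutdown()
```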
{'fraction_non_alphanumeric': 0.07760141093474426, 'fraction_numerical': 0.02292768959435626, 'mean_word_length': 4.285714285714286, 'pattern_counts': {'":': 0, '<': 13, '<?xml version=': 0, '>': 22, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '17390510', 'n_tokens_mistral': 550, 'n_tokens_neox': 514, 'n_words': 240}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Chore/Add publish plugins plugin username_0: This PR adds the publish plugin from gradle.com, which allows publishing the plugin to plugins.gradle.org. The plugin will be made available at https://plugins.gradle.org/plugin/com.novoda.bintray-release once this PR is merged. This implies a change of the plugin id from `bintray-release` to `com.novoda.bintray-release`. <issue_comment>username_1: @username_0 slow down, and please explain **why** we want this change?
{'fraction_non_alphanumeric': 0.074, 'fraction_numerical': 0.006, 'mean_word_length': 4.964285714285714, 'pattern_counts': {'":': 0, '<': 3, '<?xml version=': 0, '>': 3, 'https://': 1, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '24096982', 'n_tokens_mistral': 144, 'n_tokens_neox': 138, 'n_words': 58}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Amazon Products Reviews Sentiment username_0: <issue_comment>username_1: As an LGMSoC participant, I'd like to work on this issue. Kindly assign it to me :) <issue_comment>username_2: I'm an LGMSoC participant... can I work on this? <issue_comment>username_3: I know ML well, so I would request you to please assign this project to me. <issue_comment>username_4: @username_0 can you please assign this issue to me? I saw it's up for grabs.
{'fraction_non_alphanumeric': 0.061619718309859156, 'fraction_numerical': 0.014084507042253521, 'mean_word_length': 5.465909090909091, 'pattern_counts': {'":': 0, '<': 7, '<?xml version=': 0, '>': 7, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '4824919', 'n_tokens_mistral': 170, 'n_tokens_neox': 166, 'n_words': 80}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Fix #2109: LocalDate.until sometimes returns wrong result username_0: Also add private lazy val for proleptic month. <issue_comment>username_1: Jenkins, test this please <issue_comment>username_2: In general LGTM. Will just wait on the answer for the field question before merging. <issue_comment>username_0: Updated. Please retest. <issue_comment>username_2: Jenkins, retest this please.
{'fraction_non_alphanumeric': 0.06855791962174941, 'fraction_numerical': 0.02127659574468085, 'mean_word_length': 6.851851851851852, 'pattern_counts': {'":': 0, '<': 6, '<?xml version=': 0, '>': 6, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '29606767', 'n_tokens_mistral': 123, 'n_tokens_neox': 117, 'n_words': 47}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Build: Linkerd install fails if the username contains an underscore username_0:

## Bug Report

I have been trying to set up Linkerd for local development on my Arch Linux machine following the [comprehensive development configuration](https://github.com/linkerd/linkerd2/blob/master/BUILD.md#comprehensive). Building the docker images using `DOCKER_TRACE=1 bin/mkube bin/docker-build` runs fine, as the guide describes. However, attempting to install Linkerd using `bin/linkerd install | kubectl apply -f -` fails with the message `Error: dev-55e4fd18-srv_twry is not a valid version`.

### What is the issue?

I was able to track down the issue to the `bin/_tag.sh` script. Basically, the issue is that my username on the system is `srv_twry`, which contains an underscore. The docker images are tagged with the username concatenated with the SHA hash of the `HEAD` commit. As a result, the image tag also contains an underscore. The `linkerd install` command validates the image tag and doesn't allow underscores in it, hence the error.

https://github.com/linkerd/linkerd2/blob/1039d82547388f361160b419e30d0b6b2051dc36/cli/cmd/install.go#L541

### How can it be reproduced?

Change your username to have an underscore and follow the comprehensive development configuration instructions.

### Logs, error output, etc

```
Error: dev-55e4fd18-srv_twry is not a valid version
```

#### `linkerd check` output

N/A

### Environment

- Kubernetes Version: 1.16
- Cluster Environment: Minikube
- Host OS: Arch Linux
- Linkerd version: N/A

### Possible solution

1. Force me to change my username: Please don't :)
2. Remove underscores and other forbidden characters from the username before using it in the tag (a sketch of this follows below).
3. Allow underscores in the version names.<issue_closed>
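A shell sketch of what possible solution (2) could look like; this is illustrative only and does not reflect the actual contents of `bin/_tag.sh`:

```bash
#!/usr/bin/env bash
# Sanitize the username before embedding it in the image tag: map every
# character that is not alphanumeric or '-' to '-', then trim trailing '-'.
user="$(whoami | tr -c 'a-zA-Z0-9-' '-' | sed 's/-*$//')"
sha="$(git rev-parse --short=8 HEAD)"
echo "dev-${sha}-${user}"   # e.g. dev-55e4fd18-srv-twry
```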
{'fraction_non_alphanumeric': 0.07223719676549865, 'fraction_numerical': 0.029110512129380053, 'mean_word_length': 4.379710144927536, 'pattern_counts': {'":': 0, '<': 3, '<?xml version=': 0, '>': 3, 'https://': 2, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '1954264', 'n_tokens_mistral': 572, 'n_tokens_neox': 515, 'n_words': 233}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Expose mechanism for disabling exceptions in PCL username_0: With 0.81.0 it seems we broke some use cases for testing using just the PCL. While we can't provide any functionality there, we could at least not throw exceptions so people could create mock objects and use them in their tests. This should be opt-in as a way to verify that the user is aware of what they're doing.
{'fraction_non_alphanumeric': 0.0364963503649635, 'fraction_numerical': 0.012165450121654502, 'mean_word_length': 4.5675675675675675, 'pattern_counts': {'":': 0, '<': 2, '<?xml version=': 0, '>': 2, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '17880445', 'n_tokens_mistral': 103, 'n_tokens_neox': 97, 'n_words': 68}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: fix file naming convention of SidebarLeft component username_0: The file was named `_sidebarLeft.svelte` but was being referenced in other modules as `_SidebarLeft.svelte` which causes problems with Linux operating systems. Renamed the file to `_SidebarLeft.svelte` to match the naming convention of other components.
{'fraction_non_alphanumeric': 0.048295454545454544, 'fraction_numerical': 0.002840909090909091, 'mean_word_length': 6.354166666666667, 'pattern_counts': {'":': 0, '<': 2, '<?xml version=': 0, '>': 2, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '21510', 'n_tokens_mistral': 92, 'n_tokens_neox': 88, 'n_words': 43}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Document SignalR integration username_0: <issue_closed> <issue_comment>username_1: Hi, I wanted to point out that the [link given above](http://www.aspnetboilerplate.com/Pages/Documents/SignalR-Integration) returns an error. <issue_comment>username_0: Thanks a lot. It will be published today. <issue_comment>username_0: It has just been published. Have a nice day ;) <issue_comment>username_1: Thank you so much! Wish you a nice day, too :)
{'fraction_non_alphanumeric': 0.10021321961620469, 'fraction_numerical': 0.010660980810234541, 'mean_word_length': 6.230769230769231, 'pattern_counts': {'":': 0, '<': 7, '<?xml version=': 0, '>': 7, 'https://': 0, 'lorem ipsum': 0, 'www.': 1, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '22313748', 'n_tokens_mistral': 149, 'n_tokens_neox': 143, 'n_words': 47}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Support endless method definition username_0:
```
def square(x) = x * x
```
```
irb(main):002:0> Ripper.lex("def square(x) = x * x")
=> [[[1, 0], :on_kw, "def", FNAME], [[1, 3], :on_sp, " ", FNAME], [[1, 4], :on_ident, "square", ENDFN], [[1, 10], :on_lparen, "(", BEG|LABEL], [[1, 11], :on_ident, "x", ARG], [[1, 12], :on_rparen, ")", ENDFN], [[1, 13], :on_sp, " ", BEG], [[1, 14], :on_op, "=", BEG], [[1, 15], :on_sp, " ", BEG], [[1, 16], :on_ident, "x", END|LABEL], [[1, 17], :on_sp, " ", END|LABEL], [[1, 18], :on_op, "*", BEG], [[1, 19], :on_sp, " ", BEG], [[1, 20], :on_ident, "x", END|LABEL]]
```
This will be interpreted by dead_end as a keyword but not an end:
```
irb(main):010:0> line = DeadEnd::CodeLine.new(line: "def square(x) = x * x", index: 0)
=> #<DeadEnd::CodeLine:0x00007fd387985e60 ...
irb(main):011:0> line.is_kw?
=> true
irb(main):012:0> line.is_end?
=> false
```
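For contrast, lexing a classic method definition shows the trailing `end` keyword token that the endless form lacks, which is presumably what the kw/end matching in dead_end keys off. This sketch is illustrative, not from the issue:

```ruby
require 'ripper'

# A classic definition yields both an :on_kw "def" token and a closing
# :on_kw "end" token, so keyword/end counts balance. The endless form
# above produces the "def" keyword with no matching "end".
Ripper.lex("def square(x); x * x; end").each do |_pos, type, tok, _state|
  puts format('%-8s %p', type, tok) if type == :on_kw
end
# Prints:
#   on_kw    "def"
#   on_kw    "end"
```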
{'fraction_non_alphanumeric': 0.27979274611398963, 'fraction_numerical': 0.07357512953367876, 'mean_word_length': 4.005181347150259, 'pattern_counts': {'":': 0, '<': 3, '<?xml version=': 0, '>': 10, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '13226438', 'n_tokens_mistral': 503, 'n_tokens_neox': 445, 'n_words': 104}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: How to run bmv2 in a server to simulate a P4 switch username_0: Actually, I want to use a server as a P4 switch. When I run BMv2, how can I know the <iface*> values in the CLI: ```./simple_switch -i 0@<iface0> -i 1@<iface1> <path to JSON file>```? <issue_comment>username_1: If you have physical Ethernet interfaces you want to use, you can use a command like 'ip link show' to find out their names and properties / configuration options, on Linux. If you want to use virtual Ethernet interfaces, you must create them, and then I believe you get to pick their names. This bash script creates many virtual Ethernet interfaces with names like veth2, veth4, veth6, etc. up to somewhere around veth18: https://github.com/p4lang/behavioral-model/blob/master/tools/veth_setup.sh <issue_comment>username_0: @username_1 Thank you very much for your help.<issue_closed> <issue_comment>username_1: It is physically possible to configure an IP address on such an interface, but I doubt it will achieve whatever effect it normally would. For example, assigning an IP address to such an interface will _not_ communicate any information about that IP address to BMv2, or to any control software adding table entries to the BMv2 process. I cannot think of a reason why you would _want_ to configure an IP address on such an interface. What do you hope to achieve by doing so? <issue_comment>username_0: @username_1 In fact, the function that I want to achieve is more like a gateway than a switch, so I think it is necessary to bind the IP to the physical Ethernet interfaces. The topology looks like:

```
h1, h2 <-----> s1 <-----> s3 <-----> s2 <----------> h3, h4
```

h1 and h2 are in the net 192.168.1.1/24, but h3 and h4 are in the net 192.168.2.1/24. s3 is a traditional router. <issue_comment>username_1: If s3 is a traditional router, e.g. one configured via certain commands, and its data plane is not P4 programmable, then using whatever commands have been created for it in order to configure IP addresses on _its_ interfaces should work as it normally does. If a device is P4-programmable, then the only effect of using Linux commands to assign IP addresses to its physical Ethernet ports will be to inform the Linux kernel of those changes; it will have zero effect on the P4 program or its table contents, unless you or someone else has written code to cause the Linux kernel to then communicate with the control plane code of your P4 programmable data plane. If no one has done that, then assigning IP addresses in that way to ports is useless.<issue_closed> <issue_comment>username_2: Closing this since there has been no update.
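For reference, a minimal sketch of the core of what the linked `veth_setup.sh` does, for a single pair (interface names here are illustrative; the real script also tweaks MTU and other settings):

```bash
# Create one virtual Ethernet pair and bring both ends up.
sudo ip link add veth0 type veth peer name veth1
sudo ip link set dev veth0 up
sudo ip link set dev veth1 up

# BMv2 can then be attached to one end, e.g.:
#   sudo ./simple_switch -i 0@veth0 <path to JSON file>
```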
{'fraction_non_alphanumeric': 0.06409807355516638, 'fraction_numerical': 0.024518388791593695, 'mean_word_length': 4.230769230769231, 'pattern_counts': {'":': 0, '<': 23, '<?xml version=': 0, '>': 23, 'https://': 1, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 2, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '10758861', 'n_tokens_mistral': 825, 'n_tokens_neox': 783, 'n_words': 450}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Factor out openquake username_0: **Describe the bug** [Openquake GMPEs are being used](https://github.com/usgs/earthquake-impact-utils/blob/64410ba8b2ef2ed948d71630b4f88ced5a9ba000/impactutils/rupture/distance.py#L9), but the dependency is too large for this base-level library. Move the openquake usage to a higher-level repository. **To Reproduce** N.A. **Expected behavior** N.A. **Screenshots** N.A. **Environment (please complete the following information):** All **Additional context** N.A. <issue_comment>username_0: Cannot be removed due to the large quantity of GMPEs that would need to be copied over.<issue_closed>
{'fraction_non_alphanumeric': 0.1031390134529148, 'fraction_numerical': 0.04035874439461883, 'mean_word_length': 5.767676767676767, 'pattern_counts': {'":': 0, '<': 4, '<?xml version=': 0, '>': 4, 'https://': 1, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '3610286', 'n_tokens_mistral': 236, 'n_tokens_neox': 216, 'n_words': 71}