[
"stackoverflow",
"0006514556.txt"
] | Q:
How do I revert to an earlier version of a package?
I'm trying to write some SPARQL queries in R using the rrdf package. However, I get this error every time I try to load the library.
Error: package 'rrdflibs' 1.1.2 was found, but == 1.1.0 is required by 'rrdf'
Not sure why they didn't write it as >= 1.1.0. Is what they did a good programming practice?
A:
Go to http://cran.r-project.org/src/contrib/Archive/rrdflibs/ to retrieve an older version. This is a source archive, so you will have to be able to build from source (typically easy on Linux, pretty easy on MacOS, and hard on Windows; you can use the http://win-builder.r-project.org/ service to build a Windows binary if necessary).
Actually, based on a quick look at the package, I think you should be able to install in this case (even on Windows without Rtools) via
download.file("http://cran.r-project.org/src/contrib/Archive/rrdflibs/rrdflibs_1.1.0.tar.gz",
              destfile = "rrdflibs_1.1.0.tar.gz")
install.packages("rrdflibs_1.1.0.tar.gz", repos = NULL, type = "source")
because the package doesn't actually contain anything that needs to be compiled.
Don't know about programming practice, you'd have to ask the authors if they had some particular reason to do it that way. (See maintainer("rrdf").) Maybe they knew the versions would not be backward/forward compatible?
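The effect of an exact pin versus a minimum-version constraint can be sketched with a toy comparison (plain Python, purely illustrative; R's own version comparison is more involved):

```python
# Toy version-constraint check, assuming simple dotted numeric versions.
# This is NOT R's actual logic; it only illustrates "==" vs ">=".
def satisfies(installed: str, constraint: str) -> bool:
    op, required = constraint.split()
    inst = tuple(int(p) for p in installed.split("."))
    req = tuple(int(p) for p in required.split("."))
    if op == "==":
        return inst == req   # exact pin: 1.1.2 is rejected against 1.1.0
    if op == ">=":
        return inst >= req   # minimum version: 1.1.2 is accepted
    raise ValueError("unsupported operator: " + op)

print(satisfies("1.1.2", "== 1.1.0"))  # False -> the error in the question
print(satisfies("1.1.2", ">= 1.1.0"))  # True
```

An exact pin guarantees the exact tested version but breaks as soon as the dependency releases a patch, which is what happened here.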
|
[
"ethereum.stackexchange",
"0000038003.txt"
] | Q:
Does transaction have network id?
According to this post, every Ethereum wallet has a number called network id.
Does every transaction have this network id?
If not, then it should be possible to replay a transaction on multiple networks, shouldn't it?
A:
If not, then it should be possible to replay a transaction on multiple networks, shouldn't it?
Replay protection was introduced in EIP-155 by incorporating the chainID into the v part of a transaction's signature. So in effect, yes, the transaction does have knowledge of which network it is on.
From the Specification part of the EIP:
If block.number >= FORK_BLKNUM and v = CHAIN_ID * 2 + 35 or v =
CHAIN_ID * 2 + 36, then when computing the hash of a transaction for
purposes of signing or recovering, instead of hashing only the first
six elements (i.e. nonce, gasprice, startgas, to, value, data), hash
nine elements, with v replaced by CHAIN_ID, r = 0 and s = 0. The
currently existing signature scheme using v = 27 and v = 28 remains
valid and continues to operate under the same rules as it does now.
See also: What does v, r, s in eth_getTransactionByHash mean?
And also: What is a chainID in Ethereum, how is it different than NetworkID, and how is it used?
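The arithmetic relating v and the chain ID in that quote can be sketched directly (a minimal illustration; the names eip155_v and y_parity are mine, not from the EIP):

```python
# EIP-155: v encodes the chain ID together with the 0/1 recovery bit.
def eip155_v(chain_id: int, y_parity: int) -> int:
    return chain_id * 2 + 35 + y_parity

def chain_id_from_v(v: int):
    if v in (27, 28):      # legacy pre-EIP-155 signature: no chain ID
        return None
    return (v - 35) // 2

print(eip155_v(1, 0))      # 37 for Ethereum mainnet (chain ID 1)
print(chain_id_from_v(38)) # 1
```

Because the chain ID is recoverable from v, a signature produced for one network fails verification when replayed on a network with a different chain ID.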
|
[
"meta.stackexchange",
"0000076308.txt"
] | Q:
DATETIME parametrised queries on the stack exchange data explorer
Is it possible to have DATETIME parametrised queries on the Stack Exchange Data Explorer, e.g:
DECLARE @MyDate DATETIME = ##MyDate##
SELECT @MyDate
The output is usually completely unrelated to the input ('2010-01-01' converted to '1905-07-02') - date conversion functions don't seem to be much help either.
A:
You should be using
DECLARE @MyDate DATETIME = ##MyDate:string##
SELECT @MyDate
AFAIK, SQL Server will implicitly convert a string to any datatype, so passing the parameter as a string works here.
The snippet works on SEDE.
|
[
"stackoverflow",
"0000929677.txt"
] | Q:
How exactly is the same-domain policy enforced?
Let's say I have a domain, js.mydomain.com, and it points to some IP address, and some other domain, requests.mydomain.com, which points to a different IP address. Can a .js file downloaded from js.mydomain.com make Ajax requests to requests.mydomain.com?
How exactly do modern browsers enforce the same-domain policy?
A:
The short answer to your question is no: for AJAX calls, you can only access the same hostname (and port / scheme) as your page was loaded from.
There are a couple of work-arounds: one is to create a URL in foo.example.com that acts as a reverse proxy for bar.example.com. The browser doesn't care where the request is actually fulfilled, as long as the hostname matches. If you already have a front-end Apache webserver, this won't be too difficult.
Another alternative is AJAST, which works by inserting script tags into your document. I believe that this is how Google APIs work.
You'll find a good description of the same origin policy here: http://code.google.com/p/browsersec/wiki/Part2
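The check the browser performs can be sketched as a comparison of (scheme, host, port) triples (illustrative Python; real browsers implement this internally):

```python
from urllib.parse import urlsplit

DEFAULT_PORTS = {"http": 80, "https": 443}

def origin(url: str):
    # The browser's notion of origin: the (scheme, host, port) triple.
    parts = urlsplit(url)
    port = parts.port or DEFAULT_PORTS.get(parts.scheme)
    return (parts.scheme, parts.hostname, port)

def same_origin(a: str, b: str) -> bool:
    return origin(a) == origin(b)

print(same_origin("http://js.mydomain.com/app.js",
                  "http://requests.mydomain.com/api"))  # False: hosts differ
print(same_origin("http://js.mydomain.com/a",
                  "http://js.mydomain.com:80/b"))       # True: same triple
```

This is why the two subdomains in the question are different origins even though they share mydomain.com: the full hostname must match.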
|
[
"stackoverflow",
"0007248656.txt"
] | Q:
how to launch ext.window, using ext?
I want to launch an Ext.Window in the browser. What is the best way to do this? The catch is that the window should use
Ext.layout.BorderLayout.
A:
window = new Ext.Window({layout: 'border'}).show()
You'd need to be a bit more specific about your problem to get more specific answer.
|
[
"stackoverflow",
"0018815545.txt"
] | Q:
UIToolBar With Status bar
I have a problem with a view where the Toolbar appears underneath the status bar.
In Interface Builder my view looks like...
The code that assembles the view controllers...
- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
self.window = [[UIWindow alloc] initWithFrame:[[UIScreen mainScreen] bounds]];
self.mainVC = [[ChartVC alloc] initWithNibName:@"ChartVC_iPad" bundle:nil];
CGRect oldFrame = self.window.frame;
self.mainVC.view.frame = CGRectMake(0, 20, oldFrame.size.width, oldFrame.size.height - 20);
self.window.rootViewController = self.mainVC;
[self.window makeKeyAndVisible];
return YES;
}
And how it appears in the simulator... - (Notice the extra 20px space at the bottom)
So my question is, how can I correctly position the Toolbar and get rid of the white space at the bottom?
Edit: Added frame code...
A:
Setting a view controller as the window's root adjusts its view's frame to the window's frame, which is the same as the screen bounds and therefore includes the status bar.
Replace [[UIScreen mainScreen] bounds] with [[UIScreen mainScreen] applicationFrame], which returns a rect with the status bar cut off. Your window will then start below the status bar, so you don't need to touch your view's frame at all; it will be set properly by the window.
- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
self.window = [[UIWindow alloc] initWithFrame:[[UIScreen mainScreen] applicationFrame]];
self.mainVC = [[ChartVC alloc] initWithNibName:@"ChartVC_iPad" bundle:nil];
self.window.rootViewController = self.mainVC;
[self.window makeKeyAndVisible];
return YES;
}
|
[
"stackoverflow",
"0043471051.txt"
] | Q:
Watch is not executing when updating inner objects in array
I am creating a project using angularJS. I have a problem while using watch in my project. I am updating the inner objects of the array.
$scope.$watch('demoData',function(_n, _o) {
console.log("watchExecuting")
},true);
Here is the Jsfiddle:
http://jsfiddle.net/HB7LU/29132/
A:
You need to watch the expression bound to the input's ng-model, like below:
var app = angular.module('plunker', []);
app.controller('MainCtrl', function($scope) {
$scope.demoData = []
$scope.lid = "f0b751df4f0444d8";
$scope.demoData[$scope.lid] = {"textBox":"test"}
console.log( $scope.demoData)
$scope.demo = function(){
//console.log( $scope.demoData)
}
$scope.$watch('demoData[lid].textBox',function(_n, _o) {
console.log("watchExecuting")
},true);
});
<!DOCTYPE html>
<html ng-app="plunker">
<head>
<meta charset="utf-8" />
<title>AngularJS Plunker</title>
<script>document.write('<base href="' + document.location + '" />');</script>
<link rel="stylesheet" href="style.css" />
<script data-require="[email protected]" src="https://code.angularjs.org/1.4.12/angular.js" data-semver="1.4.9"></script>
<script src="app.js"></script>
</head>
<body ng-controller="MainCtrl">
<input type = "text" ng-model="demoData[lid].textBox" ng-change="demo()">
</body>
</html>
|
[
"stackoverflow",
"0024954847.txt"
] | Q:
Template parameters not deducible in partial specialization
I have a similar issue to the one found here, but it may be that I am doing something slightly different, so I will ask nonetheless.
There are some types that will be tagged with a tag structure:
template<typename Geometry=void, typename Enable = void>
struct tag
{
typedef void type;
};
and point and triangle tags are introduced:
struct point_tag {};
struct triangle_tag {};
to construct the point type using the std::vector:
template<>
struct tag<std::vector<double>>
{
typedef point_tag type;
};
and a triangle type as an alias template of the std::array:
template<typename Point>
using triangle =
typename std::enable_if
<
std::is_base_of<typename tag<Point>::type, point_tag>::value,
std::array<Point,3>
>::type;
that is enabled if the argument passed as Point parameter is really tagged with point_tag,
and afterwards, I would like to tag all triangles with the triangle_tag like this:
template <typename Point>
struct tag<triangle<Point>>
{
typedef triangle_tag type;
};
The std::array is aliased and not composited/inherited because composition and inheritance cause problems with initializer-list construction. However, the compilation fails with the error
g++ -std=c++1y main.cpp -o main
main.cpp:31:8: error: template parameters not deducible in partial specialization:
struct tag<triangle<Point>>
^
main.cpp:31:8: note: ‘Point’
If I don't rely on enabling the triangle based on the Point parameter being tagged, but do it for all types like this:
template<typename Point>
using triangle =
// This works, but there is no restriction on Point to be tagged with point_tag.
std::array<Point, 3>;
then the compilation works fine. However, then any std::array&lt;Point, 3&gt; counts as a triangle regardless of whether Point is tagged, and I am using function overloading based on arbitrary properties of types to remove from the overload set those function templates for which the enable_if fails. I am not relying on container interfaces to determine viable template arguments, because sometimes the implicit interfaces are exactly the same but the operation semantics differ. For example, a triangle is a closed circular line segment (involving operations on 3 edges), while a point chain is an open-ended line segment (involving operations on 2 edges). All operations require only a direct access operator from the template parameter, which leads to ambiguity in function template instantiation when the functions are implemented without enable_if restrictions - all covered in the linked article.
Here is the complete example.
Is there something I'm missing? How to get around this issue?
A:
Why not use your Enable template parameter?
Something like:
template <typename Point>
struct tag<
std::array<Point, 3>,
typename std::enable_if<
std::is_base_of<
typename tag<Point>::type,
point_tag
>::value
>::type
>
{
typedef triangle_tag type;
};
(ok, you repeat the enable_if...)
Live example
|
[
"stackoverflow",
"0012116778.txt"
] | Q:
plot function of datasets from different files in gnuplot
I have several files with similar data that I need to plot using gnuplot.
By example, I use something like this to plot the 1st Vs 5th column of 3 files:
plot "file1.csv" using 1:5 title 'test 1' with lp, \
"file2.csv" using 1:5 title 'test 2' with lp, \
"file3.csv" using 1:5 title 'test 3' with lp
but I don't know how to plot a function of the data from the 3 files. For example, I would like to include in the previous plot the mean of the 3 columns at each point (that is, the function f(x1,x2,x3) = (x1(i)+x2(i)+x3(i))/3 for the i-th data point). Is it possible?
A:
This is a common question, and the answer is: not directly from within gnuplot. You can, however call an external tool to do the math for you. Here are a couple of other answers with examples (you can search this site for 'gnuplot multiple files' for more...):
Example 1
Example 2
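As a sketch of the external-tool approach: a small Python script (filenames and the comma-separated, column-1/column-5 layout are assumed from the question) can write a fourth file holding the per-row mean, which gnuplot then plots like any other file:

```python
import csv
import os

def read_xy(path, xcol=0, ycol=4):
    # Read (column 1, column 5) pairs from a comma-separated file.
    with open(path) as f:
        return [(float(r[xcol]), float(r[ycol]))
                for r in csv.reader(f) if r]

def write_mean(paths, out="mean.csv"):
    # Assumes all files have the same number of rows and the same x values.
    data = [read_xy(p) for p in paths]
    with open(out, "w", newline="") as f:
        w = csv.writer(f)
        for points in zip(*data):
            x = points[0][0]
            mean = sum(y for _, y in points) / len(points)
            w.writerow([x, mean])

files = ["file1.csv", "file2.csv", "file3.csv"]
if all(os.path.exists(p) for p in files):
    write_mean(files)
```

Afterwards, plot "mean.csv" using 1:2 title 'mean' with lp can be appended to the existing gnuplot command.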
|
[
"stackoverflow",
"0015121405.txt"
] | Q:
threads or processes
I am writing basically a port scanner (not really, but it's close). Pinging machines one by one is just slow, so I definitely need some kind of parallel processing. The bottleneck is definitely network I/O, so I was thinking that threads would suffice (given Python's GIL) and they're easier to use. But would using processes instead bring a significant performance increase (15%+)?
Sadly, I don't have time to try both approaches and pick better of them based on some measurements or something :/
Thanks :)
A:
If you don't have time to wait for a performance test, you presumably just want guesses. So:
There's probably no real advantage to multiprocessing over threading here.
There is a disadvantage to multiprocessing in the overhead per task. You can get around that by tuning the batch size, but with threading, you don't have to.
So, I'd use threading.
However, I'd do it using concurrent.futures.ThreadPoolExecutor, so when you get a bit of time later, you can try the one-liner change to ProcessPoolExecutor and compare performance.
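A minimal sketch of that structure (check_host here is a stub standing in for the real ping/port probe, which is not shown in the question):

```python
from concurrent.futures import ThreadPoolExecutor  # ProcessPoolExecutor is the one-line swap

def check_host(host):
    # Placeholder for the network-I/O-bound probe (ping / port connect).
    return host, True

def scan(hosts, workers=100):
    with ThreadPoolExecutor(max_workers=workers) as ex:
        return dict(ex.map(check_host, hosts))

print(scan(["10.0.0.1", "10.0.0.2", "10.0.0.3"]))
```

Comparing the two pool types is then literally a matter of changing the import and the constructor name, which makes the later measurement cheap to run.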
|
[
"stackoverflow",
"0002979396.txt"
] | Q:
Releasing dynamically added UILabel
I am adding a number of UILabels dynamically to my view like this:
UILabel *test = [[UILabel alloc] initWithFrame:CGRectMake(x, y, 50, 50)];
[self.view addSubview:test];
Is it necessary to release these UILabels from memory in viewDidUnload and dealloc? If yes, how will I release them? How will I get their references?
A:
Yes.
Since self.view already -retained the label in -addSubview:, you can -release it immediately.
UILabel *test = [[UILabel alloc] initWithFrame:CGRectMake(x, y, 50, 50)];
[self.view addSubview:test];
[test release]; // <--
|
[
"stackoverflow",
"0023717602.txt"
] | Q:
What is the best way to reference a core data entity?
I have been reading and working a lot with core data recently, and I love the way it implements the data storage.
However, there is one thing I am still not able to resolve in my app.
On my data model, I have an entity which represents a chunk of text the user have inserted in a text field. I want to be able to give the user the possibility to embed this chuck of text into another text field by referencing it.
If I was working with a database directly, I would do something like this:
The user adds a chunk of text to the database. It happens to have the row index 17.
The user goes back to the main editor where it can reference chunks of text from the database and inserts a pattern such as {chunk.17} where 17 is the row index in the database.
The user clicks a "parse" button, making the app query the database for this row and replace the string {chunk.17} for the text chunk stored in there.
Since Core Data has no such thing as an auto incremented index, I am not sure how to create a similar behavior without much work. Any ideas are appreciated!
A:
It sounds like you'll need to add a property (such as "id") to your entity that has some unique value (there are lots of discussions on Stackoverflow about how to generate a unique ID/value for an object). Once you have that, you can reference this value using an NSPredicate:
NSFetchRequest *req = [[NSFetchRequest alloc] initWithEntityName:@"Demo"];
// A predicate is used as a limited substitute for a 'where' clause. In this case, we're specifying that
// the result set should only contain entities whose id is 'chunk.17'.
[req setPredicate:[NSPredicate predicateWithFormat:@"%K LIKE[c] %@", @"id", @"chunk.17"]];
NSError *error;
NSArray *results = [self.managedObjectContext executeFetchRequest:req error:&error];
|
[
"stackoverflow",
"0008702304.txt"
] | Q:
Drawing a hero and moving it
What is the best way to draw a hero and move it? I just need the best code for doing that. Before writing this I found a way, but when I made the surface holder transparent, I realised that the code draws a new bitmap in front of the old one every millisecond. That looks kind of laggy to me, but maybe I'm not right. Please help me; I'm actually kind of confused...
Anyway, here's the code that I think is laggy:
MainThread.java
/**
*
*/
package com.workspace.pockethero;
import android.graphics.Canvas;
import android.util.Log;
import android.view.SurfaceHolder;
/**
* @author impaler
*
* The Main thread which contains the game loop. The thread must have access to
* the surface view and holder to trigger events every game tick.
*/
public class MainThread extends Thread {
private static final String TAG = MainThread.class.getSimpleName();
// Surface holder that can access the physical surface
private SurfaceHolder surfaceHolder;
// The actual view that handles inputs
// and draws to the surface
private MainGamePanel gamePanel;
// flag to hold game state
private boolean running;
public void setRunning(boolean running) {
this.running = running;
}
public MainThread(SurfaceHolder surfaceHolder, MainGamePanel gamePanel) {
super();
this.surfaceHolder = surfaceHolder;
this.gamePanel = gamePanel;
}
@Override
public void run() {
Canvas canvas;
Log.d(TAG, "Starting game loop");
while (running) {
canvas = null;
// try locking the canvas for exclusive pixel editing
// in the surface
try {
canvas = this.surfaceHolder.lockCanvas();
synchronized (surfaceHolder) {
// update game state
this.gamePanel.update();
// render state to the screen
// draws the canvas on the panel
this.gamePanel.render(canvas);
}
} finally {
// in case of an exception the surface is not left in
// an inconsistent state
if (canvas != null) {
surfaceHolder.unlockCanvasAndPost(canvas);
}
} // end finally
}
}
}
MainGamePanel.java
/**
*
*/
package com.workspace.pockethero;
import com.workspace.pockethero.model.Droid;
import com.workspace.pockethero.buttons.*;
import android.content.Context;
import android.content.res.Resources;
import android.graphics.BitmapFactory;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.drawable.Drawable;
import android.util.Log;
import android.view.MotionEvent;
import android.view.SurfaceHolder;
import android.view.SurfaceView;
import android.view.View;
/**
* @author impaler
* This is the main surface that handles the ontouch events and draws
* the image to the screen.
*/
public class MainGamePanel extends SurfaceView implements
SurfaceHolder.Callback {
private static final String TAG = MainGamePanel.class.getSimpleName();
private MainThread thread;
public Droid droid;
public Butt butt;
public Butt1 butt1;
public Butt2 butt2;
public Butt3 butt3;
public Buttz buttz;
public Buttz1 buttz1;
public Buttz2 buttz2;
public Buttz3 buttz3;
public Buttx buttx;
public Build build;
public int decentreX;
public int decentreY;
public int debottomA;
public boolean moved;
public boolean moved1;
public boolean moved2;
public boolean moved3;
public boolean moved4;
public boolean moved5;
public boolean moved6;
public boolean moved7;
private Drawable myImage;
public boolean mapPainted;
public MainGamePanel(Context context) {
super(context);
// adding the callback (this) to the surface holder to intercept events
getHolder().addCallback(this);
// create droid and load bitmap
decentreX = PocketHero.centreX;
decentreY = PocketHero.centreY;
debottomA = PocketHero.bottomA;
droid = new Droid(BitmapFactory.decodeResource(getResources(), R.drawable.herod), decentreX, decentreY);
butt = new Butt(BitmapFactory.decodeResource(getResources(), R.drawable.button), 110, debottomA - 70);
butt1 = new Butt1(BitmapFactory.decodeResource(getResources(), R.drawable.button1), 70, debottomA - 110);
butt2 = new Butt2(BitmapFactory.decodeResource(getResources(), R.drawable.button2), 30, debottomA - 70);
butt3 = new Butt3(BitmapFactory.decodeResource(getResources(), R.drawable.button3), 70, debottomA - 30);
buttz = new Buttz(BitmapFactory.decodeResource(getResources(), R.drawable.zbutton), 110, debottomA - 110);
buttz1 = new Buttz1(BitmapFactory.decodeResource(getResources(), R.drawable.zbutton1), 30, debottomA - 110);
buttz2 = new Buttz2(BitmapFactory.decodeResource(getResources(), R.drawable.zbutton2), 30, debottomA - 30);
buttz3 = new Buttz3(BitmapFactory.decodeResource(getResources(), R.drawable.zbutton3), 110, debottomA - 30);
buttx = new Buttx(BitmapFactory.decodeResource(getResources(), R.drawable.xbutton), 70, debottomA - 70);
build = new Build(BitmapFactory.decodeResource(getResources(), R.drawable.building), 500, 200);
// create the game loop thread
//300 indicates start position of bitmapfield on screen
thread = new MainThread(getHolder(), this);
// make the GamePanel focusable so it can handle events
setFocusable(true);
this.setOnTouchListener(new OnTouchListener() {
public boolean onTouch(View V, MotionEvent event) {
if (event.getAction() == MotionEvent.ACTION_DOWN) {
// delegating event handling to the droid
handleActionDown((int)event.getX(), (int)event.getY());
} if (event.getAction() == MotionEvent.ACTION_MOVE) {
handleActionDown((int)event.getX(), (int)event.getY());
// the gestures
} if (event.getAction() == MotionEvent.ACTION_UP) {
// touch was released
if (droid.touched) {
droid.setTouched(false);
}
if (droid.touched1) {
droid.setTouched1(false);
}
if (droid.touched2) {
droid.setTouched2(false);
}
if (droid.touched3) {
droid.setTouched3(false);
}
}
return true;
}
});
}
public void surfaceChanged(SurfaceHolder holder, int format, int width,
int height) {
}
public void surfaceCreated(SurfaceHolder holder) {
// at this point the surface is created and
// we can safely start the game loop
thread.setRunning(true);
thread.start();
}
public void surfaceDestroyed(SurfaceHolder holder) {
Log.d(TAG, "Surface is being destroyed");
// tell the thread to shut down and wait for it to finish
// this is a clean shutdown
thread.setRunning(false);
boolean retry = true;
while (retry) {
try {
thread.join();
retry = false;
} catch (InterruptedException e) {
// try again shutting down the thread
}
}
Log.d(TAG, "Thread was shut down cleanly");
}
public void render(Canvas canvas) {
// clear the previous frame before re-drawing every sprite
canvas.drawColor(Color.TRANSPARENT, android.graphics.PorterDuff.Mode.CLEAR);
droid.draw(canvas);
butt.draw(canvas);
butt1.draw(canvas);
butt2.draw(canvas);
butt3.draw(canvas);
buttz.draw(canvas);
buttz1.draw(canvas);
buttz2.draw(canvas);
buttz3.draw(canvas);
buttx.draw(canvas);
}
/**
* This is the game update method. It iterates through all the objects
* and calls their update method if they have one or calls specific
* engine's update method.
*/
public void update() {
droid.update();
}
public void handleActionDown(int eventX, int eventY) {
if (eventX >= (butt.x - butt.bitmap.getWidth() / 2) && (eventX <= (butt.x + butt.bitmap.getWidth()/2))) {
if (eventY >= (buttz.y - buttz.bitmap.getHeight() / 2) && (eventY <= (buttz3.y + buttz3.bitmap.getHeight() / 2))) {
// droid touched
droid.setTouched(true);
} else {
droid.setTouched(false);
}
} else {
droid.setTouched(false);
}
if (eventX >= (buttz1.x - buttz1.bitmap.getWidth() / 2) && (eventX <= (buttz.x + buttz.bitmap.getWidth()/2))) {
if (eventY >= (butt1.y - butt1.bitmap.getHeight() / 2) && (eventY <= (butt1.y + butt1.bitmap.getHeight() / 2))) {
// droid touched
droid.setTouched1(true);
} else {
droid.setTouched1(false);
}
}else {
droid.setTouched1(false);
}
if (eventX >= (butt2.x - butt2.bitmap.getWidth() / 2) && (eventX <= (butt2.x + butt2.bitmap.getWidth()/2))) {
if (eventY >= (buttz1.y - buttz1.bitmap.getHeight() / 2) && (eventY <= (buttz2.y + buttz2.bitmap.getHeight() / 2))) {
// droid touched
droid.setTouched2(true);
} else {
droid.setTouched2(false);
}
}else {
droid.setTouched2(false);
}
if (eventX >= (buttz2.x - buttz2.bitmap.getWidth() / 2) && (eventX <= (buttz3.x + buttz3.bitmap.getWidth()/2))) {
if (eventY >= (butt3.y - butt3.bitmap.getHeight() / 2) && (eventY <= (butt3.y + butt3.bitmap.getHeight() / 2))) {
// droid touched
droid.setTouched3(true);
} else {
droid.setTouched3(false);
}
}else {
droid.setTouched3(false);
}
if (droid.touched & !droid.touched1 & !droid.touched3) {
if (!moved) {
droid.bitmap = BitmapFactory.decodeResource(getResources(), R.drawable.heror);
moved = true;
}
}else {
moved = false;
}if (droid.touched1 & !droid.touched & !droid.touched2){
if (!moved1) {
droid.bitmap = BitmapFactory.decodeResource(getResources(), R.drawable.herou);
moved1 = true;
}
}else {
moved1 = false;
} if (droid.touched2 & !droid.touched1 & !droid.touched3){
if (!moved2) {
droid.bitmap = BitmapFactory.decodeResource(getResources(), R.drawable.herol);
moved2 = true;
}
}else {
moved2 = false;
} if (droid.touched3 & !droid.touched2 & !droid.touched){
if (!moved7) {
droid.bitmap = BitmapFactory.decodeResource(getResources(), R.drawable.herod);
moved7 = true;
}
}else {
moved7 = false;
} if (droid.touched & droid.touched1){
if (!moved3) {
droid.bitmap = BitmapFactory.decodeResource(getResources(), R.drawable.herour);
moved3 = true;
}
}else {
moved3 = false;
} if (droid.touched1 & droid.touched2){
if (!moved4) {
droid.bitmap = BitmapFactory.decodeResource(getResources(), R.drawable.heroul);
moved4 = true;
}
}else {
moved4 = false;
} if (droid.touched2 & droid.touched3){
if (!moved5) {
droid.bitmap = BitmapFactory.decodeResource(getResources(), R.drawable.herodl);
moved5 = true;
}
}else {
moved5 = false;
} if (droid.touched3 & droid.touched){
if (!moved6) {
droid.bitmap = BitmapFactory.decodeResource(getResources(), R.drawable.herodr);
moved6 = true;
}
}else {
moved6 = false;
}
}
}
and the Droid.java
/**
*
*/
package com.workspace.pockethero.model;
import android.graphics.Bitmap;
import android.graphics.Canvas;
/**
* This is a test droid that is dragged, dropped, moved, smashed against
* the wall and done other terrible things with.
* Wait till it gets a weapon!
*
* @author impaler
*
*/
public class Droid {
public Bitmap bitmap; // the actual bitmap
public int x; // the X coordinate
public int y; // the Y coordinate
public boolean touched; // if droid is touched/picked up
public boolean touched1; // if droid is touched/picked up
public boolean touched2; // if droid is touched/picked up
public boolean touched3; // if droid is touched/picked up
public Droid(Bitmap bitmap, int x, int y) {
this.bitmap = bitmap;
this.x = x;
this.y = y;
}
public Bitmap getBitmap() {
return bitmap;
}
public void setBitmap(Bitmap bitmap) {
this.bitmap = bitmap;
}
public int getX() {
return x;
}
public void setX(int x) {
this.x = x;
}
public int getY() {
return y;
}
public void setY(int y) {
this.y = y;
}
public void draw(Canvas canvas) {
canvas.drawBitmap(bitmap, x - (bitmap.getWidth() / 2), y - (bitmap.getHeight() / 2), null);
}
/**
* Method which updates the droid's internal state every tick
*/
public void setTouched(boolean touched) {
this.touched = touched;
}
public boolean isTouched() {
return touched;
}
public void setTouched1(boolean touched) {
this.touched1 = touched;
}
public boolean isTouched1() {
return touched1;
}
public void setTouched2(boolean touched) {
this.touched2 = touched;
}
public boolean isTouched2() {
return touched2;
}
public void setTouched3(boolean touched) {
this.touched3 = touched;
}
public boolean isTouched3() {
return touched3;
}
public void update() {
if (touched & !touched1 & !touched3) {
x += 1;
}else if (touched1 & !touched & !touched2){
y -= 1;
}else if (touched2 & !touched1 & !touched3){
x -= 1;
}else if (touched3 & !touched2 & !touched){
y += 1;
}else if (touched & touched1){
x += 1;
y -= 1;
}else if (touched1 & touched2){
x -= 1;
y -= 1;
}else if (touched2 & touched3){
x -= 1;
y += 1;
}else if (touched3 & touched){
x += 1;
y += 1;
}
}
/**
* Handles the {@link MotionEvent.ACTION_DOWN} event. If the event happens on the
* bitmap surface then the touched state is set to <code>true</code> otherwise to <code>false</code>
* @param eventX - the event's X coordinate
* @param eventY - the event's Y coordinate
*/
}
There are also other classes created for each button, but I didn't paste them here because they are practically the same as Droid.java.
A:
Re-drawing the whole frame each loop is the correct way to draw/move sprites and perform anything with canvas.
The buffer is cleared each frame and you need to re-draw the background and all objects at their specified position.
In your code, render() and update() will not be called more often than roughly every 16 milliseconds (60 frames per second), so all you have to think about is drawing the on-screen scene.
I'm not sure what you mean by 'laggy', maybe you have a performance issue related to the size of your Bitmaps or phone performance, but I hope this is close to what you were looking for.
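The loop structure being described can be sketched in a language-neutral way (Python here purely for illustration; the real code runs on an Android SurfaceView):

```python
import time

def game_loop(update, render, fps=60, frames=3):
    # Fixed-rate loop: every frame advances state and redraws EVERYTHING.
    frame_time = 1.0 / fps
    for _ in range(frames):
        start = time.monotonic()
        update()                 # move sprites
        render()                 # clear buffer + draw all objects again
        elapsed = time.monotonic() - start
        if elapsed < frame_time:
            time.sleep(frame_time - elapsed)  # cap at ~fps frames/second

calls = {"update": 0, "render": 0}
game_loop(lambda: calls.__setitem__("update", calls["update"] + 1),
          lambda: calls.__setitem__("render", calls["render"] + 1))
print(calls)  # both callbacks ran once per frame
```

The point is that a full redraw per frame is normal; the frame budget, not the redraw itself, is what determines smoothness.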
|
[
"stackoverflow",
"0016472776.txt"
] | Q:
PHP lstat command doesn't distinguish shortcuts in windows
In Windows, I open a dir, read the files, and for each file run stat to determine the size, etc.
The problem is that when I run stat on a folder SHORTCUT, it comes back as a FOLDER, and I can't see anywhere in the mode bitmask that might indicate this. This has been true for all of the folder shortcuts in c:\Documents and Settings\myUserName\.
For these shortcuts, is_file returns false, is_dir returns true and is_link isn't supported in XP.
Here's an excerpt from my code (it has been trimmed down, so there may be bugs) :
if(($h=@opendir($root))!==false){
while (false !== ($file = readdir($h))){
if(!($file=="." || $file=="..")){
if( $stat = @lstat($root . $file) ){
$ary[0] = $file;
$ary[1] = $root;
$ary[2] = Date("m/d/y H:i:s", $stat['mtime']);
if($stat['mode'] & 040000){
$ary[3]="dir";
$ary[4]=0;
}else{
$ary[3] ="file";
$ary[4] = $stat['size'];
}
echo(json_encode($ary));
}
}
}
}
A workaround for this will be appreciated...
EDIT: Winterblood's solution almost worked
First off - my bad - it's a win7 machine.
Thanks Winterblood for the quick turnaround - this worked for several of the shortcuts, and the PHP manual says just that... However,
c:\users\myUserName\AppData\Local\Application Data
(and others) are still coming back as directories, while winSCP correctly sees them as shortcuts. As a matter of fact, the 'mode' is 040777, which is exactly the same as many real folders.
Any other suggestions?
A:
PHP's stat() function "follows" shortcuts/symlinks, reporting details on the linked file/folder, not the actual link itself.
For getting stat details on the link itself use lstat().
More information in the PHP documentation on lstat.
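The distinction carries over to other languages; a quick Python sketch of the same behaviour (POSIX only - creating symlinks on Windows may require extra privileges):

```python
import os
import stat
import tempfile

def is_link(path):
    # lstat() reports on the link itself, never following it.
    return stat.S_ISLNK(os.lstat(path).st_mode)

d = tempfile.mkdtemp()
target = os.path.join(d, "real_dir")
link = os.path.join(d, "link_to_dir")
os.mkdir(target)
os.symlink(target, link)

print(stat.S_ISDIR(os.stat(link).st_mode))  # True: stat() follows the link
print(is_link(link))                        # True: lstat() sees the link
print(is_link(target))                      # False: a real directory
```

Note that Windows .lnk shortcuts (as in the question) are ordinary files interpreted by the shell, not filesystem links, which is why neither stat() nor lstat() flags them.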
|
[
"meta.stackexchange",
"0000114132.txt"
] | Q:
Document "self answer more quickly" in privilege lists
https://meta.stackexchange.com/a/86186/38765 states that you need a minimum of 100 rep to answer your own question within 8 hours.
This isn't mentioned in the privileges that are gained at 100 rep (which are create chat rooms and edit community wiki posts), or the 10 rep Remove new user restrictions.
I sometimes have to look up this privilege so I can explain it when people ask new users "Why are you answering your question in a comment rather than an answer?"
A:
Added the note about the self-answer privilege to Meta's privilege page; waffles (or someone from the dev team) will have to push this out to the rest of the sites.
|
[
"stackoverflow",
"0062983037.txt"
] | Q:
How to query combo box of only current record/row in Access data entry form?
I have created a data entry form in Access that uses a combobox for entering the farmer name. The combobox is used for ease and to make sure only farmers from the list are entered; for convenience it is re-queried as you type.
The combobox works well for the first entry, but the previous farmers' names vanish when it is queried for the next row. I think Access is requerying all the dropdowns rather than just the current one.
The VBA for the querying drop down is given below:
Public Sub FilterComboAsYouType(combo As ComboBox, defaultSQL As String, _
        lookupField As String)
    Dim strSQL As String
    If Len(combo.Text) > 0 Then
        strSQL = defaultSQL & " AND " & lookupField & " LIKE '*" & combo.Text & "*'"
    Else
        strSQL = defaultSQL 'This is the default row source of the combo box
    End If
    combo.RowSource = strSQL
    combo.Dropdown
End Sub
Private Sub Combo137_Change()
    FilterComboAsYouType Me.Combo137, "SELECT farmer.name, farmer.ID FROM farms INNER JOIN farmer " & _
        "ON farms.ID = farmer.farm_id WHERE farms.ID LIKE '" & Form_Name & "*'", "farmer.name"
End Sub
Private Sub Combo137_GotFocus()
    If Form_Name <> "" Then
        FilterComboAsYouType Me.Combo137, "SELECT farmer.name, farmer.ID FROM farms INNER JOIN farmer " & _
            "ON farms.ID = farmer.farm_id WHERE farms.ID LIKE '" & Form_Name & "*'", "farmer.name"
    Else
        FilterComboAsYouType Me.Combo137, "SELECT farmer.name, farmer.ID FROM farms INNER JOIN farmer " & _
            "ON farms.ID = farmer.farm_id WHERE farms.ID LIKE 'NONE*'", "farmer.name"
    End If
End Sub
A:
Yes, all records will show the same filtered list because there is only one combobox, and property settings are reflected in all instances. Filtering a combobox RowSource based on the value in another field/control is known as "cascading" or "dependent". Also, your RowSource has an alias: the value saved is not the value displayed. When the list is filtered, the display alias will not be available for records whose saved value has been filtered out. This is a well-known issue with cascading comboboxes. Options for dealing with it:
for any form style, only filter the list for new record or when primary value is changed, then reset to full list for existing records
for forms in Continuous or Datasheet view, include lookup table in form RecordSource, bind a textbox to descriptive field from lookup table, position textbox on top of combobox, set textbox as Locked Yes and TabStop No
|
[
"stackoverflow",
"0054386946.txt"
] | Q:
new data added to JSON keeps replacing previous
It seems like the notes = JSON.parse(fs.readFileSync("notes-data.json")) line of my code is not working as it should...
When I add a new note it should be appended to the array in the .json file, but instead it just replaces the previous note.
let addNote = (title, body) => {
let notes = [];
let note = {
title,
body
};
notes.push(note);
fs.writeFileSync("notes-data.json", JSON.stringify(notes));
notes = JSON.parse(fs.readFileSync("notes-data.json"))
};
Code Screenshot:
Thank you in advance
A:
If you want to add to the file contents, then you should really read the content before doing anything else:
let addNote = (title, body) => {
let notes;
try {
notes = JSON.parse(fs.readFileSync("notes-data.json")); // <---
} catch(e) {
notes = [];
}
let note = {
title,
body
};
notes.push(note);
fs.writeFileSync("notes-data.json", JSON.stringify(notes));
};
|
[
"stackoverflow",
"0005261447.txt"
] | Q:
Xcode 4 & three20 & create IPA archive: No such file or directory
In Xcode 3.2.5 I use "Build And Archive" to create an IPA file without any problems.
How can I do that in Xcode 4? I think I have to use "Product -> Archive", but I get over 100 error messages in the three20 framework. Most are "No such file or directory". ("Product -> Build For -> Build For Archiving" works. No errors.)
For example, this is the first error message:
../scripts/Protect.command: line 23: cd: /Users/[USERNAME]/Library/Developer/Xcode/DerivedData/[PROJECTNAME]-blabla/ArchiveIntermediates/[PROJECTNAME]/BuildProductsPath/Release-iphoneos/../three20/Three20Core: No such file or directory
The path "/[PROJECTNAME]/BuildProductsPath/three20/" really doesn't exists, but this path exists: "/[PROJECTNAME]/three20/"
What can I do?
A:
The Three20 documentation did not solve this issue for me (unfortunately...). Eventually what worked for me was a mix of a few solutions. There is a difference between "Archive" and "Build for Archiving" (or build for run) and using these steps I have both of them working with no build issues:
You will need to change the scripts as Manni mentioned, set the "Skip Install" flag for each Three20 project linked to your project tree and add the following paths to your project's "Header search paths":
"$(BUILT_PRODUCTS_DIR)/../three20"
"$(BUILT_PRODUCTS_DIR)/../../three20"
this will get you to work with the Build option. When you want to perform the archive action, then you will also need to change the "Locations" preference in Xcode as featherless mentioned above.
I documented these steps in this post.
A:
Configuration that works both for build and archive in Xcode4.
https://github.com/pazustep/three20/commit/4a9aad4eb90a6962dd729d245f9293a7cc0d7f36
src/common/Configurations/Paths.xcconfig
REPO_ROOT_PATH = $(SRCROOT)/../..
ROOT_SOURCE_PATH = $(REPO_ROOT_PATH)/src
//OBJROOT = $(REPO_ROOT_PATH)/Build
//SYMROOT = $(OBJROOT)/Products
// Search Paths
LIBRARY_SEARCH_PATHS = $(STDLIB_LIBRARY)
//HEADER_SEARCH_PATHS = $(STDLIB_HEADERS) "$(CONFIGURATION_BUILD_DIR)/../three20"
HEADER_SEARCH_PATHS = $(STDLIB_HEADERS) "$(BUILT_PRODUCTS_DIR)/../three20" "$(BUILT_PRODUCTS_DIR)/../../three20"
src/scripts/Protect.command
# Ignore whitespace characters in paths
IFS=$'\n'
#cd ${CONFIGURATION_BUILD_DIR}${PUBLIC_HEADERS_FOLDER_PATH}
if [ "${DEPLOYMENT_LOCATION}" == "YES" ]; then
PREFIX=${BUILT_PRODUCTS_DIR}/..
else
PREFIX=${BUILT_PRODUCTS_DIR}
fi
cd ${PREFIX}${PUBLIC_HEADERS_FOLDER_PATH}
chmod a-w *.h 2>> /dev/null
chmod a-w private/*.h 2>> /dev/null
exit 0
A:
Another thing that could throw off the build process is using a scheme name that contains spaces.
Xcode won't stop you from doing it, but if you use a name like "Ad Hoc" for your scheme, you'll end up with the same errors:
Three20/Three20+Additions.h: No such file or directory
|
[
"math.stackexchange",
"0001128749.txt"
] | Q:
Evaluate the sum $P=\sum_{n=1}^\infty \frac{a_n}{2^n}$.
Question: Let ${\{a_n}\}$ be the sequence of $0$s and $1$s such that $a_n=1$ if $n$ is a prime number, and $a_n=0$ otherwise. So ${\{a_n}\}={\{0,1,1,0,1,0,1,0,0,0,1,...}\}$. Evaluate the sum
$P=\sum_{n=1}^\infty \dfrac{a_n}{2^n}$.
Incomplete answer: I can only see some lower and upper bounds for $P$. If every $a_n$ were $1$ the sum would be $1$; the first term is $0$, so the upper bound drops to $\frac{1}{2}$; since even numbers greater than two are not prime, the upper bound drops further to $\frac{1}{2}-\frac{1}{16}-\frac{1}{64}-...=\frac{1}{2}-\frac{1}{12}=\frac{5}{12}$. We can do the same for the multiples of $3$, $5$, and so on, but since there are infinitely many primes, this method doesn't seem to yield a closed form. The first lower bound is $0$, BTW.
Please help me to evaluate the sum; thanks a lot.
A:
It will be impossible to evaluate (find a closed form) this sum, because it's the Prime Constant. You can read more about its value at Wikipedia or Mathworld, if you wish.
By the way: $(a_n)\neq (0,1,1,0,0,0,1,0,0,0,1,...)$, but $(a_n) = (0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, ...)$.
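As a quick numerical sanity check (a sketch using plain trial division, no external libraries), the partial sums do converge toward the prime constant ≈ 0.414682...:

```python
def is_prime(n):
    """Trial-division primality test; fine for small n."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def prime_constant(N):
    """Partial sum of a_n / 2^n for n = 1..N, where a_n = 1 iff n is prime."""
    return sum(1.0 / 2**n for n in range(1, N + 1) if is_prime(n))

print(round(prime_constant(60), 6))  # 0.414683
```

Because the tail is bounded by a geometric series, the partial sum up to N = 60 already agrees with the constant to six decimal places.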
|
[
"stackoverflow",
"0004696963.txt"
] | Q:
Interactive items in Silverlight Accordion Header
I have an Accordion in my silverlight application, and I am putting textboxes and buttons in the header of the accordion items. It seems as though, because the header catches the click event for expanding the accordion item, it does not propagate that event to the items in the header as well.
Here is my code:
<toolkit:Accordion Height="27" HorizontalAlignment="Left" Name="accordion1" VerticalAlignment="Top" Width="400">
<toolkit:AccordionItem>
<toolkit:AccordionItem.Header>
<toolkit:WrapPanel>
<sdk:Label Content="Program" Width="42" FontSize="13" />
<sdk:Label Content="Prog" Width="42" FontSize="13" />
<sdk:Label Content="Start:" />
<TextBox Width="45"></TextBox>
<sdk:Label Content="End:" />
<TextBox Width="45"></TextBox>
<sdk:Label Content="Total:" />
<sdk:Label Content="Total time: " Width="91" />
<Button Click="delete_Click" Content="X" ></Button>
</toolkit:WrapPanel>
</toolkit:AccordionItem.Header>
<my:SetlistConfigurator />
</toolkit:AccordionItem>
</toolkit:Accordion>
Anyone have any ideas of how I can propagate the Click event to the children within the header?
A:
By default an AccordionItem sets its ExpanderButton to disabled when it is expanded. This is in the Locked VisualState. If you remove that VisualState you will see that you can indeed click on items within the header when it is selected.
Here is the LockedStates VisualStateGroup of the AccordionItem which you should alter. I can post the whole style if you need, though it's quite verbose.
<VisualStateGroup x:Name="LockedStates">
<VisualStateGroup.Transitions>
<VisualTransition GeneratedDuration="0"/>
</VisualStateGroup.Transitions>
<VisualState x:Name="Locked">
<Storyboard>
<!--
<ObjectAnimationUsingKeyFrames Duration="0" Storyboard.TargetProperty="IsEnabled" Storyboard.TargetName="ExpanderButton">
<DiscreteObjectKeyFrame KeyTime="0" Value="False"/>
</ObjectAnimationUsingKeyFrames>
-->
</Storyboard>
</VisualState>
<VisualState x:Name="Unlocked">
<Storyboard>
<ObjectAnimationUsingKeyFrames Duration="0" Storyboard.TargetProperty="IsEnabled" Storyboard.TargetName="ExpanderButton">
<DiscreteObjectKeyFrame KeyTime="0" Value="True"/>
</ObjectAnimationUsingKeyFrames>
</Storyboard>
</VisualState>
</VisualStateGroup>
|
[
"serverfault",
"0000425769.txt"
] | Q:
Switch flooding when bonding interfaces in Linux
+--------+
| Host A |
+----+---+
| eth0 (AA:AA:AA:AA:AA:AA)
|
|
+----+-----+
| Switch 1 | (layer2/3)
+----+-----+
|
+----+-----+
| Switch 2 |
+----+-----+
|
+----------+----------+
+-------------------------+ Switch 3 +-------------------------+
| +----+-----------+----+ |
| | | |
| | | |
| eth0 (B0:B0:B0:B0:B0:B0) | | eth4 (B4:B4:B4:B4:B4:B4) |
| +----+-----------+----+ |
| | Host B | |
| +----+-----------+----+ |
| eth1 (B1:B1:B1:B1:B1:B1) | | eth5 (B5:B5:B5:B5:B5:B5) |
| | | |
| | | |
+------------------------------+ +------------------------------+
Topology overview
Host A has a single NIC.
Host B has four NICs which are bonded using the balance-alb mode.
Both hosts run RHEL 6.0, and both are on the same IPv4 subnet.
Traffic analysis
Host A is sending data to Host B using some SQL database application.
Traffic from Host A to Host B: The source int/MAC is eth0/AA:AA:AA:AA:AA:AA, the destination int/MAC is eth5/B5:B5:B5:B5:B5:B5.
Traffic from Host B to Host A: The source int/MAC is eth0/B0:B0:B0:B0:B0:B0, the destination int/MAC is eth0/AA:AA:AA:AA:AA:AA.
Once the TCP connection has been established, Host B sends no further frames out eth5.
The MAC address of eth5 expires from the bridge tables of both Switch 1 & Switch 2.
Switch 1 continues to receive frames from Host A which are destined for B5:B5:B5:B5:B5:B5.
Because Switch 1 and Switch 2 no longer have bridge table entries for B5:B5:B5:B5:B5:B5, they flood the frames out all ports on the same VLAN (except for the one it came in on, of course).
Reproduce
If you ping Host B from a workstation which is connected to either Switch 1 or 2, B5:B5:B5:B5:B5:B5 re-enters the bridge tables and the flooding stops.
After five minutes (the default bridge table timeout), flooding resumes.
Question
It is clear that on Host B, frames arrive on eth5 and exit out eth0. This seems ok as that's what the Linux bonding algorithm is designed to do - balance incoming and outgoing traffic. But since the switch stops receiving frames with the source MAC of eth5, it gets timed out of the bridge table, resulting in flooding.
Is this normal? Why aren't any more frames originating from eth5? Is it because there is simply no other traffic going on (the only connection is a single large data transfer from Host A)?
I've researched this for a long time and haven't found an answer. Documentation states that no switch changes are necessary when using mode 6 of the Linux interface bonding (balance-alb). Is this behavior occurring because Host B doesn't send any further packets out of eth5, whereas in normal circumstances it's expected that it would? One solution is to setup a cron job which pings Host B to keep the bridge table entries from timing out, but that seems like a dirty hack.
A:
Yes, this is expected. You've hit a fairly common issue with NIC bonding to hosts: unicast flooding. As you've noted, the timers on your switch for the hardware addresses in question expire because no frames sourced from those addresses are being observed.
Here are the general options-
1.) Longer address table timeouts. On a mixed L2/L3 switch the ARP and CAM timers should be close to one another (with the CAM timer running a few seconds longer). This recommendation stands regardless of the rest of the configuration. On the L2 switch the timers can generally be set longer without too many problems. That said, unless you disable the timers altogether you'll be back in the same situation eventually if there isn't some kind of traffic sourcing from those other addresses.
2.) You could hard-code the MAC addresses on the switches in question (all of the switches in the diagram, unfortunately). This is obviously not optimal for a number of reasons.
3.) Change the bonding mode on the Linux side to one that uses a common source MAC (i.e. 802.3ad / LACP). This has a lot of operational advantages if your switch supports it.
4.) Generate gratuitous arps via a cron job from each interface. You may need some dummy IP's on the various interfaces to prevent an oscillation condition (i.e. the host's IP cycles through the various hardware addresses).
5.) If it's a traffic issue, just go to 10GE! (sorry - had to throw that in there)
The LACP route is probably the most common and supportable and the switches can likely be configured to balance inbound traffic to the server fairly evenly across the various links. Failing that I think the gratuitous arp option is going to be the easiest to integrate.
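A minimal sketch of option 4 as a crontab fragment (the interface names, IP address, arping path, and file name are assumptions; iputils arping's -U flag sends an unsolicited/gratuitous ARP, -I selects the source interface):

```shell
# /etc/cron.d/bond-garp (hypothetical): refresh the switches' CAM tables
# by sourcing one gratuitous ARP per bonded slave every minute, so the
# slave MAC addresses never age out of the bridge tables.
* * * * * root /sbin/arping -c 1 -U -I eth0 192.0.2.10
* * * * * root /sbin/arping -c 1 -U -I eth1 192.0.2.10
* * * * * root /sbin/arping -c 1 -U -I eth4 192.0.2.10
* * * * * root /sbin/arping -c 1 -U -I eth5 192.0.2.10
```

This keeps the bridge entries alive without touching the switches, though switching to LACP (option 3) avoids the need for it entirely.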
|
[
"stackoverflow",
"0013772024.txt"
] | Q:
Construct a vector containing intervals between the values of two other vectors
I have two vectors of integers, like so:
A <- c(1,5,14,24)
B <- c(3,9,22,30)
I need to construct from these the vector containing the ranges between each value, concatenated together, like so:
c(1:3,5:9,14:22,24:30)
What is the best way to do this? I couldn't find another question addressing this on the site. I tried some stuff using higher order functions (Map, Fold, etc.) but they all seem to take only one list argument.
A:
you could use mapply here to get your ranges.
mySeq <- mapply(seq, A, B)
dput(mySeq)
# list(1:3, 5:9, 14:22, 24:30)
As @señor points out, if you want the ranges as a single vector, use unlist as well:
unlist(mapply(seq, A, B))
# [1] 1 2 3 5 6 7 8 9 14 15 16 17 18 19 20 21 22 24 25 26 27 28 29 30
|
[
"stackoverflow",
"0032957090.txt"
] | Q:
How to mutate variable in array?
I'm trying to retrieve a mutable variable from my array, but something goes wrong. The question is in the code.
var baloonsArray = NSMutableArray()
override func didMoveToView(view: SKView) {
var baloon = SKSpriteNode(imageNamed: "baloon") //"Variable "baloon" was never mutated; consider changing to "let" constant"
baloon.xScale = 0.1
baloon.yScale = 0.1
ballonsArray.addObject(baloon)
addChild(baloon)
}
override func touchesBegan(touches: Set<UITouch>, withEvent event: UIEvent?) {
for touch in touches {
for var baloon in baloonsArray {
baloon.xScale = 0.2 //Here xcode telling me: "Cannot assign to property: "ballon" is immutable". But i declared him as var. With Objective-C there was no problems. What should i do?
}
}
}
When I'm trying to change the baloon xScale in the touchesBegan function, Xcode tells me: "Cannot assign to property: 'baloon' is immutable". But I declared it as var. With Objective-C there were no problems. What should I do?
A:
The problem is your use of NSMutableArray. To Swift, it is opaque. That is, Swift doesn't know what is in it. But Swift has strict typing. So it doesn't know what properties its elements have, and it isn't going to let you assign to a property it doesn't understand. So if you really want to use NSMutableArray, you must cast its elements when you fetch them, so Swift knows what they are:
for baloon in baloonsArray {
let baloonsprite = baloon as! SKSpriteNode
baloonsprite.xScale = 0.2
}
That works. However, it would be better not to use NSMutableArray in the first place. Instead, declare baloonsArray as a mutable Swift array:
var baloonsArray = [SKSpriteNode]()
That takes care of the problem in a much neater way. Now Swift knows what is in this array - it is SKSpriteNode objects. And Swift knows that it is okay to assign to an SKSpriteNode's xScale.
You need to get used to Swift's strict typing, including the fact that the elements of an array are typed.
|
[
"stackoverflow",
"0006387132.txt"
] | Q:
Self-destructive Button inside Column
How to create a Button which will be displayed only when the value of some global FrontEnd setting is False and will self-destruct with entire row of the Column after pressing it setting this value to True?
I need something like this:
Column[{"Item 1", "Item 2",
Dynamic[If[
Last@Last@Options[$FrontEnd, "VersionedPreferences"] === False,
Button["Press me!",
SetOptions[$FrontEnd, "VersionedPreferences" -> True]],
Sequence @@ {}]]}]
But with this code the Button does not disappear after pressing it. Is it possible to make it self-destructive?
The final solution based on ideas by belisarius and mikuszefski:
PreemptProtect[SetOptions[$FrontEnd, "VersionedPreferences" -> False];
b = True];
Dynamic[Column[
Join[{"Item 1", "Item 2"},
If[Last@Last@Options[$FrontEnd, "VersionedPreferences"] === False &&
b == True, {Button[
Pane[Style[
"This FrontEnd uses shared preferences file. Press this \
button to set FrontEnd to use versioned preferences file (all the \
FrontEnd settings will be reset to defaults).", Red], 300],
AbortProtect[
SetOptions[$FrontEnd, "VersionedPreferences" -> True];
b = False]]}, {}]], Alignment -> Center],
Initialization :>
If[! Last@Last@Options[$FrontEnd, "VersionedPreferences"], b = True,
b = False]]
The key points are:
introducing an additional Dynamic variable b and binding it to the value of Options[$FrontEnd, "VersionedPreferences"],
wrapping the entire Column construct with Dynamic instead of using Dynamic inside Column.
A:
Perhaps
PreemptProtect[SetOptions[$FrontEnd, "VersionedPreferences" -> False]; b = True];
Column[{"Item 1", "Item 2", Dynamic[
If[Last@Last@Options[$FrontEnd, "VersionedPreferences"]===False && b == True,
Button["Here!", SetOptions[$FrontEnd, "VersionedPreferences"->True];b=False],
"Done"]]}]
Edit
Answering your comment. Please try the following. Encompassing the Column[ ] with Dynamic[ ] allows resizing it:
PreemptProtect[SetOptions[$FrontEnd, "VersionedPreferences" -> False]; b = True];
Dynamic[
Column[{
"Item 1",
"Item 2",
If[Last@Last@Options[$FrontEnd, "VersionedPreferences"] === False && b == True,
Button["Press me!", SetOptions[$FrontEnd, "VersionedPreferences" -> True]; b=False],
Sequence @@ {}]}]]
A:
Hmm, dunno if I get it right, but maybe this:
x = True;
Dynamic[Column[{Button["reset", x = True],
If[x, Button["Press me", x = False]]}]
]
|
[
"sharepoint.stackexchange",
"0000028776.txt"
] | Q:
Claims authentication with AD
I'm very new at this claims based authentication, and I just set up a sharepoint site with claims, with FBA access through a regular asp.net membership db.
This works just fine.
What I now need is to also be able to login with AD, on a different domain than the server is hosted on.
That is to say, the server is on domain A and I need to allow users on domain B to log on too. Do I have to use ADFS to accomplish this? Is there no quick and easy way of doing it?
/Dynde
A:
If you can create a trust relationship between the two domains, then you can add "Integrated Windows authentication"
If you cannot create a trust relationship, then you will likely need an external identity provider for SharePoint. ADFS2.0 is Microsoft's identity provider in this space.
|
[
"superuser",
"0001429218.txt"
] | Q:
How can I view the traffic of this application?
I have a mobile app which connects to my car's wifi and reports data on it.
The app must be connected to the AP the car provides in order to work. I have tested whether the car connects to a public internet API and the phone retrieves the data from there, but this is not the case. It seems that the phone connects directly to the car (via the provided AP) in some way.
I am trying to see the network traffic here in order to replicate the functionality outside of the provided app.
Things I have tried:
Viewing HTTP and HTTPS traffic via Proxy (running app via proxy)
To do this, I connected the phone directly to the AP as required but proxied the phone's HTTP and HTTPS traffic via mitmproxy running on a laptop on the same network. Running the app showed traffic, but not for the related items I'm after (which update in the app).
Running wireshark
I ran Wireshark on a laptop connected to the same network (as described above) and captured all traffic. I then loaded the app and reviewed the capture. There seemed to be no direct network connectivity between the devices. There was a broadcast message from the AP, giving an address and description of an API, but the phone never seems to connect to this; I think it may be irrelevant for my needs.
(I tried to telnet to the address/port described and it returned what looked like XML, but I was unable to send any traffic to get any more information.)
Neither method I have tried seems to show meaningful traffic between the two devices.
Is there something obvious Ive missed? What else can I try to view the phone communicating with the car?
A:
Using Wireshark to capture unicast traffic between two other devices on an 802.11 network requires using 802.11 monitor mode, which is tricky to pull off. If you were only capturing in promiscuous mode (which is common for wired Ethernet packet captures on hubs or via port mirroring on managed switches), then you wouldn't have seen any unicast traffic to/from other wireless devices.
You should read up on the Wireshark site about how to do 802.11 monitor mode packet captures, and how to decrypt the unicast data frames you capture, if the network uses WPA2-PSK (or original WPA-PSK or even WEP).
Here are some highlights of what you need to deal with:
In your Wireshark machine, you must have a WNIC and driver that supports 802.11 monitor mode.
Your monitor-mode-capable WNIC must support all the same flavors of 802.11 transmissions that your AP and the target wireless device (your phone) support. For example, if your AP and phone both support 802.11ac, but your monitor mode WNIC doesn't, it won't be able to receive those 802.11ac transmissions. Even if all three devices support 802.11ac, if the AP and phone support, say, 2 spatial streams (a.k.a. "2x2:2"), but your monitor mode WNIC only supports 1 spatial stream (1x1:1), then your Wireshark machine won't be able to see anything your phone or AP transmit using 2 spatial streams.
If your Wireshark machine is too far from the AP and wireless client, it might not get enough signal strength to be able to receive the packets reliably.
If your car's AP uses WPA2-PSK (or original WPA-PSK), and you have no option to disable security, then in order to decrypt traffic to/from the phone, you must capture the WPA2-PSK 4-way handshake that happens when the phone joins the network. Capturing that handshake, and knowing the PSK or passphrase for the network, allows you to decrypt all of the packets you capture from that session. But if the phone falls asleep or otherwise leaves and rejoins the network, that's a new session and you'll need to be sure to capture the WPA2 4-way handshake for that new session if you want to decrypt any of the packets from that new session.
|
[
"stackoverflow",
"0015025963.txt"
] | Q:
Response encoding with node.js "request" module
I am trying to get data from the Bing search API, and since the existing libraries seem to be based on old discontinued APIs I though I'd try myself using the request library, which appears to be the most common library for this.
My code looks like
var SKEY = "myKey...." ,
ServiceRootURL = 'https://api.datamarket.azure.com/Bing/Search/v1/Composite';
function getBingData(query, top, skip, cb) {
var params = {
Sources: "'web'",
Query: "'"+query+"'",
'$format': "JSON",
'$top': top, '$skip': skip
},
req = request.get(ServiceRootURL).auth(SKEY, SKEY, false).qs(params);
request(req, cb)
}
getBingData("bookline.hu", 50, 0, someCallbackWhichParsesTheBody)
Bing returns some JSON and I can work with it sometimes, but if the response body contains a large amount of non-ASCII characters, JSON.parse complains that the string is malformed. I tried switching to an ATOM content type, but there was no difference; the XML was invalid. Inspecting the response body as available in the request() callback actually shows garbled text.
So I tried the same request with some python code, and that appears to work fine all the time. For reference:
r = requests.get(
'https://api.datamarket.azure.com/Bing/Search/v1/Composite?Sources=%27web%27&Query=%27sexy%20cosplay%20girls%27&$format=json',
auth=HTTPBasicAuth(SKEY,SKEY))
stuffWithResponse(r.json())
I am unable to reproduce the problem with smaller responses (e.g. limiting the number of results) and unable to identify a single result which causes the issue (by stepping up the offset).
My impression is that the response gets read in chunks, transcoded somehow and reassembled back in a bad way, which means the json/atom data becomes invalid if some multibyte character gets split, which happens on larger responses but not small ones.
Being new to node, I am not sure if there is something I should be doing (setting the encoding somewhere? Bing returns UTF-8, so this doesn't seem needed).
Anyone has any idea of what is going on?
FWIW, I'm on OSX 10.8, node is v0.8.20 installed via macports, request is v2.14.0 installed via npm.
A:
I'm not sure about the request library, but the default Node.js http module works well for me. It also seems a lot easier to read than your library, and the response does indeed come back in chunks.
http://nodejs.org/api/http.html#http_http_request_options_callback
or for https (like your req) http://nodejs.org/api/https.html#https_https_request_options_callback (the same really though)
For the options, a little tip: use url.parse
var url = require('url');
var params = '{}'
var dataURL = url.parse(ServiceRootURL);
var post_options = {
hostname: dataURL.hostname,
port: dataURL.port || 443, // default to the https port, since the service URL is https
path: dataURL.path,
method: 'GET',
headers: {
'Content-Type': 'application/json; charset=utf-8',
'Content-Length': params.length
}
};
obviously params needs to be the data you want to send
|
[
"stackoverflow",
"0005526396.txt"
] | Q:
Maximize tree height while minimizing children count of any node?
I am currently faced with the following problem:
Given is a tree with an unchangeable root node and n children.
I need to optimize this tree so that:
The children count of any node is minimized (only talking about the direct children of a node here, not their children or the like)
As a result of this, the tree height is maximized
The tree is descending in order, so that always node > child
All nodes are < root node.
However, sometimes a node is only < root node and neither < nor > another node.
Any ideas, hints or the like would be greatly appreciated.
Thank you.
A:
From your description, it sounds as if you just want to: (1) sort the nodes into descending order, then (2) make each node a child of its predecessor if its value is strictly smaller than the predecessor's, and a sibling of its predecessor otherwise. This way, the height of the tree is simply the number of distinct values, which is the biggest it can possibly be given your third condition.
I can't help suspecting that you're wanting something more complicated. Am I missing the point somehow?
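A sketch of that construction in Python (the node values are hypothetical, and the question's caveat about incomparable nodes is ignored here by treating values as totally ordered):

```python
def build_chain(values):
    """Sort descending, then chain the nodes: a strictly smaller value
    becomes a child of its predecessor; an equal value becomes a sibling.
    Returns the levels of the resulting tree, root-most first."""
    levels = []
    for v in sorted(values, reverse=True):
        if levels and levels[-1][0] == v:
            levels[-1].append(v)   # same value: sibling at the same depth
        else:
            levels.append([v])     # strictly smaller: one level deeper
    return levels

# Height equals the number of distinct values, the maximum possible
# under the "node > child" constraint.
print(build_chain([5, 3, 9, 3, 7]))  # [[9], [7], [5], [3, 3]]
```

Each level holds nodes of one value, so the chain is as tall as the strict ordering allows while every node has at most one child group.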
|
[
"judaism.stackexchange",
"0000028988.txt"
] | Q:
Under what circumstances is a false virgin put to death?
Deut 22:13–21 says that if a man marries a virgin, is intimate with her, and then claims that she is not a virgin, and there is no evidence that she is a virgin, that she is stoned. That seems to be essentially taking the man's word that he didn't do anything, and anyway on the surface that seems extreme. So what are the exact conditions for this to happen?
A:
In ancient times, the two stages of effectuating a marriage were separate (Qiddushin/Nissuin). The first stage, Qiddushin, is often translated as "betrothal"; however, it is of greater significance than the English word connotes. Once Qiddushin has taken place, the couple are fully married; they simply have not yet been intimate with one another (this takes place after Nissuin). The case under consideration is where it is suspected that the wife has had an adulterous relationship after Qiddushin has already taken place and prior to the Nissuin. In order to be liable for capital punishment there would need to be witnesses testifying that they witnessed her commit the adulterous act.
|
[
"movies.stackexchange",
"0000100770.txt"
] | Q:
Why is the earth supposed to move thus far?
Warning, spoilers ahead.
In the film The Wandering Earth, the Earth is supposed to move to another solar system because the Sun is getting stronger and larger over a short period of time.
However, why is the Earth not simply moved a little farther out toward the outer planets, instead of actually trying to "wander" to another system?
Is there an in-universe explanation I missed?
A:
This is explained in the first five minutes of the movie: in less than 100 years the Sun will engulf Earth, and in 300 years our solar system will no longer exist. Considering that we need a sun for warmth etc., Earth needs to be moved to another sun, i.e. out of our current solar system.
The Sun is rapidly degenerating and expanding. At this rate, the Sun will engulf Earth
in 100 years. Within 300 years, the Solar System will no longer exist.
|
[
"stackoverflow",
"0056526299.txt"
] | Q:
assigning html entity code in javascript button not working
I am trying to create a button in JavaScript and assign the HTML entity code &#9660; to it. This is the HTML entity code of a down arrow. Instead of showing the down arrow, the entire code is displayed in the button as-is.
Below is how i am trying to achieve it
<!DOCTYPE html>
<html>
<head></head>
<body>
<script type="text/javascript">
var btn = document.createElement("input");
btn.setAttribute('type','button');
btn.setAttribute('value','&#9660;');
document.body.appendChild(btn);
</script>
</body>
</html>
Below is the output of my code
I expect that the button should display down arrow but for some reason it is not showing.
A:
You should create a button element and set its innerHTML instead of using setAttribute; entity references like &#9660; are only decoded when parsed as HTML, and an input element has no inner content for innerHTML to fill:
var btn = document.createElement("button");
btn.innerHTML = '&#9660;';
document.body.appendChild(btn);
http://jsfiddle.net/dapx0nsy/
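A quick way to see why the original setAttribute call showed the raw text: entity references are only decoded by the HTML parser, never inside JavaScript string literals. A small sketch, runnable in Node or a browser console:

```javascript
// The entity reference typed into a JS string stays literal text:
const entity = '&#9660;';
console.log(entity.length);   // 7, the characters & # 9 6 6 0 ;

// A Unicode escape produces the actual down-arrow character:
const arrow = '\u25BC';
console.log(arrow);           // ▼
```

So an alternative fix is btn.value = '\u25BC', which works even on an input element.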
|
[
"stackoverflow",
"0039094657.txt"
] | Q:
Android : How to get larger profile pic from Facebook using FirebaseAuth?
I am using FirebaseAuth to login user through FB. Here is the code:
private FirebaseAuth mAuth;
private FirebaseAuth.AuthStateListener mAuthListener;
private CallbackManager mCallbackManager;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
FacebookSdk.sdkInitialize(getApplicationContext());
// Initialize Firebase Auth
mAuth = FirebaseAuth.getInstance();
mAuthListener = firebaseAuth -> {
FirebaseUser user = firebaseAuth.getCurrentUser();
if (user != null) {
// User is signed in
Log.d(TAG, "onAuthStateChanged:signed_in:" + user.getUid());
} else {
// User is signed out
Log.d(TAG, "onAuthStateChanged:signed_out");
}
if (user != null) {
Log.d(TAG, "User details : " + user.getDisplayName() + user.getEmail() + "\n" + user.getPhotoUrl() + "\n"
+ user.getUid() + "\n" + user.getToken(true) + "\n" + user.getProviderId());
}
};
}
The issue is that the photo I get from user.getPhotoUrl() is very small. I need a larger image and can't find a way to get one. Any help would be highly appreciated.
I have already tried this
Get larger facebook image through firebase login
but it's not working; although those answers are for Swift, I don't think the API should differ.
A:
It is not possible to obtain a profile picture from Firebase that is larger than the one provided by getPhotoUrl(). However, the Facebook graph makes it pretty simple to get a user's profile picture in any size you want, as long as you have the user's Facebook ID.
String facebookUserId = "";
FirebaseUser user = FirebaseAuth.getInstance().getCurrentUser();
ImageView profilePicture = (ImageView) findViewById(R.id.image_profile_picture);
// find the Facebook profile and get the user's id
for(UserInfo profile : user.getProviderData()) {
// check if the provider id matches "facebook.com"
if(FacebookAuthProvider.PROVIDER_ID.equals(profile.getProviderId())) {
facebookUserId = profile.getUid();
}
}
// construct the URL to the profile picture, with a custom height
// alternatively, use '?type=small|medium|large' instead of ?height=
String photoUrl = "https://graph.facebook.com/" + facebookUserId + "/picture?height=500";
// (optional) use Picasso to download and show to image
Picasso.with(this).load(photoUrl).into(profilePicture);
A:
Two lines of code:
FirebaseUser user = firebaseAuth.getCurrentUser();
String photoUrl = user.getPhotoUrl().toString();
photoUrl = photoUrl + "?height=500";
Simply append "?height=500" at the end.
A:
If someone is looking for this but for Google account using FirebaseAuth. I have found a workaround for this. If you detail the picture URL:
https://lh4.googleusercontent.com/../.../.../.../s96-c/photo.jpg
The /s96-c/ specifies the image size (96x96 in this case), so you just need to replace that value with the desired size.
String url = FirebaseAuth.getInstance().getCurrentUser().getPhotoUrl().toString();
url = url.replace("/s96-c/","/s300-c/");
You can analyze your photo URL to see if there is any other way to change its size.
As I said in the beginning, this only works for Google accounts. Check @Mathias Brandt's answer to get a custom Facebook profile picture size.
EDIT 2020:
Thanks to Andres SK and @alextouzel for pointing this out. Photo URLs format have changed and now you can pass URL params to get different sizes of the picture. Check https://developers.google.com/people/image-sizing.
|
[
"stackoverflow",
"0040329742.txt"
] | Q:
Removing object properties with Lodash
I have to remove unwanted object properties that do not match my model. How can I achieve it with Lodash?
My model is:
var model = {
fname: null,
lname: null
}
My controller output before sending to the server will be:
var credentials = {
fname: "xyz",
lname: "abc",
age: 23
}
If I use
_.extend(model, credentials)
I am getting the age property too. I am aware I can use
delete credentials.age
but what if I have more than 10 unwanted objects? Can I achieve it with Lodash?
A:
You can approach it from either an "allow list" or a "block list" way:
// Block list
// Remove the values you don't want
var result = _.omit(credentials, ['age']);
// Allow list
// Only allow certain values
var result = _.pick(credentials, ['fname', 'lname']);
If it's reusable business logic, you can partial it out as well:
// Partial out a "block list" version
var clean = _.partial(_.omit, _, ['age']);
// and later
var result = clean(credentials);
Note that Lodash 5 will drop support for omit
A similar approach can be achieved without Lodash:
const transform = (obj, predicate) => {
return Object.keys(obj).reduce((memo, key) => {
if(predicate(obj[key], key)) {
memo[key] = obj[key]
}
return memo
}, {})
}
const omit = (obj, items) => transform(obj, (value, key) => !items.includes(key))
const pick = (obj, items) => transform(obj, (value, key) => items.includes(key))
// Partials
// Lazy clean
const cleanL = (obj) => omit(obj, ['age'])
// Guarded clean
const cleanG = (obj) => pick(obj, ['fname', 'lname'])
// "App"
const credentials = {
fname:"xyz",
lname:"abc",
age:23
}
const omitted = omit(credentials, ['age'])
const picked = pick(credentials, ['age'])
const cleanedL = cleanL(credentials)
const cleanedG = cleanG(credentials)
A:
Get a list of properties from model using _.keys(), and use _.pick() to extract the properties from credentials to a new object:
var model = {
fname:null,
lname:null
};
var credentials = {
fname:"xyz",
lname:"abc",
age:23
};
var result = _.pick(credentials, _.keys(model));
console.log(result);
<script src="https://cdnjs.cloudflare.com/ajax/libs/lodash.js/4.16.4/lodash.min.js"></script>
If you don't want to use Lodash, you can use Object.keys(), and Array.prototype.reduce():
var model = {
fname:null,
lname:null
};
var credentials = {
fname:"xyz",
lname:"abc",
age:23
};
var result = Object.keys(model).reduce(function(obj, key) {
obj[key] = credentials[key];
return obj;
}, {});
console.log(result);
A:
You can easily do this using _.pick:
var model = {
fname: null,
lname: null
};
var credentials = {
fname: 'abc',
lname: 'xyz',
age: 2
};
var result = _.pick(credentials, _.keys(model));
console.log('result =', result);
<script src="https://cdn.jsdelivr.net/lodash/4.16.4/lodash.min.js"></script>
But you can simply use pure JavaScript (specially if you use ECMAScript 6), like this:
const model = {
fname: null,
lname: null
};
const credentials = {
fname: 'abc',
lname: 'xyz',
age: 2
};
const newModel = {};
Object.keys(model).forEach(key => newModel[key] = credentials[key]);
console.log('newModel =', newModel);
|
[
"math.stackexchange",
"0001737080.txt"
] | Q:
Exercise #9 in chapter 11 of Rudin's Principles of Mathematical Analysis.
Suppose $f$ is Lebesgue integrable on $[a,b]$. Let $F(x)$=$\int_{a}^x fdt$. Then prove that $F$ is continuous on $[a,b]$.
I know that $F$ is continuous almost everywhere, because $F'(x)=f(x)$ almost everywhere on $[a,b]$. But does this imply that $F$ is continuous on $[a,b]$?
A:
The fact that $F$ is differentiable almost everywhere on its own doesn't imply its continuity on $[a,b]$. Also you shouldn't be using that because it is a deeper fact (it uses the absolute continuity of $F$). The continuity of $F$ is more straightforward: suppose $(x_n)_n$ is a sequence tending to $x$ with $a\le x,x_n\le b$. Then,
$$|F(x)-F(x_n)| = \left|\int_{x_n}^{x} f(t) dt\right|\le \int_a^b \mathbf{1}_{[x_n,x]}(t) |f(t)| dt$$
The functions $g_n(t)=\mathbf{1}_{[x_n,x]}(t)|f(t)|$ converge to $0$ pointwise and are dominated by the Lebesgue integrable function $|f|$. Thus, by the dominated convergence theorem, the previous integral tends to $0$ as $n\to\infty$, which implies continuity of $F$ at $x$.
Nitpicky note: Here we mean $[x_n,x]$ to be the convex hull of the points $x$, $x_n$ regardless of which is larger.
|
[
"stackoverflow",
"0017996957.txt"
] | Q:
fe_sendauth: no password supplied
database.yml:
# SQLite version 3.x
# gem install sqlite3
#
# Ensure the SQLite 3 gem is defined in your Gemfile
# gem 'sqlite3'
development:
adapter: postgresql
encoding: utf8
database: sampleapp_dev #can be anything unique
#host: localhost
#username: 7stud
#password:
#adapter: sqlite3
#database: db/development.sqlite3
pool: 5
timeout: 5000
# Warning: The database defined as "test" will be erased and
# re-generated from your development database when you run "rake".
# Do not set this db to the same as development or production.
test:
adapter: postgresql
encoding: utf8
database: sampleapp_test #can be anything unique
#host: localhost
#username: 7stud
#password:
#adapter: sqlite3
#database: db/test.sqlite3
pool: 5
timeout: 5000
production:
adapter: postgresql
database: sampleapp_prod #can be anything unique
#host: localhost
#username: 7stud
#password:
#adapter: sqlite3
#database: db/production.sqlite3
pool: 5
timeout: 5000
pg_hba.conf:
# TYPE DATABASE USER ADDRESS METHOD
# "local" is for Unix domain socket connections only
local all all trust
# IPv4 local connections:
host all all 127.0.0.1/32 trust
# IPv6 local connections:
host all all ::1/128 trust
# Allow replication connections from localhost, by a user with the
# replication privilege.
#local replication postgres md5
#host replication postgres 127.0.0.1/32 md5
#host replication postgres ::1/128 md5
I changed the METHOD in the first three lines from md5 to trust, but I still get the error.
And no matter what combinations of things I try in database.yml, when I do:
~/rails_projects/sample_app4_0$ bundle exec rake db:create:all
I always get the error:
fe_sendauth: no password supplied
I followed this tutorial to get things setup:
https://pragtob.wordpress.com/2012/09/12/setting-up-postgresql-for-ruby-on-rails-on-linux
Mac OSX 10.6.8
PostgreSQL 9.2.4 installed via enterpriseDB installer
Install dir: /Library/PostgreSQL/9.2
A:
After making changes to the pg_hba.conf or postgresql.conf files, the cluster needs to be reloaded to pick up the changes.
From the command line: pg_ctl reload
From within a db (as superuser): select pg_reload_conf();
From PGAdmin: right-click db name, select "Reload Configuration"
Note: the reload is not sufficient for changes like enabling archiving, changing shared_buffers, etc -- those require a cluster restart.
|
[
"mathematica.stackexchange",
"0000192493.txt"
] | Q:
My function for computing a partial derivative generates an error message
Apologies, I'm still new at Mathematica. I have defined the function
ψ[x_, y_] :=
(C1 Cos[(k y)/Sqrt[Gy]] + C2 Sin[(k y)/Sqrt[Gy]])
(C3 Cosh[(k x)/Sqrt[Gx]] +C4 Sinh[(k x)/Sqrt[Gx]])
When I type in ψ[-a, y] everything is fine. and I get the output I wanted:
(C1 Cos[(k y)/Sqrt[Gy]] + C2 Sin[(k y)/Sqrt[Gy]])
(C3 Cosh[(a k)/Sqrt[Gx]] - C4 Sinh[(a k)/Sqrt[Gx]])
But when I type in
dx[x_] := D[ψ[x, y], x]
dx[-a]
I get the error message
General::ivar: -a is not a valid variable.
but Mathematica still outputs more or less the correct answer
$$
\frac{\partial \left(\left(\text{C3} \cosh \left(\frac{a k}{\sqrt{\text{Gx}}}\right)-\text{C4} \sinh \left(\frac{a k}{\sqrt{\text{Gx}}}\right)\right) \left(\text{C1} \cos \left(\frac{k y}{\sqrt{\text{Gy}}}\right)+\text{C2} \sin \left(\frac{k y}{\sqrt{\text{Gy}}}\right)\right)\right)}{\partial -a}
$$
Is there a problem with when I take the partial derivative of ψ?
A:
As Roman pointed out, the problem was that I was using := instead of =. The following code solved the problem:
dx[x_,y_] = D[ψ[x, y], x]
|
[
"stackoverflow",
"0014150492.txt"
] | Q:
C++ basic type wrappers
I'd like to make some basic wrapper classes around simple types in C++. Since you can't just inherit from base types like you should be able to, I'm just using a wrapper class. The problem is, I want to be able to cast directly to that type, since that cast would be totally valid. The compiler just doesn't let you and I can't find a way to tell it that it's ok without running a cast method which kills performance. Is there any way to do this?
Here's what I have for the conversion constructor:
class Integer32
{
public:
Integer32(int value) { this->Value = value; }
int Value;
};
Does the compiler know to skip that and just assign it directly from an int? How do I test this to make sure since it's rather important...
A:
Provide a non-explicit constructor to allow conversion and casting from a base type to your wrapper.
Provide a non-explicit conversion operator to allow conversion and casting from your wrapper to a basic type.
class my_wrapper {
public:
    my_wrapper(int); // converting constructor: casting from int
    operator int();  // conversion operator: casting to int
};
(make them explicit to allow explicit casting but not implicit conversion)
|
[
"stackoverflow",
"0027790198.txt"
] | Q:
ObjectOutputStream/ObjectInputStream with sockets
I'm making an online game using ObjectOutputStream... to exchange data. Since I have different types of data I'm using the write/readObject() functions only. I was wondering if sending a String for commands was good practice or if there is a better, safer solution.
When I say I send commands with a String, for example I have a chat and I want to ignore a user, so I send to the server "block +username"; if I want to add a friend I send "addfriend +username", etc.
A:
Well, using serialized objects might create a lot of interoperability work if you are going for a serious installation. It can also become a bottleneck. I would (besides the obvious option of using any other messaging protocol) stick to DataOutputStream if you are looking for a compact home-grown protocol.
Sending strings as serialized Java objects is the most surprising thing to do (and won't easily allow you to have clients or servers in different languages).
If you want to be cool, use JSON and WebSocket. :)
|
[
"stackoverflow",
"0017620333.txt"
] | Q:
Do I need to call EasyTracker.getInstance().setContext in Fragment
I have an Activity, which is going to switch among various different Fragment views from time to time. In my Activity code, let's say I have
@Override
public void onStart() {
super.onStart();
... // The rest of your onStart() code.
EasyTracker.getInstance().activityStart(this); // Add this method.
}
@Override
public void onStop() {
super.onStop();
... // The rest of your onStop() code.
EasyTracker.getInstance().activityStop(this); // Add this method.
}
In every Fragment code, do I need to have?
EasyTracker.getInstance().setContext(this.getActivity());
A:
Do I need to call EasyTracker.getInstance().setContext in Fragment?
Not necessarily. It depends on where you are using the EasyTracker in your Fragment. If your Activity's onStart() method has been called before you use the EasyTracker, then you will be fine and EasyTracker will use the Activity's Context.
However, if your Fragment uses the EasyTracker before the Activity's onStart() finishes (for example in onCreateView() or the Fragment's onStart()), then the EasyTracker will not have a Context yet and you will get an exception.
|
[
"stackoverflow",
"0007051608.txt"
] | Q:
Parameter Naming conventions for function in python
This isn't a big deal of a question but it bothers me, so I decided to ask for inspiration.
Assume a function defined like this:
def store(book=None, author=None):
pass
When calling this function like this:
book = Book()
author = Author()
store(book=book, author=author)
do I have to fear side effects because of book=book and author=author? I am tempted to redefine the function to
def store(thebook=None, theauthor=None):
pass
but it seems a little verbose. Any suggestions?
A:
You have no side effects to fear. It's no different semantically than it would be if you had just called
store(book, author)
The values are stored in new variables inside the function's scope, subject to all the normal python scoping rules.
Keyword arguments do free you to use other, more specific variable names if you need to though. The names book and author are pretty generic, which is appropriate within a function, which should be reusable and therefore a little bit more abstract. But using more specific names might be appropriate within the calling scope. From that perspective, thebook and theauthor aren't really any different from book and author; you'd want to do something more like -- say -- local_book or borrowed_book -- or whatever would more precisely describe the book in question.
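A minimal sketch (plain Python; the reassignment inside the function is purely illustrative) of why there is nothing to fear: a keyword argument like book=book only binds the caller's value to a new local name, so rebinding that name inside the function leaves the caller's variable untouched.

```python
def store(book=None, author=None):
    # Rebinding the local name has no effect on the caller's variable;
    # only mutating a shared mutable object (e.g. book.title = ...) would.
    book = "replaced inside store"
    return book

book = "original"
result = store(book=book, author=None)
assert result == "replaced inside store"
assert book == "original"  # the caller's binding is unchanged
```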
A:
First, there's no ambiguity or side-effect in saying store(book=book, author=author). The interpreter has no problem telling argument names from names in general.
Now, concerning the second part of your question, I don't think you should change the names of the function's arguments: after all, store() does perform its work from a book and an author, in the general sense.
Your local variables, however, might be more precise about what they contain. They probably don't reference any book or author, but one having specific characteristics like, say, the current book or the best-selling author.
So, if you wish to disambiguate names, I would suggest you rename your local variables instead.
|
[
"meta.stackoverflow",
"0000328017.txt"
] | Q:
Can you ask a question you know is way over your own head?
As you know, one of the guidelines here is we should show what we've already tried in answering our own question when we post it.
But what if the poster has good reason to believe it would take something like a semester of study for him/her to make significant (or any) headway?
Example situation: I was reading a blog post where the author presents a simple and short example in a language that I'm somewhat familiar with (ClojureScript). Then he says this example "should seem impossible to someone familiar with JavaScript". And I'd like to know what a best-practice JavaScript version would look like.
So for me to show what work I've already tried in answering this question, I'd need the ability to tackle things on level Seems-Impossible in a language I'm not yet familiar with (JavaScript). But I know I occasionally get stuck on level Normal in a language I am familiar with (ClojureScript) and have never been accused of accomplishing anything that seems impossible.
Would it be inappropriate for me to ask the community to show me something that is over my head? If yes, where would be a better place to ask?
Edit:
Re: "semester of study"
I don't mean it would take a semester for someone to teach me what it takes, because I imagine the answer would involve a small number of key features (like 3) in JavaScript combined in a particular way
Nor do I mean it would take me a semester to learn those 3 features by myself
What I mean is it would take me a semester (or much more?) to survey all the features in JavaScript to possibly figure out which 3 features would be involved and how to combine them, especially if I'm going to get interrupted often by rogue elements in my own life situation
Edit 2: I accepted Gert Arnold's answer but would like to also call attention to the one by @Suragch, because it's what you can do first.
A:
To answer your question as stated in the title:
Can you ask a question you know is way over your own head?
No, because you don't know what you're asking1, as in, you probably have no idea what it takes to answer your question satisfactorily.
Your question is bound to be either -
too broad, because you don't have the ability to wrap the question into a specific programming problem.
unclear, because you don't know how to describe the problem sufficiently.
Focusing on your example, questions about external blogs or tutorials are generally not too well-received at Stack Overflow. Readers, also future readers, are forced to find the problem statement in an external source. The answer may contain a solution that's incomprehensible without knowing the external source. And the link to the external source can break any moment.
A good question is stand-alone as much as possible, ideally an mcve. It's very hard to make a good question without showing own coding efforts. Consequently, it's very hard to ask a good question that's "way over your head", because you don't know where to begin trying it yourself.
More than anything, questions like this will look like academic exercises. Stack Overflow is not a good fit for such questions and I don't think any of the Stack Exchange sites are.
1 Not meaning to offend. Curiosity and a drive to venture into the unknown are commendable features. It's only that they don't come in handy at Stack Overflow...
A:
I'm going to say, yes, it is ok to ask a question that is way over your head if you also do the following:
Research the topic thoroughly yourself first (maybe not spending an entire semester but probably at least a day or two). Read all the related information that you can find. Even if the topic is still over your head, this will give you the vocabulary to present your question in a clear way.
At least attempt some code, even if this is just reproducing some code you found online. Often the attempt will open up some new doors of understanding. It will also make your question be better received on Stack Overflow.
Phrase your question as much as possible in a way that can be specifically answered and is non-opinion based. Even though I don't know ClosureScript or JavaScript, it seems to me that Travis J just answered your question quite succinctly and well.
I've asked a few questions here that were way over my head at the time. Even when certain questions were negatively received by some people, I've also gotten helpful comments and answers by other people. These helped me to deepen my understanding so that the next time I could ask a question that was a little less over my head.
If you make a regular practice of writing thoughtful questions following my advice above, you probably won't get question banned.
A:
If it would take you a semester of study to get to the point where you could make a reasonable attempt to solve that problem, and be able to ask a specific question related to that problem, then your question is Too Broad.
An SO question isn't here to take the place of an entire college course.
|
[
"stackoverflow",
"0002186978.txt"
] | Q:
Create a plist iPhone SDK
I am trying to set up a method that creates a .plist file at a given file path, however my current code only works for modifying a plist. Is there an easy way to create the plist file? This is simply a first draft of the method, the final version will take a variable for the course_title. Thanks
- (void)writeToPlist{
NSString *filePath = [NSString stringWithFormat:@"%@/course_title", DOCUMENTS_FOLDER];
NSMutableDictionary* plistDict = [[NSMutableDictionary alloc] initWithContentsOfFile:filePath];
[plistDict setValue:@"1.1.1" forKey:@"ProductVersion"];
[plistDict writeToFile:filePath atomically: YES];
}
Thanks for the code goes to http://www.ipodtouchfans.com/forums/showthread.php?t=64679
A:
You just init an empty dictionary using
NSMutableDictionary* plistDict = [NSMutableDictionary dictionary];
instead of reading it from the file. Then you proceed as before.
|
[
"drupal.stackexchange",
"0000030083.txt"
] | Q:
How to print page/node templates being used on a page?
I'm using Drupal 6.19 and trying to debug why a custom page template for my login page isn't being picked up.
I've tried to use devel themer, but I don't see the "widget" when I'm not logged in, and of course I can't see the login page when I am.
I'm hoping for a script that I can drop in on my page tpl that will print the current template being used. Or maybe there are other suggestions?
A:
For Drupal 7, there's a new 'theme debug mode'. This eliminates the need for installing the Theme Developer module and its dependencies, if all you want is to see the templates being used on a page.
As of Drupal 7.33, Drupal core has a theme debug mode that can be
enabled and disabled via the theme_debug variable. Theme debug mode
can be used to see possible template suggestions and the locations of
template files right in your HTML markup (as HTML comments). To enable
it, add this line to your settings.php:
$conf['theme_debug'] = TRUE;
see https://www.drupal.org/node/223440
|
[
"stackoverflow",
"0032289066.txt"
] | Q:
Random text value from Input text and display in another input text
I am having trouble with get random text from other input text value like this.
when user write name
<input type="text" name="txtuser" onchange="getalias(this)">
and user outomaticly get the alias
<input id="alias" type="text" name="txtalias">
onchange code
<script>
function getalias() {
...
}
</script>
and the result is like this
name : riski
alias: rki_ (or other)
Can anyone give me an example of how to randomize the value from "txtuser" and display it in the alias input text?
A:
Try this:
function shuffle(array) {
var currentIndex = array.length,
temporaryValue, randomIndex;
// While there remain elements to shuffle...
while (0 !== currentIndex) {
// Pick a remaining element...
randomIndex = Math.floor(Math.random() * currentIndex);
currentIndex -= 1;
// And swap it with the current element.
temporaryValue = array[currentIndex];
array[currentIndex] = array[randomIndex];
array[randomIndex] = temporaryValue;
}
return array;
}
function getalias(that) {
var val = that.value;
var alias = '';
var splitVal = val.split('');
var shuffled = shuffle(splitVal);
for (var i = 0, len = shuffled.length; i < len; i++) {
alias += shuffled[i];
}
document.getElementById('alias').value = alias;
}
<input type="text" name="txtuser" onkeyup="getalias(this)">
<input id="alias" type="text" name="txtalias">
|
[
"tex.stackexchange",
"0000129734.txt"
] | Q:
LaTeX complaining commands in Verbatim
Known Fact:
Have read this LaTeX complaining about illegal parameter number
With a Verbatim environment, every LaTeX commands will be handled as plain text. However, I need a working command in the Verbatim.
Objective:
I have a sample.cls file that will be \VerbatimInput to a main .tex file so that I can refer to later in the main file for explanations once it is read in. What follows is the code I used to input sample.cls
\fvset{frame=none,numbers=left,numbersep=3pt,firstline=1,lastline=20}
\VerbatimInput[commandchars=+\[\]]{sample.cls}
and an image after read in is shown below (The sample.cls is a makeshift here containing the following lines.)
For example, the block between line (9) and line (13) is what I am refering to. Notice that the label format I used in sample.cls are +label[vrb:test1] and +label[vrb:test2], which are not seen here due to being two control sequences in the Verbertim environment.
The referring scheme by inserting a command in the Verbatim structure works fine, but LaTeX complains those commands because they are embedded in the sample.cls that the main file needs (in use.)
In short the labels, +label[vrb:test1] and +label[vrb:test2], are equivalent to acting commands \label{vrb:test1} and \label{vrb:test1} in the Verbatim environment, but LaTeX renders errors.
Question:
Although hitting ENTER key can still finish the compilation, I am writing to seek help solving this dilemma (i.e., no need to hit ENTER key) and how to avoid these contradictions.
Error signal obtained, Window, TeXworks environment
60
LaTeX Error: Missing \begin{document}.
See the LaTeX manual or LaTeX Companion for explanation.
Type H <return> for immediate help.
...
l.60 \setlength{\topmargin}{-0.5cm} +
label[vrb:psize1]
89
LaTeX Error: Missing \begin{document}.
See the LaTeX manual or LaTeX Companion for explanation.
Type H <return> for immediate help.
...
l.89 +
label[vrb:cc]
175
LaTeX Error: Missing \begin{document}.
See the LaTeX manual or LaTeX Companion for explanation.
Type H <return> for immediate help.
...
l.175 +
label[vrb:title1]
sample.cls
\NeedsTeXFormat{LaTeX2e}
\ProvidesClass{sample}[Sample class]
%-------------------------- initial code -----------------------
\DeclareOption*{\PassOptionsToClass{\CurrentOption}{report}}
+label[vrb:test1]
%-------------------------- initial code -----------------------
\DeclareOption*{\PassOptionsToClass{\CurrentOption}{report}}
%-------------------------- executation ---------------------
\ProcessOptions\relax
\LoadClass[a4paper,openright]{report}
+label[vrb:test2]
\endinput
Main file
\documentclass[12pt]{sample}
\usepackage{fancyvrb}
\begin{document}
\fvset{frame=none,numbers=left,numbersep=3pt,firstline=1,lastline=20}
\VerbatimInput[commandchars=+\[\]]{sample.cls}
For example, the block between line (\ref{vrb:test1}) and line (\ref{vrb:test2}) is
what I am refering to. Notice that the label format I used in sample.cls are +
label[vrb:test1] and + label[vrb:test2], which are not seen here due to being two
control sequences in Verbertim. environment.
\end{document}
A:
I get no error from \VerbatimInput; but of course your class file is illegal and the error message
LaTeX Error: Missing \begin{document}.
See the LaTeX manual or LaTeX Companion for explanation.
Type H <return> for immediate help.
...
l.60 \setlength{\topmargin}{-0.5cm} +
label[vrb:psize1]
is quite clear: when loading the class, + starts typesetting, which is disallowed for obvious reasons when classes or packages are being loaded.
You could obviate by hiding the label behind a comment character:
\NeedsTeXFormat{LaTeX2e}
\ProvidesClass{sample}[Sample class]
%-------------------------- initial code -----------------------
\DeclareOption*{\PassOptionsToClass{\CurrentOption}{report}} % +label[vrb:test1]
%-------------------------- initial code -----------------------
\DeclareOption*{\PassOptionsToClass{\CurrentOption}{report}}
%-------------------------- execution ---------------------
\ProcessOptions\relax
\LoadClass[a4paper,openright]{report} % +label[vrb:test2]
\endinput
However, square brackets will not be printed because they will be considered like braces when the text is processed by \VerbatimInput.
|
[
"stackoverflow",
"0061852588.txt"
] | Q:
How can I maintain type narrowing inside Array.find method?
In the following code snippet, I'm receiving the following typescript compilation error within the Array.find method. I would expect that my type is narrowed given the if statement checking that context.params.id is not undefined.
Is there a reason this type is losing its narrowing within the find method? What options do I have to successfully narrow this type?
TS2345 [ERROR]: Argument of type 'string | undefined' is not assignable to parameter of type 'string'.
Type 'undefined' is not assignable to type 'string'.
type Book = {
id: number;
}
const books: Book[] = [];
type Context = {
response: {
body: any;
},
params?: {
id?: string
}
}
const handler = (context: Context) => {
if (context.params && context.params.id) {
context.response.body = books.find(
(book) => book.id === parseInt(context.params.id) // Error
);
}
};
A:
One option is to assign context.params.id to a new variable outside of find callback.
const handler = (context: Context) => {
if (context.params && context.params.id) {
const id = parseInt(context.params.id);
context.response.body = books.find(
(book) => book.id === id // OK
);
}
};
|
[
"stackoverflow",
"0018136158.txt"
] | Q:
What should I do when I have data with NULL features?
I have a large amount of data, and some of the information is missing (NULL). Should I skip these examples in the learning process, or do something else?
A:
It really depends on the data and what you think makes sense. If you think that the NULL information might be meaningful, one option is to create a new variable that denotes the entries that have NULL data. For example, if you have true/false categorical data with NULL values like
1, 0, 1, NULL, NULL, 1, 1, NULL
you could transform it to:
1 0 1 0 0 1 1 0 <- true
0 1 0 0 0 0 0 0 <- false
0 0 0 1 1 0 0 1 <- NULL
Something similar can be done with floating point values.
Of course, you could also throw out the NULL values or replace them with an educated guess (maybe the average) if you don't think the fact that they are NULL will be helpful.
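As a concrete sketch of the transformation above (plain Python, with None standing in for NULL; the row names are just illustrative):

```python
# Categorical true/false data with missing (NULL) entries
data = [1, 0, 1, None, None, 1, 1, None]

# One indicator row per category, including one for NULL itself
true_row  = [1 if v == 1 else 0 for v in data]
false_row = [1 if v == 0 else 0 for v in data]
null_row  = [1 if v is None else 0 for v in data]

assert true_row  == [1, 0, 1, 0, 0, 1, 1, 0]
assert false_row == [0, 1, 0, 0, 0, 0, 0, 0]
assert null_row  == [0, 0, 0, 1, 1, 0, 0, 1]
```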
|
[
"stackoverflow",
"0016425279.txt"
] | Q:
warning: incompatible implicit declaration of built-in function log2
So we have this program in C, we need to use the base-2 logarithmic function, to get the base 2 logarithm of n.
Here's the code:
#include <math.h>
int partSize(int n) {
return log2(n);
}
But when compiling, it gives us the following warning.
sim.c: In function partSize : sim.c:114: warning: incompatible
implicit declaration of built-in function log2
This is the command we use
gcc $file -o $name.out -lm
A:
Here's the thing, 99.99999% of the time, when someone says "this basic function that's available to the world doesn't work" they're wrong. The fraction of the time when something this basic breaks, there's already an army with pitchforks somewhere.
#include <math.h>
#include <stdio.h>
int partSize(int n){
return log2(n);
}
int main(int argc, char *argv[]) {
int ret = -1;
ret = partSize(16);
printf("%d\n", ret);
return 0;
}
Compile with:
> gcc -std=c99 a.c -o log2.out -lm
> ./log2.out
> 4
Yup, it's working.
In C, using a previously undeclared function constitutes an implicit declaration of the function. In an implicit declaration, the return type is int. So the error tells you that log2() was not defined in your code that leads to some issue in the code you didn't post.
When I skip the -lm I get:
a.c:(.text+0x11): undefined reference to `log2'
collect2: ld returned 1 exit status
..that doesn't look right. OK, when I add the -lm but remove the #include <math.h> I get:
a.c: In function ‘partSize’:
a.c:5:5: warning: implicit declaration of function ‘log2’ [-Wimplicit-function-declaration]
Hey, there's your warning! So you're probably correct that you're including the -lm but for some reason the #include math.h has a problem. Could be that:
math.h is missing
you didn't really include it in the file, is it in a #def and being compiled out for example?
Your version of math.h doesn't define log2
|
[
"stackoverflow",
"0025869391.txt"
] | Q:
Why do we use use case diagrams in object-oriented analysis and design even if use cases are not considered object-oriented?
UML notation says use cases are drawn to point out the functional requirements in the problem domain; they by no means give information about objects or classes the way Data Flow Diagrams or Entity Relationship diagrams do. So why do we use use case diagrams in object-oriented analysis and design even if use cases are not considered object-oriented?
A:
The use case diagram is meant to shed light on the main functionalities of the system, and it emphasizes a perspective that presents the system as a black box merely existing for a sole mission: deliver the promised service to the actor.
At this point we don't really care about OOP, as you can definitely use a use case diagram for any other type of analysis.
UML is just a set of visual tools to allow a unified expression of different perspectives of the system.
In case you are using the Unified Process, it advocates identifying the use cases first, then exploding every use case into collaborating entities (classes) and establishing the static collaboration between them by harnessing the class diagram toolbox.
|
[
"stackoverflow",
"0020927420.txt"
] | Q:
How do I create a user with a random password and store it to a file using puppet
I want to create a user for a service (postgres, rabbitmq...) using a random generated password. This password should then be written to a file on the host. This file, containing env vars is then used by an application to connect to those services.
I don't want to store these passwords elsewhere.
postgresql::server::db { $name:
user => $name,
password => postgresql_password($name, random_password(10)),
}
Then i want to insert this password in the form PG_PASS='the same password' into a config file but the whole thing should happen only if the user is not already present.
A:
In pure Puppet
A trick is to define a custom type somehow like :
define authfile($length=24,$template,$path) {
$passwordfile = "/etc/puppet/private/${::hostname}/${::title}"
$password = file($passwordfile,'/dev/null')
@@exec { "generate-${title}":
command => "openssl rand -hex -out '$passwordfile' 24",
creates => $passwordfile,
tag => 'generated_password'
}
file { $path:
content => template($template);
}
}
And on your puppetmaster, have something like :
Exec <<| tag == 'generated_password' |>>
You can then pass in the $template variable the name of a template that will have the variable $password available. You will need to be careful about how these authfile types are defined (as this creates files on the puppetmaster, you will want to guard against malicious facts), and you will need to run puppet once on the host (so that the exported resource is created), once on the puppetmaster (so that the secret file is generated), then once again on the host (so that the secret file is read) for it to work.
With a custom function
Another solution is to write a custom function, random_password, that will use the fqdn and sign it with a secret that is stored on the puppetmaster (using a HMAC) to seed the password. That way you will have a password that can't be guessed without getting the secret, and no extra puppet roundtrips.
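For illustration, here is a minimal Python sketch of that second idea. The function name and inputs are hypothetical, but the point stands: an HMAC of the node's fqdn under a puppetmaster-held secret yields a password that is stable across runs (so repeated agent runs stay idempotent) yet cannot be reproduced without the secret.

```python
import hashlib
import hmac

def random_password(fqdn, secret, length=10):
    # Sign the node's fqdn with the puppetmaster-held secret; the HMAC
    # output seeds a password that is stable for that node but is
    # unguessable without the secret.
    digest = hmac.new(secret.encode(), fqdn.encode(), hashlib.sha256).hexdigest()
    return digest[:length]

pw = random_password("db01.example.com", "puppetmaster-secret")
# Same host + same secret -> same password on every run.
assert pw == random_password("db01.example.com", "puppetmaster-secret")
```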
|
[
"stackoverflow",
"0004445238.txt"
] | Q:
LESS CSS preprocessor: Is there a way to map a single color to an rgb and rgba definition?
I'm trying to write a block in the CSS preprocessor LESS that will do the following:
@transparent_background(@color; @alpha: .8)
{
background: @color;
background: rgba(<color R value>, <color G value>, <color B value>, @alpha);
}
Is there any way to get the RGB values out of @color if it's a standard hex definition (i.e. #rrggbb)? Is there a way to do this if @color is defined some other way?
EDIT: SOLUTION
@transparent_background(@color; @alpha: .8)
{
background: @color;
background: @color + rgba(0, 0, 0, @alpha);
}
A:
Try this:
background: @color - rgba(0, 0, 0, 1.0) + rgba(0, 0, 0, @alpha);
The subtraction will clear the alpha channel on @color then you just add the desired @alpha to the alpha channel. Colors have the full suite of operators and they work component by component when operating on two colors; colors are stored as RGBA components internally so this should work. Also, the alpha channel is normalized to be in the interval [0, 1.0] so subtracting 1.0 from the alpha channel should clear it without causing any problems.
I don't have CSS LESS set up right now so I can't check but this is worth a shot.
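To see why the arithmetic works, here is the same component-wise idea sketched in Python. The tuple representation is only an illustration of LESS storing colors as RGBA components internally; it is not LESS's actual implementation.

```python
def clear_alpha_then_set(color, alpha):
    """Mimic `@color - rgba(0,0,0,1.0) + rgba(0,0,0,@alpha)`:
    operators act component by component on (r, g, b, a)."""
    r, g, b, a = color
    # Subtracting 1.0 clears the normalized [0, 1.0] alpha channel,
    # then adding @alpha sets the desired transparency.
    a = max(0.0, a - 1.0) + alpha
    return (r, g, b, a)

assert clear_alpha_then_set((255, 0, 0, 1.0), 0.8) == (255, 0, 0, 0.8)
```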
A:
None of the solutions work anymore, but this one does (thanks to @Elyse):
.alpha(@color, @alpha: 0.8) {
color: @color;
color: hsla(hue(@color), saturation(@color), lightness(@color), @alpha);
}
The hsla() function, while not advertised in the LESS website, is defined in the source code.
|
[
"stackoverflow",
"0050150148.txt"
] | Q:
Efficient and fastest way to search in a list of strings
The following function returns the number of words from a list that contain the exact same characters as the word entered. The order of the characters in the words is not important. However, say there is a list that contains millions of words. What is the most efficient and fastest way to perform this search?
Example:
words_list = ['yek','lion','eky','ekky','kkey','opt'];
if we were to match the word "key" with the words in the list, the function only returns "yek" and "eky" since they share the exact same characters as "key", regardless of the order.
Below is the function I wrote
from itertools import permutations
def find_a4(words_list, word):
# all possible permutations of the word that we are looking for
# it's a set of words
word_permutations = set([''.join(p) for p in permutations(word)])
word_size = len(word)
count = 0
for word in words_list:
# in the case of word "key",
# we only accept words that have 3 characters
# and they are in the word_permutations
if len(word) == word_size and word in word_permutations:
count += 1
return count
A:
A dictionary whose key is the sorted version of the word:
word_list = ['yek','lion','eky','ekky','kkey','opt']
from collections import defaultdict
word_index = defaultdict(set)
for word in word_list:
idx = tuple(sorted(word))
word_index[idx].add(word)
# word_index = {
# ('e', 'k', 'y'): {'yek', 'eky'},
# ('i', 'l', 'n', 'o'): {'lion'},
# ('e', 'k', 'k', 'y'): {'kkey', 'ekky'},
# ('o', 'p', 't'): {'opt'}
# }
Then for querying you would do:
def find_a4(word_index, word):
idx = tuple(sorted(word))
return len(word_index[idx])
Or if you need to return the actual words, change it to return word_index[idx].
Efficiency: querying runs in O(1) time on average.
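Putting the two pieces together with the sample list from the question:

```python
from collections import defaultdict

word_list = ['yek', 'lion', 'eky', 'ekky', 'kkey', 'opt']

# Index every word under its sorted-character key.
word_index = defaultdict(set)
for word in word_list:
    word_index[tuple(sorted(word))].add(word)

def find_a4(word_index, word):
    return len(word_index[tuple(sorted(word))])

print(find_a4(word_index, 'key'))    # 2 -> matches 'yek' and 'eky'
print(find_a4(word_index, 'kkey'))   # 2 -> matches 'ekky' and 'kkey'
```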
|
[
"stackoverflow",
"0061666545.txt"
] | Q:
Sorting a parent relation by child relation in laravel eloquent
I have a problem finding the correct query for my situation.
I have a User model, a Product model, a UserProduct model that has product_id and user_id and price, and a UserPromotion model which has a user_product_id and discount.
So I want to have them all nested and I want to have the highest discount (if there is one) for each user.
So my query is like this:
$promotions = User::whereHas('products')->with(['userProduct.product','userProduct.promotion'])->take(10)->get();
I don't know how to get the highest discount promotion for each user.
A:
I think you should reach the table user_promotions using leftJoin and then select the max discount:
$promotions = User::whereHas('products')
->with(['userProduct.product','userProduct.promotion'])
->leftJoin('user_products','user_products.user_id','users.id')
->leftJoin('user_promotions','user_promotions.user_product_id','user_products.id')
->select('users.*',DB::raw('MAX(discount)'))->take(10)->get();
Please be careful about the table names; they should match the tables in your db exactly.
|
[
"stackoverflow",
"0021540172.txt"
] | Q:
Pass-through authentication not working. IIS 7
On IIS 7 I set up an application called "XYZ", and an application pool for it.
I set the identity of this application pool to a custom user, let's call it "Mario".
Mario has NTFS access to the folder/files in which XYZ points to (remote share).
In the XYZ authentication settings, only windows authentication is enabled:
In the providers for windows authentication, only NTLM is active:
Physical path credentials for XYZ are set to application user / pass-through:
So the problem is, when I go to http://server.com/XYZ I get challenged (which is to be expected), but it does not matter what I put in; it looks like the authentication token is not accepted, and the browser challenges me again.
I have looked at logs for Active Directory and the requests are coming through, but even when the user is successfully authenticated the browser challenges again.
HERE'S THE GOAL: to allow directory listing, but to use credentials provided by the user for NTFS access. Right now I can't get that to work. THANK YOU!
Here's the Web.config file:
A:
The trick to getting this to work is to add 'Users' to the permissions. Set up IIS just like you have, with NTLM as the top provider and only Windows Authentication enabled (you can get rid of the section in the web.config; all you need is <authentication mode="Windows" />), and add IIS_IUSRS and Users to the permission set.
|
[
"stackoverflow",
"0001567929.txt"
] | Q:
Website Safe Data Access Architecture Question
I'd like to get some opinions on whether what I am planning to do is safe, that is, safe from people hacking into data.
We have a database in city A.
In city B we have a company that has an internal network, and a server that has two application servers on it that each run an application, App. 1 and App. 2.
App. 1 serves on port 80, and is exposed to the internet.
I want App. 2 to only be exposed to App. 1, via web services(?), meaning people on the internet and intranet would not be able to "see" App. 2.
I want App. 2 to have a private communication link to the database in city A. I need to somehow ensure that the communication between App. 2 and the database in city A is secure, but I also need to have the data in App. 1.
Does this general set up accomplish what I need to do?
My main objective is data security between App. 2 and the database in city A.
Any general recommendations would be appreciated.
Thanks
A:
There are multiple things that you can do to protect your data. I would look at using SSL over https, providing authentication for the caller of the web services (maybe a public/private key system or client certificates), and looking to set IP restriction rules for the caller on the hosting server - at a minimum.
Edit: For some reason I had thought you said an application accessing a web service, although you say application to application. Don't know if you are going to use web services, but that is the path we have followed.
|
[
"stackoverflow",
"0044008942.txt"
] | Q:
Using IN subquery in Entity Framework using Linq
I have the following tables in my database:
- Reservation
- TrainStation
- Train
- Traveller
- Reservation_Traveller
- TrainSeats: Attributes are: Train_Id, Seat, Traveller_Id
I want to find the TrainSeats rows of a particular Reservation
I have a Reservation object that contains an ICollection<Traveller> property containing the travellers of which I want to remove their seats from the TrainSeats table. I fetched the reservation object from the database like this:
var reservation = db.Reservation.Where(r => r.Id == id).FirstOrDefault();
So I want to do something like this:
db.TrainSeats.Where(ts => ts.Traveller_Id IN reservation.Travellers)
A:
First select the travelers id:
var ids=reservation.Travellers.Select(e=>e.Id);
Then use Contains extension method which is translated into IN in sql:
var query=db.TrainSeats.Where(ts => ids.Contains(ts.Traveller_Id));
I guess if you use the FK property and navigation properties you can do it in one query:
var query= db.Travellers.Where(e=>e.ReservationId==id).SelectMany(t=>t.TrainSeats);
|
[
"stackoverflow",
"0022341177.txt"
] | Q:
How to search a date in timestamp datatype in MySQL
I have a timestamp datatype in my column (example data: 12/4/2013 8:57:10 PM). How can I make a query on my timestamp column using the format 12/4/2013 to retrieve all information within that row? Thank you.
A:
It is easier when you use the default date format YYYY-MM-DD
select * from your_table
where date(date_column) = '2013-04-12'
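The same idea can be sketched with Python's bundled sqlite3, whose date() function behaves like MySQL's DATE() on ISO-formatted values (the table and column names here are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (created_at TEXT)")
# Store timestamps in the default YYYY-MM-DD HH:MM:SS format.
conn.execute("INSERT INTO t VALUES ('2013-04-12 20:57:10')")
conn.execute("INSERT INTO t VALUES ('2013-04-13 09:00:00')")

# date() truncates the timestamp to its date part for comparison.
rows = conn.execute(
    "SELECT * FROM t WHERE date(created_at) = '2013-04-12'"
).fetchall()
print(rows)   # [('2013-04-12 20:57:10',)]
```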
|
[
"stackoverflow",
"0043377702.txt"
] | Q:
How to set padding or margin of element in code behind in Zebble for Xamarin?
I set it like code below, but it did not worked.
ProductList.Margin = 10 //Exception
A:
If ProductList is a View you can change its padding and margin like this:
ProductList.Css.Padding = 10;
ProductList.Css.Margin = 20;
|
[
"stackoverflow",
"0045512248.txt"
] | Q:
Catching "Failed to load resource" when using the Fetch API
I'm trying to catch a bunch of errors related to the same origin policy when using the Fetch API but without any success:
window.onerror = (message, file, line, col, error) => console.log(error)
window.addEventListener('error', (error) => console.log(error))
try {
fetch('https://www.bitstamp.net/api/ticker/').catch(e => {
console.log('Caught error:')
console.log(e)
console.log(JSON.stringify(e))
})
}
catch (e) {
console.log('try-catch')
console.log(e)
}
The errors I want to catch only appear in the web console:
See code example here: https://github.com/nyg/fetch-error-test
How can I catch these errors to provide an on-screen message?
EDIT: The fetch's catch block is actually executed.
fetch('https://www.bitstamp.net/api/ticker/')
.then(response => response.text())
.then(pre)
.catch(e => {
pre(`Caught error: ${e.message}`)
})
function pre(text) {
var pre = document.createElement('pre')
document.body.insertAdjacentElement('afterbegin', pre)
pre.insertAdjacentText('beforeend', text)
}
pre {
border: 1px solid black;
margin: 20px;
padding: 20px;
}
A:
As far as I remember, you cannot catch browser-driven exceptions in your typical try/catch, or in a catch chain on fetch.
CORS exceptions are thrown intentionally, both so that the user browsing the site knows of such abnormalities (if you may call them that) and to prevent any leak of possibly secure information from the called API/server.
Read here for a previous SO discussion on whether you could catch these errors with an exception handler.
If the request throws an error that can be part of the response, like a status error, then you may catch it and show a custom message.
|
[
"askubuntu",
"0000457171.txt"
] | Q:
What are the differences between gnome, gnome-shell and gnome-core?
I am running Ubuntu GNOME and apt says that I have gnome-shell installed, but not gnome or gnome-core.
$ apt-cache policy gnome
gnome:
Installed: (none)
Candidate: 1:3.8+4ubuntu3
Version table:
1:3.8+4ubuntu3 0
500 http://in.archive.ubuntu.com/ubuntu/ trusty/universe i386 Packages
$ apt-cache policy gnome-shell
gnome-shell:
Installed: 3.10.4-0ubuntu5
Candidate: 3.10.4-0ubuntu5
Version table:
*** 3.10.4-0ubuntu5 0
500 http://in.archive.ubuntu.com/ubuntu/ trusty/universe i386 Packages
100 /var/lib/dpkg/status
$ apt-cache policy gnome-core
gnome-core:
Installed: (none)
Candidate: 1:3.8+4ubuntu3
Version table:
1:3.8+4ubuntu3 0
500 http://in.archive.ubuntu.com/ubuntu/ trusty/universe i386 Packages
Why does apt say I have not installed gnome, although I'm using GNOME as the desktop environment?
A:
This is just an issue of metapackages. The Debian world (and I believe the RedHat one as well) has collected certain programs that are used together into easy-to-install metapackages. So, the package gnome is actually a shortcut for installing all sorts of goodies:
aisleriot, alacarte, avahi-daemon, cheese, cups-pk-helper, desktop-base, evolution, evolution-plugins, file-roller, gedit, gedit-plugins, gimp, gnome-applets, gnome-color-manager, gnome-core, gnome-documents, gnome-games, gnome-media, gnome-nettool, gnome-orca, gnome-shell-extensions, gnome-tweak-tool, gstreamer1.0-libav, gstreamer1.0-plugins-ugly, hamster-applet, inkscape, libgtk2-perl, libreoffice-calc, libreoffice-gnome, libreoffice-impress, libreoffice-writer, nautilus-sendto, network-manager-gnome, rhythmbox, rhythmbox-plugin-cdrecorder, rhythmbox-plugins, rygel-playbin, rygel-preferences, rygel-tracker, seahorse, shotwell, simple-scan, sound-juicer, telepathy-gabble, telepathy-rakia, telepathy-salut, tomboy, totem, totem-plugins, tracker-gui, transmission-gtk, vinagre, xdg-user-dirs-gtk, browser-plugin-gnash, gdebi, nautilus-sendto-empathy, telepathy-idle, dia-gnome, gnome-boxes, gnucash, libreoffice-evolution, planner
This is the full Gnome desktop and is not needed to run the Gnome desktop environment. So, while you have gnome-shell installed, you don't have all the associated applications like games and email client etc that come with the full desktop environment.
This is not a problem and it does not hinder you from using Gnome in any way.
gnome-core is also a meta package, it will install the official "core" modules of the Gnome desktop:
at-spi2-core, baobab, brasero, caribou, caribou-antler, dconf-gsettings-backend, dconf-tools, empathy, eog, evince, evolution-data-server, firefox, or, fonts-cantarell, gconf2, gdm, gkbd-capplet, glib-networking, gnome-backgrounds, gnome-bluetooth, gnome-calculator, gnome-contacts, gnome-control-center, gnome-dictionary, gnome-disk-utility, gnome-font-viewer, gnome-icon-theme, gnome-icon-theme-extras, gnome-icon-theme-symbolic, gnome-keyring, gnome-menus, gnome-online-accounts, gnome-packagekit, gnome-panel, gnome-power-manager, gnome-screensaver, gnome-screenshot, gnome-session, gnome-settings-daemon, gnome-shell, gnome-sushi, gnome-system-log, gnome-system-monitor, gnome-terminal, gnome-themes-standard, gnome-user-guide, gnome-user-share, gsettings-desktop-schemas, gstreamer1.0-plugins-base, gstreamer1.0-plugins-good, gstreamer1.0-pulseaudio, gtk2-engines, gucharmap, gvfs-backends, gvfs-bin, libatk-adaptor, libcanberra-pulse, libcaribou-gtk-module, libcaribou-gtk3-module, libgtk-3-common, libpam-gnome-keyring, metacity, mousetweaks, nautilus, notification-daemon, pulseaudio, sound-theme-freedesktop, tracker-gui, vino, yelp, zenity, network-manager-gnome, gnome,
Note that the gnome metapackage also installs the gnome-core metapackage. In any case, the main point here is that metapackages are not needed. You can install each of their component packages manually so lacking one or more metapackages does not imply that anything is actually missing from your system.
|
[
"salesforce.stackexchange",
"0000187539.txt"
] | Q:
Passing Parameters Commandbutton
I don't know what is wrong with the code. I read a lot about commandButton and param, but it doesn't work.
The Debug shows me that the value of the MS_Item is Null.
07:44:00.0 (8771917)|USER_DEBUG|[8]|DEBUG|my MS_Item: null
PAGE:
<apex:page controller="clicktest">
<apex:form>
<apex:commandButton action="{!Stats_In}" value="click" reRender="">
<apex:param name="MS_Item" value="MS_Item_45382" assignTo="{!MS_Item}" />
</apex:commandButton>
</apex:form>
</apex:page>
CLASS:
public class clicktest {
Public String MS_Item {get; set;}
public PageReference Stats_In() {
MedienServer_Stats__c sta = new MedienServer_Stats__c();
sta.MS_Item__c = MS_Item;
System.debug('my MS_Item: ' +MS_Item);
INSERT sta;
RETURN NULL;
}
}
A:
You need to add a dummy ID for reRender.
<apex:page controller="clicktest">
<apex:form>
<apex:commandButton action="{!Stats_In}" value="click" reRender="dummyId">
<apex:param name="MS_Item" value="MS_Item_45382" assignTo="{!MS_Item}" />
</apex:commandButton>
</apex:form>
</apex:page>
|
[
"unix.stackexchange",
"0000151657.txt"
] | Q:
Tokenize string from $REPLY in bash script
This is my first post, and I have no idea how I managed anything before StackExchage, Google, Wiki, GNU, Internet, the list goes on :)
I am trying to find a way to construct a SQL database generator bash script, which currently looks like this...
renice -n 19 $$;
idx=32768;
dbt='Radix_en';
cat Domains_en.txt;
cat Tables_en.txt;
while read;
do
checks="$(echo -n $REPLY | md5sum)";
checks=${checks%" -"};
echo "insert into $dbt values ($idx,'$(uuidgen)','${checks}',$REPLY);";
idx=$((idx+1));
done < Data.txt;
echo "commit;";
The data comes from Data.txt, currently in the form of:
'NUMBER','US_EN','LATIN','GREEK','GERMAN'
0,'zero','nulla','μηδέν','Null'
1,'one','Unum','ένα','ein'
The output is valid SQL (for Firebird):
create domain ...;
create domain ...;
commit;
create table ( ... );
create table ( ... );
commit;
insert into Radix_en values (32768,'dff0207a-591f-4435-9f8b-7b9b3e6ba2c1','d1f77359b3f7236806489ba3108c771f','NUMBER','US_EN','LATIN','GREEK','GERMAN');
insert into Radix_en values (32769,'5ef0e634-5c96-4ae4-92a8-0d68c02ffeb6','4e3f710600230cf0520bf32269511062',0,'zero','nulla','μηδέν','Null');
insert into Radix_en values (32770,'eae9cacc-3ee3-4471-afad-e5af201da435','9ab2f782988416431238ec63277b11df',1,'one','Unum','ένα','ein');
commit;
I would like to find a way to generate the MD5 checksum for every field, instead of the entire line of text including delimiters.
Data.txt format is not yet finalized, and I may change its format to anything which would makes this possible or easier.
Also, if there has to be several separate steps - fine, since the entire process should be scripted and automated. I was considering processing Data.txt first, then run it through this script, but I would like to simplify the process as much as possible. The number of different Data.txt files could be large, and I still have numerous other processors to include.
As a matter of fact, I am also trying to learn more about bash scripting, and I am rather hoping to find an expert approach and advice on this problem more than a specific solution.
I am not even sure if my post title matches the solution I need, and thus whether it is related to my question. I was not sure if I should post this on Superuser, where I usually visit, or here. So I posted here first; sorry if I am off a bit.
Thanks!
Sandor
...
Edited to add more on 08/23/2014 3:00 AM
Thanks to mikeserv using IFS is working, so my scripts now looks like this:
renice -n 19 $$ > /dev/null; #for now
idx=32768;
dbt='Radix_en';
cat Domains_en.txt;
cat Tables_en.txt;
while read;
do
gid="$idx,'$(uuidgen)'";
IFS=,; set -f # no gobbling!
echo "insert into $dbt values ($gid";
for field in $REPLY
do
printf '%s' ",$field,'";
printf '%s' "$field" | md5sum;
done | cut -d\ -f1;
echo "$var);";
idx=$((idx+1));
done < Data.txt;
The output is great, the line breaks are making text edit/search much easier while Firebird is still happy, except one thing..
Here is the output:
create domain ...;
create domain ...;
commit;
create table ( ... );
create table ( ... );
commit;
insert into Radix_en values (32768,'303f8957-57cf-4485-ace4-d21c7cf144e6'
,'NUMBER','722d79c16b51fe86610972b8d950038c
,'US_EN','b63fb39e32b062c76694bec58c4f8c67
,'LATIN','fd6f27a3c59111fc2a0b5e452595ef3d
,'GREEK','c081310697bb6b7d7bed5034824e2530
,'GERMAN','15db1d0e1b0861d8ac1f391db801493a
);
insert into Radix_en values (32769,'e7fdf095-d31c-4c59-a23b-7ea67db7aefb'
,0,'cfcd208495d565ef66e7dff9f98764da
,'zero','01b40535afbfd9611e910f58f4ab5146
,'nulla','584edd0b6638798dee53e2c23e84e2d1
,'μηδέν','cd3ed2f1039ed8668b4d48e742bd2e5b
,'Null','e0a93a9e6b0eb1688837d8bab9b4badb
);
insert into Radix_en values (32770,'a21916b5-2a05-4656-ad4e-c8cfee1abfcc'
,1,'c4ca4238a0b923820dcc509a6f75849b
,'one','7e31533231a12e4a560a18ac8cd02112
,'Unum','05d92bcbffbf59b375f25945e9af2dd0
,'ένα','826f5e2d5ba7ace48f4d6fe3c5e2925f
,'ein','dcc09a2cb665ca332d1689cb11aff592
);
commit;
The md5 hash is missing a delimiter at the end, and I have no idea how to negotiate the output with the pipes. What is it I am not understanding here?
Since in this particular case the data fields are going to hold code for programmable ICs, no extra characters are acceptable in the checksum between the delimiters, and so far that seems to hold. Again, the code is in ASCII and my delimiter is going to be something that is not part of ASCII, so it is safe. Firebird is also going to reject anything not ASCII.
If you would be so kind as to point me to how to finish this script, as I am already banging my head against some new issues IFS just showed me (yes, file paths on Windows). I'll try and see how this script is going to work with pure ASCII, then I would like to move on and make another post about some more issues.
Thanks again for your help!
Sandor
...
Edited to final on 08/30/2014 7:00 PM
Replacing cut with sed seems to work. Firebird field input still needs single quotes (') escaped by adding another of the same, and the current comma IFS delimiter in the data files still has to be replaced with something non-ASCII. Instead of recursive file lists this script still takes a single file as input. echo should probably be replaced by printf, and a whole lot more...
Here is the final script excluding the shebang:
renice -n 19 $$ >> Radix_en_log.txt;
idx=32768; dbt='Radix_en';
cat Domains_en.txt; cat Tables_en.txt;
while read; do
gid="$idx,'$(uuidgen)'";
IFS=,; set -f;
echo "insert into $dbt values ($gid";
for field in $REPLY
do
printf '%s' ",$field,'"; printf '%s' "$field" | md5sum;
done | sed "s/[ ][ ][-]/\'/g"; printf '%s\n' ");";
idx=$((idx+1));
done < Data.txt;
echo "commit;";
Here is the output:
create domain ...;
create domain ...;
commit;
create table ( ... );
create table ( ... );
commit;
insert into Radix_en values (32768,'2f675b86-b2b4-4e52-b000-e6a8cf0f3dca'
,'NUMBER','722d79c16b51fe86610972b8d950038c'
,'US_EN','b63fb39e32b062c76694bec58c4f8c67'
,'LATIN','fd6f27a3c59111fc2a0b5e452595ef3d'
,'GREEK','c081310697bb6b7d7bed5034824e2530'
,'GERMAN','15db1d0e1b0861d8ac1f391db801493a'
);
insert into Radix_en values (32769,'e2afcd65-9a1b-49e3-baf1-74b0619a4776'
,0,'cfcd208495d565ef66e7dff9f98764da'
,'zero','01b40535afbfd9611e910f58f4ab5146'
,'nulla','584edd0b6638798dee53e2c23e84e2d1'
,'μηδέν','cd3ed2f1039ed8668b4d48e742bd2e5b'
,'Null','e0a93a9e6b0eb1688837d8bab9b4badb'
);
insert into Radix_en values (32770,'f51b72eb-d64f-4e9e-ab49-8954df9505cd'
,1,'c4ca4238a0b923820dcc509a6f75849b'
,'one','7e31533231a12e4a560a18ac8cd02112'
,'Unum','05d92bcbffbf59b375f25945e9af2dd0'
,'ένα','826f5e2d5ba7ace48f4d6fe3c5e2925f'
,'ein','dcc09a2cb665ca332d1689cb11aff592'
);
commit;
Thanks!
Sandor
A:
The shell has a built-in variable expansion field separator. So if you have a string and your delimiter is solid you can do:
var=32768,'dff0207a-591f-4435-9f8b-7b9b3e6ba2c1','d1f77359b3f7236806489ba3108c771f','NUMBER','US_EN','LATIN','GREEK','GERMAN'
( IFS=,; set -f
for field in $var
do printf '\n%s\n\t' "$field - md5:" >&2
printf %s "$field" |
md5sum
done |
cut -d\ -f1
)
32768 - md5:
f43764367fa4b73ba947fae71b0223a4
dff0207a-591f-4435-9f8b-7b9b3e6ba2c1 - md5:
0983e6c45209f390461c1b1df9320674
d1f77359b3f7236806489ba3108c771f - md5:
07d82ab57ba81f991ab996bd7c5a0441
NUMBER - md5:
34f55eca38e0605a84f169ff61a2a396
US_EN - md5:
c9d3e580b7b102e864d9aea8703486ab
LATIN - md5:
0e869135050d24ea6e7a30fc6edbac6c
GREEK - md5:
d4cacc28e56302bcec9d7af4bba8c9a7
GERMAN - md5:
ed73cca110623766d7a2457331a4f373
That should give you a newline separated list of md5s - as it did me.
IFS=, is used to specify that when any variable type shell expansion is performed the shell should split it out on the , character rather than <space><newline><tab> - which is the default. set -f is used to specify that if the shell should encounter any file globs within an unquoted expansion it should not expand them - so echo * would print only * regardless of the contents of the current directory.
For every comma separated field in $var the shell does printf "$field" | md5sum - so once per field without separator strings as I take the question to mean. And last cut trims the few spaces and the - at the end of each output line as it receives them. Most of the output is actually to stderr - cut only ever sees the md5sums.
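As a cross-check, the same per-field hashing can be mimicked in Python: split on the comma, then hash each field by itself. One subtlety worth noting: quotes read from Data.txt via read are literal characters inside the field, whereas quotes in a shell assignment like var=... are removed by the shell before splitting, which is why the two md5 runs in this thread show different digests for the same-looking fields.

```python
import hashlib

# One raw line from Data.txt; the single quotes are literal characters
# in the file, so they are part of each quoted field, just as for `read`.
line = "0,'zero','nulla','μηδέν','Null'"

fields = line.split(',')   # plays the role of IFS=, word splitting
digests = [hashlib.md5(f.encode()).hexdigest() for f in fields]

for f, d in zip(fields, digests):
    print(f, d)
# The bare field "0" hashes to cfcd208495d565ef66e7dff9f98764da,
# matching the script's output above.
```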
|
[
"stackoverflow",
"0005252313.txt"
] | Q:
Can you access the concrete class from the abstract class?
I have an abstract class which represents an immutable value. Like strings, the immutable class has methods which appear to change the instance, but really just return a new instance with the changes. The abstract class has some of these methods which have some default behavior. Can I have this kind of create-modified-instance-and-return logic in the abstract class? I'm getting hung up because I don't know what to put in the ??? positions. It seems like what I need is a type T that represents the deriving type. Are there any such patterns out there to overcome this problem?
public abstract class BaseImmutable
{
public readonly object Value;
protected BaseImmutable(object val)
{
Value = val;
}
public ??? ModifiyButNotReally()
{
object newValue = GetNewObjectFromOld(Value);
return new ???(newValue);
}
}
public class DerivedImmutable : BaseImmutable
{
public DerivedImmutable(object val) : base(val)
{
}
}
ANSWER
This was easier than I suspected. A method can be made generic without making the class generic. With that in mind the only thing we have to do is constrain the generic argument to inherit from the base class and require it not be abstract by including a new() constraint. One last step I needed to do was to use reflection to get the non-parameterless constructor to create a new instance. I found this solution to be better than the generic class answer suggested by @Jason, which I have since learned is known as the Curiously Recurring Template Pattern. It also avoids the need to create a difficult-to-test/moq extension method in a separate static class as suggested by @bottleneck. However, @bottleneck's answer is what gave me the inspiration for the solution so the points go to him.
public abstract class BaseImmutable
{
public readonly object Value;
protected BaseImmutable(object val)
{
Value = val;
}
public T ModifiyButNotReally<T>() where T : BaseImmutable, new()
{
object newValue = GetNewObjectFromOld(Value);
var ctor = typeof(T).GetConstructor(new[] { typeof(object) });
return (T)ctor.Invoke(new object[] { newValue });
}
}
public class DerivedImmutable : BaseImmutable
{
public DerivedImmutable(object val) : base(val)
{
}
}
A:
You could do this:
abstract class BaseImmutable<T> {
// ...
public T ModifiedButNotReally() { // ... }
}
class DerivedImmutable : BaseImmutable<DerivedImmutable> { // ... }
where I've elided a ton of details.
That said, this does smell kind of bad. Basically, I'm giving you a mechanism to solve your problem, it's just not clear to me that whatever you're modeling is best modeled by the setup that you have.
A:
Maybe an extension method would be in order here?
public static T Modify<T>(this T immutable) where T:BaseImmutable{
//work your magic here
//return T something
}
|
[
"stackoverflow",
"0011540531.txt"
] | Q:
Where do i store username and userid? sessions or cookie?
There are many instances in my code where quick access to the logged in username and userid is needed. I currently use cookies. This isn't secure.
I thought sessions would be a solution but sessions expire.
Another option is to store a unique token in a cookie and then match it with a stored token in the database to retrieve the logged-in user's data. This is the most secure solution, but the problem I see with it is that there are many places in my code where the logged-in username and userid are needed, and querying all the time would use up resources unnecessarily (is this true?)
What is the solution?
A:
I'm going to try to coalesce everything that's been said in the comments into one answer. As such, please show other helpful users some love by upvoting their answers / comments! I'm also going to give a brief overview of how sessions work, to make the answer useful to a wider audience.
When a user logs into a website, a session is created to identify them. Sessions are handled in PHP by creating a session ID, then associating that ID with a variable store on the server side, which is accessed in PHP scripts using the $_SESSION superglobal. The session ID is stored in a cookie on the client-side, and identifies their session.
In order to remain secure, the session ID must be unique, random and secret. If an attacker guesses your session ID, they can create a cookie on their own computer using that same ID, and take over your session. They'd be logged in as you! This is obviously bad, so PHP uses a strong random number generator to create those session IDs. In order to make things more secure, you can enable SSL site-wide and inform the browser to only ever send the cookie over SSL, using the HTTPS-only flag. This might be overkill for your site, though.
In order to prevent leaked session IDs from being useful forever, or bad guys sneaking into your room after you've gone out to the shop, sessions use timeouts. It is best to have them expire after a reasonably short time - between 10 and 60 minutes depending on your security requirements. You can reset the timeout every time you view the page, so an active user doesn't get logged out.
In order to allow the user to be remembered (i.e. the "remember me" checkbox), you need to provide a cookie that works as an authentication token. Keep in mind that this token is, for all intents and purposes, the same as having your password. This means that if the bad guy steals the cookie, they can log into your account. In order to make this option safe, we can use a one-time token. These tokens should be single-use, random, long and secret. Treat them as if they were passwords!
Here's how to implement them:
Generate a random token. Use /dev/urandom if you can, otherwise you need several hundred values from mt_rand to get something "random" enough, then hash the resulting string with SHA1 to produce the token.
Use a strong password hashing algorithm (e.g. PBKDF2 or bcrypt) to create a hash of the token. Do not use SHA1 or MD5 for this purpose - they are NOT designed for hashing passwords!
Insert the hash into a database table, along with the ID of the user it belongs to and the date it was created.
Set the token in a cookie on the user's side.
When the user visits the site, is not logged in, and a login token cookie is detected, hash the token value from the cookie using the same algorithm you used in step 2, and look it up in the database. If it matches an entry, log the user in as that ID.
Delete the entry from the database and issue a new one (back to step 1).
You should also run a script that looks for very old session tokens (e.g. 3 months or more) and delete them. This will require the user to log in again when they come back after a long period of inactivity.
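The steps above can be sketched in a few lines. The answer's context is PHP, but for brevity this sketch is in Python; a plain dict stands in for the database table and all names are illustrative:

```python
# One-time "remember me" token scheme: store only a slow hash of the token,
# and burn each token on first use.
import hashlib
import os
import secrets

TOKEN_DB = {}  # (salt, digest) -> user_id, a stand-in for a real table

def issue_token(user_id):
    """Steps 1-4: create a random token, store only its PBKDF2 hash."""
    token = secrets.token_hex(32)          # random, long, secret
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", token.encode(), salt, 100_000)
    TOKEN_DB[(salt, digest)] = user_id     # never store the raw token
    return token                           # this value goes into the cookie

def redeem_token(token):
    """Steps 5-6: find a matching hash, log the user in, delete the entry.
    (A real table would look rows up by a selector column rather than
    scanning; the scan here keeps the demo short.)"""
    for (salt, digest), user_id in list(TOKEN_DB.items()):
        candidate = hashlib.pbkdf2_hmac("sha256", token.encode(), salt, 100_000)
        if secrets.compare_digest(candidate, digest):
            del TOKEN_DB[(salt, digest)]   # single-use: delete, then reissue
            return user_id
    return None
```

Redeeming a token logs the user in exactly once; a second attempt with the same token fails, which is what limits the damage of a stolen cookie.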
For a longer explanation of this, and many more important bits of information about secure web form login systems, read The Definitive Guide to Forms-Based Website Authentication and take a look at the OWASP project.
A:
If it is not needed on the client, make sure it does not end up there.
Since userIds are specific to a logged-in user and not to a specific computer, a cookie does not seem like the way to go.
Basic authentication in PHP is usually done with sessions, so you could just as well add the userId to the session.
If the session times are too short, increase the session time.
|
[
"stackoverflow",
"0038811029.txt"
] | Q:
Toggle individual menu list from Enter key
Practicing some jQuery, I have a simple unordered list that I want to expand when I press Enter, but only the one I press Enter on, not all of them. I'm possibly setting it up wrong. At the moment it's expanding both items.
<nav>
<ul>
<li class="menu"><a href="#">Menu 1</a>
<ul class="subMenu">
<li>
<a href="">Sub Menu 1</a>
</li>
</ul>
</li>
<li class="menu"><a href="#">Menu 2</a>
<ul class="subMenu">
<li>
<a href="">Sub Menu 1</a>
</li>
<li>
<a href="">Sub Menu 2</a>
</li>
</ul>
</li>
</ul>
</nav>
$('.menu').keydown(function(event){
var keycode = (event.keyCode ? event.keyCode : event.which);
if(keycode == '13'){
$('ul.subMenu').toggleClass('show');
}
});
A:
Change $('ul.subMenu') to $(this).children('ul.subMenu'). This will search only for children of the current element (not all of them) that match the ul.subMenu selector (as desired).
See the jQuery children([selector]) function for more info.
|
[
"stackoverflow",
"0056109254.txt"
] | Q:
Flask SQLAlchemy foreign key constraint incorrectly formed
I wrote the following code:
class User(db.Model):
id = db.Column(db.Integer, primary_key=True)
first_name = db.Column(db.String(255), nullable=True)
last_name = db.Column(db.String(255), nullable=True)
email = db.Column(db.String(255), unique=True, nullable=False)
password_expires_at = db.Column(db.DateTime, nullable=False)
active = db.Column(db.Boolean, nullable=False)
created_at = db.Column(db.DateTime, nullable=False,
default=datetime.utcnow)
updated_at = db.Column(db.DateTime, nullable=False,
default=datetime.utcnow)
session_id = db.Column(db.Integer, db.ForeignKey('session.id'),
nullable=False)
class Session(db.Model):
id = db.Column(db.String(255), primary_key=True)
ip = db.Column(db.String(255), unique=True, nullable=False)
user = db.relationship('User', backref='session')
db.create_all()
When running Flask I get the following error:
sqlalchemy.exc.OperationalError: (MySQLdb._exceptions.OperationalError) (1005, 'Can\'t create table `bugbot`.`user` (errno: 150 "Foreign key constraint is incorrectly formed")')
[SQL:
CREATE TABLE user (
id INTEGER NOT NULL AUTO_INCREMENT,
first_name VARCHAR(255),
last_name VARCHAR(255),
email VARCHAR(255) NOT NULL,
password_expires_at DATETIME NOT NULL,
active BOOL NOT NULL,
created_at DATETIME NOT NULL,
updated_at DATETIME NOT NULL,
session_id INTEGER NOT NULL,
PRIMARY KEY (id),
UNIQUE (email),
CHECK (active IN (0, 1)),
FOREIGN KEY(session_id) REFERENCES session (id)
)
]
I followed a tutorial to get to this code so I don't know what I'm doing wrong. Could anyone help me out?
A:
The column you are trying to place a constraint on (User.session_id) is of type Integer, whereas the column it references (Session.id) is of type String! That's why it is failing; make both types compatible and run it again, and it should work.
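As a concrete illustration, here is a sketch of models whose foreign key is well formed, using plain SQLAlchemy rather than Flask-SQLAlchemy (so the db. prefix is dropped; table and column names follow the question, everything else is illustrative):

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

class Session(Base):
    __tablename__ = "session"
    # The primary key is a String, so any column referencing it
    # must be a String as well.
    id = Column(String(255), primary_key=True)
    ip = Column(String(255), unique=True, nullable=False)
    user = relationship("User", backref="session")

class User(Base):
    __tablename__ = "user"
    id = Column(Integer, primary_key=True)
    # This was Integer in the question -- that type mismatch is what made
    # MySQL reject the FOREIGN KEY constraint.
    session_id = Column(String(255), ForeignKey("session.id"), nullable=False)

# An in-memory SQLite database is enough to show the schema creates cleanly
# (MySQL is the engine that actually enforces the type match).
engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
```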
|
[
"stackoverflow",
"0023436011.txt"
] | Q:
window.parent.document no longer working in safari
I have a line of code in a page being loaded in an iframe that accesses an element in its parent window:
$('#titleBack', window.parent.document).fadeIn(500);
Both pages are in the same domain. This works fine in Chrome and Firefox but crashes* in Safari. From what info I can gather from people testing my site, this was working fine in Safari as of a few days ago. Any ideas?
*No code beyond this line gets processed; however, all the code up until this line is processed properly.
A:
Using top.document instead of window.parent.document fixes the problem.
|
[
"stackoverflow",
"0010318509.txt"
] | Q:
How dangerous is it to use runModalForWindow from multiple threads?
I have a memory corruption error (I suspect), which is resulting in a program crash after specific UI actions. This is a Cocoa Objective-C application and does not use GC.
After many hours of debugging, I found a possible reason for the crash:
DiscSelectPopup *popupSelect = [[DiscSelectPopup alloc] initWithDataList:dataList count:count];
NSInteger result = [NSApp runModalForWindow:popupSelect.window];
The above popup routine is executed from a secondary thread. This thread is created and started every time the user clicks on a button, so we can have several modal popups show up simultaneously.
When I run the program in debug mode with MallocStackLogging=1 and MallocStackLoggingNoCompact=1, it prints a malloc error log message at the runModalForWindow: call (but not always).
malloc: *** error for object 0xbc65840: double free
.....
malloc: *** error for object 0xbc547e0: pointer being freed was not allocated
Is it really bad to use runModalForWindow: from a secondary thread?
Could it be the reason for the crash?
A:
Is it really bad to use runModalForWindow from the secondary thread?
Yes. UI stuff needs to happen on the main thread.
You also shouldn't use runModalForWindow: unless you specifically want to block all of the other windows in your application (essentially freezing your app except for that window). Just show the window. If you want to block a specific window (or your app is single-window), begin it as a sheet.
Edit: Looking again at the question, this caught my eye:
The above popup routine is executed from a secondary thread.
Don't do that. To show a window, just show it. Receive the action message from the button on the main thread, and then do only the actual work—if anything—on a secondary thread.
Note that showing the window will not block any other windows or anything else you're doing unless you specifically make it do so (i.e., use runModalForWindow:). If you show the window in the normal way, all your windows continue to work normally. Any timers and observers and similar things you've scheduled on the main thread also continue to work. You don't need to create threads or do anything else special for this; it all just works by default.
If the work that you'll eventually do may take a non-trivial amount of time, then you should put that on a secondary thread, only when it comes time to do it. You should also look into whether or not it'd be easier or better to construct as operation objects or blocks than as raw threads. It probably will.
A:
Using Valgrind Memcheck, I concluded that the secondary-thread runModalForWindow: calls are not directly connected to the memory corruption problem.
Yes, it is bad practice to manipulate UI components from non-main threads, but such behavior alone cannot crash the program.
The malloc error message in the question was due to the popup window object being mistakenly double-released.
By the way, the real cause of the memory corruption was a mismatched malloc/free call (freeing a pointer that was never malloced).
|
[
"stackoverflow",
"0013986663.txt"
] | Q:
What prior knowledge is required for proper Boost library use?
I'm still in the process of learning C++ concepts, but I'm fairly comfortable with pointers, references, Object Oriented Programming, and other programming basics. But I still need to learn more about templates, iterators, and regular expressions. Are there any other concepts I should have a firm grounding in to get the best use out of Boost libraries?
A:
There is no such thing as "proper" use of Boost. You use that part of Boost that helps you with your problem. For Boost Test, for example, you don't have to know much about anything specific. For Boost Graph or Algorithm, you should have a good grasp of templates.
Hence, there's no good way to answer your question. Look at the documentation of the library you want to use (Boost or otherwise), and if you think you can handle it, use it. Otherwise, come back here and ask a more specific question. ;-)
|
[
"stackoverflow",
"0015670001.txt"
] | Q:
How to know where a .f90 file is installed?
I installed some Linux software by compiling it and running make install. Now I can't find a file called aaa.f90; aaa.f90 is used by bbb.py, and both files are in my installation directory. How can I find out from the Makefile where it was installed? I tried 'which aaa.f90', but it doesn't work.
I tried to read Makefile.am and Makefile.in, but they are very hard to understand.
A:
which searches the path for executables. Why not just use find to look for it from the command line?
Example:
find / -name aaa.f90 -print
It's possible you might need sudo to find a given file this way, but it seems unlikely in this case.
|
[
"bitcoin.stackexchange",
"0000000367.txt"
] | Q:
Have any cryptography experts vetted the bitcoin source code?
Theoretically, bitcoin's open source nature makes it more resistant to bugs and exploits. However, due to the specialized nature of the code, even many programmers don't fully understand the cryptography pieces. Have any well-regarded cryptography experts done an analysis of the code and published their thoughts anywhere?
A:
"It looks good to me" tends to make for a pretty boring paper.
Security expert Dan Kaminsky has given talks and written articles about the Bitcoin system. His two main points are that it cannot scale to the number of transactions a payment processing system needs and that it is not as anonymous as many people think.
He also wrote, "As a note, I have a tremendous amount of respect for BitCoin; I count it in the top five most interesting security projects of the decade. Entire classes of bugs are missing. But it's just not an anonymous solution, and the devs will say as much."
A:
Brian Warner is a security expert and he has studied the source code. His presentation about Bitcoin is by far the best deep technical explanation I've seen:
http://vimeo.com/27177893
There is a brief mention about the security of the source code, embedded in two hours of brilliant explication of the security of the overall system design.
Also, the cryptography mailing list hosted by Jack Lloyd is a discussion forum for a wide range of cryptography and security experts. Bitcoin has been discussed several times. The discussions that I have looked at on that list tend to be more about the protocol, the economics, and so forth than about the actual source code. Here is a google search that returns letters from that mailing list that have the string "Bitcoin" in:
https://encrypted.google.com/search?hl=en&q=site%3Alists.randombit.net%20bitcoin
A:
While not a crypto expert, security guru Steve Gibson covered Bitcoin in his podcast Security Now on February 9 2011.
Audio
Transcript
|
[
"stackoverflow",
"0013021401.txt"
] | Q:
lagging panel data with data.table
I currently lag panel data using data.table in the following manner:
require(data.table)
x <- data.table(id=1:10, t=rep(1:10, each=10), v=1:100)
setkey(x, id, t) #so that things are in increasing order
x[,lag_v:=c(NA, v[1:(length(v)-1)]),by=id]
I am wondering if there is a better way to do this? I had found something online about cross-join, which makes sense. However, a cross-join would generate a fairly large data.table for a large dataset so I am hesitant to use it.
A:
I'm not sure this is that much different from your approach, but you can use the fact that x is keyed by id
x[J(1:10), lag_v := c(NA,head(v, -1)) ]
I have not tested whether this is faster than by, especially if it is already keyed.
Or, using the fact that t (don't use functions as variable names!) is the time id
x <- data.table(id=1:10, t=rep(1:10, each=10), v=1:100)
setkey(x, t)
replacing <- J(setdiff(x[, unique(t)],1))
x[replacing, lag_v := x[replacing, v][,v]]
but again, using a double join here seems inefficient
|
[
"stackoverflow",
"0059013496.txt"
] | Q:
error installing pytorch using pip on windows 10
I am trying to install pytorch with pip using
pip install torch
or
pip3 install torch===1.3.1 torchvision===0.4.2 -f https://download.pytorch.org/whl/torch_stable.html
with python 3.7.4
and with python 3.8 (latest stable release)
both on 32 and 64 bit.
and getting
Collecting torch Using cached
https://files.pythonhosted.org/packages/f8/02/880b468bd382dc79896eaecbeb8ce95e9c4b99a24902874a2cef0b562cea/torch-0.1.2.post2.tar.gz
Collecting pyyaml (from torch) Downloading
https://files.pythonhosted.org/packages/bc/3f/4f733cd0b1b675f34beb290d465a65e0f06b492c00b111d1b75125062de1/PyYAML-5.1.2-cp37-cp37m-win_amd64.whl
(215kB)
100% |████████████████████████████████| 225kB 1.2MB/s Installing collected packages: pyyaml, torch Running setup.py install for torch
... error
Complete output from command C:\Noam\Code\threadart\stav-rl\venv\Scripts\python.exe -u -c "import
setuptools,
tokenize;__file__='C:\\Users\\noams\\AppData\\Local\\Temp\\pip-install-djc6s2t8\\torch\\setup.py';f=getattr(tokenize,
'open', open)(__file__);code=f.read( ).replace('\r\n',
'\n');f.close();exec(compile(code, __file__, 'exec'))" install
--record C:\Users\noams\AppData\Local\Temp\pip-record-zohv2zo7\install-record.txt
--single-version-externally-managed --compile --install-headers C:\Noam\Code\threadart\stav-rl\venv\inclu de\site\python3.7\torch:
running install
running build_deps
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\noams\AppData\Local\Temp\pip-install-djc6s2t8\torch\setup.py",
line 265, in <module>
description="Tensors and Dynamic neural networks in Python with strong GPU acceleration",
File "C:\Noam\Code\threadart\stav-rl\venv\lib\site-packages\setuptools-40.8.0-py3.7.egg\setuptools\__init__.py",
line 145, in setup
File "C:\Python37_x64\lib\distutils\core.py", line 148, in setup
dist.run_commands()
File "C:\Python37_x64\lib\distutils\dist.py", line 966, in run_commands
self.run_command(cmd)
File "C:\Python37_x64\lib\distutils\dist.py", line 985, in run_command
cmd_obj.run()
File "C:\Users\noams\AppData\Local\Temp\pip-install-djc6s2t8\torch\setup.py",
line 99, in run
self.run_command('build_deps')
File "C:\Python37_x64\lib\distutils\cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "C:\Python37_x64\lib\distutils\dist.py", line 985, in run_command
cmd_obj.run()
File "C:\Users\noams\AppData\Local\Temp\pip-install-djc6s2t8\torch\setup.py",
line 51, in run
from tools.nnwrap import generate_wrappers as generate_nn_wrappers
ModuleNotFoundError: No module named 'tools.nnwrap'
---------------------------------------- Command "C:\Noam\Code\threadart\stav-rl\venv\Scripts\python.exe -u -c "import
setuptools,
tokenize;__file__='C:\\Users\\noams\\AppData\\Local\\Temp\\pip-install-djc6s2t8\\torch\\setup.py';f=getattr(tokenize,
'open', open)(__file__);code=f.read().replace('\r\n', '\n');
f.close();exec(compile(code, __file__, 'exec'))" install --record
C:\Users\noams\AppData\Local\Temp\pip-record-zohv2zo7\install-record.txt
--single-version-externally-managed --compile --install-headers C:\Noam\Code\threadart\stav-rl\venv\include\site\python3.7\torch"
failed with error code 1 in
C:\Users\noams\AppData\Local\Temp\pip-install-djc6s2t8\torch\
clearly, I am doing something wrong.
Please help!
A:
I had the same problem; it is now fixed (2020-05-31).
Visit pytorch.org and find "QUICK START LOCALLY" on the homepage (you can find it by scrolling down a little).
Select the configuration that matches your system (e.g. Windows, pip, Python version), and the page will show you the install command, e.g. "pip install torch===...".
Copy that install command and execute it on your system.
Good luck!
|
[
"stackoverflow",
"0060721024.txt"
] | Q:
How to reference web in nginx config when run with docker-compose?
I am trying to configure a service with Nginx, Certbot and my app:
docker-compose.yml:
version: '3'
services:
nginx:
image: nginx:1.15-alpine
restart: unless-stopped
volumes:
- ./data/nginx:/etc/nginx/conf.d
- ./data/certbot/conf:/etc/letsencrypt
- ./data/certbot/www:/var/www/certbot
ports:
- "80:80"
- "443:443"
command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
certbot:
image: certbot/certbot
restart: unless-stopped
volumes:
- ./data/certbot/conf:/etc/letsencrypt
- ./data/certbot/www:/var/www/certbot
entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"
web:
image: example_app:latest
restart: unless-stopped
ports:
- "5005:5005"
nginx/app.conf:
server {
listen 80;
server_name my.app;
server_tokens off;
location /.well-known/acme-challenge/ {
root /var/www/certbot;
}
location / {
return 301 https://$host$request_uri;
}
}
server {
listen 443 ssl;
server_name my.app;
server_tokens off;
ssl_certificate /etc/letsencrypt/live/my.app/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/my.app/privkey.pem;
include /etc/letsencrypt/options-ssl-nginx.conf;
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
location / {
proxy_pass http://web;
}
}
Whenever I request the service at https://my.app I always get a 502 Bad Gateway.
I have looked through many materials and I cannot grasp what I am doing wrong.
A:
The problem was with the upstream service. I defined an explicit upstream block for the web container, and changed restart: unless-stopped to restart: always for web in docker-compose.yml.
nginx/app.conf:
upstream flask {
server web:5005;
}
server {
listen 80;
server_name my.app;
server_tokens off;
location /.well-known/acme-challenge/ {
root /var/www/certbot;
}
location / {
return 301 https://$host$request_uri;
}
}
server {
listen 443 ssl;
server_name my.app;
server_tokens off;
ssl_certificate /etc/letsencrypt/live/patricia.app/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/patricia.app/privkey.pem;
include /etc/letsencrypt/options-ssl-nginx.conf;
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
location / {
proxy_pass http://flask;
}
}
|
[
"stackoverflow",
"0025068224.txt"
] | Q:
Delete a huge amount of files in Rackspace using fog
I have millions of files in my Rackspace Cloud Files account. I would like to delete a subset of them by passing lists of file names instead of deleting them one by one, which is very slow. Is there any way to do this with fog? Right now I have a script that deletes each file individually, but it would be nice to have something with better performance.
connection = Fog::Storage.new({
:provider => 'Rackspace',
:rackspace_username => "xxxx",
:rackspace_api_key => "xxxx",
:rackspace_region => :iad
})
dir = connection.directories.select {|d| d.key == "my_directory"}.first
CloudFileModel.where(duplicated: 1).each do |record|
f = record.file.gsub("/","")
dir.files.destroy(f) rescue nil
puts "deleted #{record.id}"
end
A:
Yes, you can with delete_multiple_objects.
Deletes multiple objects or containers with a single request.
To delete objects from a single container, container may be provided and object_names should be an Array of object names within the container.
To delete objects from multiple containers or delete containers, container should be nil and all object_names should be prefixed with a container name.
Containers must be empty when deleted. object_names are processed in the order given, so objects within a container should be listed first to empty the container.
Up to 10,000 objects may be deleted in a single request. The server will respond with 200 OK for all requests. response.body must be inspected for actual results.
Examples:
Delete objects from a container
object_names = ['object', 'another/object']
conn.delete_multiple_objects('my_container', object_names)
Delete objects from multiple containers
object_names = ['container_a/object', 'container_b/object']
conn.delete_multiple_objects(nil, object_names)
Delete a container and all its objects
object_names = ['my_container/object_a', 'my_container/object_b', 'my_container']
conn.delete_multiple_objects(nil, object_names)
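Since a single request tops out at 10,000 objects, very large listings need to be sent in batches. The examples above are Ruby; as a language-neutral sketch (Python, with delete_batch standing in for conn.delete_multiple_objects), the batching amounts to:

```python
def chunked(names, size=10_000):
    """Yield consecutive slices of at most `size` object names."""
    for i in range(0, len(names), size):
        yield names[i:i + size]

def delete_all(delete_batch, container, object_names):
    """Issue one bulk-delete request per batch of up to 10,000 names."""
    for batch in chunked(object_names):
        delete_batch(container, batch)
```

Each batch is one HTTP request, so deleting a million objects takes about a hundred requests instead of a million individual deletes.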
|
[
"stackoverflow",
"0059199157.txt"
] | Q:
efficient processing of multiple json to xml in java
I need to process multiple JSON documents stored in a file and convert them into XML. Each JSON converts into roughly 3,000 lines of XML on average (to give a picture of the size).
I am looking at achieving this in less than half an hour.
That time includes reading from the file, traversing and converting, and storing the results into other files as XML.
Later I need to store them in a DB as well. I have not done that yet, but I was planning to use a Kafka connector to insert directly into the DB for performance.
Issues I am facing:
- I am able to read only one line from my input text file, not multiple lines.
- How to use my util in my SpliteratorBenchmark.processLine() method, which accepts a String to convert.
- How to create a new XML file for each converted JSON. I have a single input file containing multiple JSON documents; I am supposed to read a line, convert it into an XML file, then produce another XML file for the next JSON.
Kindly help.
Is there any other way to achieve this? I cannot use any kind of JMS-style messaging like MQ or Kafka for this task.
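For context, the shape of the pipeline described above (one JSON document per line in, one XML file out per document) can be sketched with nothing but the Python standard library; the real code below is Java, and every name in this sketch is illustrative:

```python
import json
import xml.etree.ElementTree as ET

def json_to_element(key, value):
    """Map one JSON value onto the XSLT 3.0-style vocabulary
    (map/array/string/number/boolean/null) used later in the question."""
    if isinstance(value, dict):
        el = ET.Element("map")
        for k, v in value.items():
            el.append(json_to_element(k, v))
    elif isinstance(value, list):
        el = ET.Element("array")
        for v in value:
            el.append(json_to_element(None, v))
    elif isinstance(value, bool):          # must come before the number check
        el = ET.Element("boolean")
        el.text = "true" if value else "false"
    elif value is None:
        el = ET.Element("null")
    elif isinstance(value, (int, float)):
        el = ET.Element("number")
        el.text = str(value)
    else:
        el = ET.Element("string")
        el.text = value
    if key is not None:
        el.set("key", key)
    return el

def convert_file(in_path, out_dir):
    """Read one JSON document per line; write one XML file per document."""
    with open(in_path) as f:
        for n, line in enumerate(f):
            if not line.strip():
                continue
            root = json_to_element(None, json.loads(line))
            ET.ElementTree(root).write(f"{out_dir}/record_{n}.xml")
```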
This is my core FixedBatchSpliteratorBase:-
public abstract class FixedBatchSpliteratorBase<T> implements Spliterator<T> {
private final int batchSize;
private final int characteristics;
private long est;
public FixedBatchSpliteratorBase(int characteristics, int batchSize, long est) {
this.characteristics = characteristics | SUBSIZED;
this.batchSize = batchSize;
this.est = est;
}
public FixedBatchSpliteratorBase(int characteristics, int batchSize) {
this(characteristics, batchSize, Long.MAX_VALUE);
}
public FixedBatchSpliteratorBase(int characteristics) {
this(characteristics, 128, Long.MAX_VALUE);
}
public FixedBatchSpliteratorBase() {
this(IMMUTABLE | ORDERED | NONNULL);
}
@Override
public Spliterator<T> trySplit() {
final HoldingConsumer<T> holder = new HoldingConsumer<T>();
if (!tryAdvance(holder)) return null;
final Object[] a = new Object[batchSize];
int j = 0;
do a[j] = holder.value; while (++j < batchSize && tryAdvance(holder));
if (est != Long.MAX_VALUE) est -= j;
return spliterator(a, 0, j, characteristics() | SIZED);
}
@Override
public Comparator<? super T> getComparator() {
if (hasCharacteristics(SORTED)) return null;
throw new IllegalStateException();
}
@Override
public long estimateSize() {
return est;
}
@Override
public int characteristics() {
return characteristics;
}
static final class HoldingConsumer<T> implements Consumer<T> {
Object value;
@Override
public void accept(T value) {
this.value = value;
}
}
}
This is my class FixedBatchSpliterator:-
import static java.util.stream.StreamSupport.stream;
import java.util.Spliterator;
import java.util.function.Consumer;
import java.util.stream.Stream;
public class FixedBatchSpliterator<T> extends FixedBatchSpliteratorBase<T> {
private final Spliterator<T> spliterator;
public FixedBatchSpliterator(Spliterator<T> toWrap, int batchSize) {
super(toWrap.characteristics(), batchSize, toWrap.estimateSize());
this.spliterator = toWrap;
}
public static <T> FixedBatchSpliterator<T> batchedSpliterator(Spliterator<T> toWrap, int batchSize) {
return new FixedBatchSpliterator<>(toWrap, batchSize);
}
public static <T> Stream<T> withBatchSize(Stream<T> in, int batchSize) {
return stream(batchedSpliterator(in.spliterator(), batchSize), true);
}
@Override public boolean tryAdvance(Consumer<? super T> action) {
return spliterator.tryAdvance(action);
}
@Override public void forEachRemaining(Consumer<? super T> action) {
spliterator.forEachRemaining(action);
}
}
This is my JsonSpliterator:-
import java.util.function.Consumer;
import com.dj.gls.core.FixedBatchSpliteratorBase;
public class JsonSpliterator extends FixedBatchSpliteratorBase<String[]> {
private final JSONReader jr;
JsonSpliterator(JSONReader jr, int batchSize) {
super(IMMUTABLE | ORDERED | NONNULL, batchSize);
if (jr == null) throw new NullPointerException("JSONReader is null");
this.jr = jr;
}
public JsonSpliterator(JSONReader jr) { this(jr, 128); }
@Override
public boolean tryAdvance(Consumer<? super String[]> action) {
if (action == null) throw new NullPointerException();
try {
final String[] row = jr.readNext();
if (row == null) return false;
action.accept(row);
return true;
} catch (RuntimeException e) {
throw e;
} catch (Exception e) {
throw new RuntimeException(e);
}
}
@Override
public void forEachRemaining(Consumer<? super String[]> action) {
if (action == null) throw new NullPointerException();
try {
for (String[] row; (row = jr.readNext()) != null;) action.accept(row);
} catch (RuntimeException e) {
throw e;
} catch (Exception e) {
throw new RuntimeException(e);
}
}
}
This is my main class which reads input file and starts the conversion process:-
import static java.util.concurrent.TimeUnit.SECONDS;
import java.io.IOException;
import java.io.PrintWriter;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;
import com.dj.gls.core.FixedBatchSpliterator;
public class SpliteratorBenchmark {
static double sink;
public static void main(String[] args) throws IOException {
final Path inputPath = createInput();
for (int i = 0; i < 3; i++) {
System.out.println("Start processing JDK stream");
measureProcessing(Files.lines(inputPath));
System.out.println("Start processing fixed-batch stream");
measureProcessing(FixedBatchSpliterator.withBatchSize(Files.lines(inputPath), 10));
}
}
private static void measureProcessing(Stream<String> input) throws IOException {
final long start = System.nanoTime();
try (Stream<String> lines = input) {
final long totalTime = lines.parallel().mapToLong(SpliteratorBenchmark::processLine).sum();
final double cpuTime = totalTime, realTime = System.nanoTime() - start;
final int cores = Runtime.getRuntime().availableProcessors();
System.out.println(" Cores: " + cores);
System.out.format(" CPU time: %.2f s\n", cpuTime / SECONDS.toNanos(1));
System.out.format(" Real time: %.2f s\n", realTime / SECONDS.toNanos(1));
System.out.format("CPU utilization: %.2f%%\n\n", 100.0 * cpuTime / realTime / cores);
}
}
private static long processLine(String line) {
final long localStart = System.nanoTime();
//conversion of json to xml and storing an xml file
return System.nanoTime() - localStart;
}
private static Path createInput() throws IOException {
final Path inputPath = Paths.get("/src/main/resources/input/input.txt");
return inputPath;
}
}
This is how I am planning to convert my json to xml :-
import java.io.InputStream;
import java.io.OutputStream;
import java.io.Reader;
import java.io.Writer;
import javax.json.Json;
import javax.json.stream.JsonParser;
import javax.xml.stream.XMLOutputFactory;
import javax.xml.stream.XMLStreamException;
import javax.xml.stream.XMLStreamWriter;
import javax.xml.transform.Result;
/**
* Converts JSON stream to XML stream.
* Uses XML representation of JSON defined in XSLT 3.0.
*
* @see <a href="https://www.w3.org/TR/xslt-30/#json">22 Processing JSON Data</a>
*
*/
public class JsonStreamXMLWriter
{
public static final String XPATH_FUNCTIONS_NS = "http://www.w3.org/2005/xpath-functions";
private static final XMLOutputFactory XOF = XMLOutputFactory.newInstance();
static
{
XOF.setProperty(XMLOutputFactory.IS_REPAIRING_NAMESPACES, true);
}
private final JsonParser parser;
private final XMLStreamWriter writer;
public JsonStreamXMLWriter(Reader reader, Writer stream) throws XMLStreamException
{
this(Json.createParser(reader), getXMLOutputFactory().createXMLStreamWriter(stream));
}
public JsonStreamXMLWriter(Reader reader, OutputStream stream) throws XMLStreamException
{
this(Json.createParser(reader), getXMLOutputFactory().createXMLStreamWriter(stream));
}
public JsonStreamXMLWriter(Reader reader, OutputStream stream, String encoding) throws XMLStreamException
{
this(Json.createParser(reader), getXMLOutputFactory().createXMLStreamWriter(stream, encoding));
}
public JsonStreamXMLWriter(Reader reader, Result result) throws XMLStreamException
{
this(Json.createParser(reader), getXMLOutputFactory().createXMLStreamWriter(result));
}
public JsonStreamXMLWriter(Reader reader, XMLStreamWriter writer)
{
this(Json.createParser(reader), writer);
}
public JsonStreamXMLWriter(InputStream is, Writer stream) throws XMLStreamException
{
this(Json.createParser(is), getXMLOutputFactory().createXMLStreamWriter(stream));
}
public JsonStreamXMLWriter(InputStream is, OutputStream stream) throws XMLStreamException
{
this(Json.createParser(is), getXMLOutputFactory().createXMLStreamWriter(stream));
}
public JsonStreamXMLWriter(InputStream is, OutputStream stream, String encoding) throws XMLStreamException
{
this(Json.createParser(is), getXMLOutputFactory().createXMLStreamWriter(stream, encoding));
}
public JsonStreamXMLWriter(InputStream is, Result result) throws XMLStreamException
{
this(Json.createParser(is), getXMLOutputFactory().createXMLStreamWriter(result));
}
public JsonStreamXMLWriter(InputStream is, XMLStreamWriter writer)
{
this(Json.createParser(is), writer);
}
public JsonStreamXMLWriter(JsonParser parser, Writer stream) throws XMLStreamException
{
this(parser, getXMLOutputFactory().createXMLStreamWriter(stream));
}
public JsonStreamXMLWriter(JsonParser parser, OutputStream stream) throws XMLStreamException
{
this(parser, getXMLOutputFactory().createXMLStreamWriter(stream));
}
public JsonStreamXMLWriter(JsonParser parser, OutputStream stream, String encoding) throws XMLStreamException
{
this(parser, getXMLOutputFactory().createXMLStreamWriter(stream, encoding));
}
public JsonStreamXMLWriter(JsonParser parser, Result result) throws XMLStreamException
{
this(parser, getXMLOutputFactory().createXMLStreamWriter(result));
}
public JsonStreamXMLWriter(JsonParser parser, XMLStreamWriter writer)
{
this.parser = parser;
this.writer = writer;
}
public void convert() throws XMLStreamException
{
convert(getWriter());
}
public void convert(XMLStreamWriter writer) throws XMLStreamException
{
convert(getParser(), writer);
}
public static void convert(JsonParser parser, XMLStreamWriter writer) throws XMLStreamException
{
writer.writeStartDocument();
writer.setDefaultNamespace(XPATH_FUNCTIONS_NS);
write(parser, writer);
writer.writeEndDocument();
writer.flush();
parser.close();
}
public static void write(JsonParser parser, XMLStreamWriter writer) throws XMLStreamException
{
String keyName = null;
while (parser.hasNext())
{
JsonParser.Event event = parser.next();
switch(event)
{
case START_ARRAY:
writer.writeStartElement(XPATH_FUNCTIONS_NS, "array");
if (keyName != null)
{
writer.writeAttribute("key", keyName);
keyName = null;
}
break;
case END_ARRAY:
writer.writeEndElement();
break;
case START_OBJECT:
writer.writeStartElement(XPATH_FUNCTIONS_NS, "map");
if (keyName != null)
{
writer.writeAttribute("key", keyName);
keyName = null;
}
break;
case END_OBJECT:
writer.writeEndElement();
break;
case VALUE_FALSE:
writer.writeStartElement(XPATH_FUNCTIONS_NS, "boolean");
if (keyName != null)
{
writer.writeAttribute("key", keyName);
keyName = null;
}
writer.writeCharacters("false");
writer.writeEndElement();
break;
case VALUE_TRUE:
writer.writeStartElement(XPATH_FUNCTIONS_NS, "boolean");
if (keyName != null)
{
writer.writeAttribute("key", keyName);
keyName = null;
}
writer.writeCharacters("true");
writer.writeEndElement();
break;
case KEY_NAME:
keyName = parser.getString();
break;
case VALUE_STRING:
writer.writeStartElement(XPATH_FUNCTIONS_NS, "string");
if (keyName != null)
{
writer.writeAttribute("key", keyName);
keyName = null;
}
writer.writeCharacters(parser.getString());
writer.writeEndElement();
break;
case VALUE_NUMBER:
writer.writeStartElement(XPATH_FUNCTIONS_NS, "number");
if (keyName != null)
{
writer.writeAttribute("key", keyName);
keyName = null;
}
writer.writeCharacters(parser.getString());
writer.writeEndElement();
break;
case VALUE_NULL:
writer.writeEmptyElement(XPATH_FUNCTIONS_NS, "null");
if (keyName != null)
{
writer.writeAttribute("key", keyName);
keyName = null;
}
break;
}
writer.flush();
}
}
protected JsonParser getParser()
{
return parser;
}
protected XMLStreamWriter getWriter()
{
return writer;
}
protected static XMLOutputFactory getXMLOutputFactory()
{
return XOF;
}
}
This is my class to call this Util class to convert :-
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.io.OutputStreamWriter;
import java.io.Reader;
import java.nio.charset.StandardCharsets;
import javax.xml.stream.XMLStreamException;
import org.apache.commons.io.IOUtils;
public class JSON2XML
{
public static void main(String[] args) throws IOException, XMLStreamException
{
// NOTE: 'file' and 'outputStream' were not declared in the original snippet;
// minimal declarations are added here so the example compiles
// (input path taken from args, XML written to stdout).
File file = new File(args[0]);
OutputStream outputStream = System.out;
InputStream json = new FileInputStream(file);
if (json.available() == 0)
{
System.err.println("JSON input: stdin");
System.exit(1);
}
try (Reader reader = new BufferedReader(new InputStreamReader(json, StandardCharsets.UTF_8)))
{
new JsonStreamXMLWriter(reader, new BufferedWriter(new OutputStreamWriter(outputStream))).convert();
}
}
}
A:
I was able to read multiple JSON files one by one and create a new XML file for each.
Below is the code:-
private static long processLine(Path line) {
final long localStart = System.nanoTime();
System.out.println("Path" + line);
OutputStream outputStream;
try {
if(Files.isRegularFile(line)) {
String str = line.getFileName().toString();
System.out.println("Filename - " + str.substring(0, str.length()-4));
outputStream = new FileOutputStream("/Users/d0s04ub/Documents/STS_Workspace/OrderProcessor/src/main/resources/output/" + str.substring(0, str.length()-4));
InputStream content = Files.newInputStream(line);
try (Reader reader = new BufferedReader(new InputStreamReader(content, StandardCharsets.UTF_8)))
{
try {
new JsonStreamXMLWriter(reader, new BufferedWriter(new OutputStreamWriter(outputStream))).convert();
} catch (XMLStreamException e) {
e.printStackTrace();
}
}
}
} catch (IOException e1) {
e1.printStackTrace();
}finally{
}
return System.nanoTime() - localStart;
}
|
[
"stackoverflow",
"0009007039.txt"
] | Q:
Makefile execute phonytargets of a file only if file does not exists
Is there any way to execute the phony dependency of a file only if the file does not exist?
Without a phony dependency the rule works as expected and only runs when the file is missing.
But as soon as I add a phony target as a dependency, make keeps executing both the dependency rule and the rule that regenerates the file, even though the file already exists.
I simplified my Makefile so you can see my problem:
.PHONY: phonytarget all clean
all: $(CURDIR)/a.a
@echo done
$(CURDIR)/a.a: phonytarget
@touch $@
phonytarget:
@echo what the heck is wrong with you
A:
Use order-only prerequisites:
Occasionally, however, you have a situation where you want to impose a specific ordering on the rules to be invoked without forcing the target to be updated if one of those rules is executed. In that case, you want to define order-only prerequisites. Order-only prerequisites can be specified by placing a pipe symbol (|) in the prerequisites list: any prerequisites to the left of the pipe symbol are normal; any prerequisites to the right are order-only:
targets : normal-prerequisites | order-only-prerequisites
The normal prerequisites section may of course be empty. Also, you may still declare multiple lines of prerequisites for the same target: they are appended appropriately (normal prerequisites are appended to the list of normal prerequisites; order-only prerequisites are appended to the list of order-only prerequisites).
.PHONY: phonytarget all clean
all: $(CURDIR)/a.a
@echo done
$(CURDIR)/a.a: | phonytarget
@touch $@
phonytarget:
@echo what the heck is wrong with you
|
[
"bitcoin.stackexchange",
"0000069817.txt"
] | Q:
transaction inputs does not have prev_out field
I found that some transaction inputs do not contain a prev_out field, i.e. they have no input address; they only have an output address. Does this mean this kind of transaction records the minting of new coins?
e.g.
'tx': [{'hash': '2b1f06c2401d3b49a33c3f5ad5864c0bc70044c4068f9174546f3cfc1887d5ba', 'inputs': [{'script': '04ffff001d0138', 'sequence': 4294967295, 'witness': ''}], 'lock_time': 0, 'out': [{'addr': '1HwmP33SaknLYShXfjVU8KmVThU3JiuVgH', 'n': 0, 'script': '41045e071dedd1ed03721c6e9bba28fc276795421a378637fb41090192bb9f208630dcbac5862a3baeb9df3ca6e4e256b7fd2404824c20198ca1b004ee2197866433ac', 'spent': False, 'tx_index': 15066, 'type': 0, 'value': 5000000000}],
while for another transation:
{'hash': 'ee475443f1fbfff84ffba43ba092a70d291df233bd1428f3d09f7bd1a6054a1f', 'inputs': [{'prev_out': {'addr': '1CfD77hupeUvFwBPxZ2fA8iyWmVwQY22oh', 'n': 1, 'script': '76a9147fe34b97aeff4ab754770be5c8f12e2e95332fd488ac', 'spent': True, 'tx_index': 8845778, 'type': 0, 'value': 10212000000}, 'script': '483045022100e5e4749d539a163039769f52e1ebc8e6f62e39387d61e1a305bd722116cded6c022014924b745dd02194fe6b5cb8ac88ee8e9a2aede89e680dcea6169ea696e24d52012102b4b754609b46b5d09644c2161f1767b72b93847ce8154d795f95d31031a08aa2', 'sequence': 4294967295, 'witness': ''}], ...
the second one has a prev_out field while the first one does not
A:
Transactions without prev_outs are known as coinbase transactions (not to be confused with transactions from the company named Coinbase) or coin generation transactions. They create outputs without needing any inputs to the transaction. Coinbase transactions are special transactions which are created by miners and included as the first transaction in a block. They are what pays miners for their work.
There can only be one coinbase transaction per block, and it must be the first transaction of the block. The sum of the outputs must also meet certain constraints: it can be no more than the sum of the block subsidy and the sum of the transaction fees paid by all transactions in the block.
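To make the distinction concrete, a transaction in this blockchain.info-style JSON can be classified simply by checking whether its single input lacks a prev_out. A minimal Python sketch (the dict shapes follow the examples in the question; the helper name is just illustrative, not part of any library):

```python
def is_coinbase(tx):
    """True if the transaction generates new coins: its single input has no prev_out."""
    inputs = tx.get("inputs", [])
    return len(inputs) == 1 and "prev_out" not in inputs[0]

# The two transactions from the question, trimmed to the relevant fields:
coin_generation = {"inputs": [{"script": "04ffff001d0138", "sequence": 4294967295}]}
regular_spend = {"inputs": [{"prev_out": {"addr": "1CfD77hupeUvFwBPxZ2fA8iyWmVwQY22oh"},
                             "sequence": 4294967295}]}

print(is_coinbase(coin_generation))  # True
print(is_coinbase(regular_spend))    # False
```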
|
[
"stackoverflow",
"0039522967.txt"
] | Q:
how do promises work with #then and #json?
I'm confused about why the first example works but the second doesn't. I believe it has to do with json() resolving the response into a JavaScript object? So it returns a promise that has to be consumed in a then function? I gather this from the error that is thrown in the third example. What does #json do exactly?
export const promiseErrorMiddleware = store => next => action => {
const url = action.url
const fetchName = action.fetchName
return Promise.resolve(fetch(url)).then((response) => {
return response.json()
}).then((data) => {
store.dispatch({data: data, needDirection: true, fetchName: fetchName })
})
}
//works
export const promiseErrorMiddleware = store => next => action => {
const url = action.url
const fetchName = action.fetchName
return Promise.resolve(fetch(url)).then((response) => {
store.dispatch({data: response.json(), needDirection: true, fetchName: fetchName })
})
}
//doesn't work
export const promiseErrorMiddleware = store => next => action => {
const url = action.url
const fetchName = action.fetchName
return Promise.resolve(fetch(url)).then((response) => {
console.log(response.json())
return response.json()
}).then((data) => {
store.dispatch({data: data, needDirection: true, fetchName: fetchName })
})
}
//throws error
A:
response.json() returns a promise. You can't use the result of it immediately, you have to wait for the promise to resolve.
Also, you don't need to use Promise.resolve(). fetch() already returns a promise.
And instead of writing {data: data} you can just write {data}. This is called shorthand property names.
Your third example throws an error because you can't call the json() method twice: the first call already consumed the response body stream.
|
[
"stackoverflow",
"0008527886.txt"
] | Q:
Entity Framework (4.2) HasRequired results in unexpected LEFT OUTER JOIN
It appears that the Entity Framework (latest version from NuGet) may be ignoring the HasRequired configuration when constructing joins for Navigation Properties other than the first one defined.
For example, given a POCO object (Person) with the following configuration:
var person = modelBuilder.Entity<Person>();
person.ToTable("The_Peoples");
person.HasKey(i => i.Id);
person.Property(i => i.Id).HasColumnName("the_people_id");
person.HasRequired(i => i.Address)
.WithMany()
.Map(map => map.MapKey("address_id"));
person.HasRequired(i => i.WorkPlace)
.WithMany()
.Map(map => map.MapKey("work_place_id"));
I'm attempting to load a list of people with the following query:
myContext.Set<People>()
.Include(o => o.Address)
.Include(o => o.WorkPlace);
Entity Framework generates the following query:
FROM [dbo].[The_Peoples] AS [Extent1]
INNER JOIN [dbo].[The_Addresses] AS [Extent2] ON [Extent1].[address_id] = [Extent2].[address_id]
LEFT OUTER JOIN [dbo].[The_Work_Places] AS [Extent3] ON [Extent1].[work_place_id] = [Extent3].[work_place_id]
Notice that the join to the *The_Addresses* table is an inner join (as expected), however, the subsequent join to the *The_Work_Places* is an outer join. Given that both the Address and WorkPlace properties are marked as required, I would expect both joins to be inner joins. I've also attempted marking the Address and WorkPlace properties with the Required attribute, but this had no effect.
Is this a bug or am I perhaps misconfiguring something? Suggestions?
A:
Your model configuration is correct and I think it is not a bug but behaviour by design, though I cannot tell exactly what the design rationale is. I've also seen that SQL in such queries. Only a few remarks:
The query you are seeing is not specific to EF 4.2. It would also occur for EF 4.1 and EF 4.0. But not for EF 1 (.NET 3.5). In EF 1 every Include, also the first, has been mapped to a LEFT OUTER JOIN, also for required relationships.
I think one cannot say that using an INNER JOIN is "correct" for required navigation properties and LEFT OUTER JOIN is wrong. From a mapping view point it doesn't matter what you use, given that the constraints in the database represent the relationships in the model correctly. For a required navigation property the FK column in the database must not be nullable and there must a constraint in the database which enforces that the FK refers to an existing row in the target table. If that is the case, every JOIN must return a row, no matter if you use INNER JOIN or LEFT OUTER JOIN.
What happens if model and database is "out of sync" regarding the relationships? Basically nonsense happens in both cases: If you use a LEFT OUTER JOIN and the FK is NULL in the DB or refers to a not existing row, you'd get an entity where the navigation property is null, violating the model definition that the property is required. Using an INNER JOIN is not better: You'd get no entity at all, a query result which is at least as wrong as the result with the LEFT OUTER JOIN, if not worse.
So, I think the change in .NET 4 to use INNER JOINs for some Includes has been made not because the SQL in EF 1 was wrong but to create better and more performant SQL. This change actually introduced a breaking change in that some queries returned other results now than they did in EF 1: http://thedatafarm.com/blog/data-access/ef4-breaking-change-ef4-inner-joins-affect-eager-loading-many-to-many/
My understanding is that this has been fixed and that the reason was that INNER JOINs in too many situations have been introduced for eager loading in EF 4. (Perhaps in this phase (beta/release candidate for EF 4) your query would have had two INNER JOINs.) The reply to that problem from the EF team: http://connect.microsoft.com/VisualStudio/feedback/details/534675/ef4-include-method-returns-different-results-than-ef1-include (bold highlight from me):
We are fixing the issue for .net 4 RTM. This was an unintended
breaking change. We did not make an intended change where every left
outer join produced by an Include became an inner join in .Net 4. But
rather the optimization looked at the constraints in the EF metadata
and tried to convert those left outer joins which could be safely
converted to inner joins based on the constraints. We had a bug in the
code where we were reasoning based on the constraints which resulted
in more aggressive conversion than what the constraints implied. We
have scaled back the optimization so that we convert left outer joins
to inner joins only in the places where we are absolutely sure we can
do it based on the constraints. We think we can improve this
optimization a little more in the future. You will start seeing more
left outer joins for some queries in RTM when compared to RC and Beta
2 but in most of these cases this is needed to return correct results.
So, the final release for EF 4 apparently reintroduced some more LEFT OUTER JOINs (compared to beta/release candidate) to avoid a breaking change like that.
Sorry, this is more a historical story than a real explanation why you get an INNER JOIN and then a LEFT OUTER JOIN. As said, it is not wrong to write the query this way - as it wouldn't be wrong to use two INNER JOINs or two LEFT OUTER JOINs. I guess that only the EF team can explain exactly why your query produces that specific SQL.
I'd recommend - if you don't experience serious performance problems - not to worry about that SQL (since the result you get is correct after all) and proceed. Not liking the SQL which EF creates ends up in writing either a lot of feature and change requests or in writing a lot of raw SQL queries or in abandoning EF at all.
|
[
"stackoverflow",
"0036623580.txt"
] | Q:
Create multiple columns in R based on other column
I have 2 columns in data frame , please refer to below
no value
1 A_0.9
1 B_0.8
1 C_0.7
1 D_0.7
2 B_0.9
2 D_0.8
2 A_0.7
2 C_0.7
I want to create new data frame as below
no value1 value2 value3 value4
1 A_0.9 B_0.8 C_0.7 D_0.7
2 B_0.9 D_0.8 A_0.7 C_0.7
i.e.: for each unique value in column "no", multiple columns will be created using the data in column "value"
A:
Using data.table, we can create a sequence per unique value by no with rleid(), and consequently use it to dcast() the data to wide format.
library(data.table)
dcast(setDT(df)[, nr := rleid(value),by = no], no ~ nr)
# no 1 2 3 4
#1 1 A_0.9 B_0.8 C_0.7 D_0.7
#2 2 B_0.9 D_0.8 A_0.7 C_0.7
Or with the dev version (1.9.7) of data.table, the following is possible, thanks @Arun!
dcast(setDT(df), no ~ rowid(no, prefix = 'value'))
# no value1 value2 value3 value4
#1: 1 A_0.9 B_0.8 C_0.7 D_0.7
#2: 2 B_0.9 D_0.8 A_0.7 C_0.7
|
[
"math.stackexchange",
"0000144693.txt"
] | Q:
Showing $\sum_{k=1}^{\infty} 1/k^2 = \pi^2/6$
Possible Duplicate:
Different methods to compute $\sum\limits_{n=1}^\infty \frac{1}{n^2}$
Does $\sum\limits_{k=1}^n 1 / k ^ 2$ converge when $n\rightarrow\infty$?
I was reading my PDE book, and the following series appears:
$$\sum_{k=1}^{\infty} \dfrac{1}{k^2} = \dfrac{\pi^2}{6}$$
There, this series is shown to equal $\frac{\pi^2}{6}$ by methods of Fourier analysis, but...
Do you know another proof, perhaps simpler or more beautiful?
A:
Fourteen proofs compiled by Robin Chapman.
http://empslocal.ex.ac.uk/people/staff/rjchapma/etc/zeta2.pdf
A:
If you just want to show it converges, then the partial sums are increasing but the whole series is bounded above by $$1+\int_1^\infty \frac{1}{x^2} dx=2$$ and below by $$\int_1^\infty \frac{1}{x^2} dx=1,$$ since $\int_{k}^{k+1} \frac{1}{x^2} dx \lt \frac{1}{k^2} \lt \int_{k-1}^{k} \frac{1}{x^2} dx$.
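The squeeze above is easy to check numerically. A quick Python sanity check (the 1e-4 tolerance is an illustrative choice, justified because the tail after N terms is roughly 1/N):

```python
import math

# Partial sums of 1/k^2 are increasing and, by the integral comparison above,
# squeezed between 1 and 2, so the series converges; the limit is pi^2/6.
partial = sum(1.0 / k ** 2 for k in range(1, 100001))
print(partial)  # close to pi**2 / 6 = 1.6449340...
```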
|
[
"stackoverflow",
"0063399011.txt"
] | Q:
What is the difference between . and .data?
I'm trying to develop a deeper understand of using the dot (".") with dplyr and using the .data pronoun with dplyr. The code I was writing that motivated this post, looked something like this:
cat_table <- tibble(
variable = vector("character"),
category = vector("numeric"),
n = vector("numeric")
)
for(i in c("cyl", "vs", "am")) {
cat_stats <- mtcars %>%
count(.data[[i]]) %>%
mutate(variable = names(.)[1]) %>%
rename(category = 1)
cat_table <- bind_rows(cat_table, cat_stats)
}
# A tibble: 7 x 3
variable category n
<chr> <dbl> <dbl>
1 cyl 4 11
2 cyl 6 7
3 cyl 8 14
4 vs 0 18
5 vs 1 14
6 am 0 19
7 am 1 13
The code does what I wanted it to do and isn’t really the focus of this question. I was just providing it for context.
I'm trying to develop a deeper understanding of why it does what I want it to do. And more specifically, why I can't use . and .data interchangeably. I've read the Programming with dplyr article, but I guess in my mind, both . and .data just mean "our result up to this point in the pipeline." But, it appears as though I'm oversimplifying my mental model of how they work because I get an error when I use .data inside of names() below:
mtcars %>%
count(.data[["cyl"]]) %>%
mutate(variable = names(.data)[1])
Error: Problem with `mutate()` input `variable`.
x Can't take the `names()` of the `.data` pronoun
ℹ Input `variable` is `names(.data)[1]`.
Run `rlang::last_error()` to see where the error occurred.
And I get an unexpected (to me) result when I use . inside of count():
mtcars %>%
count(.[["cyl"]]) %>%
mutate(variable = names(.)[1])
.[["cyl"]] n variable
1 4 11 .[["cyl"]]
2 6 7 .[["cyl"]]
3 8 14 .[["cyl"]]
I suspect it has something to do with, "Note that .data is not a data frame; it’s a special construct, a pronoun, that allows you to access the current variables either directly, with .data$x or indirectly with .data[[var]]. Don’t expect other functions to work with it," from the Programming with dplyr article. This tells me what .data isn't -- a data frame -- but, I'm still not sure what .data is and how it differs from ..
I tried figuring it out like this:
mtcars %>%
count(.data[["cyl"]]) %>%
mutate(variable = list(.data))
But, the result <S3: rlang_data_pronoun> doesn't mean anything to me that helps me understand. If anybody out there has a better grasp on this, I would appreciate a brief lesson. Thanks!
A:
Up front, I think .data's intent is a little confusing until one also considers its sibling pronoun, .env.
The dot . is something that magrittr::%>% sets up and uses; since dplyr re-exports it, it's there. And whenever you reference it, it is a real object, so names(.), nrow(.), etc all work as expected. It does reflect data up to this point in the pipeline.
.data, on the other hand, is defined within rlang for the purpose of disambiguating symbol resolution. Along with .env, it allows you to be perfectly clear on where you want a particular symbol resolved (when ambiguity is expected). From ?.data, I think this is a clarifying contrast:
disp <- 10
mtcars %>% mutate(disp = .data$disp * .env$disp)
mtcars %>% mutate(disp = disp * disp)
However, as stated in the help pages, .data (and .env) is just a "pronoun" (we have verbs, so now we have pronouns too), so it is just a pointer to explain to the tidy internals where the symbol should be resolved. It's just a hint of sorts.
So your statement
both . and .data just mean "our result up to this point in the pipeline."
is not correct: . represents the data up to this point, .data is just a declarative hint to the internals.
Consider another way of thinking about .data: let's say we have two functions that completely disambiguate the environment a symbol is referenced against:
get_internally, this symbol must always reference a column name, it will not reach out to the enclosing environment if the column does not exist; and
get_externally, this symbol must always reference a variable/object in the enclosing environment, it will never match a column.
In that case, translating the above examples, one might use
disp <- 10
mtcars %>%
mutate(disp = get_internally(disp) * get_externally(disp))
In that case, it seems more obvious that get_internally is not a frame, so you can't call names(get_internally) and expect it to do something meaningful (other than NULL). It'd be like names(mutate).
So don't think of .data as an object, think of it as a mechanism to disambiguate the environment of the symbol. I think the $ it uses is both terse/easy-to-use and absolutely-misleading: it is not a list-like or environment-like object, even if it is being treated as such.
BTW: one can write any S3 method for $ that makes any classed-object look like a frame/environment:
`$.quux` <- function(x, nm) paste0("hello, ", nm, "!")
obj <- structure(0, class = "quux")
obj$r2evans
# [1] "hello, r2evans!"
names(obj)
# NULL
(The presence of a $ accessor does not always mean the object is a frame/env.)
A:
The . variable comes from magrittr, and is related to pipes. It means "the value being piped into this expression". Normally with pipes, the value from a previous expression becomes argument 1 in the next expression, but this gives you a way to use it in some other argument.
The .data object is special to dplyr (though it is implemented in the rlang package). It does not have any useful value itself, but when evaluated in the dplyr "tidy eval" framework, it acts in many ways as though it is the value of the dataframe/tibble. You use it when there's ambiguity: if you have a variable with the same name foo as a dataframe column, then .data$foo says it is the column you want (and will give an error if it's not found, unlike data$foo which will give NULL). You could alternatively use .env$foo, to say to ignore the column and take the variable from the calling environment.
Both .data and .env are specific to dplyr functions and others using the same special evaluation scheme, whereas . is a regular variable and can be used in any function.
Edited to add: You asked why names(.data) didn't work. If @r2evans's excellent answer isn't enough, here's a different take on it: I suspect the issue is that names() isn't a dplyr function, even though names.rlang_fake_data_pronoun is a method in rlang. So the expression names(.data) is evaluated using regular evaluation instead of tidy evaluation. The method has no idea what dataframe to look in, because in that context there isn't one.
A:
On a theoretical level:
. is the magrittr pronoun. It represents the entire input (often a data frame when used with dplyr) that is piped in with %>%.
.data is the tidy eval pronoun. Technically it is not a data frame at all, it is an evaluation environment.
On a practical level:
. will never be modified by dplyr. It remains constant until the next piped expression is reached. On the other hand, .data is always up to date. That means you can refer to previously created variables:
mtcars %>%
mutate(
cyl2 = cyl + 1,
am3 = .data[["cyl2"]] + 10
)
And you can also refer to column slices in the case of a grouped data frame:
mtcars %>%
group_by(cyl) %>%
mutate(cyl2 = .data[["cyl"]] + 1)
If you use .[["cyl"]] instead, the entire data frame will be subsetted and you will get an error because the input size is not the same as the group slice size. Tricky!
|
[
"stackoverflow",
"0004458401.txt"
] | Q:
Variable watchpoint doesn't work in Eclipse/ADT Android projects
public class Main extends Activity {
int field = 0;
/** Called when the activity is first created. */
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.main);
int local = 0;
field = local;
local = field;
}
}
I've put "watchpoint" on "field" and in "Breakpoint Properties" I confirmed that both access and modification are being watched. But the watchpoint didn't trigger the debugger to pause execution of program. Tried both in 2.2 AVD and on Desire with USB Debugging enabled.
Anyone has experience on how watchpoint works with Android?
Thanks,
Ryan
A:
I can't find an official google source that says that it does not work. However I keep finding mirrors of this document
http://www.milk.com/kodebase/dalvik-docs-mirror/docs/debugger.html which says
Known Issues and Limitations - Most of the optional features JDWP allows are not implemented. These include field access watchpoints and better tracking of monitors.
|
[
"stackoverflow",
"0018284753.txt"
] | Q:
urllib.request and urllib2 both not working
I am trying to learn Python and I was trying to learn modules. The Python version I am using is Python 2.7.4. The module I was trying to learn was urllib. But whenever I try to run the code below I get an ImportError saying 'No module named request'. The code is given below.
import urllib.request
class App():
def main(self):
inp = raw_input('Please enter a string\n')
print(inp)
inp = input('Please enter a value\n')
print(inp)
if __name__ == '__main__':
App().main()
Then I tried using urllib2, so I changed the first line to
import urllib2
But then it says 'IndentationError: expected an indented block'. But if I write
import urllib
then I don't get any error. But then I can't use any of the functions of that library.
A:
urllib.request is for Python 3. For Python 2, you'll want to do:
from urllib import urlopen
Or use the urllib2 module:
from urllib2 import urlopen
You shouldn't be getting an IndentationError from that change alone, but you may have made some minor mistake.
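If a script has to run under both Python 2 and Python 3, a common hedge is a try/except around the import. A small sketch (adapt the imported names to whatever you actually use):

```python
try:
    # Python 3: urlopen lives in the urllib.request submodule
    from urllib.request import urlopen
except ImportError:
    # Python 2: fall back to the urllib2 module
    from urllib2 import urlopen

print(callable(urlopen))  # True
```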
|
[
"askubuntu",
"0000050523.txt"
] | Q:
Can't add Fedora 14 to grub
Today I installed Fedora 14 on a different partition of the same hard drive as Ubuntu. During the Fedora 14 installation, I chose not to install the boot loader in the MBR, and instead installed it in the Fedora partition itself, which according to my HD layout is /dev/sda3.
After the Fedora 14 installation I booted into Ubuntu and ran sudo update-grub, but 'grub.cfg' fails to add Fedora 14 to the OS list. Here is the output of the boot-info script.
Boot Info Script 0.60 from 17 May 2011
============================= Boot Info Summary: ===============================
=> Grub2 (v1.99) is installed in the MBR of /dev/sda and looks at sector 1 of
the same hard drive for core.img. core.img is at this location and looks
for (,msdos1)/boot/grub on this drive.
sda1: __________________________________________________________________________
File system: ext4
Boot sector type: -
Boot sector info:
Operating System: Ubuntu 11.04
Boot files: /boot/grub/grub.cfg /etc/fstab /boot/grub/core.img
sda2: __________________________________________________________________________
File system: Extended Partition
Boot sector type: Unknown
Boot sector info:
sda5: __________________________________________________________________________
File system: swap
Boot sector type: -
Boot sector info:
sda3: __________________________________________________________________________
File system: ext4
Boot sector type: Grub Legacy
Boot sector info: Grub Legacy (v0.97) is installed in the boot sector
of sda3 and looks at sector 49897340 on boot drive #1
for the stage2 file. A stage2 file is at this
location on /dev/sda. Stage2 looks on partition #3
for /grub/grub.conf.
Operating System:
Boot files: /grub/menu.lst /grub/grub.conf
sda4: __________________________________________________________________________
File system: LVM2_member
Boot sector type: -
Boot sector info:
============================ Drive/Partition Info: =============================
Drive: sda _____________________________________________________________________
Disk /dev/sda: 40.0 GB, 40020664320 bytes
255 heads, 63 sectors/track, 4865 cylinders, total 78165360 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
Partition Boot Start Sector End Sector # of Sectors Id System
/dev/sda1 * 2,048 49,865,759 49,863,712 83 Linux
/dev/sda2 74,866,686 78,163,967 3,297,282 5 Extended
/dev/sda5 74,866,688 78,163,967 3,297,280 82 Linux swap / Solaris
/dev/sda3 49,866,752 50,890,751 1,024,000 83 Linux
/dev/sda4 50,890,752 74,864,639 23,973,888 8e Linux LVM
"blkid" output: ________________________________________________________________
Device UUID TYPE LABEL
/dev/sda1 03e2a8da-171f-49e9-b24d-434e66cd1140 ext4
/dev/sda3 dea81d77-a375-4d0e-954e-1829f6b91f10 ext4
/dev/sda4 mzVoj0-GHJu-DJr4-0G2Y-SzZ0-LTfW-F01yf9 LVM2_member
/dev/sda5 3e89ba8e-7754-4ee4-aca1-e2a82bffb7a7 swap
================================ Mount points: =================================
Device Mount_Point Type Options
/dev/sda1 / ext4 (rw,errors=remount-ro,user_xattr,commit=0)
=========================== sda1/boot/grub/grub.cfg: ===========================
--------------------------------------------------------------------------------
#
# DO NOT EDIT THIS FILE
#
# It is automatically generated by grub-mkconfig using templates
# from /etc/grub.d and settings from /etc/default/grub
#
### BEGIN /etc/grub.d/00_header ###
if [ -s $prefix/grubenv ]; then
set have_grubenv=true
load_env
fi
set default="2"
if [ "${prev_saved_entry}" ]; then
set saved_entry="${prev_saved_entry}"
save_env saved_entry
set prev_saved_entry=
save_env prev_saved_entry
set boot_once=true
fi
function savedefault {
if [ -z "${boot_once}" ]; then
saved_entry="${chosen}"
save_env saved_entry
fi
}
function recordfail {
set recordfail=1
if [ -n "${have_grubenv}" ]; then if [ -z "${boot_once}" ]; then save_env recordfail; fi; fi
}
function load_video {
insmod vbe
insmod vga
insmod video_bochs
insmod video_cirrus
}
insmod part_msdos
insmod ext2
set root='(/dev/sda,msdos1)'
search --no-floppy --fs-uuid --set=root 03e2a8da-171f-49e9-b24d-434e66cd1140
if loadfont /usr/share/grub/unicode.pf2 ; then
set gfxmode=1024x768
load_video
insmod gfxterm
fi
terminal_output gfxterm
insmod part_msdos
insmod ext2
set root='(/dev/sda,msdos1)'
search --no-floppy --fs-uuid --set=root 03e2a8da-171f-49e9-b24d-434e66cd1140
set locale_dir=($root)/boot/grub/locale
set lang=en_US
insmod gettext
if [ "${recordfail}" = 1 ]; then
set timeout=-1
else
set timeout=10
fi
### END /etc/grub.d/00_header ###
### BEGIN /etc/grub.d/05_debian_theme ###
set menu_color_normal=white/black
set menu_color_highlight=black/light-gray
if background_color 44,0,30; then
clear
fi
### END /etc/grub.d/05_debian_theme ###
### BEGIN /etc/grub.d/10_linux ###
if [ ${recordfail} != 1 ]; then
if [ -e ${prefix}/gfxblacklist.txt ]; then
if hwmatch ${prefix}/gfxblacklist.txt 3; then
if [ ${match} = 0 ]; then
set linux_gfx_mode=keep
else
set linux_gfx_mode=text
fi
else
set linux_gfx_mode=text
fi
else
set linux_gfx_mode=keep
fi
else
set linux_gfx_mode=text
fi
export linux_gfx_mode
if [ "$linux_gfx_mode" != "text" ]; then load_video; fi
menuentry 'Ubuntu, with Linux 2.6.38-8-generic' --class ubuntu --class gnu-linux --class gnu --class os {
recordfail
set gfxpayload=$linux_gfx_mode
insmod part_msdos
insmod ext2
set root='(/dev/sda,msdos1)'
search --no-floppy --fs-uuid --set=root 03e2a8da-171f-49e9-b24d-434e66cd1140
linux /boot/vmlinuz-2.6.38-8-generic root=UUID=03e2a8da-171f-49e9-b24d-434e66cd1140 ro quiet splash vt.handoff=7
initrd /boot/initrd.img-2.6.38-8-generic
}
menuentry 'Ubuntu, with Linux 2.6.38-8-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os {
recordfail
set gfxpayload=$linux_gfx_mode
insmod part_msdos
insmod ext2
set root='(/dev/sda,msdos1)'
search --no-floppy --fs-uuid --set=root 03e2a8da-171f-49e9-b24d-434e66cd1140
echo 'Loading Linux 2.6.38-8-generic ...'
linux /boot/vmlinuz-2.6.38-8-generic root=UUID=03e2a8da-171f-49e9-b24d-434e66cd1140 ro single
echo 'Loading initial ramdisk ...'
initrd /boot/initrd.img-2.6.38-8-generic
}
submenu "Previous Linux versions" {
menuentry 'Ubuntu, with Linux 2.6.35-28-generic' --class ubuntu --class gnu-linux --class gnu --class os {
recordfail
set gfxpayload=$linux_gfx_mode
insmod part_msdos
insmod ext2
set root='(/dev/sda,msdos1)'
search --no-floppy --fs-uuid --set=root 03e2a8da-171f-49e9-b24d-434e66cd1140
linux /boot/vmlinuz-2.6.35-28-generic root=UUID=03e2a8da-171f-49e9-b24d-434e66cd1140 ro quiet splash vt.handoff=7
initrd /boot/initrd.img-2.6.35-28-generic
}
menuentry 'Ubuntu, with Linux 2.6.35-28-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os {
recordfail
set gfxpayload=$linux_gfx_mode
insmod part_msdos
insmod ext2
set root='(/dev/sda,msdos1)'
search --no-floppy --fs-uuid --set=root 03e2a8da-171f-49e9-b24d-434e66cd1140
echo 'Loading Linux 2.6.35-28-generic ...'
linux /boot/vmlinuz-2.6.35-28-generic root=UUID=03e2a8da-171f-49e9-b24d-434e66cd1140 ro single
echo 'Loading initial ramdisk ...'
initrd /boot/initrd.img-2.6.35-28-generic
}
menuentry 'Ubuntu, with Linux 2.6.32-21-generic' --class ubuntu --class gnu-linux --class gnu --class os {
recordfail
set gfxpayload=$linux_gfx_mode
insmod part_msdos
insmod ext2
set root='(/dev/sda,msdos1)'
search --no-floppy --fs-uuid --set=root 03e2a8da-171f-49e9-b24d-434e66cd1140
linux /boot/vmlinuz-2.6.32-21-generic root=UUID=03e2a8da-171f-49e9-b24d-434e66cd1140 ro quiet splash vt.handoff=7
initrd /boot/initrd.img-2.6.32-21-generic
}
menuentry 'Ubuntu, with Linux 2.6.32-21-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os {
recordfail
set gfxpayload=$linux_gfx_mode
insmod part_msdos
insmod ext2
set root='(/dev/sda,msdos1)'
search --no-floppy --fs-uuid --set=root 03e2a8da-171f-49e9-b24d-434e66cd1140
echo 'Loading Linux 2.6.32-21-generic ...'
linux /boot/vmlinuz-2.6.32-21-generic root=UUID=03e2a8da-171f-49e9-b24d-434e66cd1140 ro single
echo 'Loading initial ramdisk ...'
initrd /boot/initrd.img-2.6.32-21-generic
}
}
### END /etc/grub.d/10_linux ###
### BEGIN /etc/grub.d/20_linux_xen ###
### END /etc/grub.d/20_linux_xen ###
### BEGIN /etc/grub.d/20_memtest86+ ###
menuentry "Memory test (memtest86+)" {
insmod part_msdos
insmod ext2
set root='(/dev/sda,msdos1)'
search --no-floppy --fs-uuid --set=root 03e2a8da-171f-49e9-b24d-434e66cd1140
linux16 /boot/memtest86+.bin
}
menuentry "Memory test (memtest86+, serial console 115200)" {
insmod part_msdos
insmod ext2
set root='(/dev/sda,msdos1)'
search --no-floppy --fs-uuid --set=root 03e2a8da-171f-49e9-b24d-434e66cd1140
linux16 /boot/memtest86+.bin console=ttyS0,115200n8
}
### END /etc/grub.d/20_memtest86+ ###
### BEGIN /etc/grub.d/30_os-prober ###
if [ "x${timeout}" != "x-1" ]; then
if keystatus; then
if keystatus --shift; then
set timeout=-1
else
set timeout=0
fi
else
if sleep --interruptible 3 ; then
set timeout=0
fi
fi
fi
### END /etc/grub.d/30_os-prober ###
### BEGIN /etc/grub.d/40_custom ###
# This file provides an easy way to add custom menu entries. Simply type the
# menu entries you want to add after this comment. Be careful not to change
# the 'exec tail' line above.
### END /etc/grub.d/40_custom ###
### BEGIN /etc/grub.d/41_custom ###
if [ -f $prefix/custom.cfg ]; then
source $prefix/custom.cfg;
fi
### END /etc/grub.d/41_custom ###
--------------------------------------------------------------------------------
=============================== sda1/etc/fstab: ================================
--------------------------------------------------------------------------------
# /etc/fstab: static file system information.
#
# Use 'blkid -o value -s UUID' to print the universally unique identifier
# for a device; this may be used with UUID= as a more robust way to name
# devices that works even if disks are added and removed. See fstab(5).
#
#
proc /proc proc nodev,noexec,nosuid 0 0
# / was on /dev/sda1 during installation
# Commented out by Dropbox
# UUID=03e2a8da-171f-49e9-b24d-434e66cd1140 / ext4 errors=remount-ro 0 1
# swap was on /dev/sda5 during installation
UUID=3e89ba8e-7754-4ee4-aca1-e2a82bffb7a7 none swap sw 0 0
UUID=03e2a8da-171f-49e9-b24d-434e66cd1140 / ext4 errors=remount-ro,user_xattr 0 1
--------------------------------------------------------------------------------
=================== sda1: Location of files loaded by Grub: ====================
GiB - GB File Fragment(s)
0.065803528 = 0.070656000 boot/grub/core.img 1
21.263332367 = 22.831329280 boot/grub/grub.cfg 1
0.771381378 = 0.828264448 boot/initrd.img-2.6.31-wl 1
2.054199219 = 2.205679616 boot/initrd.img-2.6.32-21-generic 3
2.893260956 = 3.106615296 boot/initrd.img-2.6.35-28-generic 2
6.833232880 = 7.337127936 boot/initrd.img-2.6.38-8-generic 2
1.772453308 = 1.903157248 boot/vmlinuz-2.6.32-21-generic 2
2.068012238 = 2.220511232 boot/vmlinuz-2.6.35-28-generic 1
5.532531738 = 5.940510720 boot/vmlinuz-2.6.38-8-generic 1
6.833232880 = 7.337127936 initrd.img 2
2.893260956 = 3.106615296 initrd.img.old 2
5.532531738 = 5.940510720 vmlinuz 1
2.068012238 = 2.220511232 vmlinuz.old 1
============================= sda3/grub/grub.conf: =============================
--------------------------------------------------------------------------------
# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE: You have a /boot partition. This means that
# all kernel and initrd paths are relative to /boot/, eg.
# root (hd0,2)
# kernel /vmlinuz-version ro root=/dev/mapper/VolGroup-lv_root
# initrd /initrd-[generic-]version.img
#boot=/dev/sda3
default=0
timeout=0
splashimage=(hd0,2)/grub/splash.xpm.gz
hiddenmenu
title Fedora (2.6.35.6-45.fc14.i686)
root (hd0,2)
kernel /vmlinuz-2.6.35.6-45.fc14.i686 ro root=/dev/mapper/VolGroup-lv_root rd_LVM_LV=VolGroup/lv_root rd_LVM_LV=VolGroup/lv_swap rd_NO_LUKS rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rhgb quiet
initrd /initramfs-2.6.35.6-45.fc14.i686.img
--------------------------------------------------------------------------------
=================== sda3: Location of files loaded by Grub: ====================
GiB - GB File Fragment(s)
23.792903900 = 25.547436032 grub/grub.conf 1
23.792903900 = 25.547436032 grub/menu.lst 1
23.793020248 = 25.547560960 grub/stage2 1
23.817364693 = 25.573700608 initramfs-2.6.35.6-45.fc14.i686.img 2
23.787566185 = 25.541704704 initrd-plymouth.img 1
23.791228294 = 25.545636864 vmlinuz-2.6.35.6-45.fc14.i686 1
======================== Unknown MBRs/Boot Sectors/etc: ========================
Unknown BootLoader on sda2
00000000 81 71 62 ff a1 94 89 ff 4d 43 3a ff fa f2 ec ff |.qb.....MC:.....|
00000010 fb f6 f1 ff fc f8 f4 ff fc f8 f4 ff fc f8 f4 ff |................|
00000020 5d 56 50 ff a1 94 89 ff 81 70 62 ff 81 70 62 ff |]VP......pb..pb.|
00000030 81 70 62 ff 81 70 62 ff 81 70 62 ff a1 94 89 ff |.pb..pb..pb.....|
00000040 4d 43 3a ff fa f2 ec ff fb f6 f1 ff fc f8 f4 ff |MC:.............|
00000050 fc f8 f4 ff fc f8 f4 ff 5d 56 50 ff a1 94 89 ff |........]VP.....|
00000060 81 70 62 ff 81 70 62 ff 81 70 62 ff 81 70 62 ff |.pb..pb..pb..pb.|
00000070 81 70 62 ff a1 94 89 ff 4d 43 3a ff fa f2 ec ff |.pb.....MC:.....|
00000080 fb f6 f1 ff fc f8 f4 ff fc f8 f4 ff fc f8 f4 ff |................|
00000090 5d 56 50 ff a0 93 89 ff 80 6f 61 ff 80 6f 61 ff |]VP......oa..oa.|
000000a0 80 6f 61 ff 80 6f 61 ff 80 6f 61 ff a0 93 89 ff |.oa..oa..oa.....|
000000b0 4d 43 3a ff fa f2 ed ff fb f6 f2 ff fc f8 f5 ff |MC:.............|
000000c0 fc f8 f5 ff fc f8 f5 ff 5d 56 50 ff 9f 93 88 ff |........]VP.....|
000000d0 7f 6f 60 ff 7f 6f 60 ff 7f 6f 60 ff 7f 6f 60 ff |.o`..o`..o`..o`.|
000000e0 7f 6f 60 ff 9f 93 88 ff 4d 43 3a ff fa f2 ed ff |.o`.....MC:.....|
000000f0 fb f6 f2 ff fc f8 f5 ff fc f8 f5 ff fc f8 f5 ff |................|
00000100 5d 56 50 ff 9f 93 88 ff 7f 6f 60 ff 7f 6f 60 ff |]VP......o`..o`.|
00000110 7f 6f 60 ff 7f 6f 60 ff 7f 6f 60 ff 9f 93 88 ff |.o`..o`..o`.....|
00000120 4d 43 3a ff fa f2 ed ff fb f6 f2 ff fc f8 f5 ff |MC:.............|
00000130 fc f8 f5 ff fc f8 f5 ff 5d 56 50 ff 9e 92 88 ff |........]VP.....|
00000140 7e 6e 60 ff 7e 6e 60 ff 7e 6e 60 ff 7e 6e 60 ff |~n`.~n`.~n`.~n`.|
00000150 7e 6e 60 ff 9e 92 88 ff 4d 43 3a ff fa f2 ed ff |~n`.....MC:.....|
00000160 fb f6 f2 ff fc f8 f5 ff fc f8 f5 ff fc f8 f5 ff |................|
00000170 5d 56 50 ff 9e 92 88 ff 7d 6d 5f ff 7d 6d 5f ff |]VP.....}m_.}m_.|
00000180 7d 6d 5f ff 7d 6d 5f ff 7d 6d 5f ff 9e 92 88 ff |}m_.}m_.}m_.....|
00000190 4d 43 3a ff fa f2 ed ff fb f6 f2 ff fc f8 f5 ff |MC:.............|
000001a0 fc f8 f5 ff fc f8 f5 ff 5d 56 50 ff 9e 92 88 ff |........]VP.....|
000001b0 7d 6d 5f ff 7d 6d 5f ff 7d 6d 5f ff 7d 6d 00 fe |}m_.}m_.}m_.}m..|
000001c0 ff ff 82 fe ff ff 02 00 00 00 00 50 32 00 00 00 |...........P2...|
000001d0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
000001f0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 55 aa |..............U.|
00000200
=============================== StdErr Messages: ===============================
unlzma: Decoder error
According to this, Fedora 14 is visible on sda3. Does anybody know a way to add Fedora 14 to Ubuntu's grub.cfg so I can choose which OS to boot? Thanks in advance.
A:
The easiest way to boot Fedora would be to add an entry to grub.cfg that chain loads to Fedora's copy of Grub.
Create a file /etc/grub.d/15_fedora with the following contents:
#!/bin/sh
cat << \EOF
menuentry "Fedora" {
set root=(hd0,3)
chainloader +1
}
EOF
Make the file executable and then regenerate the grub configuration file:
sudo chmod a+x /etc/grub.d/15_fedora
sudo update-grub
This should ensure that the menu entry stays around when the Grub configuration is regenerated.
|
[
"stackoverflow",
"0051553989.txt"
] | Q:
Applying dropout to input layer in LSTM network (Keras)
Is it possible to apply dropout to the input layer of an LSTM network in Keras?
If this is my model:
model = Sequential()
model.add(LSTM(10, input_shape=(look_back, input_length), return_sequences=False))
model.add(Dense(1))
The goal is to achieve the effect of:
model = Sequential()
model.add(Dropout(0.5))
model.add(LSTM(10, input_shape=(look_back, input_length), return_sequences=False))
model.add(Dense(1))
A:
You can use the Keras Functional API, in which your model would be written as:
from keras.models import Model
from keras.layers import Input, Dropout, LSTM, Dense
inputs = Input(shape=(look_back, input_length))
x = Dropout(0.5)(inputs)
x = LSTM(10, return_sequences=False)(x)
define your output layer, for example:
predictions = Dense(10, activation='softmax')(x)
and then build the model:
model = Model(inputs=inputs, outputs=predictions)
|
[
"stackoverflow",
"0057372194.txt"
] | Q:
Change Hidden Field Value Based on Dropdown Choice
I need some basic javascript help. I have a form with a dropdown selection. Based on the choice in that dropdown selection, I want to change the value of a hidden field.
For example, if the user chooses "Google" on the dropdown, I want the hidden field to display "https://google.com" and if the user chooses "Bing" I want the hidden field to display "https://bing.com."
One caveat (not sure if this changes the script) is that I am setting unique values for the dropdown choices. For example, "Google" has a value of 5 and Bing has a value of 10.
Any help would be greatly appreciated!
I've tried the simple script below, but I am very inexperienced with JavaScript, so it is likely not correct.
<script>
var service = $('#service').val();
if(service == 'Google'){
$('#hidden_field').val('https://google.com/')
} else {
$('#hidden_field').val('https://bing.com')
}
</script>
In the example above, I'd like the service choice of Google to add the Google URL to the hidden field, and all other choices to display Bing.
A:
You need to wrap your JavaScript code in a function and bind it to an event handler:
<script>
function fill()
{
let service = $('#service').val();
if(service == 'Google'){
$('#hidden_field').val('https://google.com/')
} else {
$('#hidden_field').val('https://bing.com')
}
}
/* And the binding */
$(document).ready(function()
{
$('#service').on("change", fill);
});
</script>
|
[
"math.stackexchange",
"0001444503.txt"
] | Q:
Is this a mistake in this integral calculation
Please can someone tell me if the first step in this solution (notes available here) is a mistake?
I believe that if $\gamma: [a,b]\to \mathbb C$ is a path from $\gamma(a)$ to $\gamma (b)$ then $-\gamma$ is a path from $\gamma (b)$ to $\gamma (a)$ and hence the first equality would be:
$$ \int_{-\gamma} f = \int_b^a f(-\gamma)(-\gamma')$$
Here is the original solution:
A:
$-\gamma$ is defined so that when $t = a$, $(-\gamma)(a) = \gamma(b)$, and when $t = b$, $(-\gamma)(b) = \gamma(a)$. So in the first step the bounds are still from $a$ to $b$. In your reasoning you jump directly to the third line.
For the step from the first line to the second line, notice that the derivative of $(-\gamma)(t)$ is:
$$ -\gamma'(t) = \frac{\mathrm{d}}{\mathrm{d}t}(\gamma(a+b-t)) = \frac{\mathrm{d}\gamma}{\mathrm{d}t}(a+b-t) \cdot \frac{\mathrm{d}(a+b-t)}{\mathrm{d}t} = - \frac{\mathrm{d}\gamma}{\mathrm{d}t}(a+b-t)$$
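Carrying the substitution $u = a+b-t$ through completes the computation (a routine step, consistent with the formulas above):
$$ \int_{-\gamma} f = \int_a^b f(\gamma(a+b-t))\,\bigl(-\gamma'(a+b-t)\bigr)\,\mathrm{d}t = \int_b^a f(\gamma(u))\,\gamma'(u)\,\mathrm{d}u = -\int_\gamma f $$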
|
[
"stackoverflow",
"0012937698.txt"
] | Q:
How to toggle Edit and Save Cancel in a jqGrid?
I asked this question at jquery forum cf: my question, but was asked to come here and look for Oleg.
I've done a lot of research online and tested a bunch of code either mine or from the web, but so far have had no luck. I keep on reading that I need to use the oneditfunc function. As you see in my question at the jquery forum, I added those action buttons in gridComplete. Where do I declare and define the oneditfunc function?
The example at http://www.ok-soft-gmbh.com/jqGrid/ActionButtons.htm does look like what I am looking for, but I can't use the pre-defined 'actions' formatter, I have to use buttons which bear the texts Edit, Save and Cancel.
There is an example at http://www.trirand.com/blog/jqgrid/jqgrid.html# under Row Editing (new) > Custom Edit, which I followed to create mine you've seen in my original question at jquery forum. Unfortunately, those buttons don't toggle.
It just doesn't make sense to display the S(ave) and C(ancel) buttons when the rows are not even in edit mode. So I do want to fix this by toggling them. Any ideas to share? I am sure this is going to help a lot of people as no working examples can be found online. Thank you!
A:
For those who are looking to solve this problem: I have solved it, and I have posted my solution at http://forum.jquery.com/topic/jqgrid-inline-editing-buttons .
|
[
"stackoverflow",
"0032402115.txt"
] | Q:
Designing a webpage for both purposes of printing and displaying
I have a project to create a report generator. The aim is to have a web version (displayed on browser) in 16:9 ratio and a print version that would print nicely.
So I have a few questions.
First, I have read a lot of answers about this and no two agree. Is there a good width/height for printing, like:
.page { width: 21cm; height: 29.7cm; }
<div class='page'></div>
If not, how do people do it?
Another question :
I wanted to use enquire.js to modify my DOM on print (because I only show one slide at a time and I need all of them to print). The problem is that it shows only one page. I think it's a timing issue (enquire does not complete its work before the browser renders the print page, so the DOM is only partly modified; that's the only reason I could come up with).
enquire.register('print', {
match: function() {
slides.each(function(index, element) {
$("#printContainer").append(element)
});
$('.slider').css('display', 'none')
}
});
A:
You can use two different css files (print/screen):
<link rel="stylesheet" media="screen" type="text/css" href="/screen-styles.min.css">
<link rel="stylesheet" media="print" type="text/css" href="/print-styles.min.css">
The first css sheet will affect only the layout when you are viewing the site from the browser. Now, when you press the print button (or ctrl+p in most of the browsers), the second sheet will affect the layout and the first one will be ignored.
|
[
"korean.stackexchange",
"0000003099.txt"
] | Q:
What's the etymology of the expression '맙소사 !'
Apparently '맙소사!' means something like 'oh my God!'
Does it refer to 'God' literally? I thought the word for God was '신'.
What's the etymology and usage of this expression?
A:
In short, the link from @droooze has a good bit of very interesting conjecture with a few references, and certainly seems accurate from my vantage point.
Highlighted there (highlighting mine) is the following summation:
‘맙소사’ 는 ‘신이여!그렇게 하지 마십시오!’란 뜻이다. 그러니 ‘오 마이 갓’ 보다는 더 구체적인 의미를 담고 있는 것이다.
which I translate to:
맙소사 means "God! Please don't do that!" And as such, it carries a more specific meaning than 오 마이 갓.
|
[
"stackoverflow",
"0037796040.txt"
] | Q:
System.UStrClr Access Violation
Porting an old desktop app to the modern age using RAD Studio 10.1 Berlin. App was last built in C++ Builder 6 (many, many moons ago).
Managed to sort out all the component and external library dependencies, but it appears that there are some lingering issues with the Unicode port. The app used to rely heavily on the built-in String type, which now corresponds to AnsiString.
The source code builds, but the binary throws an Access Violation somewhere before any application code executes. The error stack trace:
rtl240.@System@@UstrClr$qqrpv + 0x12
largest_pos
__linkproc__ Attributebitmaps::Initialize 0x18
__init_exit_proc
__wstartup
The largest_pos function does some numerical manipulation - no String dependencies of any kind.
Attributebitmaps is a static class, with no member called Initialize. In Delphi you used to be able to declare an Initialize and Finalize call at the unit level, but that construct is not used in C++ Builder.
Any ideas around why an error would occur in System.UStrClr? Where would you go digging to get more insight into this?
A:
Related to the static initializer fiascos referred to in numerous other posts on SO, the culprit in this case was the following pattern:
.h file:
class SomeClass {
static String SOME_STATIC_STRING;
};
String SomeClass::SOME_STATIC_STRING("foo");
If you do elect to use an initializer like this, the definition should be moved to the .cpp file; left in the header, a separate definition (and a separate static-initialization-time constructor call) is compiled into every translation unit that includes it.
|
[
"math.stackexchange",
"0003596384.txt"
] | Q:
A rollercoaster contains four cars, each with six seats. Four families, each containing six individuals wish to ride ...
A rollercoaster contains four cars, each with six seats. Four families, each containing six individuals wish to ride the rollercoaster together. In how many ways can the rollercoaster be “loaded” with the individuals so that two individuals ride in the same car if and only if they belong to the same family?
My work: I am thinking about 4*3*2*1 = 24 ways, but I don't know if that's correct.
The question is really saying that each family needs to stay together. There are four cars on the rollercoaster, so the first family can choose from four cars, the second from three, the third from two, and the fourth from one.
A:
As you already pointed out, we assign each family to one of the four cars in $4!$ ways. Now, for each car we can arrange the six family members in the six seats in $6!$ ways. Therefore the rollercoaster can be “loaded” in
$$4!\cdot (6!)^4.$$
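As a quick sanity check, the count can be evaluated directly; this assumes nothing beyond the formula above:

```python
from math import factorial

# 4! ways to assign the four families to the four cars,
# then 6! seatings inside each of the four cars
ways = factorial(4) * factorial(6) ** 4
print(ways)  # 6449725440000
```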
|
[
"stackoverflow",
"0000625557.txt"
] | Q:
iPhone simulator running invisible
The iPhone simulator is running on my Mac, but the simulator window is not showing. Two days back I installed the Mac OS X 10.5.6 update. Xcode is launching the application in the simulator, and it's running, as I can see the output on the gdb console window, but the simulator is not shown.
A:
Try to remove ~/Library/Preferences/com.apple.iphonesimulator.plist
A:
Had the same issue. The simulator was running somewhere way off screen. I could see it using Exposé, but couldn't move it.
Rather than using "Detect Displays", which didn't work for me, I changed the screen resolution and that popped it back into place.
|
[
"stackoverflow",
"0051199708.txt"
] | Q:
Fingerprint U Are U 4500 PowerBuilder
I have a C# application for the U are U 4500 fingerprint reader, and now I need to develop it in PowerBuilder. My question is whether someone has an example of such an app in PowerBuilder.
Kind regards .
A:
Without knowing anything about your fingerprint application I would recommend a COM visible dll (using C#) with exposed methods PowerBuilder can call to provide the functionality you require. There are many examples of using COM with PowerBuilder online. It is very unlikely someone will have 'any example of this app in powerbuilder' they are willing to share.
|
[
"stackoverflow",
"0007715514.txt"
] | Q:
Does Facebook return Album location information
When you fetch an album from Facebook, does it also return the location information of the album, so you will know where the album was taken?
A:
Yes, it does. If the album was tagged with the location where it was taken, it can be retrieved from the location field in the JSON response.
For example:
{
"id": "...",
"from": {
"name": "...",
"id": "..."
},
"name": "Brussels",
"location": "Brussels",
....
}
|
[
"workplace.stackexchange",
"0000132345.txt"
] | Q:
Self promotion and its frequency
In a field like IT and software development, tips/help is always given and taken. Helping peers and juniors is expected from an employee in a decent senior position. I am one of such seniors who is guiding both my peers and juniors.
My questions are:
Is it important for my seniors to know how I help my team? Is this self promotion valid and necessary?
If yes, how frequently should I mention this to my seniors, especially my help to the team?
P.S. I am good at small talk and I can do this sandwiched between other talks.
A:
If you help people, and help them well, then your actions and attitude will promote themselves.
I hate self promoting, but I've been successful gaining a reputation as a competent, helpful, senior team member in a number of companies by simply helping people and making friends with them.
This advice obviously won't be applicable everywhere (I've never worked in a company that used stack ranking for example), but I've worked in start ups and enterprises and I've found that people respond really well to someone who ignores the office politics and existing team rivalries and just helps out when something needs doing.
In my current (enterprise) environment I've used that attitude to get an openly hostile lead of another team on to our side (and now consider him a friend outside of work as well), and two of my managers (one replaced the other) have both commented on it. I've also had people from other teams who I don't know and have never actually talked to recommend me as a good person to ask advice from, so I can only assume that it is noticed and spread further than my immediate team or the teams I've directly helped.
It's worth pointing out that you'll need to put your own team first - no one's going to like or promote a teammate who neglects their work in favour of other teams, but your team will also notice and appreciate it if you're a person who brings support from others when they need it.
|
[
"stackoverflow",
"0041641952.txt"
] | Q:
Search Results and Aggregate Results in a Single View ASP.NET Core
This is my first ASP.NET Core project. I have a single page application where I have a search box and a button in the Index.cshtml page. If the search box text is empty then nothing is displayed else the records corresponding to pId is retrieved and displayed in the Index.cshtml page.So far this is working great.
I am looking to extend this view by aggregating the weight for a given day and displaying it at the top. So once the user enters a PId and submits the page, I would like to display
pId total_weight create_dt
Below it, would be the current view which I already have
pId bIdc rule_id weight create_dt
I am not sure where I should be aggregating the scores? How do I update my current view to display the aggregated values?
Controller
public IActionResult Index(string pId)
{
var scores = from ts in _context.Scores
select ts;
if (!string.IsNullOrEmpty(pId))
{
scores = scores.Where(p => p.p_id.Equals(pId)).OrderByDescending(p => p.create_dt);
return View(scores.ToList());
}
else
return View();
}
Index.cshtml
<div id="p-form">
<form asp-controller="Score" asp-action="Index" method="get">
<p>
<b> P Id:</b> <input type="text" name="pId">
<input type="submit" value="Submit" />
</p>
</form>
</div>
@if (Model != null)
{
<div id="p-scores">
<table class="table">
<thead>
<tr>
<th>
@Html.DisplayNameFor(model => model.p_id)
</th>
<th>
@Html.DisplayNameFor(model => model.b_idc)
</th>
<th>
@Html.DisplayNameFor(model => model.rule_id)
</th>
<th>
@Html.DisplayNameFor(model => model.weight)
</th>
<th>
@Html.DisplayNameFor(model => model.create_dt)
</th>
</tr>
</thead>
<tbody>
@foreach (var item in Model)
{
<tr>
<td>
@Html.DisplayFor(model => item.p_id)
</td>
<td>
@Html.DisplayFor(model => item.b_idc)
</td>
<td>
@Html.DisplayFor(model => item.rule_id)
</td>
<td>
@Html.DisplayFor(model => item.weight)
</td>
<td>
@Html.DisplayFor(model => item.create_dt)
</td>
</tr>
}
</tbody>
</table>
</div>
}
A:
You can use the GroupBy method to group the data.
I would create a new class to represent the grouped data
public class WeightItemTotal
{
public string PId { set; get; }
public decimal TotalWeight { set; get; }
public DateTime CreatedDate { set; get; }
}
Now when you get the data, first group it by p_id; then, for each p_id group, group the results by date and use the Sum method to get the total weight. (The property names below match the Scores model from the question.)
var resultGrouped = new List<WeightItemTotal>();
var pgrouped = _context.Scores.Where(c => c.p_id == pId)
.GroupBy(a => a.p_id);
foreach (var p in pgrouped)
{
var grouped = p.GroupBy(f => f.create_dt.Date, items => items, (key, val)
=> new WeightItemTotal
{
PId = val.FirstOrDefault().p_id,
CreatedDate = key,
TotalWeight = val.Sum(g => g.weight)
}).ToList();
resultGrouped.AddRange(grouped);
}
return View(resultGrouped);
Now since we are returning the grouped result to the view, make sure it is strongly typed to that type
@model IEnumerable<WeightItemTotal>
<table>
@foreach(var p in Model)
{
<tr>
<td>@p.PId</td>
<td>@p.TotalWeight</td>
<td>@p.CreatedDate</td>
</tr>
}
</table>
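The aggregation step itself is language-agnostic; here is a minimal sketch of the same sum-per-(p_id, day) grouping in Python, using made-up illustrative rows (field names mirror the model above):

```python
from collections import defaultdict

# Illustrative rows shaped like the Scores model: (p_id, weight, create_dt day)
rows = [
    ("7", 1.5, "2017-01-10"),
    ("7", 2.0, "2017-01-10"),
    ("7", 0.5, "2017-01-11"),
]

totals = defaultdict(float)  # (p_id, day) -> summed weight
for p_id, weight, day in rows:
    totals[(p_id, day)] += weight

print(sorted(totals.items()))
```

Each (p_id, day) pair ends up with the summed weight, which is exactly one row of the summary table the GroupBy/Sum pipeline produces.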
|
[
"superuser",
"0000258208.txt"
] | Q:
Is it possible to setup and switch multiple wired IEEE802.1X authentication profiles in windows 7?
In Windows 7, I use two IEEE 802.1X wired network authentication settings which I have to change by hand, since every time I switch the configuration, the old one is lost. (In Linux this is trivially handled by wpa_supplicant.)
A:
I finally found a script-based solution using the netsh utility in Windows 7.
First I used netsh lan export profile folder=<folder> for both 802.1X profiles (I manually set up the first, exported its settings, then repeated the same for the second), which gave me one XML file for each profile.
Then I wrote a simple script for each of them (they have to be run as administrator):
chcp 1250
netsh lan add profile filename="<folder>\profile1.xml" interface="Local Area Network"
PAUSE
provided that the script file has the Windows cp1250 encoding and the network interface has name Local Area Network.
The interface name can be determined by running:
netsh lan show interfaces
|