Q:
Where can I find a good XMPP (Jabber) tutorial?
Where can I find a good XMPP (Jabber) tutorial with detailed information on the XML that's sent to/from a Jabber client and server. I've looked at the xmpp.org website, but what they show there is confusing and doesn't help me learn.
I want to write an XMPP client in C# that uses a TcpClient to connect to the server and send/receive XML data.
A:
I second "XMPP: The Definitive Guide" as a way to really understand what is happening behind-the-scenes. It's very accessible and does go into enough depth that you can figure things out for yourself afterwards.
I would recommend you not go with the "Professional XMPP Programming" book though. I purchased both of these together and I was not able to run even a single example app in the latter book, as the BOSH stuff he is using would just not work (there is a problem with the newer browsers and his implementation of running cross-site AJAX). There are complaints on the forums for that book, but they are largely unanswered.
After going through all the servers and libraries, I can recommend ejabberd as it seems to be the most stable and easiest to set up. For libraries, I found MatriX to be the best (and the only one that I could actually do any programming with). I am trying to use .NET though, so YMMV. MatriX is the newer version of the agsXMPP library mentioned in another answer.
A:
I can highly recommend XMPP: The Definitive Guide from O'Reilly. It goes into great detail about how stanzas are constructed and what the various major protocols require. It does not however have any code in it, aside from the final chapter.
I also recommend using one of the already available C# libraries for doing your XMPP programming instead of writing your own. Dealing with TLS, stream setup, and asynchronous XML parsing can be a hard way to get started. I can recommend Jabber-net for this.
If you want a tutorial that is more code focused, I wrote a book called Professional XMPP Programming that goes through a number of example applications using JavaScript as the implementation language. The main concepts all apply equally well to any XMPP development.
A:
Not a tutorial, but a great way to start is with the Agsxmpp library. http://www.ag-software.de/agsxmpp-sdk.html
That will help you gain familiarity with the flow of messages.
| {
"pile_set_name": "StackExchange"
} |
Q:
Django: How do I dynamically filter a dropdown box?
Say for example I have three models: Content, Chapter and Page. Within the Content form there will be two dropdown boxes, one for chapters and the other for pages. If I select a chapter from the dropdown box, how do I then filter the page dropdown box to show only the pages within that chapter?
models.py
class Page(models.Model):
    # Some details about the page

class Chapter(models.Model):
    # Some detail about the chapter

class Content(models.Model):
    chapter = models.ForeignKey(Chapter)
    page = models.ForeignKey(Page)
views.py
def create_contents(request):
    if request.POST:
        form = ContentForm(request.POST, request.FILES)
        if form.is_valid():
            form.save()
            return HttpResponseRedirect('/books/all/')
    else:
        form = ContentForm()
    args = {}
    args.update(csrf(request))
    args['form'] = form
    return render_to_response('content/content.html', args)
forms.py
class ContentForm(forms.ModelForm):
    class Meta:
        model = Content
A:
I suggest two options:
using django-autocomplete-light
using jQuery together with a small AJAX view, like the sketch below
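For the second option, the server side can be a tiny AJAX view like the following sketch. This is only an illustration and makes assumptions that are not in your code: that Page has a ForeignKey to Chapter and a title field, and that a jQuery handler listens for change events on the chapter <select>, calls this view, and rebuilds the page <select> from the returned JSON.
import json

from django.http import HttpResponse

from .models import Page  # assumes Page.chapter = models.ForeignKey(Chapter)

def pages_for_chapter(request):
    # Return the pages of the selected chapter as JSON for the jQuery handler.
    chapter_id = request.GET.get('chapter')
    pages = Page.objects.filter(chapter_id=chapter_id).values('id', 'title')
    return HttpResponse(json.dumps(list(pages)), content_type='application/json')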
| {
"pile_set_name": "StackExchange"
} |
Q:
If $f''\ge 0$, prove that $f(x+f'(x)) \ge f(x)$
Question:
$f$ and $f'$ are differentiable, and $f''\ge 0$. Then, prove that $\forall x \in \mathbb R$, $f(x+f'(x))\ge f(x)$.
Since $f''\ge 0$, I'd like to apply Jensen's theorem, which is:
$$f(tx_1 + (1-t)x_2) \le tf(x_1) + (1-t)f(x_2) $$
However, it was hard to determine the values of $x_1$ and $x_2$.
Another way that came to my mind was to define a new function
$$g(x)=f(x+f'(x))-f(x)$$
and prove that $g(x)\ge 0$ by using $g'(x)$. Unfortunately, when we calculate the derivative of $g(x)$ as follows:
$$ g'(x)= \bigl(1+f''(x)\bigr)f'(x+f'(x))-f'(x)$$
I could not get anywhere from it.
Could you give some key points to this proof?
Thanks for your advice.
A:
$\int_x^{x+f'(x)} f'(t) dt \geq \int_x^{x+f'(x)} f'(x) dt=(f'(x))^{2}$ if $f'(x) \geq 0$. [I have used the fact that $f'$ is increasing]. Hence $f(x+f'(x))-f(x) \geq (f'(x))^{2}\geq 0$. A similar argument works when $f'(x) <0$.
A:
Hint: If $f$ is twice continuously differentiable then we have $f(x + f'(x)) = f(x) + {f'(x)}^2 + \frac{1}{2} {f'(x)}^2 f''(c) ,$ for some $c$.
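To spell out how the hint finishes the argument (assuming, as in the hint, that $f$ is twice continuously differentiable so that Taylor's theorem with a Lagrange remainder applies), expand around $x$ with the step $h=f'(x)$:
$$f\bigl(x+f'(x)\bigr)-f(x)={f'(x)}^2+\frac{1}{2}\,{f'(x)}^2 f''(c)$$
for some $c$ between $x$ and $x+f'(x)$. Both terms on the right-hand side are non-negative because $f''\ge 0$, hence $f(x+f'(x))\ge f(x)$.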
| {
"pile_set_name": "StackExchange"
} |
lundi 10 novembre 2014
The Darcy-Weisbach equation estimates the pressure drop inside a tube from its geometry and flow conditions. For a circular tube in the laminar regime the pressure drop is equal to:
$$
\Delta{P} = f_D\frac{L}{d}\rho\frac{V^2}{2}
$$
where
\(f_D\) is the Darcy friction factor (equal to \(64/Re\) in the laminar regime)
\(L\) is the length of the tube
\(d\) is the diameter of the tube
\(\rho\) is the fluid density
\(V\) is the average fluid flow velocity
In the post a person was complaining about a large difference between the pressure drop estimated from the Darcy-Weisbach equation and the simulation results. The simulation was rather simple and I had a tube mesh nearby, so I executed the 3D case and got a pressure drop value 12% off the estimate, which I attributed to the overall mesh quality. Because improving the mesh would eventually lead to a parallel simulation, I decided to run 2D simulations to check whether I could get the exact value of the pressure drop.
According to Darcy-Weisbach, the pressure drop for these cases should be \(dp = 0.4~Pa\) and \(dp = 2~Pa\) respectively.
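A quick way to reproduce such an estimate is a few lines of Python. The sketch below is only an illustration: the geometry and flow values in it are placeholders, not the parameters of the cases discussed here.
def darcy_weisbach_laminar(L, d, rho, V, nu):
    """Pressure drop [Pa] for fully developed laminar flow in a circular tube."""
    Re = V * d / nu       # Reynolds number
    f_D = 64.0 / Re       # Darcy friction factor in the laminar regime
    return f_D * (L / d) * rho * V ** 2 / 2.0

# Placeholder values, for illustration only (they give Re = 100):
print(darcy_weisbach_laminar(L=1.0, d=0.01, rho=1000.0, V=0.01, nu=1e-6))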
While in 3D there are certain inconveniences during mesh construction, in axisymmetric 2D the domain is just a rectangle (though one needs to obey the rules of wedge boundary construction). The cases can be downloaded from GitHub. They are straightforward simpleFoam cases with the following SIMPLE dictionary:
Well, the k and epsilon residuals are not necessary for low Reynolds numbers. The figure below shows the overall solution process. Around iteration 1300 the convergence criterion was satisfied and the simulation stopped.
So there were two stages of the solution: an initial rapid drop in pressure and residuals, and a further slow convergence to the pressure drop estimated with the D-W equation. As the value of the pressure drop seems to be flat after 600 s, below are two zoomed versions of the plot: the first just changes the range on the y-axis, while the second also changes the range on the x-axis:
On the zoomed plots one can see that the pressure in fact goes slightly below the estimated value (I guess due to the way I estimated the pressure drop in the simulation). One can also choose a desired value of the convergence criterion residuals and estimate the error. It would be quite interesting to check this in transient simulations, as people usually select a fixed number of outer iterations instead of residual control as the convergence criterion.
The same figures for the case Re = 100 are shown below:
The pressure drop evolution looks almost like that in the previous case, though it took longer to converge (due to the higher Reynolds number?).
Zoomed versions of the plot again show that the pressure drop went below the estimated value, and that the apparent flattening of the pressure is not actually flat.
dimanche 20 juillet 2014
As I've got tired of struggling with Blogger (funny layout, no ability to mark up posts in Markdown, rather inconvenient commenting), I've decided to move the patches repository to GitHub. So all questions and bugs can be filed there. Installation guides can be found in the wiki.
Since last post I've added the following things:
Added patches for 2.1.x versions.
Finally I've got a fresh OS X 10.9 installation and found out there's no gdb, so I've added lldb support to addr2line_mac.py. I also cleaned the code a bit so it passes pylint without complaints.
Changed paraview from an alias to a function, so now paraFoam doesn't complain about an unknown command.
mardi 15 avril 2014
The previous patches I've posted in this blog (#1, #2, #3) have certain mistakes, mainly because I forgot about the DYLD_LIBRARY_PATH environment variable and did not test parallel execution of the solvers. Recently I decided it was time to fix that. Also, the tiny amount of feedback I had showed that there are two common errors during compilation: people create disk images with a case-insensitive file system, and they try to apply the patches in arbitrary folders. The second problem is hard to solve with a blog post, while the first one can be solved if we change the process of disk image creation: go from the error-prone GUI way to just one command executed in a terminal window. And here goes the guide.
Build process was tested on OS X version 10.9.2 with clang --version: Apple LLVM version 5.1 (clang-503.0.38) (based on LLVM 3.4svn).
Before we start
I've made certain assumptions about the software you have on your Mac:
Homebrew is used as a package manager to install open source software. If not, you can install it following instructions on the site.
All software from ThirdParty source pack will be installed using Homebrew.
All commands are executed in Terminal application (usually it is called Terminal.app).
$ at the beginning of a line is a shell prompt and you don't need to copy it if you're copy/pasting commands from the post. In general, commands should be executed in the sequence shown in the post.
You've downloaded Paraview application from paraview.org and installed it in /Applications.
The last command will also install cmake, gmp, and mpfr as dependencies.
Downloading necessary pieces
At this point it is time to decide what version of OpenFOAM you'd like to build. As 2.3.0 introduced certain backward-incompatible changes, I've made patches for both 2.2.2 and 2.3.0. Further in the guide I assume you've downloaded the source pack and the patch into the ~/Downloads/OpenFOAM folder.
Building OpenFOAM
!!! Because certain file names in this part depend on the version of OpenFOAM you're building, I use N and M variables in the file names. For OpenFOAM version 2.2.2: N=2, M=2; for OpenFOAM version 2.3.0: N=3, M=0.
It is also possible that you already have a disk image with the same name in your home folder. Correct the names accordingly.
To build OpenFOAM you first need to create a disk image with a case-sensitive file system. You can either follow the guide from the OpenFOAM wiki, or just issue these commands in your terminal:
The last command will create a disk image named OpenFOAM.sparsebundle with an estimated size of 4.4 GB (as it is a sparse image, the actual file size after creation will differ) and create an HFS+ file system with the volume name OpenFOAM in the image. Finally, -fsargs -s instructs the newfs_hfs utility to create a case-sensitive file system.
After the image is created you can mount it; I will assume it is mounted in $HOME/OpenFOAM.
At this point you may need to edit the etc/bashrc file. But since the compiler settings are fixed (it is Clang), and the OpenMPI and other library settings are fixed (they are installed with Homebrew), in fact the only thing worth editing is the path settings, and only if you're installing OpenFOAM elsewhere (i.e. not in $HOME/OpenFOAM). Now you're more or less ready to compile.
$ source etc/bashrc
$ ./Allwmake > log.Allwmake 2>&1
The last command executes the Allwmake script and redirects its output to the log.Allwmake file; if anything goes wrong, it'll be easy to locate the source of the error with this log file.
After the Allwmake execution is finished, you can test the build with:
And finally, the third option is to transform the second version into a set of functions. I.e., instead of automatic attachment of the disk image upon terminal start, you'll need to execute a function which will mount the necessary disk image and source bashrc:
mardi 18 mars 2014
Paraview is good. But the pictures produced by the software are in general raster (maybe I can't use it right, but even when I build with gl2ps I can't find a way to export a scene to PS or PDF). So usually I used the sample utility to write a slice of interest to a RAW file, then loaded it with numpy and used tricontourf to produce a color map of a value. Then I can save it as a PDF (or EPS, or PS), so the figure can be scaled without quality loss.
Everything is good (though there are certain problems with matplotlib's triangulation sometimes), but in this case I had a hole in the flow field. So if I have a file with (x, y, value) tuples, the hole will also be triangulated, with a value equal to zero, and the final tricontourf figure will be quite far from the original flow field. Also, as OpenFOAM's sample utility will create a triangulation of the mesh anyway, why do it again? So I decided to try plotting a VTK file with a triangular grid, and it was more or less easy, though there's almost no documentation on the python-vtk module. Well, I guess this module is just a wrapper around C++, so one needs to write C++-in-Python instead of Python.
Surely this script can be improved to handle vectors and scalars automatically and to handle polymesh files in a more general way (currently I assume all polygons in the mesh are triangles).
Here is a script for loading a vector field and plotting its X component:
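Something along these lines (the file name slice.vtk and the field name U are assumptions here, and all polygons are assumed to be triangles, as noted above):
import matplotlib.pyplot as plt
import matplotlib.tri as mtri
import vtk
from vtk.util.numpy_support import vtk_to_numpy

# Read the legacy-format VTK file written by the sample utility.
reader = vtk.vtkPolyDataReader()
reader.SetFileName('slice.vtk')
reader.Update()
poly = reader.GetOutput()

# Point coordinates and triangle connectivity. vtkCellArray stores cells as
# [n, id0, id1, id2, n, ...], so for pure triangles every block has 4 entries.
points = vtk_to_numpy(poly.GetPoints().GetData())
triangles = vtk_to_numpy(poly.GetPolys().GetData()).reshape(-1, 4)[:, 1:]

# The vector field stored at the points; take its X component.
ux = vtk_to_numpy(poly.GetPointData().GetArray('U'))[:, 0]

tri = mtri.Triangulation(points[:, 0], points[:, 1], triangles=triangles)
plt.tricontourf(tri, ux, 32)
plt.colorbar(label='Ux')
plt.gca().set_aspect('equal')
plt.savefig('Ux.pdf')  # vector output, so it scales without quality loss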
This post appeared due to a question on the cfd-online.com OpenFOAM forum. It started as yet another post on the Kármán vortex street, but then (around message #60) the original poster revealed an article he was trying to compare with, so I decided to check the influence of different discretisation schemes and convergence criteria on the final results.
Geometry and flow conditions
The first version of the geometry and boundary conditions can be found in post #11, though they slightly differ from the geometry provided in the article (and, as a result, from the geometry used in the simulations).
So we've got a cylinder with diameter D placed in a channel with a length of 16D and a width of 8D. At the inlet we have a velocity of 1 m/s, at the outlet zero pressure and a zero velocity gradient. Initially the velocity inside the domain is set to 1 m/s. With D = 1 m and a fluid kinematic viscosity of 0.01 m2/s, the Reynolds number is equal to 100.
First attempt
Well, the first variant of the simulation was just a simulation for its own sake (even Re was 200). Just to produce neat pictures. And these videos:
Flow develops, vortices go.
Second attempt
After reading the original article I decided to check different meshes, discretisation schemes, and convergence criteria to learn whether there's any difference. In the article the authors were developing a new method in an FEM framework, so their original mesh was triangular with increased density near the centerline of the channel. Now I think there's a need for another set of simulations: with and without increased mesh density near the centerline of the channel.
Meshes
I've fallen in love with Gmsh for generating meshes, so these two meshes were produced with it. As Gmsh now has a transfinite algorithm, it is more or less like blockMesh with a GUI, with variables and functions in the mesh file and a more convenient way of describing grading. Much like in blockMesh, one needs to define the vertices and can then use the GUI to describe lines, surfaces, and volumes, and then switch back to a text editor for the definition of transfinite lines, progressions (in the sense of Gmsh), etc. I will omit the iterations of the mesh creation process and just provide the GEO files with screenshots of the meshes:
The only difference between the coarse and fine meshes is a doubled density in the case of the "fine" mesh, so the number of cells in the fine mesh is 4 times higher than in the coarse mesh.
fvSchemes
A typical fvSchemes file is provided below. The only line that changes from simulation to simulation is div(phi,U) Gauss vanLeerV;. I've chosen to compare three different second-order discretisation schemes with limiters: van Leer, QUICK, and Gamma.
Keeping everything else the same, I changed the tolerance in the residualControl dictionary to see if it influences the results. I checked the values 1e-2, 1e-4, and 1e-6. This value defines the moment when the PIMPLE loop exits the outer corrector cycle. Usually in the beginning of a simulation the number of outer correctors is around 4 (8 for the case of residual 1e-6); over time this number falls to 2 (or 4 for the case of 1e-6). I don't know why all the tutorial cases use a fixed number of outer corrector iterations; sometimes it can lead to completely incorrect results. Maybe OpenFOAM decided that residualControl is an "advanced" technique.
There are also lots of scary posts on relaxation, so I decided to also check the influence of relaxation factors on the results of a simulation. So instead of the standard
relaxationFactors
{
fields
{
}
equations
{
"U.*" 1.0;
"p.*" 1.0;
}
}
I used
relaxationFactors
{
fields
{
}
equations
{
"U.*" 0.7;
"p.*" 0.3;
}
}
Results
Color contours usually provide only a schematic view of the flow, so further on (surely after these pictures) I will be comparing oscillations of the force coefficients. Sometimes it is possible to plot isolines of pressure and velocity on the same picture (when the phases of the oscillating flow do not differ much). But for now, the color contours (time is 150 s):
Pressure: van Leer, QUICK, Gamma
Though the values seem to be more or less the same, the vortices are shifted in time. The same is true for the velocity components:
Ux: van Leer, QUICK, Gamma
Uy: van Leer, QUICK, Gamma
Discretisation schemes comparison
I fixed the final residual in fvSolution at 1e-6 and just changed the discretisation scheme. On the graphs the X-axis is time and the Y-axis is the value. All simulations were done on the 'fine' mesh.
There is a phase shift between the graphs; I've shifted them to compare the oscillation amplitudes. There is a difference, but it's not that big.
And as there is a phase shift between the different schemes, I will omit the comparison of the pressure and velocity contours. While there is a difference between the Cdrag oscillation amplitudes, the Clift oscillations go rather synchronously:
Final residuals comparison
After that I decided to check the sensitivity of a discretisation scheme to the final residual. I fixed the scheme and ran simulations for different final residuals: 1e-2, 1e-4, and 1e-6.
While van Leer and Gamma showed almost no dependence (though there is a slight shift when going from 1e-2 to 1e-4 for Gamma):
for QUICK the switch from 1e-2 to 1e-4 leads to a quite large time shift between the oscillations:
But even if the time shift in the oscillations does not seem that big, the positions of the vortices strongly depend on this shift. Below are overlays of Ux isolines for the van Leer and Gamma schemes at 150 s:
While for the van Leer scheme the contours are almost the same, for Gamma the results with residual 1e-2 are noticeably shifted.
Mesh density comparison
I also compared results obtained on meshes with different densities. The number of cells in the meshes differs by a factor of 4, as I halved the density in the X and Y directions.
Gamma showed just a decrease of the Cdrag and Clift values on the fine mesh, while with the QUICK and van Leer schemes there is also a phase shift between the results on the different meshes.
Cdrag
Clift
To relax or not
Finally I decided to check whether relaxation is really crucial to the results. I fixed the discretisation scheme (van Leer, though now I think it might have been better to choose Gamma) and the final residual in the PIMPLE dictionary, and just modified the relaxation part as shown in the beginning of the post. Naturally, due to the relaxation the solution converges more slowly, i.e. instead of 4-5 outer corrector iterations, 9-11 iterations were necessary. There is also a time shift between the results.
Conclusion
So it goes.
As usual case files (if you'd like to reproduce results) are on bitbucket.org.
samedi 1 mars 2014
Introduction
After I'd done the simulations of a vortex street in 2D, I kept thinking that it would be nice to do more or less the same thing in 3D. And after watching the movie about vortex flow meters by Endress+Hauser, I decided it would be really fun to put something in a tube and watch vortices going by.
First of all we need to decide on the operational parameters of the system. Let's assume that we have a 10 m long tube with a diameter of 1 m, and let's install something at 2 m from the inlet plane. Further calculations are based upon the Wikipedia article about the Kármán vortex street.
Re = uD/ν
where u is the flow velocity, D is the characteristic length, and ν is the kinematic viscosity of the fluid. We'll get a vortex street for Re values larger than about 90. If we take, for example, water, with ν around 0.000001 m^2/s, it's not hard to get a large enough Reynolds number. But then the question of the distance between vortices arises: we also want our vortices inside the tube. The vortex shedding frequency is
f = 0.198*u/d*(1 - 19.7/Re)
where d is the cylinder diameter (correct for 250 < Re < 10^5). After playing with tube and cylinder sizes, fluid viscosity and inflow velocities (here is a script), I decided to go with the following parameters; a quick check of the resulting numbers is sketched right after the list:
D = 1 m
d = 0.2 m
u = 1 m/s
ν = 0.0001 m^2/s
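The sanity check of these numbers in Python (a sketch, not the actual script linked above):
d = 0.2       # cylinder diameter, m
u = 1.0       # inflow velocity, m/s
nu = 0.0001   # kinematic viscosity, m^2/s

Re = u * d / nu                         # Reynolds number based on the cylinder
f = 0.198 * u / d * (1.0 - 19.7 / Re)   # empirical shedding frequency, Hz

print(Re, f, u / f)  # ~2000, ~0.98 Hz, i.e. a vortex spacing of roughly 1 m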
Meshing
For meshing I've used Gmsh and snappyHexMesh. I.e. I created a hexahedral mesh of the tube, then I created triangular surface meshes for the cylinder and the sphere, and finally I used snappyHexMesh to cut the small cylinder (or sphere) out of the tube mesh.
The mesh for the tube is based upon my previous post. The addition is a parametrization of the GEO file, i.e. one needs to set the tube length and its diameter, and gmsh will recalculate the positions of the points in the mesh. The GEO files for the cutting shapes (cylinder, sphere) are even simpler, because we only need a triangular surface mesh, so there's no need to split the shape into volumes with 6 sides (this operation is necessary for the transfinite algorithm to work).
One of the problems I encountered during the "cutting" procedure is the need to use a more or less cubical mesh for sHM to work. In the case of the sphere it wasn't actually a problem, because it lies inside the "inner" part of the mesh, which is rather regular, so one can have a graded mesh near the walls of the tube. That was not so in the case of the cylinder. After several unsuccessful attempts to cut a cylinder hole in the graded mesh (sHM tried to do it but wasn't able to generate a mesh of acceptable quality; usually the non-orthogonality of the new mesh was too high), I removed the grading. snappyHexMeshDict is quite simple; first I define the surface:
Refinement is based upon the distance from the surface. The other parameters in the dictionary are just copied from the tutorial.
So after running mesh conversion (from gmsh to OpenFOAM) and then running snappyHexMesh, I've got the following meshes for the simulation:
Initial tube mesh
with grading towards the walls
without grading
Mesh with internal cylinder
Mesh with internal sphere
Unlike last time, I decided not to compare turbulence models and just selected the realizable k-epsilon model.
Flow movies
After running the simulations I've got the following flow movies:
The cylinder in the tube, vorticity
The cylinder in the tube, velocity
The sphere in the tube, vorticity
The sphere in the tube, velocity
As expected after initial flow development I've got quite nice vortices going along the tube.
Probes
Since the movies are not quite enough, I've added probes along the axis of the tube to record the values of velocity, pressure, and turbulent viscosity during the simulations (this is a fragment of controlDict):
Below are graphs of the pressure evolution at the location of the first probe (click on a figure for a larger version):
For internal cylinder
For internal sphere
Though the evolution has a periodic structure, the frequency differs from the estimate.
How to run the cases on your own
I've put all the necessary files into a repository on BitBucket, so you can clone the repo or just download an archive of the default branch. There is a Cases folder where the case files reside. Every case directory has an Allprepare script which will do all the necessary utility invocations:
The script assumes that you have gmsh in your $PATH. First it creates the meshes for the tube and the cylinder, then corrects the STL file created by Gmsh, places the STL files in the directory where sHM will look for them, converts the tube mesh to OpenFOAM format, corrects the boundary type for the walls, and finally runs sHM for the final mesh creation. The sHM procedure is rather lengthy and needs a fair amount of RAM, as the limit on the number of cells is 1500000.
| {
"pile_set_name": "Pile-CC"
} |
Wisconsin's Timbuk 3 was formed in 1984 by husband-and-wife team Pat and Barbara MacDonald. They both played a number of instruments and teamed up on vocal duty. The duo was signed to a record deal after appearing on an episode of MTV's The Cutting Edge in 1986.
They cut their first album, Greetings from Timbuk 3, shortly after landing the record deal, and it was this album that featured their biggest hit: "The Future's So Bright, I Gotta Wear Shades". It was this song that landed them a Grammy nomination in 1987 for Best New Artist, and it is an '80s top 40 staple that continues to make its way onto numerous compilation albums.
Timbuk3 went on to release two more albums in the 1980s: Eden Alley (1988) and Edge of Allegiance (1989). Neither album produced any further hits. Although the duo released a few more albums in the 1990s, the band they had formed eventually broke up (and they divorced). Both Pat and Barbara went on to solo careers. The Timbuk3 discography is as follows:
| {
"pile_set_name": "Pile-CC"
} |
1
00:02:03,478 --> 00:02:06,478
≪すみません。
(さくら)はい。
2
00:02:11,536 --> 00:02:14,536
(峰子)御代川と 申します。
3
00:02:16,524 --> 00:02:19,524
(さくら)さっきまで お嬢さん
来てたんですよ。 ここに。
4
00:02:25,516 --> 00:02:27,568
(さくら)ああ。
5
00:02:27,568 --> 00:02:29,520
(峰子)あのう。
(さくら)はい。
6
00:02:29,520 --> 00:02:31,522
(峰子)娘のクラスの
子供たちは?
7
00:02:31,522 --> 00:02:33,474
(さくら)今日は 来てませんよ。
8
00:02:33,474 --> 00:02:36,461
(峰子)じゃあ 何で
娘は ここに?
9
00:02:36,461 --> 00:02:39,480
(さくら)ただ 親子丼
食べに来ただけですよ。
10
00:02:39,480 --> 00:02:45,470
親子丼を? どうして?
ああ。 さあ?
11
00:02:45,470 --> 00:02:47,472
(峰子)あの子ったら➡
12
00:02:47,472 --> 00:02:50,491
どうして こんな
吹きだまりみたいなところに。
13
00:02:50,491 --> 00:02:52,477
吹きだまり?
あの子から 聞いたんです。
14
00:02:52,477 --> 00:02:58,483
この辺の 不良とか 落ちこぼれが
集まる場所だって。
15
00:02:58,483 --> 00:03:04,489
(玄)吹きだまりだよ。
ここは 吹きだまりだよ。
16
00:03:04,489 --> 00:03:06,491
(峰子)えっ?
(雅弘)ばばあ。➡
17
00:03:06,491 --> 00:03:08,509
調子 乗ってんじゃねえぞ。 こら。
(峰子)ばばあ?
18
00:03:08,509 --> 00:03:10,511
(あざみ)すげえ
ムカつくんだけど。
19
00:03:10,511 --> 00:03:12,497
(あざみ)この くそばばあ。
まあ 嫌ですわ。
20
00:03:12,497 --> 00:03:14,515
あざみちゃんったら。
冗談ばっかり 言っちゃってね。
21
00:03:14,515 --> 00:03:16,517
≪(龍生・大樹・春彦)こんばんは!
はい。
22
00:03:16,517 --> 00:03:19,487
(春彦)さくらさん。 聞いてよ。
御代川 最悪。
23
00:03:19,487 --> 00:03:21,539
御代川?
(龍生)マジ くそだぜ。 御代川。
24
00:03:21,539 --> 00:03:24,525
(大樹)あいつ マジで アホ。
あれ? それ 何の話でしょう?
25
00:03:24,525 --> 00:03:27,512
(大樹)担任だよ。 俺たちの。
(春彦)あいつ 教師 失格じゃね?
26
00:03:27,512 --> 00:03:30,515
(龍生)首だよ。 首。 あの どブス。
(峰子)どブス?
27
00:03:30,515 --> 00:03:32,466
何を しゃべってんの?
ねえねえ。 手 洗った?
28
00:03:32,466 --> 00:03:37,455
(由希)九十九堂に 行ったの?
(峰子)ええ。
29
00:03:37,455 --> 00:03:42,460
(由希)どうして?
(峰子)ちょっと 気になってね。➡
30
00:03:42,460 --> 00:03:46,480
親子丼 食べてるそうね。
31
00:03:46,480 --> 00:03:49,467
(由希)クラスの生徒が
食べに 行ってるから➡
32
00:03:49,467 --> 00:03:51,469
どんな感じか
確かめようと 思って。
33
00:03:51,469 --> 00:03:53,469
(峰子)そう。
34
00:03:56,490 --> 00:04:00,478
(峰子)もう 行っちゃ 駄目よ。
あそこには。
35
00:04:00,478 --> 00:04:06,478
分かったわね?
(由希)はい。
36
00:04:08,486 --> 00:04:11,505
(由希)ママ。
(峰子)うん?
37
00:04:11,505 --> 00:04:15,509
トイレ 行っていい?
(峰子)どうぞ。
38
00:04:15,509 --> 00:04:28,506
♬~
39
00:04:28,506 --> 00:04:30,506
(鍵を掛ける音)
40
00:04:37,465 --> 00:04:41,469
(俊太)ちわ! 鳥八っす!
はい。 ご苦労さん。
41
00:04:41,469 --> 00:04:43,471
(俊太)冷蔵庫 入れとくよ。
ありがとね。
42
00:04:43,471 --> 00:04:45,456
いつも 悪いわね。
(俊太)いやいや いやいや。➡
43
00:04:45,456 --> 00:04:47,475
大変でしょう。
44
00:04:47,475 --> 00:04:49,477
おっ? 何 これ?
(俊太)あっ。 ちょっと。 駄目駄目。
45
00:04:49,477 --> 00:04:51,479
あら。 やだ。 俊太。
エッチな本でしょ?
46
00:04:51,479 --> 00:04:53,481
(俊太)いや。 違うって。
ちょっ ちょっ。 いいから。
47
00:04:53,481 --> 00:04:56,484
いいから いいから。
48
00:04:56,484 --> 00:05:00,471
あら。 これ 恭子んとこの
雑誌じゃない。
49
00:05:00,471 --> 00:05:04,471
(俊太)うん。
へえー。
50
00:05:14,468 --> 00:05:22,510
ハァー。 きつい話ね。
胸が 苦しくなるわ。
51
00:05:22,510 --> 00:05:35,439
♬~
52
00:05:35,439 --> 00:05:38,459
≪(ドアの開く音)
53
00:05:38,459 --> 00:05:46,450
♬~
54
00:05:46,450 --> 00:05:48,436
リエ。
55
00:05:48,436 --> 00:05:59,463
♬~
56
00:05:59,463 --> 00:06:04,463
(リエ)うちさ どうしたらいい?
57
00:06:06,487 --> 00:06:10,491
(リエ)少年院 行って➡
58
00:06:10,491 --> 00:06:14,512
あのことは もう
終わったんだと 思ってた。
59
00:06:14,512 --> 00:06:28,492
♬~
60
00:06:28,492 --> 00:06:30,511
(リエ)知らなかったんだよ。➡
61
00:06:30,511 --> 00:06:35,433
相手の子が
まだ こんなんだなんて。➡
62
00:06:35,433 --> 00:06:40,433
もう 元気になって
普通にしてんだと 思ってた。
63
00:06:45,459 --> 00:06:50,498
リエの気持ち 分かるよ。
64
00:06:50,498 --> 00:06:53,467
(リエ)分かる?
分かる。
65
00:06:53,467 --> 00:06:56,487
(リエ)何が 分かんだよ?
だから…。
66
00:06:56,487 --> 00:06:59,457
(リエ)終わってんなら
何で こんなん なってんだよ?
67
00:06:59,457 --> 00:07:01,475
(リエ)やっちまったもんは
しょうがねえじゃねえか!
68
00:07:01,475 --> 00:07:03,477
リエ。
分かってんなら 何か 答えろよ。
69
00:07:03,477 --> 00:07:06,480
どうすりゃ いいんだよ!
70
00:07:06,480 --> 00:07:11,469
うちの気持ち 分かんねえくせに
言ってんじゃねえよ!
71
00:07:11,469 --> 00:07:13,471
リエ!
放せよ!
72
00:07:13,471 --> 00:07:16,471
やめろ。 やめろ…。
73
00:07:18,509 --> 00:07:20,509
分かるんだよ。
74
00:07:23,497 --> 00:07:29,497
うちの母親 人 殺したんだ。
75
00:07:33,424 --> 00:07:38,429
殺された人の家族 恨んでる。
76
00:07:38,429 --> 00:07:48,456
私の母親も。 私のことも。
77
00:07:48,456 --> 00:07:51,456
何で 生きてんだよって。
78
00:07:54,512 --> 00:08:03,454
だからさ 私は 絶対に➡
79
00:08:03,454 --> 00:08:09,493
幸せになっちゃ いけないって
そう 決めて 生きてきた。
80
00:08:09,493 --> 00:08:13,464
いつ 死んでも いいって。
81
00:08:13,464 --> 00:08:22,464
だから リエの気持ち
分かるんだよ。
82
00:08:32,416 --> 00:08:38,439
(玄)いらない色は ありません。
みんな 大事です。
83
00:08:38,439 --> 00:08:42,426
いらない色か…。
84
00:08:42,426 --> 00:08:46,426
あるんじゃない?
いらない色って。
85
00:08:48,449 --> 00:08:50,449
ほら。
86
00:08:52,453 --> 00:08:55,453
例えば これ。
87
00:08:58,442 --> 00:09:05,442
塗っても 意味ないじゃん。 ねっ。
いらない色って あるんだよ。
88
00:09:10,471 --> 00:09:12,456
あっ…。
89
00:09:12,456 --> 00:09:21,465
(玄)いらない色は ありません。
みんな 大事です。
90
00:09:21,465 --> 00:09:24,468
みんな 大事か。
91
00:09:24,468 --> 00:09:26,468
(恭子)こんにちは。
92
00:09:28,489 --> 00:09:30,491
(恭子)正木リエの 居場所
教えてよ。
93
00:09:30,491 --> 00:09:32,409
知らない。
(恭子)待って。
94
00:09:32,409 --> 00:09:35,412
あんたさ リエに
どうしろって いうんだよ?
95
00:09:35,412 --> 00:09:37,412
死ねとでも いうのかよ?
96
00:09:42,436 --> 00:09:45,436
待って。
(恭子)分かった。
97
00:09:48,442 --> 00:09:54,448
知ってても 教えないわよ。
あんな記事 見たら。
98
00:09:54,448 --> 00:09:58,435
(恭子)読んだんだ?
うん。
99
00:09:58,435 --> 00:10:03,440
(恭子)どうだった?
そうね…。
100
00:10:03,440 --> 00:10:08,445
私も あざみちゃんと
同じかな。
101
00:10:08,445 --> 00:10:10,464
いや。 恭子は➡
102
00:10:10,464 --> 00:10:14,464
あの加害者の 女の子に
どうしろって 言いたいの?
103
00:10:16,470 --> 00:10:20,470
(恭子)人生を懸けて
罪を 償うべきよ。
104
00:10:23,494 --> 00:10:27,494
今度の事件
取材してて 思ったの。
105
00:10:29,466 --> 00:10:35,466
「お兄ちゃんを 殺した女
今 何やってるんだろう?」って。
106
00:10:39,426 --> 00:10:45,432
(恭子)きっと 楽しく
やってるんじゃないかなって。
107
00:10:45,432 --> 00:10:51,438
もしかして 再婚とかして
新しい家庭を 持って。
108
00:10:51,438 --> 00:10:55,442
あのときの 赤ちゃんだって
もう 大きくなってるわよね。
109
00:10:55,442 --> 00:11:00,447
私 そんなの 絶対 許せない。
だからって➡
110
00:11:00,447 --> 00:11:03,434
加害者を 不幸にする権利は
私たちには ないのよ。
111
00:11:03,434 --> 00:11:06,437
分かってるよ。 そんなこと。
112
00:11:06,437 --> 00:11:13,460
悠平の命を奪った 犯人は
許せないわよ。
113
00:11:13,460 --> 00:11:16,463
一生 許すこと ないと思う。
114
00:11:16,463 --> 00:11:26,463
だけどね 犯人を憎んで
生きていくのは 苦し過ぎるわ。
115
00:11:28,459 --> 00:11:35,466
だから 私は あの事件を
引きずって 生きるのは やめたの。
116
00:11:35,466 --> 00:11:38,466
じゃあ この部屋は 何?
117
00:11:41,488 --> 00:11:43,474
お母さんだって➡
118
00:11:43,474 --> 00:11:46,474
めちゃくちゃ お兄ちゃんのこと
引きずってるじゃない。
119
00:11:48,495 --> 00:11:50,481
私だって➡
120
00:11:50,481 --> 00:11:54,485
何度も お兄ちゃんのこと
忘れようとしたわ。
121
00:11:54,485 --> 00:12:01,475
小さいときから あの事件のこと
何度も 忘れようとしたの。
122
00:12:01,475 --> 00:12:10,517
でもね この家に 帰ってきて
この部屋を 見ると➡
123
00:12:10,517 --> 00:12:13,520
どうしても 忘れられなかった。
124
00:12:13,520 --> 00:12:21,528
この部屋 見るたびに
苦しかった。
125
00:12:21,528 --> 00:12:24,498
お母さんはさ➡
126
00:12:24,498 --> 00:12:28,535
死んだ お兄ちゃんのことばっかり
見てたのよ。
127
00:12:28,535 --> 00:12:32,473
私のことなんか
何も 見てなかったじゃない。
128
00:12:32,473 --> 00:12:36,473
それが 嫌で
お父さん 出てったんだよ。
129
00:12:41,465 --> 00:12:45,465
ごめん。 言い過ぎた。
130
00:12:49,506 --> 00:12:57,498
でも 私には 重過ぎるわ。
この部屋。
131
00:12:57,498 --> 00:13:15,498
♬~
132
00:13:35,452 --> 00:13:37,452
こんにちは。
133
00:15:42,462 --> 00:15:45,465
(由希)奇麗な お花ね。 玄さん。
134
00:15:45,465 --> 00:15:52,489
(玄)花。 花。 花。➡
135
00:15:52,489 --> 00:15:58,489
花は 誰のために 咲きますか?
136
00:16:00,464 --> 00:16:03,464
誰のため…。
137
00:16:06,486 --> 00:16:11,491
お待ち遠さま。
(由希)ありがとうございます。➡
138
00:16:11,491 --> 00:16:14,478
いただきます。
どうぞ。
139
00:16:14,478 --> 00:16:32,446
♬~
140
00:16:32,446 --> 00:16:34,431
(由希)ごちそうさまでした。➡
141
00:16:34,431 --> 00:16:37,431
お手洗い お借りします。
142
00:16:42,422 --> 00:16:44,474
吐いちゃ 駄目。
143
00:16:44,474 --> 00:16:48,445
えっ?
駄目。
144
00:16:48,445 --> 00:16:52,449
どいてください。
駄目。
145
00:16:52,449 --> 00:16:55,469
(由希)どうして?
146
00:16:55,469 --> 00:17:03,477
花は 自分のために 咲くのよ。
お母さんのためじゃないわ。
147
00:17:03,477 --> 00:17:08,465
これ以上 自分を
傷つけちゃ 駄目。
148
00:17:08,465 --> 00:17:11,468
どいて。
駄目!
149
00:17:11,468 --> 00:17:14,488
(由希)どいてください!
我慢できるって。
150
00:17:14,488 --> 00:17:16,473
(俊太)さくらさん?
151
00:17:16,473 --> 00:17:18,475
(由希)ほっといて!
(俊太)ちょっと。 どうしたの?➡
152
00:17:18,475 --> 00:17:20,494
ちょっと。 何?
落ち着いて。
153
00:17:20,494 --> 00:17:23,494
あっ。 由希さん。
大丈夫?
154
00:17:27,501 --> 00:17:30,501
(俊太)どしたの? どしたの?
由希さん。
155
00:17:34,441 --> 00:17:36,443
(俊太)摂食障がい?
うん。
156
00:17:36,443 --> 00:17:39,429
きっかけは
人それぞれみたいね。
157
00:17:39,429 --> 00:17:44,451
食べる快感で
嫌なことを 忘れる。
158
00:17:44,451 --> 00:17:48,438
吐く快感で その瞬間だけ
ストレスを 忘れる。
159
00:17:48,438 --> 00:17:52,438
そうやって 自分を痛めつけて
壊そうとするのね。
160
00:17:54,461 --> 00:18:01,461
そうすることで 誰かの気を
引こうとするのかも。
161
00:18:07,457 --> 00:18:13,480
《花は 自分のために 咲くのよ。
お母さんのためじゃないわ》
162
00:18:13,480 --> 00:18:15,465
《これ以上 自分を
傷つけちゃ 駄目よ》
163
00:18:15,465 --> 00:18:17,467
(峰子)由希ちゃん?
由希ちゃん?➡
164
00:18:17,467 --> 00:18:19,469
どうしたの?
ぼうっとしちゃって。
165
00:18:19,469 --> 00:18:22,456
(由希)えっ?
(峰子)何か あった?
166
00:18:22,456 --> 00:18:25,525
(由希)ううん。
(峰子)そういえば。➡
167
00:18:25,525 --> 00:18:29,529
今日の授業中も 何か
元気 なかったわね。
168
00:18:29,529 --> 00:18:31,465
(由希)えっ?
169
00:18:31,465 --> 00:18:33,433
廊下から 見てたのよ。➡
170
00:18:33,433 --> 00:18:37,421
校長先生に ご挨拶に
行ったついでに ちょっとだけね。
171
00:18:37,421 --> 00:18:40,457
校長先生に?
172
00:18:40,457 --> 00:18:43,427
(峰子)校長先生が 前に あなたに
お話があるって 言ってたでしょ。
173
00:18:43,427 --> 00:18:46,430
(峰子)ママ。 ちょっと
気になっちゃってね。
174
00:18:46,430 --> 00:18:50,434
(由希)《お願い。
誰か 助けて》
175
00:18:50,434 --> 00:18:52,452
(峰子)でも 何にも
心配いらないわ。➡
176
00:18:52,452 --> 00:18:55,455
子供たちなんてね
ちょっとぐらい 怒鳴りつけたり➡
177
00:18:55,455 --> 00:18:58,425
体罰が あるくらいで
ちょうどいいの。➡
178
00:18:58,425 --> 00:19:02,446
大丈夫だからね。 ママが ちゃんと
フォローしておきました。
179
00:19:02,446 --> 00:19:05,449
(由希)《助けて》
180
00:19:05,449 --> 00:19:09,469
(峰子)駄目じゃないの。
修学旅行の積立金。➡
181
00:19:09,469 --> 00:19:11,455
集金したら ちゃんと
経理に 渡しておかないと。➡
182
00:19:11,455 --> 00:19:14,491
ママが
お支払いしておきました。➡
183
00:19:14,491 --> 00:19:18,478
じゃあ いただきましょうか。
いただき…。
184
00:19:18,478 --> 00:19:31,508
♬~
185
00:19:31,508 --> 00:19:46,423
♬~
186
00:19:46,423 --> 00:19:57,434
♬~
187
00:20:01,454 --> 00:20:03,456
(由希)ママ?➡
188
00:20:03,456 --> 00:20:06,443
うん。 校長先生も
分かってくださったわ。➡
189
00:20:06,443 --> 00:20:10,443
ママの おかげ。
ありがとう。
190
00:20:12,465 --> 00:20:15,435
(由希)ママ。
授業 始まっちゃうから。
191
00:20:15,435 --> 00:20:18,435
電話 切るね。
192
00:22:08,465 --> 00:22:11,484
(峰子)カードローン。➡
193
00:22:11,484 --> 00:22:13,484
120万円!?
194
00:22:15,488 --> 00:22:18,475
☎
195
00:22:18,475 --> 00:22:21,494
(峰子)はい。
御代川でございます。
196
00:22:21,494 --> 00:22:24,497
☎(松山)ニチマルローンの
松山と 申します。
197
00:22:24,497 --> 00:22:27,484
ニチマルローン?
198
00:22:27,484 --> 00:22:41,431
♬~
199
00:22:41,431 --> 00:22:58,465
♬~
200
00:22:58,465 --> 00:23:01,434
(由希)ただいま。
(峰子)座りなさい。
201
00:23:01,434 --> 00:23:03,434
(由希)はい。
202
00:23:13,496 --> 00:23:17,484
(峰子)これは 何?➡
203
00:23:17,484 --> 00:23:24,491
120万も 銀行から借りて。
何に使ったの? あなた。
204
00:23:24,491 --> 00:23:31,481
大丈夫よ。
心配しないで。 ママ。
205
00:23:31,481 --> 00:23:33,416
(峰子)待ちなさい。➡
206
00:23:33,416 --> 00:23:35,435
他の ローン会社からも
電話があったのよ。➡
207
00:23:35,435 --> 00:23:39,456
由希ちゃん。 ママ 心配で
今日 学校に 電話したの。➡
208
00:23:39,456 --> 00:23:41,424
そしたら あなた
学校 休んでるっていうじゃない。
209
00:23:41,424 --> 00:23:46,446
(由希)お願い。
私に 構わないで。
210
00:23:46,446 --> 00:23:50,446
(峰子)由希ちゃん?
(由希)私の好きに させてよ!
211
00:23:52,452 --> 00:23:55,452
≪(ドアの開閉音)
212
00:23:58,475 --> 00:24:01,461
無断欠勤?
由希先生が?
213
00:24:01,461 --> 00:24:04,464
(大樹)よく 分かんないけど
ずっと 来ない。
214
00:24:04,464 --> 00:24:06,449
そう。
215
00:24:06,449 --> 00:24:08,468
(龍生)まっ その方が
平和で いいけどね。
216
00:24:08,468 --> 00:24:10,487
(春彦)このまま 来ないと
超 うれしいよな。
217
00:24:10,487 --> 00:24:13,456
(一同)いただきます。
218
00:24:13,456 --> 00:24:31,474
♬~
219
00:24:31,474 --> 00:24:33,426
(俊太)いつも
ありがとうございます。
220
00:24:33,426 --> 00:24:38,426
(女性)ありがとう。
(俊太)また お願いします。
221
00:24:44,454 --> 00:24:47,440
よっ!
222
00:24:47,440 --> 00:24:49,426
(恭子)うん。
223
00:24:49,426 --> 00:24:51,444
(俊太)恭子ちゃんが
言いたいことは 分かるよ。➡
224
00:24:51,444 --> 00:24:54,414
「人を 傷つけちゃったやつが
楽しく やってて➡
225
00:24:54,414 --> 00:24:58,435
それで いいのか?」って。
だけどさ…。➡
226
00:24:58,435 --> 00:25:00,453
だから どうしろっていうの?➡
227
00:25:00,453 --> 00:25:02,439
その加害者の子を
もっと 重い罪にすれば➡
228
00:25:02,439 --> 00:25:05,442
それで いいのか?
死刑にするとか?
229
00:25:05,442 --> 00:25:09,462
みんな おんなじこと 言う。
230
00:25:09,462 --> 00:25:13,462
母さんも
あの あざみって子も。
231
00:25:15,452 --> 00:25:22,492
(俊太)でも 恭子ちゃんの気持ち
俺は分かる。
232
00:25:22,492 --> 00:25:28,465
悠平のこと。 やっぱ
引きずっちゃうよ。 誰だって。
233
00:25:28,465 --> 00:25:41,478
♬~
234
00:25:41,478 --> 00:25:56,459
♬~
235
00:25:56,459 --> 00:25:58,428
(恭子)《お母さんはさ➡
236
00:25:58,428 --> 00:26:02,448
死んだ お兄ちゃんのことばっかり
見てたのよ》➡
237
00:26:02,448 --> 00:26:06,448
《私のことなんか
何も 見てなかったじゃない》
238
00:26:10,456 --> 00:26:12,425
≪(ノック)
≪さくらさん?
239
00:26:12,425 --> 00:26:14,444
どうぞ。
240
00:26:14,444 --> 00:26:17,444
電話。 警察から。
241
00:26:21,467 --> 00:26:25,488
あのう。
どうして 私の名前を?
242
00:26:25,488 --> 00:26:29,459
(刑事)さあ? 九十九堂の
さくらさんに 連絡してほしい。➡
243
00:26:29,459 --> 00:26:32,478
それだけしか 言わんのです。
はあ。
244
00:26:32,478 --> 00:26:35,415
(刑事)最近 風営法に
違反する 行為をさせる➡
245
00:26:35,415 --> 00:26:37,417
デリヘル業者が
増えてましてね。➡
246
00:26:37,417 --> 00:26:41,404
そこで 摘発した中に
彼女が いたわけです。
247
00:26:41,404 --> 00:26:44,424
あっ。 あのう。
それで 彼女は?
248
00:26:44,424 --> 00:26:47,424
(刑事)今日のところは
証拠不十分と いうことで。
249
00:29:07,450 --> 00:29:13,473
(由希)母は 私を産んだ後に
体調 崩して➡
250
00:29:13,473 --> 00:29:21,481
そのまま 教師を 辞めたんです。
お父さまは?
251
00:29:21,481 --> 00:29:27,487
≪(由希)教師でしたけど
退職して トラックの運転手に。➡
252
00:29:27,487 --> 00:29:31,474
保護者や 同僚との 人間関係が
うまくいかなくて。
253
00:29:31,474 --> 00:29:40,474
だから 母は 私を
どうしても 教師にしたかった。
254
00:29:43,436 --> 00:29:48,458
小さいころから ありと あらゆる
習い事を やらされてました。➡
255
00:29:48,458 --> 00:29:54,447
分刻みの スケジュールで
毎日毎日 急いでいました。➡
256
00:29:54,447 --> 00:29:58,434
雑踏の中を
何も考えず 何も見ず➡
257
00:29:58,434 --> 00:30:02,438
ただ ひたすら
習い事を こなすために➡
258
00:30:02,438 --> 00:30:05,441
私は 急いでいました。
259
00:30:05,441 --> 00:30:09,462
で ご飯は?
どうしてたの?
260
00:30:09,462 --> 00:30:13,449
ほとんど
コンビニの お弁当でした。
261
00:30:13,449 --> 00:30:16,436
それを 母と 2人で➡
262
00:30:16,436 --> 00:30:20,436
夜の公園の ベンチとかに座って
食べました。
263
00:30:22,492 --> 00:30:31,492
気が付くと 母は 私の中に
完全に 忍び込んでいました。
264
00:30:35,404 --> 00:30:41,410
私の生活の 全てを
母が 決めていたんです。
265
00:30:41,410 --> 00:30:50,410
食べるもの。 着るもの。
読む本。 聴く音楽。
266
00:30:52,405 --> 00:30:55,441
そして 未来まで。
267
00:30:55,441 --> 00:30:57,460
(由希)《私 決めたの》➡
268
00:30:57,460 --> 00:31:00,429
《大きくなったら
イラストレーターになる》
269
00:31:00,429 --> 00:31:03,449
(峰子)《由希ちゃんは
学校の先生になるの》
270
00:31:03,449 --> 00:31:08,454
(由希)いつしか 私自身も➡
271
00:31:08,454 --> 00:31:10,439
母の決めたとおりに
していれば 安心と➡
272
00:31:10,439 --> 00:31:13,439
思うように なっていたんです。
273
00:31:15,444 --> 00:31:25,444
だから トイレに行くことさえ
母に 決めてもらっていました。
274
00:31:27,507 --> 00:31:33,507
私は それが 母の愛だと思って
育った。
275
00:31:35,464 --> 00:31:38,484
母に言われたとおりの
大学に入り➡
276
00:31:38,484 --> 00:31:43,489
言われたとおりに 教員になった。
277
00:31:43,489 --> 00:31:49,478
あるとき 母が
私を見て 言ったんです。
278
00:31:49,478 --> 00:31:54,483
(峰子)《由希ちゃん。
ちょっと 太ったんじゃない?》
279
00:31:54,483 --> 00:31:57,503
その 一言で
ダイエットを 始めました。
280
00:31:57,503 --> 00:32:01,507
太ると 母に 叱られる。
281
00:32:01,507 --> 00:32:04,527
思ったとおり
体重は 落ちたけど➡
282
00:32:04,527 --> 00:32:08,514
拒食症に なってました。
すると 今度は…。
283
00:32:08,514 --> 00:32:13,519
(峰子)《由希ちゃん。
大丈夫? そんなに 痩せて》
284
00:32:13,519 --> 00:32:17,540
《どこか
具合 悪いんじゃない?》
285
00:32:17,540 --> 00:32:23,496
母に 心配をかけては いけない。
太らなくては。
286
00:32:23,496 --> 00:32:29,496
今度は
食べて 食べて 食べまくりました。
287
00:32:33,472 --> 00:32:37,460
そして 食べ吐き…。
288
00:32:37,460 --> 00:32:41,480
そうすることで 安心感を
得ることが できたんです。
289
00:32:41,480 --> 00:32:45,468
どうしてだか 分かりません。
290
00:32:45,468 --> 00:32:49,468
もう 私が 私で
なくなってしまったんです。
291
00:32:52,491 --> 00:32:56,479
私は ホントは…。
292
00:32:56,479 --> 00:33:00,499
教師になんか なりたくなかった。
293
00:33:00,499 --> 00:33:06,505
母は 私を支配し
私の人生に 乗り移ったんです。
294
00:33:06,505 --> 00:33:12,511
自分が挫折した 教師という夢を
私を使って 果たそうとした。
295
00:33:12,511 --> 00:33:14,511
だから…。
296
00:33:17,516 --> 00:33:23,516
だから 私は 母を裏切りたかった。
297
00:33:25,541 --> 00:33:28,511
あの人を 失望させたかった。
298
00:33:28,511 --> 00:33:31,530
だから 私は…。
もう いいわ。
299
00:33:31,530 --> 00:33:39,455
もう いい。
まだ あなた 若いんだから。 ねっ。
300
00:33:39,455 --> 00:33:42,455
これから やり直せるわよ。
301
00:33:47,463 --> 00:33:53,469
お母さんから 離れて
一人で やり直すの。
302
00:33:53,469 --> 00:33:56,469
母と?
ええ。
303
00:34:00,459 --> 00:34:06,499
そんなの 無理です。
えっ?
304
00:34:06,499 --> 00:34:09,485
母が いないと
生きていけないんです。
305
00:34:09,485 --> 00:34:13,522
ねえ。 しっかりして。
由希さん。
306
00:34:21,530 --> 00:34:23,532
駄目。 出ちゃ。
でも…。
307
00:34:23,532 --> 00:34:25,501
今 お母さん 断ち切らないと➡
308
00:34:25,501 --> 00:34:28,504
あなた 一生
人生 駄目になるわよ。 ねっ。
309
00:34:28,504 --> 00:34:30,523
返して!
駄目!
310
00:34:30,523 --> 00:34:32,441
返して!
311
00:34:32,441 --> 00:34:35,444
ちょっと 離して。
駄目!
312
00:34:39,465 --> 00:34:41,450
由希さん。 何…。
313
00:34:41,450 --> 00:34:45,471
助けて。 お願い。
314
00:35:14,500 --> 00:35:19,500
(由希)あざみちゃん。
何?
315
00:35:21,490 --> 00:35:25,490
あなたの お母さん。 どんな人?
316
00:35:29,498 --> 00:35:32,498
優しい人?
317
00:35:37,440 --> 00:35:44,480
<正直 言って 私は この人の
気持ちが よく 分からなかった>
318
00:35:44,480 --> 00:35:46,465
<だけど…>
319
00:35:46,465 --> 00:35:49,468
<下で 酔っぱらっている
さくらさんの気持ちは➡
320
00:35:49,468 --> 00:35:51,437
何となく 分かった>
321
00:35:51,437 --> 00:36:02,481
♬「狭いながらも
楽しい 我が家」
322
00:36:02,481 --> 00:36:08,481
♬「愛の火影の さすところ」
323
00:36:11,490 --> 00:36:18,481
♬「恋しい家こそ」
324
00:36:18,481 --> 00:36:21,481
♬「私の」
325
00:36:26,505 --> 00:36:31,494
(打つ音)
326
00:36:31,494 --> 00:36:46,458
♬~
327
00:36:46,458 --> 00:36:48,427
何 これ?
328
00:36:48,427 --> 00:36:50,429
≪(チャイム)
329
00:36:50,429 --> 00:36:52,448
(峰子)由希ちゃん!
330
00:36:52,448 --> 00:36:54,466
突然 お邪魔して
申し訳ありません。 あのう。
331
00:36:54,466 --> 00:36:56,485
いいとこで 会った。 手伝って。
は… はい。
332
00:36:56,485 --> 00:36:58,454
(峰子)娘が 昨日から
帰ってこないんです。
333
00:36:58,454 --> 00:37:00,472
電話にも 全然 出てくれないし。
334
00:37:00,472 --> 00:37:02,474
どこか 立ち寄りそうなところ
捜してるんです。
335
00:37:02,474 --> 00:37:04,476
手伝ってください。
336
00:37:04,476 --> 00:37:12,468
♬~
337
00:37:12,468 --> 00:37:15,468
あのう。 これ。
338
00:37:17,489 --> 00:37:20,476
これ 幼稚園のときの 記録です。
339
00:37:20,476 --> 00:37:24,480
お友達の名前とか 電話番号とか
あと 連絡網とか 捜してください。
340
00:37:24,480 --> 00:37:27,483
あのう。 お母さま。
実は…。
341
00:37:27,483 --> 00:37:30,502
私は 小学校時代を
捜しますから。
342
00:37:30,502 --> 00:37:41,447
♬~
343
00:37:41,447 --> 00:37:55,461
♬~
344
00:37:55,461 --> 00:38:01,467
(由希)《分刻みの スケジュールで
毎日毎日 急いでいました》
345
00:38:01,467 --> 00:38:08,440
♬~
346
00:38:08,440 --> 00:38:13,479
そっか。
そうだったのね。
347
00:38:13,479 --> 00:38:17,466
何か 分かりました?
いえ。
348
00:38:17,466 --> 00:38:19,485
やっぱり 警察に 電話します。
349
00:38:19,485 --> 00:38:21,470
何か 事件に
巻き込まれたのかも しれない。
350
00:38:21,470 --> 00:38:23,472
あのう。 ごめんなさい。
351
00:38:23,472 --> 00:38:28,477
由希さん。 うちに いるんです。
352
00:38:28,477 --> 00:38:30,477
えっ!?
353
00:40:04,456 --> 00:40:07,459
(峰子)売春?
ええ。
354
00:40:07,459 --> 00:40:15,451
嘘です。 あの子が そんな。
汚らわしい。
355
00:40:15,451 --> 00:40:19,455
そうね。 由希さんは
自分を 汚してしまったわ。
356
00:40:19,455 --> 00:40:23,459
何て バカなことを。
357
00:40:23,459 --> 00:40:26,462
でもね お母さん。
358
00:40:26,462 --> 00:40:31,467
そうさせてしまったのは
あなたなんですよ。
359
00:40:31,467 --> 00:40:35,404
私が?
由希さんは➡
360
00:40:35,404 --> 00:40:41,443
小さなときから 何でも
あなたに 決められてきた。
361
00:40:41,443 --> 00:40:49,435
食べるもの。 着るもの。
トイレ。 そして 未来まで。
362
00:40:49,435 --> 00:40:52,438
それが 母親の愛って
いうもんです。
363
00:40:52,438 --> 00:40:55,407
愛?
母親の愛は➡
364
00:40:55,407 --> 00:40:58,410
子供にとって
空気みたいなもんなんです。
365
00:40:58,410 --> 00:41:02,464
目には 見えないけど
いつでも 子供を包んで➡
366
00:41:02,464 --> 00:41:05,434
なくては 生きては
いけないようなものなんです。
367
00:41:05,434 --> 00:41:10,456
その空気が 濃過ぎたら
子供は 息が詰まるわ。
368
00:41:10,456 --> 00:41:13,459
息が詰まる?
だから 由希さんは➡
369
00:41:13,459 --> 00:41:18,447
食べては吐き 食べては吐き➡
370
00:41:18,447 --> 00:41:21,467
そうやって 自分を
壊そうとしてるんです。
371
00:41:21,467 --> 00:41:24,486
あなたに 何が分かるの?
私と 由希ちゃんはね…。
372
00:41:24,486 --> 00:41:27,486
分かってないのは あなたでしょ!
373
00:41:32,478 --> 00:41:36,465
あなた 由希さんの手首に➡
374
00:41:36,465 --> 00:41:42,488
一筋の傷痕が あるのは
知ってますか?
375
00:41:42,488 --> 00:41:44,488
傷?
376
00:41:46,508 --> 00:41:50,479
そんなに 古くない傷よ。
377
00:41:50,479 --> 00:41:53,482
まさか…。
あなた。
378
00:41:53,482 --> 00:41:57,486
ねえ?
由希さんの 何 見てきたの?
379
00:41:57,486 --> 00:42:00,506
あなたは 由希さんの全てを
知ってる。
380
00:42:00,506 --> 00:42:03,509
ありと あらゆることを 知ってる。
381
00:42:03,509 --> 00:42:09,498
こうやって 彼女の人生を
狭い部屋の中に 閉じ込めて➡
382
00:42:09,498 --> 00:42:12,501
全てを 分かってるつもりでいる。
383
00:42:12,501 --> 00:42:19,491
でも 誰でも 気が付くような傷を
あなた 見てない。
384
00:42:19,491 --> 00:42:25,514
そんな人間に 由希さんの
心の傷が 見えるはずが ないわ。
385
00:42:25,514 --> 00:42:32,454
何度も 死のうって
考えてたそうよ。
386
00:42:32,454 --> 00:42:34,473
でも 死ねなかった。
387
00:42:34,473 --> 00:42:39,461
だって そんなことしたら
あなたが 悲しむから。
388
00:42:39,461 --> 00:42:42,464
「お母さんを 悲しませたくない」
389
00:42:42,464 --> 00:42:49,488
だから 由希さんは
必死で 生きてきたんです。
390
00:42:49,488 --> 00:42:54,476
死なないために。
自分を 死に追い込まないために。
391
00:42:54,476 --> 00:42:59,476
彼女は 摂食障がいに
なったんです。
392
00:43:02,484 --> 00:43:05,521
あのう。
393
00:43:05,521 --> 00:43:08,521
これ 見てください。
394
00:43:13,529 --> 00:43:16,498
私の息子です。
395
00:43:16,498 --> 00:43:22,504
この写真を撮った 次の日に…。
もう。 死んじゃったんですよ。
396
00:43:22,504 --> 00:43:25,504
バイト先で 事件に遭ってね。
397
00:43:27,526 --> 00:43:32,526
明るくて 正義感の強い子でした。
398
00:43:36,451 --> 00:43:44,493
この子の部屋
16年前のまんまなんです。
399
00:43:44,493 --> 00:43:49,481
そうしておけば いつも 息子と
生きてるような気がして。
400
00:43:49,481 --> 00:43:56,471
いやぁ。 でも この前
娘に 言われちゃったんです。
401
00:43:56,471 --> 00:44:00,471
それが 自分には
とっても 重かったって。
402
00:44:07,499 --> 00:44:13,488
私 息子から
巣立とうと 思います。
403
00:44:13,488 --> 00:44:20,529
巣立つ?
よかったら 一緒に どうです?
404
00:44:20,529 --> 00:44:25,517
私も?
はい。
405
00:44:25,517 --> 00:44:36,461
♬~
406
00:44:36,461 --> 00:44:40,465
(カヨ子・キミ子)ちょっと
ちょっと…。 ちょっと待って。
407
00:44:40,465 --> 00:44:43,452
(キミ子)あの人
あれ 学校の先生よね?➡
408
00:44:43,452 --> 00:44:45,470
いつか 補導に 来てたわよね?
409
00:44:45,470 --> 00:44:49,474
(カヨ子)朝から ずっと この家に
いるわよね? どういうこと?
410
00:44:49,474 --> 00:44:53,474
うるせえ。
死ね。 この どブス。
411
00:44:56,481 --> 00:45:00,481
(由希)いただきます。
いただきます。
412
00:45:12,464 --> 00:45:14,499
これ…。
413
00:45:14,499 --> 00:45:19,504
何か いつもと 味が違う。
そう?
414
00:45:19,504 --> 00:45:23,504
いつもより 甘い。
そう?
415
00:45:27,496 --> 00:45:31,500
ずっと 疑問に思ってたの。
416
00:45:31,500 --> 00:45:37,456
何で 由希さん。 ここの親子丼
食べに来たのかって。
417
00:45:37,456 --> 00:45:44,463
初めは クラスの子供たちを
見張るためかなと 思った。
418
00:45:44,463 --> 00:45:48,463
その理由が やっと 分かったの。
419
00:45:55,440 --> 00:45:58,460
これ…。
420
00:45:58,460 --> 00:46:03,460
あなたの
子供のころの 予定表よ。
421
00:46:05,467 --> 00:46:14,509
ホント 毎日毎日
過密スケジュールだったわね。
422
00:46:14,509 --> 00:46:20,509
でもね ここ。 見て。
423
00:46:22,484 --> 00:46:26,488
毎日 習い事で 晩ご飯は
コンビニ弁当だったけど➡
424
00:46:26,488 --> 00:46:30,509
水曜の夜だけは
塾に行くまでに 時間があって➡
425
00:46:30,509 --> 00:46:34,446
おうちで 晩ご飯 食べてたの。
426
00:46:34,446 --> 00:46:40,446
その ご飯が
いつも 親子丼だった。
427
00:46:42,437 --> 00:46:45,424
お母さん 言ってたわ。
428
00:46:45,424 --> 00:46:52,464
短い時間で 作れるから
いつも 親子丼にしてたって。
429
00:46:52,464 --> 00:46:54,464
母が?
430
00:46:56,468 --> 00:47:06,478
週に一度の お母さんの味。
それが 親子丼だったのね。
431
00:47:06,478 --> 00:47:10,478
あなたは それが
とっても 楽しみだった。
432
00:47:18,490 --> 00:47:23,478
あのう。 もしかして これ…。
433
00:47:23,478 --> 00:47:29,468
お母さんに 作り方 教わったの。
どう?
434
00:47:29,468 --> 00:47:46,468
♬~
435
00:47:46,468 --> 00:48:01,466
♬~
436
00:48:01,466 --> 00:48:08,466
母の味です。
あのときの 母の。
437
00:48:12,444 --> 00:48:14,479
(由希)《おいしい》
438
00:48:14,479 --> 00:48:29,477
♬~
439
00:48:29,477 --> 00:48:32,430
≪ゆっくり食べて ゆっくり。
≪(由希)はい。
440
00:48:32,430 --> 00:48:46,411
♬~
441
00:48:46,411 --> 00:49:01,459
♬~
442
00:49:01,459 --> 00:49:15,490
♬~
443
00:49:15,490 --> 00:49:20,490
これで 最後だけど。
ありがとう。
444
00:49:23,465 --> 00:49:28,465
ホントに いいの?
お願い。
445
00:50:03,438 --> 00:50:05,457
さくらさん。
446
00:50:05,457 --> 00:50:14,466
♬~
447
00:50:14,466 --> 00:50:16,466
来たわね。
448
00:50:18,470 --> 00:50:20,470
大丈夫?
449
00:50:22,457 --> 00:50:24,476
たぶん。
450
00:50:24,476 --> 00:50:35,420
♬~
451
00:50:35,420 --> 00:50:42,460
<2人の母親は 何も話さずに
ずっと 炎を見ていた>
452
00:50:42,460 --> 00:50:47,432
<子供から 離れることは
とても 難しい>
453
00:50:47,432 --> 00:50:51,432
<母親にとって
一番の試練なのかもしれない>
454
00:50:55,440 --> 00:51:00,440
<私の母親は
どうだったのだろう?>
455
00:51:02,430 --> 00:51:07,452
<由希さんが 家に帰ると
お母さんは いなかった>
456
00:51:07,452 --> 00:51:13,458
<「お父さんと 2人で
古里の新潟で 暮らす」>
457
00:51:13,458 --> 00:51:17,462
<そんな 書き置きが
あったそうだ>
458
00:51:17,462 --> 00:51:20,465
<由希さんは 教師を辞めて➡
459
00:51:20,465 --> 00:51:25,470
小さいころの夢だった
イラストレーターを 目指すらしい>
460
00:51:25,470 --> 00:51:41,469
♬~
461
00:51:41,469 --> 00:51:47,469
♬~
| {
"pile_set_name": "Github"
} |
Q:
Device driver without open method
I was wondering whether it is mandatory to have open/release methods for a device driver. The only job that open does is allocating structures and putting them into file->private_data, so that the other methods can access them.
So if I have all static allocations and do not care about what happens upon unload, is what I'm asking possible? What will happen upon a user-space open? Will I still get a file descriptor, and will I be able to read/write to the device file (read and write methods are implemented)?
A:
Allocating structures is probably the only job that open does in the example that you saw, and yes, it is possible. It varies from driver to driver; for simple drivers without special requirements on open/release (we are talking about a char device, I suppose) there are internal Linux helpers.
Of course, what you suggest is possible, but ... keep in mind that it is a very bad design idea to use static declarations unless it is what you really want (and usually it isn't). For example, with static allocations, multiple instances of the device driver will share the same data.
| {
"pile_set_name": "StackExchange"
} |
/*
Copyright 2020 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package release
import (
"context"
"os"
"os/user"
"path/filepath"
"strings"
"cloud.google.com/go/storage"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"k8s.io/release/pkg/gcp/gcs"
"k8s.io/release/pkg/util"
"k8s.io/utils/pointer"
)
// PushBuild is the main structure for pushing builds.
type PushBuild struct {
opts *PushBuildOptions
}
// PushBuildOptions are the main options to pass to `PushBuild`.
type PushBuildOptions struct {
// Specify an alternate bucket for pushes (normally 'devel' or 'ci').
Bucket string
// Specify an alternate build directory (defaults to '_output').
BuildDir string
// If set, push docker images to specified registry/project.
DockerRegistry string
// Comma separated list which can be used to upload additional version
// files to GCS. The path is relative and is append to a GCS path. (--ci
// only).
ExtraVersionMarkers string
// Specify a suffix to append to the upload destination on GCS.
GCSSuffix string
// Specify the release type; it is used as the GCS destination directory for the push.
ReleaseType string
// Append suffix to version name if set.
VersionSuffix string
// Do not exit error if the build already exists on the gcs path.
AllowDup bool
// Used when called from Jenkins (for ci runs).
CI bool
// Do not update the latest file.
NoUpdateLatest bool
// Do not mark published bits on GCS as publicly readable.
PrivateBucket bool
// Specifies a fast build (linux amd64 only).
Fast bool
// Specifies if we should push to the bucket or the user suffixed one.
NoMock bool
}
type stageFile struct {
srcPath string
dstPath string
required bool
}
var gcpStageFiles = []stageFile{
{
srcPath: filepath.Join(GCEPath, "configure-vm.sh"),
dstPath: filepath.Join(GCSStagePath, "extra/gce"),
required: false,
},
{
srcPath: filepath.Join(GCIPath, "node.yaml"),
dstPath: filepath.Join(GCSStagePath, "extra/gce"),
required: true,
},
{
srcPath: filepath.Join(GCIPath, "master.yaml"),
dstPath: filepath.Join(GCSStagePath, "extra/gce"),
required: true,
},
{
srcPath: filepath.Join(GCIPath, "configure.sh"),
dstPath: filepath.Join(GCSStagePath, "extra/gce"),
required: true,
},
{
srcPath: filepath.Join(GCIPath, "shutdown.sh"),
dstPath: filepath.Join(GCSStagePath, "extra/gce"),
required: false,
},
}
var windowsStageFiles = []stageFile{
{
srcPath: filepath.Join(WindowsLocalPath, "configure.ps1"),
dstPath: WindowsGCSPath,
required: true,
},
{
srcPath: filepath.Join(WindowsLocalPath, "common.psm1"),
dstPath: WindowsGCSPath,
required: true,
},
{
srcPath: filepath.Join(WindowsLocalPath, "k8s-node-setup.psm1"),
dstPath: WindowsGCSPath,
required: true,
},
{
srcPath: filepath.Join(WindowsLocalPath, "testonly/install-ssh.psm1"),
dstPath: WindowsGCSPath,
required: true,
},
{
srcPath: filepath.Join(WindowsLocalPath, "testonly/user-profile.psm1"),
dstPath: WindowsGCSPath,
required: true,
},
}
// NewPushBuild can be used to create a new PushBuild instance.
func NewPushBuild(opts *PushBuildOptions) *PushBuild {
return &PushBuild{opts}
}
// Push pushes the build by taking the internal options into account.
func (p *PushBuild) Push() error {
var latest string
// Check if latest build uses bazel
dir, err := os.Getwd()
if err != nil {
return errors.Wrap(err, "get working directory")
}
isBazel, err := BuiltWithBazel(dir)
if err != nil {
return errors.Wrap(err, "identify if release built with Bazel")
}
if isBazel {
logrus.Info("Using Bazel build version")
version, err := ReadBazelVersion(dir)
if err != nil {
return errors.Wrap(err, "read Bazel build version")
}
latest = version
} else {
logrus.Info("Using Dockerized build version")
version, err := ReadDockerizedVersion(dir)
if err != nil {
return errors.Wrap(err, "read Dockerized build version")
}
latest = version
}
logrus.Infof("Found build version: %s", latest)
valid, err := IsValidReleaseBuild(latest)
if err != nil {
return errors.Wrap(err, "determine if release build version is valid")
}
if !valid {
return errors.Errorf("build version %s is not valid for release", latest)
}
if p.opts.CI && IsDirtyBuild(latest) {
return errors.New("refusing to push dirty build with --ci flag given")
}
if p.opts.VersionSuffix != "" {
latest += "-" + p.opts.VersionSuffix
}
logrus.Infof("Latest version is %s", latest)
releaseBucket := p.opts.Bucket
if p.opts.NoMock {
logrus.Infof("Running a *REAL* push with bucket %s", releaseBucket)
} else {
u, err := user.Current()
if err != nil {
return errors.Wrap(err, "identify current user")
}
releaseBucket += "-" + u.Username
}
client, err := storage.NewClient(context.Background())
if err != nil {
return errors.Wrap(err, "fetching gcloud credentials, try running \"gcloud auth application-default login\"")
}
bucket := client.Bucket(releaseBucket)
if bucket == nil {
return errors.Errorf(
"identify specified bucket for artifacts: %s", releaseBucket,
)
}
// Check if bucket exists and user has permissions
requiredGCSPerms := []string{"storage.objects.create"}
perms, err := bucket.IAM().TestPermissions(
context.Background(), requiredGCSPerms,
)
if err != nil {
return errors.Wrap(err, "find release artifact bucket")
}
if len(perms) != 1 {
return errors.Errorf(
"GCP user must have at least %s permissions on bucket %s",
requiredGCSPerms, releaseBucket,
)
}
buildDir := p.opts.BuildDir
if err = util.RemoveAndReplaceDir(
filepath.Join(buildDir, GCSStagePath),
); err != nil {
return errors.Wrap(err, "remove and replace GCS staging directory")
}
// Copy release tarballs to local GCS staging directory for push
if err = util.CopyDirContentsLocal(
filepath.Join(buildDir, ReleaseTarsPath),
filepath.Join(buildDir, GCSStagePath),
); err != nil {
return errors.Wrap(err, "copy source directory into destination")
}
// Copy helpful GCP scripts to local GCS staging directory for push
for _, file := range gcpStageFiles {
if err := util.CopyFileLocal(
filepath.Join(buildDir, file.srcPath),
filepath.Join(buildDir, file.dstPath),
file.required,
); err != nil {
return errors.Wrap(err, "copy GCP stage files")
}
}
// Copy helpful Windows scripts to local GCS staging directory for push
for _, file := range windowsStageFiles {
if err := util.CopyFileLocal(
filepath.Join(buildDir, file.srcPath),
filepath.Join(buildDir, file.dstPath),
file.required,
); err != nil {
return errors.Wrap(err, "copy Windows stage files")
}
}
// Copy the "naked" binaries to GCS. This is useful for install scripts
// that download the binaries directly and don't need tars.
if err := CopyBinaries(
filepath.Join(buildDir, ReleaseStagePath),
); err != nil {
return errors.Wrap(err, "stage binaries")
}
// Write the release checksums
gcsStagePath := filepath.Join(buildDir, GCSStagePath, latest)
if err := WriteChecksums(gcsStagePath); err != nil {
return errors.Wrap(err, "write checksums")
}
// Determine the GCS destination and copy the staged artifacts to the release bucket
gcsDest := p.opts.ReleaseType
if p.opts.CI {
gcsDest = "ci"
}
gcsDest += p.opts.GCSSuffix
if p.opts.Fast {
gcsDest = filepath.Join(gcsDest, "fast")
}
logrus.Infof("GCS destination is %s", gcsDest)
copyOpts := gcs.DefaultGCSCopyOptions
copyOpts.NoClobber = pointer.BoolPtr(p.opts.AllowDup)
if err := gcs.CopyToGCS(
gcsStagePath,
filepath.Join(releaseBucket, gcsDest, latest),
copyOpts,
); err != nil {
return errors.Wrap(err, "copy artifacts to GCS")
}
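// Push container images to the configured registry, if one was provided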
if p.opts.DockerRegistry != "" {
if err := NewImages().Publish(
p.opts.DockerRegistry,
strings.ReplaceAll(latest, "+", "_"),
buildDir,
); err != nil {
return errors.Wrap(err, "publish container images")
}
}
if !p.opts.CI {
logrus.Info("No CI flag set, we're done")
return nil
}
// Publish release to GCS
versionMarkers := strings.Split(p.opts.ExtraVersionMarkers, ",")
if err := NewPublisher().PublishVersion(
gcsDest, latest, buildDir, releaseBucket, versionMarkers,
p.opts.PrivateBucket, p.opts.NoMock,
); err != nil {
return errors.Wrap(err, "publish release")
}
return nil
}
| {
"pile_set_name": "Github"
} |
---
abstract: 'Parkinson’s disease is a neurodegenerative disorder characterized by the presence of different motor impairments. Information from speech, handwriting, and gait signals has been considered to evaluate the neurological state of the patients. On the other hand, user models based on Gaussian mixture models - universal background models (GMM-UBM) and i-vectors are considered the state-of-the-art in biometric applications like speaker verification because they are able to model specific speaker traits. This study introduces the use of GMM-UBM and i-vectors to evaluate the neurological state of Parkinson’s patients using information from speech, handwriting, and gait. The results show the importance of different feature sets from each type of signal in the assessment of the neurological state of the patients.'
address: |
$^1$Pattern Recognition Lab. Friedrich-Alexander Universität, Erlangen-Nürnberg, Germany\
$^2$ Faculty of Engineering. Universidad de Antioquia UdeA, Calle 70 No. 52-21, Medellín, Colombia\
$^3$ Technische Hochschule Nürnberg, Germany\
$^\star$corresponding author: [[email protected]]([email protected])
bibliography:
- 'strings.bib'
- 'refs.bib'
title: 'Comparison of user models based on GMM-UBM and i-vectors for speech, handwriting, and gait assessment of Parkinson’s disease patients'
---
Parkinson’s disease, GMM-UBM, i-vectors, gait analysis, handwriting analysis, speech analysis.
Introduction {#sec:intro}
============
Parkinson’s disease (PD) is a neurological disorder characterized by the progressive loss of dopaminergic neurons in the midbrain, producing several motor and non-motor impairments [@Hornykiewicz1998]. PD affects all of the sub-systems involved in motor activities like speech production, walking, or handwriting. The severity of the motor symptoms is evaluated with the third section of the movement disorder society - unified Parkinson’s disease rating scale (MDS-UPDRS-III) [@Goetz2008]. The assessment requires the patient to be present at the clinic, which is expensive and time-consuming because of several limitations, including the availability of neurologists and the reduced mobility of patients. The evaluation of motor symptoms is crucial for clinicians to make decisions about the medication or therapy for the patients [@Patel2010]. The analysis of signals such as gait, handwriting, and speech helps to assess the motor symptoms of patients, providing objective information to clinicians to make timely decisions about the treatment.
Several studies have analyzed different signals such as speech, gait, and handwriting to monitor the neurological state of the PD patients. Speech was considered in [@tu2017objective] to predict the MDS-UPDRS-III score of 61 PD patients using spectral and glottal features. The authors computed the Hausdorff distance between a speaker from the test set and the speakers in the training set. The neurological state of the patients was predicted with a Pearson’s correlation of up to 0.58. In [@smith2017vocal] the authors predicted the MDS-UPDRS-III score of 35 PD patients with features based on articulation and prosody analyses, and a Gaussian staircase regression. The authors reported moderate Spearman’s correlations ($\rho$=0.42). Handwriting was considered in [@mucha2018identification], to predict the H&Y score of 33 PD patients using kinematic features and a regression based on gradient-boosting trees. The H&Y score was predicted with an equal error rate of 12.5%. Finally, regarding gait features, in [@aghanavesi2019motion] the authors predicted a lower limbs subscore of the MDS-UPDRS-III from 19 PD patients, using several harmonic and non-linear features, and a support vector regression (SVR) algorithm. The subscore for lower limbs was predicted with an intra-class correlation coefficient of 0.78.
According to the literature, most of the related works consider only one modality. Multimodal analyses, i.e., considering information from different sensors, have not been extensively studied. In [@barth2012combined] the authors combined information from statistical and spectral features extracted from handwriting and gait signals. The fusion of features improved the accuracy of the classification between PD and healthy control (HC) subjects. Previous studies [@vasquez2017gcca; @vasquez2019multimodal] suggested that the combination of modalities also improved the accuracy in the prediction of the neurological state of the patients. This study proposes the use of different features extracted from speech, handwriting, and gait to evaluate the neurological state of PD patients. The prediction is performed with user models based on Gaussian mixture models - universal background models (GMM-UBM) and i-vectors. To the best of our knowledge, this is one of the few studies for multimodal analysis of PD patients, and the first one that considers multimodal user models to evaluate the neurological state of the patients.
Methods {#sec:methods}
=======
The methods used in this study are summarized in Figure \[fig:method\]. Speech, handwriting, and gait signals are characterized using different feature extraction strategies. Then, data from HC subjects are used to train user models based on GMM-UBM and i-vector systems. For the case of the GMM-UBM, data from PD patients were used to adapt the UBMs into GMMs, creating a specific GMM for each patient. On the other hand, for the i-vector modeling, a reference i-vector was created with data from HC subjects with similar age and gender of the patients, thus i-vectors extracted from the patients can be compared with a personalized reference model. Finally, distance measures are computed between the reference models and those adapted/extracted from the PD patients. The computed distance is correlated with the neurological state of the patients based on the MDS-UPDRS-III scale.
![General methodology followed in this study.[]{data-label="fig:method"}](method.pdf){width="\linewidth"}
Speech features
---------------
**Phonation:** these features model abnormal patterns in the vocal fold vibration. Phonation features are extracted from the voiced segments. The feature set includes descriptors computed for 40ms frames of speech, including jitter, shimmer, amplitude perturbation quotient, pitch perturbation quotient, the first and second derivatives of the fundamental frequency $F\raisebox{-.4ex}{\scriptsize 0}$, and the log-energy [@orozco2018neurospeech].
**Articulation:** these features model aspects related to the movements of limbs involved in the speech production. The features considered the energy content in onset segments [@orozco2018neurospeech]. The onset detection is based on the computation of $F\raisebox{-.4ex}{\scriptsize 0}$. Once the border between unvoiced and voiced segments is detected, 40ms of the signal are taken to the left and to the right, forming a segment with 80ms length. The spectrum of the onset is distributed into 22 critical bands according to the Bark scale, and the Bark-band energies (BBE) are calculated. 12 MFCCs and their first two derivatives are also computed in the transitions to complete the feature set.
**Prosody:** for these features, the log-$F\raisebox{-.4ex}{\scriptsize 0}$ and the log-energy contours of the voiced segments were approximated using Lagrange polynomials of order $P=5$. A 13-dimensional feature vector is formed by concatenating the six coefficients computed from the log-$F\raisebox{-.4ex}{\scriptsize 0}$ and the log-energy contours, in addition to the duration of the voiced segment [@Dehak2007]. The aim of these features is to model speech symptoms such as monotonicity and mono-loudness in the patients.
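As a rough illustration (not the authors' implementation), the following sketch builds such a fixed-length prosody descriptor with NumPy, using least-squares polynomial fitting of degree 5 as a stand-in for the exact Lagrange construction; the function and variable names are assumptions made for the example.

    import numpy as np

    def prosody_vector(log_f0, log_energy, duration_s, degree=5):
        # Fit a degree-5 polynomial (6 coefficients) to each contour of one voiced
        # segment; assumes each contour has at least degree + 1 frames.
        t_f0 = np.linspace(0.0, 1.0, len(log_f0))
        t_en = np.linspace(0.0, 1.0, len(log_energy))
        f0_coeffs = np.polyfit(t_f0, log_f0, degree)        # 6 values
        en_coeffs = np.polyfit(t_en, log_energy, degree)    # 6 values
        # 6 + 6 + 1 = 13-dimensional descriptor per voiced segment
        return np.concatenate([f0_coeffs, en_coeffs, [duration_s]])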
**Phonological:** these features are represented by a vector with interpretable information about the placement and manner of articulation. The different phonemes of the Spanish language are grouped into 18 phonological posteriors. The phonological posteriors were computed with a bank of parallel recurrent neural networks to estimate the probability of occurrence of a specific phonological class [@vasquez2019phonet].
Handwriting features
--------------------
Handwriting features are based on the trajectory of the strokes in vertical, horizontal, radial, and angular positions. We computed the velocity and acceleration of the strokes in the different axes, in addition to the pressure of the pen, the azimuth angle, the altitude angle, and their derivatives. Finally, we considered features based on the in-air movement before the participant put the pen on the tablet’s surface. Additional information of the features can be found in [@rios2019].
Gait features
-------------
**Harmonic:** these features model the spectral richness and the harmonic structure of the gait signals obtained from the inertial sensors. We computed the continuous wavelet transform with a Gaussian wavelet. The feature set is formed with the energy content in 8 frequency bands from the scalogram, three spectral centroids, the energy in the 1st, 2nd, and 3rd quartiles of the spectrum, the energy content in the locomotor band (0.5–3Hz), the energy content in the freeze band (3–8Hz), and the freeze index, which is the ratio between the energy in the locomotor and freeze bands [@zach2015identifying; @rezvanian2016towards].
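A minimal sketch of the band-energy part of this feature set is shown below; it uses a Welch power spectrum as a simple stand-in for the wavelet scalogram described above, assumes a one-dimensional accelerometer signal sampled at a known rate, and follows one common convention (freeze over locomotor energy) for the freeze index, so it is an illustration rather than the exact implementation used here.

    import numpy as np
    from scipy.signal import welch

    def band_energy(freqs, psd, lo, hi):
        mask = (freqs >= lo) & (freqs < hi)
        return np.trapz(psd[mask], freqs[mask])

    def locomotor_freeze_features(x, fs):
        freqs, psd = welch(x, fs=fs, nperseg=min(len(x), int(4 * fs)))
        locomotor = band_energy(freqs, psd, 0.5, 3.0)   # locomotor band (0.5-3 Hz)
        freeze = band_energy(freqs, psd, 3.0, 8.0)      # freeze band (3-8 Hz)
        freeze_index = freeze / (locomotor + 1e-12)     # ratio of the two band energies
        return locomotor, freeze, freeze_index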
**Non-linear:** gait is a complex and non-linear activity that can be modeled with non-linear dynamics features. The first step to extract those features is the phase space reconstruction, according to Takens’ theorem. Different features can be extracted from the reconstructed phase space to assess the complexity and stability of the walking process. The extracted features include the correlation dimension, the largest Lyapunov exponent, the Hurst exponent, the detrended fluctuation analysis, the sample entropy, and the Lempel-Ziv complexity [@perez2018non].
User models based on GMM-UBM
----------------------------
GMM-UBM systems were proposed recently to quantify the disease progression of PD patients [@arias2018speaker]. We propose to extend the idea to multimodal GMM-UBM systems. The main hypothesis is that speech, handwriting, or gait impairments of PD patients can be modeled by comparing a GMM adapted for a patient with a reference model created with recordings from HC subjects. GMMs represent the distribution of feature vectors extracted from the different signals from a single PD patient. When the GMM is trained using features extracted from a large sample of subjects, the resulting model is a UBM. The model for each PD patient is derived from the UBM by adapting its parameters following a maximum a posteriori process. Then, the neurological state of the patients is estimated by comparing the adapted model with the UBM using a distance measure. We use the Bhattacharyya distance, which considers differences in the mean vectors and covariance matrices between the UBM and the user model [@you2010gmm].
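The sketch below illustrates the core of this idea with scikit-learn and NumPy: a diagonal-covariance UBM trained on HC features, a mean-only MAP adaptation for one patient, and a weight-averaged Bhattacharyya distance between corresponding components. It is only a simplified illustration of the approach, not the code used in this study; the number of components, the relevance factor, and the choice to keep the UBM covariances during adaptation are assumptions.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def train_ubm(hc_features, n_components=16, seed=0):
        # hc_features: (N, D) frame-level features pooled over all HC subjects
        return GaussianMixture(n_components=n_components, covariance_type="diag",
                               random_state=seed).fit(hc_features)

    def map_adapt_means(ubm, patient_features, relevance=16.0):
        # Mean-only MAP adaptation: new_mean = alpha * E[x] + (1 - alpha) * ubm_mean
        post = ubm.predict_proba(patient_features)        # (N, K) responsibilities
        n_k = post.sum(axis=0) + 1e-10                    # soft counts per component
        e_x = (post.T @ patient_features) / n_k[:, None]  # first-order statistics
        alpha = (n_k / (n_k + relevance))[:, None]
        return alpha * e_x + (1.0 - alpha) * ubm.means_

    def bhattacharyya_diag(m1, v1, m2, v2):
        # Bhattacharyya distance between two diagonal-covariance Gaussians
        v = 0.5 * (v1 + v2)
        mean_term = 0.125 * np.sum((m1 - m2) ** 2 / v)
        cov_term = 0.5 * np.sum(np.log(v) - 0.5 * (np.log(v1) + np.log(v2)))
        return mean_term + cov_term

    def ubm_gmm_distance(ubm, adapted_means):
        # Covariances are kept from the UBM here, so only the mean term contributes
        d = [bhattacharyya_diag(m_a, v, m_u, v)
             for m_a, m_u, v in zip(adapted_means, ubm.means_, ubm.covariances_)]
        return float(np.average(d, weights=ubm.weights_))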
User models based on i-vectors
------------------------------
I-vectors are used to transform the original feature space into a low-dimensional representation called total variability space via joint factor analysis [@Dehak2011]. For speech signals, such a space models the inter- and intra-speaker variability, in addition to channel effects. For this study we aim to capture changes in speech, handwriting, and gait due to the disease [@garcia2018multimodal]. I-vectors have been considered previously to model handwriting [@christleinhandwriting] and gait data [@san2017vector]. Similar to the GMM-UBM systems, we train the i-vector extractor with data from HC subjects, and compute a reference i-vector to represent healthy speech, handwriting, or gait. Then, we extract i-vectors from PD patients, and compute the cosine distance between the patient i-vector and the reference.
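The comparison step can be sketched in a few lines of NumPy, assuming the i-vectors have already been extracted; the averaging of age- and gender-matched HC i-vectors into a single reference follows the description above, and the names are illustrative only.

    import numpy as np

    def reference_ivector(matched_hc_ivectors):
        # matched_hc_ivectors: (M, D) matrix, one row per matched HC subject
        return matched_hc_ivectors.mean(axis=0)

    def cosine_distance(patient_ivec, reference_ivec):
        num = float(patient_ivec @ reference_ivec)
        den = np.linalg.norm(patient_ivec) * np.linalg.norm(reference_ivec)
        return 1.0 - num / den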
Data {#sec:data}
====
We considered an extended version of the PC-GITA corpus [@Orozco2014DB]. This version contains speech, handwriting, and gait signals, collected from 106 PD patients and 87 HC subjects. All of the subjects are Colombian Spanish native speakers. The patients were labeled according to the MDS-UPDRS-III scale. Table \[tab:people\] summarizes clinical and demographic aspects of the participants included in the corpus.
Speech signals were recorded with a sampling frequency of 16kHz and 16-bit resolution. The same speech tasks recorded in the PC-GITA corpus [@Orozco2014DB], except for the isolated words, are included in this extended version. Handwriting data consist of online drawings captured with a tablet Wacom cintiq 13-HD with a sampling frequency of 180Hz. The tablet captures six different signals: x-position, y-position, in-air movement, azimuth, altitude, and pressure. The subjects performed a total of 14 exercises divided into writing and drawing tasks. Additional information about the handwriting exercises can be found in [@vasquez2019multimodal]. Gait signals were captured with the eGaIT system, which consists of a 3D-accelerometer (range $\pm$6g) and a 3D gyroscope (range $\pm$500$^\circ$/s) attached to the external side (at the ankle level) of the shoes [@barth2015stride]. Data from both feet were captured with a sampling frequency of 100Hz and 12-bit resolution. The exercises included 20 meters walking with a stop after 10 meters, 40 meters walking with a stop every 10 meters, *heel-toe tapping*, and the *time up and go* test.
Experiments and results {#sec:results}
=======================
Data from HC subjects were used to train the UBMs and the i-vector extractors. For the GMM-UBM system, data from PD patients were used to adapt the UBMs into GMMs. The Bhattacharyya distance is used to compare the GMM and the UBM. For the i-vectors, a reference was created by averaging the i-vectors extracted from HC subjects that have the same gender and similar age as the patients (in a range of $\pm$ 2 years). I-vectors extracted from PD patients are compared to the reference i-vector using the cosine distance. The computed distances are correlated with the MDS-UPDRS-III score of the patients.
User models from different modalities
-------------------------------------
The correlation between the neurological state of the patients and the user models based on GMM-UBM and i-vectors is shown in Table \[tab:results1\] for the speech, handwriting, and gait features. The results indicate that for gait and speech signals, the user models based on GMM-UBM systems are more accurate than the i-vectors. This can be explained because the distance between the adapted GMM and the UBM considers more information about the statistical distribution of the population than for the case of i-vectors, where the reference for healthy subjects is reduced to a single vector. A “strong” correlation is obtained with the harmonic features (gait analysis) modeled with the GMM-UBM system ($\rho$=0.619). This is expected because most of the items used by the neurologist in the MDS-UPDRS-III are based on the movement of the lower limbs. For handwriting features, “weak” correlations are obtained both with the GMM-UBM and the i-vector systems. The correlations obtained with speech features are not robust to model the general neurological state of the patients. This result can be explained because the MDS-UPDRS-III is a complete neurological scale that considers speech impairments in only one of the 33 items of the total scale [@Goetz2008].
\[tab:results1\]
Multimodal user models
----------------------
The user models extracted from all feature sets using the GMM-UBM system were combined by concatenating the distance between the user model and the UBM for each feature set. A linear regression was trained with the matrix of distances to predict the MDS-UPDRS-III scale. The model was trained following a leave-one-subject-out cross-validation strategy. The results of the fusion are shown in Table \[tab:results2\]. The Spearman’s correlation increases by 2.4% absolute with respect to the one obtained only with the harmonic features. Additional regression algorithms based on random forest regression or SVRs were considered; however, they overfitted the test set, predicting only the mean value of the total scale.
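A possible implementation of this fusion step, sketched with scikit-learn under the assumption that the per-feature-set distances have already been arranged into a NumPy matrix with one row per recording, could look as follows; the variable names are illustrative.

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import LeaveOneGroupOut

    def loso_predictions(distances, updrs_scores, subject_ids):
        # distances: (n_recordings, n_feature_sets) array of model-to-reference distances
        # updrs_scores, subject_ids: 1-D arrays aligned with the rows of distances
        preds = np.zeros(len(updrs_scores), dtype=float)
        splitter = LeaveOneGroupOut()
        for train, test in splitter.split(distances, updrs_scores, groups=subject_ids):
            model = LinearRegression().fit(distances[train], updrs_scores[train])
            preds[test] = model.predict(distances[test])
        return preds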
\[tab:results2\]
Figure \[fig:fusion\] shows the error in the prediction of the MDS-UPDRS-III score of the patients. Most of the patients are in an initial or intermediate state of the disease (10$<$MDS-UPDRS-III$<$50), and they were predicted with the same distribution. The outlier at the top of the figure corresponds to the eldest patient in the corpus. The patient has a score of 53 but was predicted with a score of 78. Despite the patient’s intermediate MDS-UPDRS-III score, his MDS-UPDRS speech item is 4, i.e., the patient was completely unable to speak, which highly affected his speech features.
![Prediction of the MDS-UPDRS-III score using multimodal user models based on GMM-UBM systems.[]{data-label="fig:fusion"}](fusion_linear_reg_error1.pdf){width="0.8\linewidth"}
Figure \[fig:fusion\_bar\] shows the contribution of each feature set to the multimodal user model. Each bar indicates the coefficient of the linear regression associated with each feature set. Harmonic features were the most important for the multimodal model, followed by prosody and articulation features, which have been shown to be the most important features to evaluate the dysarthria associated with PD [@vasquez2017gcca]. Handwriting features were less important than expected; however, this fact can be explained because the extracted features are based on a standard kinematic analysis that might not be completely related to the symptoms associated with PD. The results for handwriting could be improved with a feature set more related to the handwriting impairments of the patients, like those based on a neuromotor analysis [@impedovo2019velocity].
![Contribution of each feature set to the multimodal user model system.[]{data-label="fig:fusion_bar"}](fusion_linear_reg_bar.pdf){width="0.75\linewidth"}
Conclusion {#sec:conclusion}
==========
The present study compared user models based on GMM-UBM and i-vectors to evaluate the neurological state of PD patients using information from speech, handwriting, and gait. Different features were extracted from each bio-signal to model different dimensions of PD symptoms. Gait features were the most accurate to model the general neurological state of the patients; however, the combination of different bio-signals improved the correlation of the proposed method. In addition, user models based on GMM-UBM were more accurate than those based on i-vectors. Better results could be obtained with the i-vector system if more training data from HC subjects were available to create the reference model, especially for handwriting and gait. Further studies will consider additional features to model other aspects of PD symptoms, especially from handwriting signals. At the same time, additional models based on representation learning and additional fusion methods can be considered to evaluate the neurological state of the patients.
Acknowledgments {#sec:ack}
===============
This project received funding from the EU Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie Grant Agreement No. 766287, and from CODI from University of Antioquia by Grant No. 2017–15530.
| {
"pile_set_name": "ArXiv"
} |
Trending Tags
“It’s a people’s world”: an interview with Elgar Feddema of Corlido Group
What does it mean to make it in the north? This is part of a series of portraits of local people, organisations, and companies working to further internationalise Groningen, Friesland, and Drenthe. This time we spoke to Elgar Feddema, Sales Manager at multi-national procurement company Corlido, based in Emmen.
By Thomas Ansell
Corlido Group is based in Emmen, and began as a fairly normal wholesaler in the late 1990’s. Now, though, its reach has spread to 14 countries across three continents, and its clients include everyone from the Hanzehogeschool Groningen to Shell.
“We have five main areas of business activity: procurement, warehousing, logistics, invoicing, and E-procurement and P-card solutions”, says Elgar Feddema, Sales Manager, “naturally these can be split up even further, and our Procurement Services division especially handles long-term agreements. Within Project- and Piping supply we can act as an international partner in buying project-related materials; mostly technical materials.”
As a company, Corlido Group’s strength is in the diversity of its services: these are end-to-end in terms of procurement, and make it able to provide assistance and management across diverse sectors from the oil and gas industry, to education, healthcare; public sector, the energy industry, and the transport industry.
Emmen may not be the first location when you think of a multinational company with 16 subsidiary offices around the world, but Feddema notes that Corlido places a high value on intercultural understanding. “Creating synergy in a cross-cultural environment is often challenging. It’s a people’s world. The overall vision does of course unite the staff, and Corlido strives to create cohesion and a positive cross-cultural working environment.” The company has international staff across its network, including in Emmen, but looks at its staff as a cohesive whole: “it’s about mindset, language, personality, diversity, and many other things,” says Feddema.
In having such an international network, staff, and supply chain, Corlido is taking advantage of the connections and innovations offered by being more international as a company. Indeed, Feddema’s advice to businesses in the North is simple: “employers could have a more open mind-set. More accepting of a world changing faster than ever before. Things that matter to internationals are to be considered an opportunity; make use of their know-how, and the cultural diversity international people bring with them.”
And to those considering moving to a company like Corlido: internationally focused, highly successful, and based in the Northern Netherlands? “The three Northern provinces can be quite different from the rest of the Netherlands. On the other hand, the things that define us are the similarities we have: not the differences. Work together, because together is better!”
In terms of job-hunting, and hoping to find positions in forward-thinking companies, Feddema has a simple but hugely effective piece of advice: “Get in contact! We are an open-minded bunch of people!”
To find out more about companies such as Corlido (www.corlido.nl) in Drenthe, Groningen, and Friesland, or to find highly-skilled jobs in areas such as Procurement and Supply Chain management, head to www.makeitinthenorth.nl
| {
"pile_set_name": "Pile-CC"
} |
As a proof of concept, Nielsen developed an Alexa Skill that lets a user ask Amazon Echo questions such as, “What are the five best selling brands of tea in the U.K.?” Other examples of Alexa Skills include setting a timer or a thermostat, ordering a pizza and calling an Uber.
The Nielsen Alexa Skill may be a party trick at the moment, but the potential for a data interface that “just works” for business users, instead of them having to adapt their behavior to satisfy the interface, is too good to ignore.
The user interface is impressive, but the skill relies on the underlying API (application programming interface). For a long time, no one outside IT showed much interest in APIs, but MIT research shows that the most successful digital companies make above-average investments in APIs (1); these companies know that APIs are fundamental to their strategic success. Why do they think that?
Why APIs?
Gartner estimates that by 2019, three quarters of an enterprise’s analytics will combine enterprise data with 10 or more data sources that belong to partners or third-party data providers outside the enterprise (2). These data sources include Twitter, Facebook, econometrics data, weather, market research and others.
Big data, combined with the rapid turnover of data sources as companies rise and fall, will eventually overwhelm even the capacity of the data lake, constrained as it is by the need to copy data into a centralized repository. APIs that connect to data remotely are the best tools we have for assembling a broad span of digital data in a timely and responsive manner.
APIs are not just a way to connect to data; they connect companies. It’s estimated that by 2018, more than 75% of new multi-enterprise processes, such as supply chains, will be implemented as distributed, composite apps. Integration Brokers, Integration Software as a Service (iSaaS) and Integration Platform as a Service (iPaaS) providers offer tools for wiring up the virtual enterprise—using APIs.
APIs are also the foundation of digital innovation and agility. The core competencies of an enterprise evolve slowly. Banks will always look after credits, debits and transactions—that’s what make them banks. But the way they make those services available to customers, and combine them with third-party services, needs to evolve rapidly. APIs are a means of exposing core capabilities in a variety of ways: through online and mobile banking, contactless payments using cards and mobile devices, integration with blockchain-based trading systems and so on. The API both separates and connects; it lets the user experience of managing a bank account evolve rapidly and cheaply, independently of the business-critical, slowly changing core processes of the bank.
The Pitfalls
To be successful, an API must satisfy both the needs of its primary consumers, developers and the needs of its ultimate customer, the business.
Developers are no different than their non-technical counterparts; they want an API that’s easy to use. But they have a multiplicity of requirements that are not part of standard user experience (UX) design—for example the ability of the API to integrate with their favorite programming language. For this reason, some UX practitioners identify a unique problem domain, which they call “developer experience” (DX). Good developer experience, including the availability of a software development kit, documentation, sample code and so on, will do a lot to promote the adoption of an API.
But even an excellent developer experience doesn’t guarantee success; the ultimate consumer of an API is the business, not the developer. APIs that simply expose an intelligence-free connection to a data source are unlikely to be successful. Good APIs add value by presenting data in a generally understandable form, for example based on industry norms, rather than a company’s proprietary view of the world. They make it easy to integrate data into external business processes.
How Do You Design a Compelling API?
One way to create a compelling API is to capitalize on the ability of APIs to deliver intelligence and business value. A company with an extensive database of map data could create an API that allowed them to sell maps. Third parties would be able to grab a map and build their own navigation or routing algorithms on top. The map API is useful, but adds no value or intelligence.
A more enterprising company would offer an API that provides turn-by-turn directions for a journey, rather than just a map. It’s possible to update the map as roads get built and improve the routing algorithm without changing the API—and disrupting the software using it.
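To make the contrast concrete, here is a deliberately tiny Python/Flask sketch of the two styles of endpoint; the routes, data, and stub logic are invented for illustration and are not any vendor's actual API.

    from flask import Flask, jsonify, request

    app = Flask(__name__)

    TILES = {"t1": {"roads": ["A1", "B2"]}}  # stand-in for a real map database

    @app.route("/v1/map-tiles/<tile_id>")
    def map_tile(tile_id):
        # Low-value API: hands back raw data and leaves all the intelligence to the caller
        return jsonify(TILES.get(tile_id, {}))

    @app.route("/v1/directions")
    def directions():
        # Higher-value API: routing logic lives behind the interface, so the map data and
        # the algorithm can improve without breaking the software that consumes it
        origin, dest = request.args.get("from"), request.args.get("to")
        return jsonify({"from": origin, "to": dest,
                        "steps": [f"Head from {origin} toward {dest}"]})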
This sort of API could beget an ecosystem by encouraging the creation of supplementary services: showing the location of nearby hotels and gas stations, plus the availability of rooms and the price of gas, for example. An analysis of journeys taken using the API, and traffic information inferred from this, could further improve the service. By being thoughtfully designed, the API has engendered an ecosystem.
Another pattern that’s proven successful in the real world is having foundational or universal APIs and using these as the basis for creating an API tailored to an individual client. Netflix has popularized these "Experience APIs" that tailor the API "experience" to the needs of each consumer.
These are a few pointers, but the use of public APIs is still in its infancy among traditional businesses. There is no proven playbook for creating digital APIs, and companies are struggling to invent solutions. Eventually best practice will crystallize and things will improve, but until then we’re in for a bumpy ride. There is no doubt, however, that APIs are the foundation needed to connect independent systems and processes. And without the ability to connect, nothing else can happen in the digital economy.
What’s Next?
Being able to connect data and processes is, of course, only the first step in understanding the environment in which the enterprise exists and realizing the value of these connections. Being able to connect to a myriad of different data sources does nothing to resolve the heterogeneity in the underlying data sources, where the same real-world entity may be identified and described in a multiplicity of different ways and must be reconciled in order to perform any kind of meaningful integration. | {
"pile_set_name": "Pile-CC"
} |
Today's television receivers are increasingly likely to provide special graphics features such as closed captioning and on-screen displays. The information for closed captioning is a part of the standard television broadcast signal, for example, line 21 of the NTSC signal. The information for on-screen displays is provided by a hand-held remote control unit, with which the user changes operating parameters such as channel and volume. The remote control unit transmits infra-red signals, which the television receiver detects to display channel and other control-related information on the display screen.
In existing receivers, devices for detecting closed captioning and on-screen display information provide analog outputs. These devices are suitable for analog displays, such as cathode ray tubes. However, their analog output is not useful for digital displays, such as spatial light modulators, which have an array of pixel elements that are addressed with digital display data. For SLMs, the analog closed captioning and on-screen display signals must be converted to digital form.
Another characteristic of SLMs is the use of "staggered" pixel arrays, in which the pixel elements are not laid out in square grid patterns. These staggered patterns are advantageous in terms of overall picture quality. However, when the image contains a vertical line, the line may appear jagged, an effect that is especially apparent in the characters used for closed captioning and on-screen displays. | {
"pile_set_name": "USPTO Backgrounds"
} |
Immunization of pigs with a type 2 modified live PRRSV vaccine prevents the development of a deadly long lasting hyperpyrexia in a challenge study with highly pathogenic PRRSV JX143.
Porcine reproductive and respiratory syndrome virus (PRRSV) has been confirmed to be the underlying cause of the so-called 'porcine high fever disease' (PHFD), a disease that emerged in China in 2006 and subsequently spread over South East Asia. The aim of this study was to investigate whether animals challenged with the Chinese highly pathogenic PRRSV JX143 would be protected by vaccination with single dose of a type 2 modified live virus (MLV) vaccine. Forty-four pigs 17-19 days of age were weighed and randomly assigned to either vaccination with subsequent challenge (V/C, n=20), challenge only (NV/C, n=12) and no vaccination and no challenge (strict controls, n=12). Pigs of the challenged groups (V/C and NV/C) were inoculated intranasally 27 days post-vaccination with PRRSV JX143. Animals were monitored during the subsequent 21 days post challenge and were necropsied at the end of the experiment on day 49. Observations and measurements included body temperature, clinical scores for behavior/general condition, cough and breathing pattern, mortality, serological response and PRRSV viremia via RNA detection. Challenge in the NV/C pigs resulted in 100% morbidity and 67% mortality whereas all vaccinated pigs survived. There was a close association between hyperpyrexia (fever over 41°C) and incidence in mortality, which was completely prevented by vaccination. Clinical symptoms were less severe, and of transient nature only, in the vaccinated pigs. Vaccination did not prevent infection, but reduced the impact of clinical disease and prevented hyperpyrexia associated mortality. | {
"pile_set_name": "PubMed Abstracts"
} |
p**7/560 + p**6/72 - 13*p**5/10 + 42*p**2 + p. Let k(x) be the third derivative of o(x). Calculate k(10).
0
Let z(s) = -292*s**2 + 8*s - 9. Let p(g) = 251*g**2 - 8*g + 8. Let b(j) = 7*p(j) + 6*z(j). Give b(3).
23
Let g(m) = m**3 - 5*m**2 + 6*m - 1. Let x = 5386 - 5382. Give g(x).
7
Suppose 4*u = 548*x - 553*x + 5, -2*x + 4 = 2*u. Let m(j) = -j**2 - 7*j - 4. What is m(x)?
8
Let i(l) = 2*l - 7. Let x be (2/(-4))/((-5)/(-300)*-5). Let n be i(x). Let r(z) = -z**3 + 3*z**2 + 6*z - 6. Give r(n).
-26
Let p(l) = 85*l**2 + 45*l + 50. Let t(f) = -19*f**2 - 10*f - 11. Let b(i) = 9*p(i) + 40*t(i). Determine b(-2).
20
Let j = 981 - 968. Suppose -29*s - q = -32*s - j, 2*s = 3*q - 18. Let u(x) = -2*x**3 - 4*x**2. Determine u(s).
18
Let k(l) = -9*l - 7. Let m(t) = 17*t + 11. Let u(q) = 1. Let n(h) = m(h) + u(h). Let b(c) = -5*k(c) - 3*n(c). Suppose 0 - 3 = 3*o. Give b(o).
5
Let u(x) = 4*x**2 + 10*x + 4. Let d(h) = -2*h**2 + 22*h - 22. Let z be d(8). Let w(b) = b**3 - 17*b**2 - 43*b - 17. Let g(j) = z*u(j) + 6*w(j). Determine g(-2).
-42
Let l(f) = -10*f + 1. Let h(m) = -m**2 - 18*m - 63. Let x = 79 + -92. Let k be h(x). Give l(k).
-19
Suppose 3*l - 3 = 0, -4*u - 3*l - l = -4. Suppose u = -3*o + 4*w - 4, 6*o - 4*o + 12 = 5*w. Let i(n) = n**2 - 5*n + 1. Give i(o).
-3
Let y(j) = -54*j + 18 + 209 + 261 + 112. Calculate y(11).
6
Let g(w) = w + 31. Let u(t) = 8*t**3 + 23*t**2 + 4*t + 10. Let x(r) = -7*r**3 - 20*r**2 - 5*r - 10. Let f(o) = -6*u(o) - 7*x(o). Let k be f(-1). Give g(k).
31
Let o(z) = -z**3 + 6*z**2 - 7*z - 4. Let i be o(3). Let g(s) = s**i + s + 332 + 335 - 666 - s**3. Let q be (2/4)/((-3)/6). What is g(q)?
2
Let r(i) be the second derivative of -1/12*i**4 - 33*i + 1/2*i**3 + 0 + 3/2*i**2. What is r(4)?
-1
Let t(u) = u**3 - u. Let f(g) = -5*g**3 - 5*g**2 - 5*g + 28. Let r(w) = f(w) + 6*t(w). Let n be r(6). Let v(x) = -10*x - 2. Give v(n).
18
Let a = 476 - 473. Suppose a*z + 4*x = 1, 2*z + 5*x - 4*x - 4 = 0. Let o(h) = -h**2 + 3*h - 2. What is o(z)?
-2
Let m be 11 - (-5 - (-1 + -3)). Let y(n) = 0*n - n + m + 3 - 17. What is y(0)?
-2
Let m(b) = 4*b**2 - 6*b - 128*b**3 + 7 + 129*b**3 + 0 - 13*b**2. What is m(10)?
47
Let f(p) be the second derivative of p**4/12 + 5*p**3/6 + 8*p**2 + 18*p - 10. Calculate f(-7).
30
Let m be (4/(-5))/(2/(-10)). Let v be -3 - (-1 + (-1 - m)). Let g(w) = 6*w**v + 4*w**2 + 2*w**3 - 2*w**2 - 6*w**3 + 2. Give g(-2).
-6
Let q(b) = -41*b - 48*b + 88*b + 4. Let p be 1/(1 - 39/36). Let r be (-3 - p)/(-9*(-3)/18). Calculate q(r).
-2
Suppose -5*b + 0*y + 3*y = 180, 3*b + 2*y + 127 = 0. Let j = -40 - b. Let l(w) = 6*w**3 + w**2 - 1. What is l(j)?
-6
Suppose j - 44*j + 26 = -146. Let w(s) = -6*s + 0 - 2 + s**2 + 0*s**2. Give w(j).
-10
Let d(o) = -6*o - 61. Suppose -5*i - 99 = 8*h, 58 - 100 = 3*h - 3*i. Give d(h).
17
Let n(i) = -i**3 - 12*i**2 + 16*i + 17. Suppose -p = 4*a + 25, -25*p + 21*p = -4*a + 40. Determine n(p).
-22
Suppose -2*u + 3 = 5*f, 4*u - 7*u = 3*f. Let d(v) = -2*v + 6*v - f - 5*v. Let w = 5 + -1. Determine d(w).
-5
Let w(p) be the first derivative of -p**3/3 + 16*p - 1924. Determine w(-4).
0
Let x(j) = -20 + 0*j - 19 - j + 65 - 27 + 2*j**2. Give x(1).
0
Let r(j) = -j**2 - j. Let u(k) = -59*k**3 + 3*k**2 + 3*k + 1. Let i(z) = -4*r(z) - u(z). What is i(1)?
60
Let v(h) = 2*h - 21*h**3 + 24 + 19*h**3 + 4*h - 15 - 16 + 3*h**2. Calculate v(2).
1
Let h = 2400 + -2394. Let s(g) = g**3 - 5*g**2 - 8*g + 18. What is s(h)?
6
Let a be -7*(798/98 - 9). Let o(t) = -t**3 + 6*t**2 + 7*t - 54. Calculate o(a).
-12
Let m be 3813/(-372) - 6/8. Let y(o) = -o**3 - 13*o**2 - 22*o - 2. Give y(m).
-2
Let k(w) be the third derivative of w**5/20 + w**4/6 + w**3/6 - w**2 + 224. Suppose 2*b + 4 + 9 = 5*h, -4*b = h - 7. Determine k(h).
40
Let f(c) = -c**3 - 3*c**2 + 2. Let z be ((-25)/5)/(4/16). Let q be (-99)/(-15) - 8/z. Let n = q - 10. Calculate f(n).
2
Let b(w) = -29*w + 65*w - 37*w. Suppose 16*j = 14*j + 6. Suppose 0 = 2*g - p - j*p - 2, -2*p + 2 = 0. Determine b(g).
-3
Let n(a) = -24*a + 2. Let r(s) = -26*s - 47. Let t(q) = -n(q) + r(q). Calculate t(-21).
-7
Let c = -9 + 11. Suppose -4*n = -5*l - 32, 23 - 1 = c*n - 4*l. Let o(h) = -4*h**2 - 7*h + 5*h - 4*h - h**n - 4 + 2*h. What is o(-3)?
-1
Let i(z) = 79*z + 950. Let y be i(-12). Let v(s) be the first derivative of s**3/3 + s**2 - 3*s + 14. Determine v(y).
5
Let m(i) = -7*i**3 + i**2 + 7*i - 40. Let z(l) = -6*l**3 + l**2 + 6*l - 39. Let q(j) = -5*m(j) + 6*z(j). Let o be 1*(-3 - (-78)/26). Give q(o).
-34
Let z(u) = 16*u + 3. Let q(g) = g**2 - 55*g - 116. Let d be q(-2). What is z(d)?
-29
Let x(y) = -250 + 27*y - 54*y + 48 + 37*y + 13*y. Determine x(9).
5
Let h be (1950/18)/(-13) - 54/(-6). Let r(o) be the first derivative of 4*o + 22 + h*o**3 - 5/2*o**2. Give r(3).
7
Let c(x) be the second derivative of -x**4/12 + 13*x**3/3 - 39*x**2 + 203*x - 5. What is c(23)?
-9
Let u(i) = -23*i**2 + 4*i - 752. Let g(z) = -4*z**2 - 126. Let d(y) = -6*g(y) + u(y). Let k = 14 + -8. Let a = -11 + k. Determine d(a).
9
Let r(u) = -4*u**2 - 13*u - 36. Let j(l) = 5*l**2 + 15*l + 40. Let t(o) = 6*j(o) + 7*r(o). Calculate t(-3).
9
Let h(w) be the second derivative of -13*w**4/24 - w**3/6 - 17*w**2 + 68*w. Let u(i) be the first derivative of h(i). Let a be 2/(-4) - 1/2. Determine u(a).
12
Let d = -8 + 14. Suppose 5*p - 3*r = 6, 4*r - 3 - 9 = 0. Let n(k) = 4*k + k - 2 + 13*k**3 - 12*k**p - 7*k**2 + 0. Give n(d).
-8
Let a(s) = 10*s + 118. Let y(c) = -2*c - 25. Let i(t) = -2*t - 26. Let r(m) = -2*i(m) + 3*y(m). Let g(p) = 3*a(p) + 16*r(p). Determine g(-10).
6
Suppose 0 = -t + s, 2*t - 24 = -32*s + 28*s. Let r(h) = -2*h**3 + 6*h**2 - 3*h - 2. Calculate r(t).
-46
Let z(b) be the first derivative of -b**4/4 - 2*b**3/3 + 5*b**2/2 + 5*b - 840. Calculate z(-2).
-5
Let m be 1/(1/12*2). Suppose -6 = -3*p, 0 = 2*k + p - 3*p - 54. Let j(g) = g**3 + 0*g - 35*g**2 + k*g**2 - g. Determine j(m).
-6
Let w(t) = t**3 - 8*t**2 + 3*t - 7. Let s(p) = -31*p - 279. Let l be s(-9). Let u = 17 + -1. Suppose -2*a + 0 + u = l. What is w(a)?
17
Suppose i - 87 = -0. Let v(z) = -88 - i + z**2 + 4*z + 176. Let k = -193 + 190. Give v(k).
-2
Let s be 14*(12 + 1394/(-119)). Let v(u) be the first derivative of 2*u**2 + 24 + 1/4*u**s + 3*u + 5/3*u**3. Determine v(-3).
9
Let j(f) = -4*f**2 + f**2 + 2*f**2 + 23 - 4*f. Let c(o) = -3*o**2 - 2478*o + 7453. Let x be c(3). Give j(x).
-9
Let z(n) = -30 + 121 + 19*n - 32*n + 62*n**2 + 2*n**3 + 16*n. Let h be z(-31). Let i(b) = 4*b**3 + 2*b**2 - 2*b. Give i(h).
-20
Suppose -254*o + 248*o = -228. Let t(b) = -56*b**2 + 8 + o*b**2 + 17*b**2 - 3*b. Determine t(-7).
-20
Let g(v) be the first derivative of 7*v**3/3 - v**2/2 + 59*v - 110. Let n(k) be the first derivative of g(k). Give n(2).
27
Let f(o) = 2*o - 11. Let z be f(8). Let d(m) = 262 + m**3 + 250 - z*m**2 - 499. What is d(4)?
-3
Let q(l) = l**2 - 15*l + 12. Let f = 456 + -467. Let m be (14/f)/(108/(-1188)). What is q(m)?
-2
Let s(j) = -2*j**2 - 23*j + 12. Suppose 4*p = -2*k + 25 - 61, 0 = -p - k - 6. What is s(p)?
0
Let k(a) = a**3 + 8*a**2 + 5*a - 15. Let p = -269 - -265. Let i be ((-8)/(-12))/((-1)/((-42)/p)). Calculate k(i).
-1
Let a(d) = -d**3 + 13*d**2 + 14*d + 3. Let b be a(14). Suppose o - 5*o + 23 = v, -5*v = -b*o. Suppose n = 1, -z - v*z - 3*n = -11. Let c(q) = q + 2. Give c(z).
4
Let g(y) = -4*y + 10. Let n(j) = -j**3 - j**2 + 2*j - 2. Let x(u) = -5*u**3 - 13*u**2 + 22*u - 9. Let b(k) = 6*n(k) - x(k). Let p be b(3). Calculate g(p).
-2
Let b(t) = -t**2 - 58*t - 4. Let z be b(-58). Let f(c) be the second derivative of 7/6*c**3 + 5/12*c**4 + 0 - 2*c + 1/20*c**5 + 3*c**2. Calculate f(z).
-6
Let b(s) = 6*s**2 - 2*s + 3*s**3 - 4*s**3 + 8 - 2*s. Suppose 9*h + 60 = 11*h. Suppose 0 = -49*n + 54*n - h. What is b(n)?
-16
Let k(b) = 4*b**2 - 15*b + 24. Let f(n) = 5*n**2 + n - 3. Let v(c) = f(c) - k(c). Calculate v(-17).
-10
Let z(t) = -t - 3. Let w = 3 - 0. Let p be (-2 - -2) + (-3 - w). Give z(p).
3
Let o(s) = 37*s + 572. Let r(u) = -9*u - 127. Let b(v) = 4*o(v) + 18*r(v). Give b(7).
-96
Suppose -2*w - h = 6, -3*w + 2*h - 4 + 9 = 0. Let k(o) = o**3 - 11*o**2 + 11*o - 5. Let t be k(10). Let b(a) = a - 2 + 1 + 0 - t*a. Calculate b(w).
3
Let q(m) be the first derivative of 1/4*m**4 - 7*m - 5/2*m**2 - 1 + 4/3*m**3. Let h = -41 + 36. What is q(h)?
-7
Let y be -3 - 4/(2 | {
"pile_set_name": "DM Mathematics"
} |
Sesame oil, which comes from sesame seeds, is a lesser-known vegetable oil but is, in fact, one of the healthiest alternatives to normal vegetable oils. Sesame seeds, known by the scientific name Sesamum indicum, are small yellowish brown seeds that are primarily found in Africa, but they also grow in smaller numbers on the Indian subcontinent. | {
"pile_set_name": "Pile-CC"
} |
Pristimantis eremitus
Pristimantis eremitus is a species of frog in the family Craugastoridae. It is found in the Cordillera Occidental in northwestern Ecuador from the Cotopaxi Province northward and on western slope of the Colombian Massif in the Nariño Department, extreme southwestern Colombia. The specific name eremitus is Latin for "lonely" or "solitary" and refers to this species being the only western-Andean species among its closest relatives. Common names Chiriboga robber frog and lonely rainfrog have been coined for it.
Description
Adult males measure and adult females in snout–vent length. The snout is moderately long and subacuminate in dorsal view, rounded or weakly protruding when seen laterally. The tympanum is visible; the supra-tympanic fold is indistinct because of warts. Skin is dorsally areolate. The fingers and toes bear discs and prominent lateral fringes but no webbing. The dorsum is green and has darker markings (either reddish brown reticulations or brown flecks) and a dark canthal stripe that continues to the flank. The venter is white to pale yellow.
Habitat and conservation
Pristimantis eremitus occurs in montane cloud forests at elevations of above sea level. It occurs in both primary and secondary forests. It is associated with both terrestrial and epiphytic bromeliads as well as herbaceous vegetation and shrubs. Individuals have been found active during the daytime as high as above the ground.
This species is threatened by habitat loss caused by logging and agricultural development. It is known from the La Planada Reserve in Colombia, and its range overlaps with the Los Illinizas Ecological Reserve in Ecuador.
References
eremitus
Category:Amphibians of the Andes
Category:Amphibians of Colombia
Category:Amphibians of Ecuador
Category:Amphibians described in 1980
Category:Taxa named by John Douglas Lynch
Category:Taxonomy articles created by Polbot | {
"pile_set_name": "Wikipedia (en)"
} |
Categories
Friday, January 30, 2015
{ Cancer's Hidden Blessings: See & Sip }
Today I took my dad to one of his post-chemo appointments... Something I have decided I don't mind helping out with... Thankfully UCI Medical Center has a beautiful campus! It is actually fun to just go and sit outside and enjoy the scenery. I had more time to kill than expected, as they decided to hook him up with an hour drip of fluids... With the room pretty full, I decided to go down to the cute little coffee shop right outside the building. (Another major plus!) While sitting there, sipping my French Vanilla Ice Blended coffee (yum!) I was thinking about some new things to blog or reflect on in my life. Hmm... I decided that I should start journaling/blogging every "hidden blessing" moment that I come across. That way I (and anyone reading who is going through a similar situation and needs to see a glimmer of light in what can be such a dark tunnel) can visually look back and see all of the small blessings God has hidden beneath the words "cancer" and "chemo". My mom has also helped to inspire the idea, as she has started writing down the things that she/we can still do while going through this adventure. For example, taking a bubble bath with the lights dimmed and a candle burning... Going shopping for a few hours... Or enjoying a delicious cup of coffee. Even bigger trips such as going to Disneyland and major traveling trips (that we will do when summertime comes around) are even more special and meaningful to us. These little reminders will pop up every now and then on the blog so keep an eye out! Thanks for listening!
2 comments:
Oh I'm excited about this theme! Yes! I have recently been reflecting on hidden blessings of Cora's time in the cast a year ago. Deepening relationships with others who felt called to serve us with meals or other help during that time, more time at home together as a family, that I was still able to host a girls' weekend that was important to me with Cora in the cast, etc. I love you! Kelly
Courtney, You and your mom are so wise. You are so right, God shows us many blessings during hard times. We just have to be aware of them. One of the greatest blessings is seeing his hand in helping us through the tough times. I know that I also appreciated the little things in life so much more. Love you, Karen | {
"pile_set_name": "Pile-CC"
} |
Second of two in a series, following The Origins of Political Order. This covers history from the Industrial and French Revolutions onward, though it does take a brief projection beyond that in the section on the Inca and Aztec empires.
Fukuyama's thesis, put forward in the first book, continues to be central. The ideal government consists of three things: 1) a centralized state, 2) a rule of law, and 3) democratic accountability.
Being an American, I was most eager for the discussion of the United States that he promised to bring in the first book (why Donald Trump, why???). The attempt to take things from the long view, I thought, might help clear it up a bit. Why can't Congress get anything done? Why are our state bureaucracies so confusing, slow, and useless? How did this all come to be?
Apparently, it wasn't all bad. He highlights the beginning years of the Forest Service as a shining example of efficient and functional bureaucracy when Gifford Pinchot managed to keep it staffed by highly skilled, uncorrupt professionals. The American military has generally high ratings of approval, especially compared to Congress. A bureaucracy needs to have a certain degree of autonomous power and discretion to make its own decisions to achieve objectives for the common good, but the US never achieved it in its national government because it was founded on a distrust for an overly powerful state.
The chronology of how the three important components of government--the state, rule of law, and accountability matters. In the liberal democracies of Europe, it was the rule of law, the state, then democracy. In the United States, it was the rule of law, democracy, then the state. Since political participation was expanded beyond the educated, class elites to the masses before state institutions were fully consolidated in the Progressive Era and New Deal, political candidates and parties were able to appeal to short-term goals at the expense of long-term benefits. The government can't get too many useful things done without being lobbied and hampered by multiple competing interest groups, theoretically intended to give everyone an equal voice but in practice favors the most coherently organized and best funded groups. Today, that usually means large corporations.
As an example, Obama was only able to pass the Affordable Care Act by abandoning any role in legislation and making multiple concessions to congressional committees, insurance companies (among other groups I can't remember off the top of my head).
Another fascinating question I never knew I had was about the Aztec and Inca Empires. How was it that such large, impressive, and wealthy kingdoms were so easily toppled by European conquistadors? Multiple theories have been made: disease, superiority of weapon technology, the influence of geography (put forward by Jared Diamond's famous Guns, Germs, and Steel).
Fukuyama goes again back to his thesis of a universal pattern of political development here. His answer is that the Aztec and Inca Empires, though large and impressive, were actually not that far developed politically. His analogy is to China during its feudal period during the Eastern Zhou, prior to Qin Shihuang's unification in 221 B.C. Or India under the Mauryas, the unification of which did not last long. Unlike China in its more mature imperial period, the Aztecs and Incas lacked a unified written language and culture around which its people could coalesce to oppose European colonization. Power distribution resembled feudalism, decentralized around entrenched nobles that were not (yet) united around and fully loyal to a centralized state and made it easier for the European conquistadors to divide and conquer.
He agrees with Diamond that the north-south geography of longitude lines played a role in political development, but points out that the diseases took a toll on native populations after the empires of Mexico and Peru were toppled--finishing the job instead of starting it, so to speak. (I haven't read Diamond by the way, just FYI).
Finally, Fukuyama tries to explain why in the modern world after colonization, it is the Asian countries who have been most successful, and why that part of the continent was most successfully able to resist colonization.
Again, it comes down to institutions. China, despite its large population and diverse geography, has a long history of centralized state-building. It lacks an organically developed rule of law, due to the absence of transcendental monotheistic religions, and lets the employees of its bureaucracy manage things at the local, municipal, and provincial levels based on individual discretion and circumstantial context while still staying accountable to the national levels of power. Japan was ruled by the same imperial dynasty for over 2500 years and inherited its bureaucratic tradition from China, in addition to a unified language and culture.
The last element is especially important for one crucial ingredient for state-building: nationalism. Nationalism provides a narrative of legitimacy that further binds people to a country's state power. It also overrides any potential ethnic and racial divisions (not really a problem for China and Japan, both very ethnically homogenous countries). I can't remember much about the discussion of other Asian countries unfortunately, China and Japan is all I can recall.
Anyway, the result is that even after a brief period of submission and humiliation, China and Japan were quickly able to oust the Western powers, get back on their feet, and join the globalized world in their own right. Japan is a first-world country today, and the size of China's rapidly growing economy ranks among the top with the United States and European Union.
This is a big book...too much to cover completely here. I will say that I'm still slightly skeptical of Fukuyama's assertion of democracy as part of the inevitable ideal of political development and wish he'd gone over more thoroughly the many times in recent history in which exportation of democracy completely failed by the United States in Iraq, Afghanistan (can't remember them all) as a counterbalance to his thesis. He acknowledges that "getting to Denmark" is a path that must be built on indigenous traditions, but it still feels like there's a stronger point to be made when the points in this book are applied in practice. Isn't it possible that developing countries are forced to take risky institutional gambles in the name of Western ideological perfection before their national identity, ability to eat, ability not to have civil wars etc. are fully consolidated? Hasn't it already happened, in fact? That's just my point of view, and I don't disagree with Fukuyama (one look at his bibliography and he's obviously WAY better read than I am). Overall, one of the more interesting works I've read in a while. | {
"pile_set_name": "Pile-CC"
} |
Probing centrifugal barriers in unimolecular dissociation of the allyl radical.
Time-resolved photoionization of the hydrogen atom product from the allyl radical, C3H5, dissociation with 115 kcal/mol total energy provides information on the unimolecular dissociation dynamics. Vibrationally hot ground-state allyl radicals in both low and high J-states are prepared by electronic excitation to selected rovibrational states of C-state allyl followed by internal conversion. The measured dissociation rates and kinetic energy release are independent of the allyl parent rotational energy and suggest that centrifugal effects are unimportant in allyl radical dissociation at 115 kcal/mol. | {
"pile_set_name": "PubMed Abstracts"
} |
Decoding Apparent Ferroelectricity in Perovskite Nanofibers.
Ferroelectric perovskites are an important group of materials underpinning a wide variety of devices ranging from sensors and transducers to nonvolatile memories and photovoltaic cells. Despite the progress in material synthesis, ferroelectric characterization of nanoscale perovskites is still a challenge. Piezoresponse force microscopy (PFM) is one of the most popular tools for probing and manipulating nanostructures to study the ferroelectric properties. However, the interpretation of hysteresis data and alternate signal origins are critical. Here, we use a family of scanning probe microscopy (SPM) techniques to systematically investigate the ferroelectric behavior of electrospun potassium niobate (KNbO3) nanofibers. Band Excitation (BE) SPM scans reveal that PFM signals are dominated by changes in resonant frequency due to rough nanofiber surfaces, rather than the actual local piezoelectric strength. We investigate the bias-induced charge injection properties and electrostatic interactions on the PFM response of the nanofiber using contact mode Kelvin probe force microscopy (cKPFM). Furthermore, the impact of relative humidity on the KNbO3 nanofiber's piezoresponse, switching behavior, and tip-induced charges are explored. The resultant data from BE scans were utilized to estimate the piezoelectric constants of the KNO nanofiber. These observations will provide clarity in studying newly developed ferroelectric nanostructures and unambiguously interpreting the PFM data. | {
"pile_set_name": "PubMed Abstracts"
} |
This thesis aims to show that usually neglected compositional changes of the labour force over the last 30 years (more young workers, more women, more workers with low attachment to the labour market, less experienced workers, more educated workers) can explain important stylized facts. First, about unemployment: the countries with high female unemployment are also countries with high youth unemployment (the correlation across OECD countries between the two rates is positive and quite high at about 0.84). This can be explained by a shift towards more competition among young workers and women in a secondary labour market due to increased labour supply of the two groups. About 4 points of total unemployment are associated with observed changes in labour supply, if endogeneity of participation variables with respect to unemployment is properly accounted for. Second, about wage inequality: the most important trend in wage inequality in the US is the rising return to experience, or equivalently the deterioration of the relative position of younger workers. The second fact can be explained by a historical change in the skill composition of the labour force: workers, though more educated, are yet less experienced. This substitution of skills can also explain a significant part of the unexplained rise in the return to education, depending on the substitutability between education and experience in human capital: more inexperienced workers in the labour force generate a relative scarcity of human capital that increases the demand for education. In the theoretical part of the thesis, two trends of the OECD labour markets are explored within matching models. First, it is shown that more short-term employment can be explained by higher active population growth and lower productivity growth. Second, stronger urban unemployment gradients and higher aggregate unemployment can be shown to reinforce each other when location choices within agglomerations are endogenous. | {
"pile_set_name": "Pile-CC"
} |
Q:
Drupal 8. Error join() Twig function
I try to print field_tags inline and comma separated. My code in node.html.twig:
<p>{{ content.field_tags | join(', ') }}</p>
This return error message: The website encountered an unexpected error. Please try again later.
Whats wrong? Help.
A:
Anything in content is a render array; you can't use filters like that on those directly.
You can loop over it and print out each item instead; that works because each item is then automatically rendered.
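For example, here is a minimal sketch for node.html.twig that prints the plain term labels instead (this assumes field_tags is a taxonomy/entity reference field and reads from the node object rather than the render array, so you lose the tag links):
<p>
  {% for item in node.field_tags %}
    {{ item.entity.label }}{{ not loop.last ? ', ' }}
  {% endfor %}
</p>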
| {
"pile_set_name": "StackExchange"
} |
More than just peace and love, 1968 was a global clash against political and business establishments, which resorted to violent repression. Its legacy remains inconclusive today.
Editor's Note:
For Americans, 1968 was arguably the most traumatic year of the second half of the 20th century. Two assassinations, a failing and divisive war, and a political system that seemed unable to respond to the crises of the day. But 1968 was truly a global phenomenon, which saw unrest from Prague to Paris, Tokyo, Moscow, and Rome, to name a few. To mark the 50th anniversary of 1968, David Steigerwald, Elena Albarran, and John Davidson take us to the United States, Mexico, and Germany to examine the events of that year and the long shadow they still cast.
Given the number of half-century retrospectives I’ve read about it, 1968 was quite a year.
Writers have compared it to the most dramatic moments in modern history— 1789, 1848, 1914, 1989. In the United States, the turmoil of ’68 probably compares to 1861 and 1919.
There was an A-list of nearly monthly events, from the Tet Offensive for January and February to the triumph of Richard Nixon in November’s presidential election. Between these came the abdication of Lyndon Johnson and not one but two major assassinations. Martin Luther King, Jr. was murdered in early April, and Robert Kennedy was killed in June, almost two months to the day after King.
U.S. Marines in the ancient imperial capital of Vietnam, Huế, during the Tet Offensive (left). Richard Nixon giving his “victory” sign at a 1968 campaign rally in Pennsylvania (right).
A string of urban rebellions erupted after King’s death; Kennedy’s brought almost stunned silence as a mourning train delivered his casket from New York to Washington. Columbia University came under siege in late April and early May. In August, the Democratic Party melted down in Chicago. Then came Nixon, who implausibly offered to “bring us together.”
That’s just the A-list. The B-list included the passage of the Fair Housing Act, the third important civil-rights measure of the Sixties, the publication of the Kerner Commission report on urban disorders, and Black Power protests on the Olympic medal stand in Mexico City.
A 1964 protest against housing discrimination in Seattle, Washington (left). President Lyndon Johnson with members of the Kerner Commission in 1967 (right).
With its pictorial retrospective, USA Today reminded me that a number of things could make up C- and D-lists. The Fifth Dimension won a Grammy for “Up, Up, and Away.” The Big Mac was introduced to a then-svelte public. “Hair” premiered, as did “Night of the Living Dead,” which could probably rise to a B-list if treated as an allegory for the times—surely more suitable for it than for “Hair.”
Denny McClain won 31 games for the Detroit Tigers, which only Detroiters would rate as historically important, though he remains major league baseball’s last 30-game winner.
The theatrical poster for the Night of the Living Dead (left). John Gordon Mein, the American ambassador to Guatemala (right).
Some events seem as though they ought to be memorable but don’t show up on anyone’s lists. Who remembers the assassination of John Gordon Mein, the American ambassador to Guatemala, in August, on the very day the Democrats imploded in Chicago?
The sheer number of convulsive “happenings,” to misuse a word from the day, suggests that the undercurrents driving events were wide and deep. They were also varied. The Vietnam War and the nation’s racial nightmare stand out as the most obvious underlying causes of tumult, but they were coincidental, not interrelated.
A 1967 protest against the Vietnam War in Washington, D.C. (left). A soldier standing guard in Washington, D.C. after riots following the assassination of Martin Luther King, Jr. in April 1968 (right).
The urban crisis, which itself developed from several different mid-century developments, also was causal, and while it was indisputably connected to the racial crisis, those two forces also were independent of each other.
At the time, many Americans believed that the “generation gap” was triggering campus uprisings, antiwar agitation, and urban riots. Social scientists diagnosed America’s social and cultural ills as evidence of an epidemic of alienation.
So maybe the way to understand 1968 is simply to note that many powerful forces came together at the same time and leave it at that.
There are ways, nonetheless, to try to apprehend the entire picture, though doing so necessarily depends on one’s willingness to work in broad strokes.
Carole Fink and her colleagues argued in the 1998 book 1968: The World Transformed that the upheaval was a protest against the Cold War order. There is much to this view, particularly when we think globally, as the authors were doing. Whether a regime was tied to the U.S. or the USSR, the "establishments" in those nations that played host to protests shared many qualities.
They were rationalized, bureaucratized warfare states. Visionless hacks and ideological automatons ran them. Because those establishment bosses mostly had come of age during World War II, generational conflict ran through the rebellions everywhere as outdated “isms” lost their credibility among the children of the postwar. From Poland to Berkeley, people were fed up with stifling systems, particularly those who had gotten a taste of consumer affluence.
If the first virtue of this explanation is its breadth, its greatest liability is in positing 1968 as an epic fight between the forces of order and the forces of change. I admit to using this language when I teach the period because it is handy, illustrative, and seductive. It also works as a way of describing the camps that flesh-and-blood people very often aligned themselves with.
Huey P. Newton, the Black Panther Party for Self Defense’s Minister of Defense in 1967 (left). A 1968 campaign poster for George Wallace’s presidential campaign (right).
Huey Newton certainly believed that he was an agent of change, and George Wallace was self-consciously a demagogue for the forces of order. In 1970 when Ben Wattenberg and Richard Scammon defined the “average” voter as a 47-year-old housewife of a factory worker in Dayton, Ohio, they put her firmly among the forces of order.
A useful descriptor, the dichotomy hovers near the simplistic as historical explanation. It invites the presumption that one side was good and the other bad. Dubious in any case, that sort of binary is in the eye of the beholder: what was good for one person was often bad for another. That Dayton housewife feared change, not because the established order favored her so much as she feared what disorder would do to her already fragile life.
A Chicago sign as part of Mayor Richard J. Daley’s campaign for a cleaner Chicago in 1968 (left). A U.S. stamp issued in 1968 (right).
The order vs. change binary can’t even explain one of the year’s greatest Freudian slips, when Chicago Mayor Richard Daley defended his police at an August 30 press conference: “Gentlemen, get this straight, once and for all: The policeman isn’t there to create disorder. The policeman is there to preserve disorder.” Daley, apparently, had a camp of his own.
The formulation, moreover, doesn’t explain the momentum of demands for change nor account for the durability of the establishment order. With the great exception of African American activism, the forces of change in the United States were products of the technologically driven consumerism we associate with post-industrialism.
The economic processes of postwar affluence dissolved much of the foundation beneath classic bourgeois society and in the process opened up important terrain for the development of new forms of family life, sexual expression, and creative cultural forms. A new subjectivity was settling in by 1968, by which individuals could make important freedom claims against prevailing social institutions.
While obviously the momentum of change gave plenty of inspiration to those demanding an end to the war and denouncing racism, most of this post-industrial emancipation was taken in the realms of society and culture. By contrast, the structures of power in state and economy lay outside those pressures, and to the considerable extent that they continued to hold office and support wealth, the forces of order could command those structures.
A button for the Youth International Party, a radical, youth-oriented, countercultural offshoot of the free speech and anti-war movements of the 1960s (left). A 1968 anti-war march in Chicago ahead of the Democratic National Convention (right).
How individuals accommodated themselves to a world of constant change and constant stasis varies as much as do individuals. Counterintuitive though it sounds, I suggest that Richard Nixon was a crystalline reflection of the state of America, circa 1968.
As the presidential tapes make abundantly clear, Nixon was an agglomeration of mainstream bigotries, which he often folded into rants against the forces of change that lumped together the “long hairs,” “the blacks,” the “homosexuals,” the “hippies.” They weren’t just co-conspirators; they were parts of the same animal by his lights.
Yet in the midst of the 1968 election, Nixon, the least-cool man in America, appeared in a cameo on “Laugh-In,” the comedy-variety show that in ripping off hippy imagery showed the inroads of cultural change in mainstream America. “Sock It to Me?” he said, repeating one of the show’s standard jokes.
In his obliviousness, Nixon showed that post-industrial cultural change had seeped well beyond Greenwich Village, but the structures of power—wealth accumulation, state control, the military—remained in place, even after the good buffeting that a year of turmoil brought.
This has been a recipe for stalemate, and to a large extent, the United States remains frozen in those binaries. That is the legacy of 1968.
Bullets ricocheted across the Tlatelolco neighborhood Plaza of the Three Cultures in Mexico City on the evening of October 2, 1968.
They struck hundreds of civilian protestors, the vast majority college and high-school-aged activists who had gathered that afternoon in the working-class barrio to express their mounting discontent with the government. Eyewitnesses and later photographic evidence testified to the volcanic stone and concrete plaza awash with blood and to the bodies stacked in adjacent building hallways.
But on the morning of October 3, national newspapers reported only a skirmish provoked by disgruntled students. According to the official story, the demonstration was a ploy to sabotage the upcoming XIX Olympic Games, hosted for the first time by a developing country.
Just ten days later and 20 kilometers away, with civil order restored, sprinter Norma Enriqueta Basilio lit the Olympic cauldron—the first woman to do so—against the backdrop of thousands of doves (pigeons, actually).
That image of peace, broadcast around the world, seemed particularly charged given the political climate of 1968. In Mexico, Olympic fever and globally inspired student mobilization converged in a maelstrom of mutual antagonism, opening a permanent fissure in the social contract between the official party (the Partido Revolucionario Institucional, or PRI) and Mexican citizens.
The student movement did not occur in a vacuum, nor was it a spontaneous uprising. Historians have demonstrated the long-term politicization of high school and college students dating back to the 1950s. Young people forged alliances with the labor sector’s National Strike Committee to try to hold their government accountable for the revolutionary promises that kept the PRI in power.
Prior to the massacre, the 1968 student organizing committee had released a list of six demands directed to President Gustavo Díaz Ordaz: the release of political prisoners, reduction of the Olympic budget, elimination of the especially punitive Penal Code, compensation for police brutality, firing the abusive police chief, and the disbanding of the Granaderos (riot police).
They wanted transparency; they wanted mass media participation; they demanded that negotiations be broadcast on radio and television. (The negotiations never happened.)
Despite the fact that Mexico’s student movement had been a serious enterprise seeking legitimate political goals for more than a decade, the image of youth as destabilizing agents framed the political narratives of 1968.
The Olympics offered an opportunity to change the image of young people. One television announcer proclaimed at the opening ceremonies: “This is the beautiful capital of Mexico. Mexico, very old and today especially very young, plays host to the cream of the world’s athletes.”
Close up on the detail of the main Olympic Stadium for the 1968 games (left). A student protester poster referencing the logo design of the 1968 Olympics and military oppression (right).
Young bodies at their most appealing and promising were on display at every turn. Pixie-cut teen models bedecked in miniskirts printed with the psychedelic ’68 logotype sashayed around the ruins of pre-Aztec pyramid structures as part of the Olympic committee’s local publicity campaign. Meanwhile, hundreds of non-conformist young bodies culled from Tlatelolco’s bloodied plaza languished unseen in the nearby Lecumberri penitentiary.
But once the world’s athletes and the international press hordes had left, Mexicans had to confront the realities of the onset of its Dirty War. Many historians consider the event that came to be known as the “massacre at Tlatelolco” the beginning of the end for the PRI, which had claimed uncontested control as the nation’s ruling party since 1929.
A meeting of graduate students in August 1968 (top left). Students protesting in September 1968 with one student’s sign reading: Everything is possible in peace (top right). Students demonstrating in August 1968 (bottom left). A teacher speaking with soldiers as students demonstrate in the background in July 1968 (bottom right).
Undeniably, the government had betrayed the public’s confidence by opening fire on its own citizens even as the PRI continued to downplay or deny the scale of civilian fatalities at Tlatelolco. After 1968, the party’s legitimacy deteriorated until the PRI lost its first election in 2000, and briefly recovered only to succumb to what contemporaries deem the final death knell in the 2018 elections.
Since the massacre, there has been a vigorous struggle over how the events of 1968 are remembered. In 1975 writer Elena Poniatowska published La Noche de Tlatelolco, a collection of oral history testimonies from students, parents, government officials, police officers, union organizers, bystanders, and other witnesses to the massacre. The book's availability in translation as Massacre in Mexico (1991) renewed global interest in the subject of state-sanctioned violence perpetrated in Mexico.
The plot of the 1988 feature film Rojo Amanecer (Red Dawn) unfolds on the evening of October 2 in a Tlatelolco family apartment overlooking the plaza. Their intimate space is invaded by brutal riot squad violence that strips the family of their innocence, with faith in the government being the ultimate casualty. Government censors drove Rojo Amanecer underground, where a vigorous pirating network facilitated its widespread distribution. The film was finally released in 1990.
Survivors and human rights advocates cobbled together a Truth Commission in 1993 to try to determine the number of dead, a task made difficult by the corruption and cover-up that took place in 1968. A monument erected in the Plaza of the Three Cultures in 1993 bears the inscription of the names of the officially recognized victims, and serves as the terminus for the annual commemorative march through the city every October 2. That number stands at 40, though many people believe the total number killed to be 300 or more.
Student protesters on a burned-out bus in July 1968 (left). A tank and soldiers meeting student protesters on the Plaza de la Constitución in August 1968 (right).
Efforts toward meaningful reconciliation materialized in 2007 with the inauguration of the Centro Cultural Universitario Tlatelolco (CCUT), a multipurpose interactive cultural space that houses the official documentary and oral history archives and serves as a living memorial to the 1968 student movement. The center publishes curriculum guides for educators from elementary to college levels and encourages field trips to the cultural center.
Fifty years later, the events of October 2, 1968 continue to pull on the imagination of Mexicans.
The tragicomic drama Tlatelolco: Verano del 68 (2013) depicts a cross-class love affair between students caught up in the heady political climate in the summer of 1968. The icon designating the Tlatelolco stop on the city's expansive metro system is taken and altered on the massacre's anniversary to depict a bloody handprint. It is a wordless symbol that nevertheless conveys the memory of state oppression to the millions of passengers that pass through.
In September 2018, the CCUT will release to the public the “Colección M68: Ciudadanías en movimiento” initiative, a major archival project touted as the “digital brain of 1968” that will bring together testimonies, documents, and interactive Google maps into a free, open-access digital platform. Similarly, the National Autonomous University of Mexico (UNAM) has made available online a massive collection of primary documents from the student movement.
In Mexico’s 50th anniversary of 1968, the memory of the student movement has been brought into the service of the current political climate.
The controversial populist politician Andrés Manuel López Obrador won his third bid for the presidency in July 2018, and will take office in December with his newly formed party MORENA in a victory that jettisoned the heavy mantle of the PRI.
In a keynote address inaugurating the conference “50 Years, Tlatelolco, and ‘68 in 2018,” conspicuously held in the Casa Elena Poniatowska, former UNAM rector Juan Ramón de la Fuente proclaimed that López Obrador’s victory would not have been possible if not for the path for pluralistic democracy opened by the student activists and martyrs of 1968. Though such political statements draw perhaps too tidy a chronology, clearly the legacy of 1968 still resonates.
In May 1968, nearly 100,000 West German students and workers marched in the capital city of Bonn against an emergency law being debated in Parliament. It was the climax of the protest movement of 1968, a campaign against deceptive and repressive practices of the political status quo.
To understand that moment in Bonn, we need to look back at the political and social tensions that had been building since the late 1950s. At the core of the upheaval was a fierce debate over how to define the German “people” and shifts in the meaning and attribution of the loaded term “enemy of the people.”
The story of "FRG 1968" really took off with the Spiegel Affair of 1962. Der Spiegel, a Hamburg-based weekly news magazine, published a cover story on the inadequacy of the Federal Republic of Germany's (FRG) armed forces in relation to those of the Warsaw Pact members. Informed by inside sources, the article was particularly critical of Defense Minister Franz Josef Strauss.
Cries of “treason” from the highest levels of government led to the occupation and search of the Spiegel offices, the arrest of the chief editor, and the interrogation of journalists. No evidence to support criminal charges emerged.
The public reacted with spontaneous and broad-based outrage at this attempt to censor the free press. Observers across the spectrum heard echoes of Nazi jack-boots stomping across the FRG. West Germany’s postwar “Economic Miracle”—which had brought the country from wartime devastation to robust economic growth and development in a matter of a decade—had made it easy for citizens to gloss over their memories of the Nazi years.
The Spiegel outcry doubled when Strauss worked with the Franco dictatorship to arrest the article’s author while he was on vacation in Spain. Eventually, the public’s insistence on freedom of the press and on restricting the arbitrary use of state power triumphed over the claim that criticizing a high-ranking government official made journalists “enemies of the people.”
The Spiegel Affair not only chased Strauss out of the Cabinet but led to the retirement of Federal Chancellor Konrad Adenauer, the widely respected “Old Man” of FRG politics, a year later. It also started West Germany on the road to 1968 by weakening the iron grip that the Christian-Democratic Union (CDU) had held on the government.
During Kurt Kiesinger's tenure as German Chancellor (1966-1969), his CDU party lost sufficient public support that it found it necessary to devise a "Grand Coalition" with its rival, the Social Democratic Party (SPD). Kiesinger's Nazi past fueled opposition and worry that the CDU was a tool to reassert authoritarian government. Kiesinger had formerly been chief liaison to Joseph Goebbels' Ministry of Propaganda when he worked as deputy director of the Foreign Office's broadcasting department.
During the collaboration, the left-leaning wing of the SPD lost all hope that the CDU might be a vehicle of change. Rather the SPD followed the Socialist German Students (SDS) movement, which had for years questioned the establishment’s connections to the militaristic past and the Holocaust. They joined other students, radical unions, and a broad spectrum of intellectuals to form an Extra-Parliamentary Opposition (APO).
A 1965 poster for the Christian-Democratic Union with Kurt Georg Kiesinger (left). The logo of the Social Democratic Party of Germany (right).
Here, terrifying memories of the Nazi past and student fears that elites connected with the CDU would bring back the oppressive practices of the radical right led to the growth of social movements in West Germany that took to the streets in 1968.
The APO sought to bring pressure upon the ruling coalition and effect social change from outside the institutions of government. They strove for public forums independent of state control and the capitalist marketplace where they could voice their alternative vision for Germany.
The first classes of the Berlin film school, established in 1966, included some of the vanguard of this “counter-public sphere” hoping to re-inform, and thus to reshape and win over, the German public (such as Harun Farocki, Holger Meins, and Helke Sander). They used their short films to critique the conservative tendencies of the West German government and the threats they saw of American imperialism and the capitalist press—labeling them “enemies of the people.”
In particular, they condemned the corporatization of the German press, which they felt was not only being muzzled by government forces (like in the Spiegel case) but also controlled by a small contingent of wealthy media companies. The Springer Press in Berlin, which had cornered roughly 70% of the market, became one of their primary targets.
In the ever more mobilized and tense environment of the 1960s, West Berlin became a center of agitation. It became more radicalized not just because of its two universities and film school, but also because its status as an occupied city protected Germans from the FRG’s compulsory military service while there. Young men who wanted to avoid service, many of whom held strong anti-war sentiments, took up residence there and gravitated to its oppositional scene.
While the war in Vietnam was the most glaring example of U.S. imperialism, the APO rejected U.S. intervention in other places as well. In June 1967, the Shah of Iran, who had been returned to power in 1953 by a coup backed by the United States and Great Britain in order to maintain their oil interests, made an official state visit to West Berlin with his third wife.
A 1967 state reception for the Shah of Iran in West Germany (left). Protestors in 1967 at the Schöneberg Town Hall in Berlin (right).
During the ensuing demonstrations, the police shot protester Benno Ohnesorg, a university student, who died before reaching the hospital. Photographers had crowded over him for some time before an ambulance was called, solidifying the sense that the state and press were in cahoots.
Countercultural student groups such as the Kommune 1, a politically mobilized commune in opposition to the traditional family, had previously limited themselves to symbolic actions like “pudding assassinations” and property damage. After the Ohnesorg shooting, they accepted violence as necessary in the face of the West German government’s complicity with U.S. actions in Iran and its violent suppression of public opposition.
At Ohnesorg’s funeral, student activist Rudi Dutschke advocated peaceful avenues to change but cautioned that the APO must be ready for the violence that the state was prepared to use and the conservative press helped the people accept.
An anti-communist activist shot Dutschke on April 11, 1968, leaving him with severe brain damage. Dutschke had been labeled an “enemy of the people” in the Springer tabloid Bild-Zeitung, and much of the counterculture attributed the attempt on “Red Rudi”’s life to campaigns of misinformation and character assassination. Demonstrations and clashes with police increasingly occurred at the Springer building in Berlin.
The German government exacerbated the situation in May when it considered “Emergency Acts” allowing the Cabinet to suspend parliamentary rule and enact laws in times of (perceived) crisis. This raised the specter of the Weimar Constitution’s self-cancelling statute, which had contributed significantly to the legitimation and entrenchment of the Nazis in the early 1930s. These worries brought tens of thousands of demonstrators to Bonn, but not the same broad cross-section of the populace that had reacted to the heavy-handedness of the Spiegel Affair.
The Emergency Acts passed and dealt a blow to the protest movements because the German people—manifested in the elected government—clearly resented the possibility of radicals’ incursions into their comfortable lives more than they feared the return of authoritarianism. By the end of May, the refusal of the German population to coalesce around the APO’s causes was unmistakable—diminishing the momentum of the ’68 movement in Germany.
Fault lines appeared within the APO, and the three film school students mentioned above represent the tendencies of the resulting split. Sander became a central figure in the burgeoning women’s movement that mobilized because the APO mirrored the traditional gender inequities of bourgeois society. Farocki undertook the long “march through the institutions” as a filmmaker, critical intellectual, and, eventually, teacher at the Berlin Film School.
Most radically, Meins joined the Red Army Faction (RAF, aka the Baader-Meinhof Group, a radical, belligerent left-wing organization) to take up armed resistance against a West German state he considered fascistic. Captured along with the ringleader, Andreas Baader, in 1972, he died as the result of a hunger strike protesting the conditions of the RAF’s incarceration in 1974.
Sporadic violent insurgency continued through the 1970s, but the opportunity for militants to win over the public and lead the people never materialized, partly because the state could sell its anti-democratic actions as defensive measures.
Meanwhile, the Springer Press, its sales unshakeable, relentlessly tarred all three sets of leftists as "enemies of the people."
Nonetheless, many of the emancipatory aims of the APO came to fruition in the 1970s and have taken root in Germany’s public self-conception: curricular improvement and less-authoritarian educational methods; better conditions for students; attention to the legacies of World War II and the Holocaust; police and judicial reform; recognition of women’s rights; and support of human rights throughout the non-western world.
The contradictions of “FRG 1968” discussed here suggest that labeling a group the “enemy of the people” facilitates its immediate marginalization but may actually foster its goals long term.
Suggested Reading
Mark Kurlansky, 1968: The Year that Rocked the World, New York: Random House, Inc., 2005.
David Farber, Chicago '68, Chicago: The University of Chicago Press, 1988.
Carole Fink, Philipp Gassert, and Detlef Junker, 1968: The World Transformed, Cambridge: Cambridge University Press, 1998. | {
"pile_set_name": "Pile-CC"
} |
Assessment of patients' awareness and factors influencing patients' demands for sedation in endodontics.
Endodontic therapy is perceived by many as a procedure to be feared. Many studies have reported that fear and anxiety are major deterrents to seeking dental care in general, but only a few deal with the use of sedation in endodontic therapies. The purpose of this study was to assess patients' awareness of and factors influencing the potential demand for sedation in endodontics. We hypothesized that there is an association between demographic factors and the demand for sedation in endodontics. A survey consisting of 24 questions was given to patients 18 years and older who presented to the graduate endodontic clinic. Results were collected and statistically analyzed. Thirty-six percent of patients reported that their perception of sedation was being put to sleep, and 27% perceived it as related to or reducing pain. Concerns associated with endodontic therapy were the fear of pain (35%), fear of needles (16%), difficulty getting numb (10%), and anxiety (7%). The 2 major demographic factors that influenced the demand for sedation were cost and the level of anxiety (P < .05). Fifty-one percent showed a positive interest in sedation for endodontic therapy if the option of sedation was available. The demand for sedation in endodontics is high. Patients' understanding of sedation varies. More patients would consider having endodontic procedures if sedation was available. The provision of sedation by endodontists could result in more patients accepting endodontic therapies. | {
"pile_set_name": "PubMed Abstracts"
} |
Q:
Django: Assigning variables in template
How can I assign a variable inside the django templating system ?
Assuming Restaurant is a Model:
{% restaurant_id as restaurant.id %} or {{ restaurant_id as restaurant.id }} are not working.
A:
You could use the with template tag, and assign an internal template variable like this:
{% with restaurant_id=restaurant.id %}
... use restaurant_id in this template section ...
{% endwith %}
A:
Note: You can use filters within the "with" template tag:
foo is {{ foo }}
{% with bar=foo|slice:'6:9' %}
bar is {{ bar }}
{% endwith %}
| {
"pile_set_name": "StackExchange"
} |
Preparation / Directions:
Combine all ingredients in a non-aluminum sauce pan and bring just to a boil.
Lower heat immediately and simmer 15 minutes.
Makes about 1 1/2 cups.
Lightly brush on barbecue sauce.
Do not let the chicken burn. | {
"pile_set_name": "Pile-CC"
} |
Primary Menu
Google Buzz, OAuth And Python
Google Buzz is a social networking and messaging tool from Google Inc. that's integrated into GMail. Google Buzz was released in early February this year (9th Feb 2010), and since then it has emerged as an important social network for GMail users, and many people (including me) prefer it now to other social networking platforms such as Facebook.
Logos of Google Buzz, OAuth and Python
In May Google revealed the Buzz API to the public so developers around the world could write applications to interact with Google Buzz to read and/or write.
Google uses OAuth for authentication and authorization to their services. OAuth is an open standard that allows users to share their content (or data generally) from one site to another (or to an application generally) without having to hand the other site their credentials. For more information about OAuth Open Standard go here
I was digging around to make a small application to notify me of new Buzzes or new comments on Buzzes, so I searched and read the API to interact with Google Buzz. In this post I’ll share my Python code that uses OAuth to authorize the application to use Google Buzz data, and soon I’ll share my application when it’s done 🙂
The Process
The process of authentication and authorization according to OAuth consists of three steps:
1. Asking Google for a Request Token. The token is bound to a scope; this scope identifies the interests of your application with the user's data. Google Buzz has two scopes: the Full Access scope https://www.googleapis.com/auth/buzz and the Read Only scope https://www.googleapis.com/auth/buzz.readonly. You must select the scope that suits your needs.
2. Asking the user to authorize your needs by redirecting him/her to a page in Google's website to log in and authorize your application. As a result the user gets a verification code that must be delivered to your application so it can continue to step 3. There are three ways of getting the verification from the user listed here.
3. Exchanging the Request Token for an Access Token that can be used later for authorizing your requests for data. Your application should save this token to use it later; this token has unlimited expiration time, so you don't need to bother the user with authorization more than once (unless the user revokes the access of your application from his/her account).
Prerequisites
My class depends on Python-OAuth2, so you need to download and install it first.
The code is written using Python 2.6.
General Parameters
All OAuth requests must have the following parameters:
oauth_version: Which defines the version of OAuth to use (since when writing this post there were only 1.0 and 1.0a I'll be using 1.0).
oauth_nonce: A pseudo-random number.
oauth_timestamp: The timestamp when generating the request.
oauth_signature_method: The signature method used to sign the base string that identifies the request, which is either HMAC-SHA1 or RSA-SHA1 (it can be PLAINTEXT but Google doesn't support it).
oauth_consumer_key and oauth_consumer_secret: The key and secret of your application; if you have registered your application then you'll have a pair of key and secret for your application. If you don't have a registered application you can use anonymous for both key and secret.
oauth_signature: Which is generated using the chosen signature method for the base string which identifies the request.
So basically we’ll need to add those parameters to every request we make and we’ll have to sign the request. The former is done internally by calling
__getParams, while the latter is done internally by calling
__signRequest for each request.
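As a rough sketch (not the author's actual __getParams/__signRequest code), the same idea with python-oauth2 could look like this; the anonymous consumer key and secret are the unregistered-application values mentioned above:

import time
import oauth2 as oauth  # python-oauth2

consumer = oauth.Consumer(key='anonymous', secret='anonymous')

def get_params():
    # The general parameters shared by every OAuth request
    return {
        'oauth_version': '1.0',
        'oauth_nonce': oauth.generate_nonce(),
        'oauth_timestamp': str(int(time.time())),
        'oauth_consumer_key': consumer.key,
    }

def sign_request(oauth_request, token=None):
    # Adds oauth_signature_method and oauth_signature to the request
    oauth_request.sign_request(oauth.SignatureMethod_HMAC_SHA1(), consumer, token)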
Getting The Request Token
This request has the following parameters added to the general parameters:
oauth_callback: The URL that the user would be redirected to after authorizing your application; when redirected, the verification code is passed to this URL as a parameter along with other token parameters. If you don't have a URL or can't redirect the user for some reason you can use the special value oob (Out Of Bounds) so that Google redirects the user to a page in Google's website that has the verification code inside it.
xoauth_displayname: The friendly name of your application that Google will use when asking the user to authorize it.
This procedure is done by sending a POST request to the following URL https://www.google.com/accounts/OAuthGetRequestToken with the previous parameters while setting the ContentType header to application/x-www-form-urlencoded.
The response must be Request Token data with status code HTTP200.
This method __destroyConnection is used to destroy the connection because it becomes invalid after a few requests.
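A hedged sketch of this step with python-oauth2 and httplib, reusing the get_params/sign_request helpers sketched above (the display name and scope values are placeholders, not taken from the post):

import httplib

def fetch_request_token():
    url = 'https://www.google.com/accounts/OAuthGetRequestToken'
    params = get_params()
    params.update({
        'oauth_callback': 'oob',  # out-of-bounds: Google shows the verifier on a page
        'xoauth_displayname': 'My Buzz Client',
        'scope': 'https://www.googleapis.com/auth/buzz',
    })
    req = oauth.Request(method='POST', url=url, parameters=params)
    sign_request(req)
    conn = httplib.HTTPSConnection('www.google.com')
    conn.request('POST', '/accounts/OAuthGetRequestToken', body=req.to_postdata(),
                 headers={'Content-Type': 'application/x-www-form-urlencoded'})
    data = conn.getresponse().read()
    conn.close()
    return oauth.Token.from_string(data)  # holds oauth_token and oauth_token_secret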
Redirecting The User To Authorization Page
Here we’ll have to know what is the URL of the page to redirect the user to, and that’s done by sending a
GET request to the following URL
https://www.google.com/buzz/api/auth/OAuthAuthorizeToken with the following parameters:
oauth_token: The key of the
Request Token received in the previous step.
scope: Described earlier.
domain: The domain that your application uses, this is used for web applications, you must set it to anonymous for desktop applications.
xoauth_displayname: Described earlier.
The response would be a page with
HTTP302 status code, this page contains the URL to redirect the user to in the
location header.
This method webbrowser.open is used to open the URL in the web browser of the user; it is -obviously- contained in the webbrowser Python standard module.
As a result of the authorization Google gives you (or the user) the verification code for the token; after acquiring it (automatically or by asking the user to give it to your application somehow) we give it to the token by calling the setTokenVerifier method.
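A simplified sketch of this step: instead of reading the redirect target out of the HTTP302 response, it just builds the authorization URL directly and asks the user to paste the verification code back in (set_verifier is python-oauth2's counterpart of the setTokenVerifier method mentioned above):

import urllib
import webbrowser

def authorize(request_token):
    params = {
        'oauth_token': request_token.key,
        'scope': 'https://www.googleapis.com/auth/buzz',
        'domain': 'anonymous',            # desktop application
        'xoauth_displayname': 'My Buzz Client',
    }
    url = ('https://www.google.com/buzz/api/auth/OAuthAuthorizeToken?'
           + urllib.urlencode(params))
    webbrowser.open(url)  # the user logs in, authorizes, and is shown the verifier
    verifier = raw_input('Enter the verification code: ').strip()
    request_token.set_verifier(verifier)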
Exchanging The Request Token For The Access Token
This request has the following parameters added to the general parameters:
oauth_token: Described earlier.
oauth_verifier: The verification code acquired from the previous step.
This procedure is done by sending a POST request with the previous parameters to the following URL https://www.google.com/accounts/OAuthGetAccessToken.
The result is the Access Token data in a page with HTTP200 status code.
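Continuing the same sketch (same imports and helpers as above, and again only an illustration rather than the BuzzOAuth implementation), the exchange might look like this:

def fetch_access_token(request_token):
    url = 'https://www.google.com/accounts/OAuthGetAccessToken'
    params = get_params()
    params.update({
        'oauth_token': request_token.key,
        'oauth_verifier': request_token.verifier,
    })
    req = oauth.Request(method='POST', url=url, parameters=params)
    sign_request(req, token=request_token)
    conn = httplib.HTTPSConnection('www.google.com')
    conn.request('POST', '/accounts/OAuthGetAccessToken', body=req.to_postdata(),
                 headers={'Content-Type': 'application/x-www-form-urlencoded'})
    data = conn.getresponse().read()
    conn.close()
    return oauth.Token.from_string(data)  # the long-lived Access Token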
Authorizing Requests
Authorizing your requests for data is done by adding an Authorization header to your request which includes the OAuth parameters along with the Access Token data, and by setting the ContentType header to either application/json or application/atom+xml:
params = self.__getParams()
params.update({
    'oauth_token': self.token.key,
})
oauthRequest = oauth.Request(method=method, url=url, parameters=params)
self.__signRequest(oauthRequest)
if isJSON:
    headers = {'Content-Type': 'application/json'}
else:
    headers = {'Content-Type': 'application/atom+xml'}
headers.update(oauthRequest.to_header())
self.connection.request(method, url, body=body, headers=headers)
resp = self.connection.getresponse()
data = resp.read()
self.__destroyConnection()
return resp.status, data
Example On Using BuzzOAuth Class
This example uses the BuzzOAuth class to get the list of Buzzes for the user to read (The list of his/her friends’ Buzzes).
The method request uses the Access Token to do the request passed to it and returns a tuple of the status code and data returned by the request.
In line 8 the call for method saveTokenToFile saves the Access Token to a binary file called token.tok by default, so that the application can later just use this token like this:
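(A rough sketch only: apart from request and saveTokenToFile, the method names such as loadTokenFromFile and the Buzz consumption-feed URL are assumptions, not the post's verbatim listing.)

buzz = BuzzOAuth()
buzz.loadTokenFromFile()   # assumed counterpart of saveTokenToFile, reads token.tok
status, data = buzz.request('GET',
    'https://www.googleapis.com/buzz/v1/activities/@me/@consumption?alt=json')
if status == 200:
    print data             # the buzzes from the people the user follows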
7 thoughts on “Google Buzz, OAuth And Python”
Wow ,such a nice work, it helped my immense problem, I was looking for it for a long time, o man I have nothing to give you in return without just saying may god bless you, I will recommend you to get it listed on http://blogolb.net so that a huge community can easily get their problems solved
I’ve seen your blog URL on one of FIT forums, and I’m really proud that one of our colleagues, one who’s actually still a student, is doing such a great job. I’m not talking about this article in particular but about the whole blog.. I skimmed through many of the articles, and I really liked what I’ve seen. Hope your blog or whatever you’re doing receives the proper attention it deserves.
Best of Luck
Good job, I’m working on some API and I think it is better to be corresponded to a standard link OAuth, however, I’ve read the RFC of OAuth, I know it is usually awful to read an RFC, but I gave it a shot, which wasn’t good. Also many articles talk about OAuth but all that I found weren’t good at describing the philosophy behind it, I found step-by-step articles like yours, it is helpful to achieve a particular task, but not enough. Will you point me to a useful article, please?
Thanks in advance 🙂
Welcome Muaz,
I know reading an RFC is painful enough, but after doing that you still have questions?!
I can’t point you to a specific article, because I read about it from many sources and used it with Google Buzz, but I can summarize what I understand OAuth is from my point of view:
OAuth is a standard that lets you share your account with software without specifying your credentials so that you make sure they won’t be stolen. And by doing that you’ll allow the software to use your account to do its job (Which must be specified during the authorization phase), and you get a page to revoke the access for this software from the website you’re sharing your account on.
So the philosophy is all about allowing software to use your account to read/write data while keeping your credentials secured.
Hope this helps 🙂 | {
"pile_set_name": "Pile-CC"
} |
---
abstract: 'Standard practice attempts to remove coordinate influence in physics through the use of invariant equations. Trans-coordinate physics proceeds differently by not introducing space-time coordinates in the first place. Differentials taken from a novel limiting process are defined for a particle’s wave function, allowing the particle’s dynamic principle to operate ‘locally’ without the use of coordinates. These differentials replace the covariant differentials of Riemannian geometry. With coordinates out of the way ‘regional conservation principles’ and the ‘Einstein field equation’ are no longer fundamentally defined; although they are constructible along with coordinate systems so they continue to be analytically useful. Gravity is solely described in terms of gravitons and quantized geodesics and curvatures. Keywords: covariance, invariance, geometry, metric spaces, state ; 03.65.a, 03.65.Ta, 04.20.Cv'
author:
- 'Richard Mould[^1]'
title: '**Trans-Coordinate Physics**'
---
Introduction {#introduction .unnumbered}
============
James Clerk Maxwell was the first to use space-time coordinate systems in the way they are used in contemporary physics. They play a role in his formulation of electromagnetic field theory that makes them virtually indispensable. Einstein embraced Maxwell’s methodology but devoted himself to eliminating the influence of coordinates because they have nothing to do with physics. However, the influence of coordinates is not eliminated by relativistic invariance as will be evident below where these space-time representations are removed *entirely* from physics.
Trans-coordinate physics proceeds on the assumption that space-time coordinates should not be introduced at any level. As a practical matter, and for many analytic reasons, coordinates are very useful and probably always will be. But if nature does not use numerical labeling for event identification and/or analytic convenience, and if we are interested in the most fundamental way of thinking about nature, then we should avoid space-time coordinates from the beginning.
Without coordinates the domain of relativity lies solely in the properties of the embedding metric space, and the domain of quantum mechanics resides in properties of local wave functions that are assigned to particles. These two domains overlap ‘locally’ where Lorentz invariant quantum mechanics is assumed. Photons in the space ‘between’ massive particles have a reduced function and definition.
As a result, the variables of a particle’s wave packet are wholly contained inside the packet and are coordinate independent. They move with a particle’s wave function in the embedding metric space, but they do not locate it in that space. No particle has a *net velocity* or *kinetic energy* when considered in isolation, for these quantities require a coordinate framework for their definition. This alone reveals the radical nature of removing coordinates *entirely* from physics, and the inadequacy of general relativistic invariance for that purpose.
Another consequence of this program is that energy and momentum are not propagated through the empty space between particles. Although particle energy, momentum, and angular momentum are conserved in local interactions, we say that nature does not provide for the exchange of energy and momentum between separated particles. We are the ones who arrange these transfers through our introduction of regional coordinates that we use to give ourselves the big picture. It facilitates analysis. The organizing power of coordinates and an opportune distribution of matter in space and time often allows us to find a system of coordinates that supports regional conservation; however, we can also find coordinates that do not support conservation. Therefore, regional conservation is coordinate dependent. It is not an invariant idea. It follows from a favorable construction on our part rather than from something intrinsic to the system[^2].
General relativity is a product of energy-momentum conservation that relies on regional coordinates for its meaning. It therefore joins regional conservation principles as something coming from coordinate construction rather than something fundamental. It is found for instance that while the metric tensor $g_{\mu\nu}$ can be defined at any event inside the wave packet of a massive particle, there is no trans-coordinate continuous function $g_{\mu\nu}$ associated with it. That is, a continuous metric tensor is not *physically* defined. Therefore derivatives of $g_{\mu\nu}$ at an event are not physically defined. General relativity suffers accordingly. The separation we establish between quantum mechanics and general relativity avoids a clash of these mismatched disciplines \[[@tH; @JM]\], and weighs in favor of quantum mechanics.
And finally, a new definition of state is proposed in this paper. In the absence of regional coordinates there is no common time for two or more particles, so a state definition is proposed that spans the no-mans-land between particles. It is shown in another paper how to write the Hamiltonian for a system of separated particles of this kind \[[@RM1]\]. The new definition of state and the Hamiltonian that applies to it imposes a consistent framework on a system of trans-coordinate particles.
If an atom emits a photon, then the system’s energy and momentum will be locally conserved. If that photon is not subsequently detected in another part of the universe it will essentially disappear from the system because a photon in isolated flight is energetically invisible. This does not violate conservation principles because those principles are satisfied at the emission site.
If the photon is detected somewhere else, then the energy and momentum at the detector site will also be conserved. The difficulty is that the energy emitted by the atom and the energy received by the detector might not be the same, so there is no general basis for claiming that energy conservation holds for the entire two-site system. That’s because nature, we say, does not care about conservation over more than one interaction. It cares only about *conservation at individual interactions*. However, regional coordinates often make it possible to compose energy differences of this kind in such a way as to validate regional conservation; and hence the great advantage of regional coordinates. They give us a useful analytic tool and a satisfying big picture as well as (sometimes) regional conservation laws. But these laws are not fundamental. They are only products of a fortunate coordinate construction.
This treatment is primarily concerned with electromagnetic interactions.
Partition Lines {#partition-lines .unnumbered}
===============
In Minkowski space one must choose a single world line to define the future time cone of an event **a**. If there is a non-zero mass particle present in the space it should be possible to choose a unique world line at each location inside the particle's wave packet that is specific to the particle at that location. That world line corresponds to the direction of *square modular flow* at that event. The collection of these world lines over the particle's wave packet can be thought of as the *streamlines* of its square modular flow in space and time. They will be called *partition lines*. We also define *perpendiculars* that are space-like lines drawn through each event perpendicular to the local partition line. We will first develop the properties of partition lines in a 1 + 1 space, and then in 2 + 1 and 3 + 1 spaces.
Figure 1 is a 1 + 1 Minkowski surface with light paths given by $45^\circ$ dashed lines. Partition lines of an imagined particle wave packet are represented in the figure by the five slightly curved and more-or-less vertical lines. They tell us that the wave packet moves to the left with ever decreasing velocity and that it spreads out as it goes. This description is not trans-coordinate because it is specific to the Lorentz frame in the diagram; but these lines provide a scaffold on which it is possible to hang a trans-coordinate wave function.
![image](tcphysfig1.eps)
Partition lines pass through every part of the particle’s wave packet and do not cross one another. They are not defined outside of a wave packet. Just as the space is initially given to us in the form of a metric background, any particle is initially given in the form of partition lines with the above characteristics. The interpretation of these lines is given in the next paragraph where values are assigned to them in a way that reflects the intended *given conditions*. These conditions are not ‘initial’ in the usual temporal sense, but are rather ‘given’ over the space-time region of interest.
Let the third partition line from the left (i.e., the middle line in Fig. 1) portion off 1/2 of the packet, so half of the particle lies to the left of an event such as **a** in the figure. That is, there is a 0.5 probability that the particle will be found on the perpendicular extending to the left of **a**. This statement is assumed to have objective invariant meaning. Of course, the other half of the particle lies to the right of event **a** on the perpendicular through **a**. The middle partition line is made up of all the events in the wave packet that satisfy this condition, so they together constitute a continuous line to which we assign the value of 1/2. There is a 0.5 probability that the particle will be found *somewhere* on the left side of this line when the included events are all those on both sides of the line.
In a similar way we suppose that the second partition line in Fig. 1 portions off, say, 1/4 of the packet on the perpendicular to the left of an event **b**, and that the first line portions off 1/100 of the particle or some other diminished amount. We further assume that the fifth line goes out to 99/100 of the particle packet, so the entire particle is represented by streamlines that split the particle into objectively defined fractional parts.
When a wave function is finally assigned we will show that its total square modulus remains ‘constant in time’ between any two partition lines in 1 + 1 space, and is similarly confined in higher dimensions.
Neighborhoods {#neighborhoods .unnumbered}
=============
Every event inside the wave packet has a unique time direction defined for it by the partition line passing through the event. This allows us to define unique *inertial* neighborhoods associated with each event.
![image](tcphysfig2.eps)
Consider a flat space inside the wave packet of a massive particle, and assign a Minkowski metric that is intrinsic to that space. Beginning with an event **a** in Fig. 2a, proceed up the particle's partition line through **a** by an amount $\Delta$, which is the magnitude of the invariant interval from event **a** to an event **b**. This interval **ab** is negative and identifies the chosen time axis inside the particle packet at event **a**. Then find event $\textbf{b}'$ by proceeding down the partition line the same invariant interval $-\Delta$. Construct a backward time cone with **b** at its vertex and a forward time cone with $\textbf{b}'$ at its vertex and identify the intersection events **c** and $\textbf{c}'$. Since these events are embedded in a flat space, the positive space-like interval $\textbf{cc}'$ will pass through event **a** and will be bisected by it with $$\textbf{ca} = \textbf{ac}' = \textbf{cc}'/2 = \Delta > 0$$ For any $\Delta$, all of the events included in the intersection of the light cones of **b** and **b**$'$ are defined to be a *neighborhood* of event **a**. The events along the line **cc**$'$ are defined to be a *spatial neighborhood* of **a**. The limit as $\Delta$ goes to zero is identical with the limit of small neighborhoods around **a**.
Curved Space {#curved-space .unnumbered}
============
The above considerations for a 'flat' space also apply *locally* in any curved space, so we let the conditions in Fig. 2a be generally valid in the limit as $\Delta \rightarrow 0$. Figure 2b shows the resulting Minkowski diagram in the local inertial system with $\hat{x}$ and $\hat{t}$ as the space and time unit vectors in the directions $\textbf{ac}'$ and $\textbf{ab}$ respectively.
The unit of these vector directions is given by $\sqrt{\Delta}$ in meters, although we have not established coordinates in those units along those directions. Specifically, we have not established a unique numerical value attached to an event, or a distant zero-point for that value; so the development so far is consistent with the trans-coordinate (or coordinate-less) aims of this paper.
The unit vectors at event **a** will be referred to as the *local grid* at **a**, where the time direction is always along the partition line going through **a**. These definitions have nothing to do with the curvature of the space in the wave packet at or beyond the immediate vicinity of **a**. Every event inside a particle packet has a similar local grid. The local grids of other events in the neighborhood of event **a** will be continuous with the local grid at **a** in this 1+1 space, but not for higher dimensions as we will see.
The Wave Function {#the-wave-function .unnumbered}
=================
We specify the quantum mechanical wave function at each event **a** in a particle wave packet over the space-time region of interest $$\varphi(\textbf{a})$$ which is identified in the manner of Euclid’s geometry since there are no coordinate numbers involved. There are four auxilary conditions on this function.
**First**: The function $\varphi(\textbf{a})$ is a complex number given at event **a** that is continuous with all of its neighbors. The units of $\varphi$ are $m^{-1/2}$ in this 1 + 1 space.
**Second**: Partial derivatives of $\varphi(\textbf{a})$ are defined in the limit of small neighborhoods around **a** (i.e., for small values of $\Delta$). $$\begin{aligned}
\partial\varphi(\textbf{a})/\partial x &=& \lim_{\Delta\rightarrow 0} \frac{\varphi(\textbf{c}') - \varphi(\textbf{c})}{2\sqrt{\Delta}}
\\
\partial\varphi(\textbf{a})/\partial t &=& \lim_{\Delta\rightarrow 0} \frac{\varphi(\textbf{b}) - \varphi(\textbf{b}')}{2\sqrt{\Delta}}
\nonumber \end{aligned}$$ The second spatial derivative is then $$\partial^2\varphi(\textbf{a})/\partial x^2 = \lim_{\Delta\rightarrow 0} \frac{\partial\varphi(\textbf{c}')/\partial x -
\partial\varphi(\textbf{c})/\partial x}{2\sqrt{\Delta}}$$
Notice that we have defined derivatives in the directions $\hat{x}$ and $\hat{t}$ without using coordinates to ‘locate’ or numerically ‘identify’ events along either of those directions. Only $\Delta$ *intervals* between events along the time line are taken from the invariant metric space.
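One way to read the factor $2\sqrt{\Delta}$ (an interpretive note, not something stated explicitly above): if distances along $\hat{x}$ are expressed in the stated units of $\sqrt{\Delta}$ meters, then event $\textbf{c}'$, which lies a metric distance $\Delta$ from **a**, sits at $\Delta/\sqrt{\Delta} = \sqrt{\Delta}$ in those units, and likewise **c** sits at $-\sqrt{\Delta}$. The expressions above are then ordinary centered difference quotients, $$\frac{\varphi(\textbf{c}') - \varphi(\textbf{c})}{2\sqrt{\Delta}},$$ taken with respect to that unit (and similarly along $\hat{t}$), without any global coordinate ever being assigned.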
**Third**: The value of $\varphi$ at event **a** is related to its neighbors through the *dynamic principle*. This principle determines how $\varphi(\textbf{a})$ evolves relative to its own time against the metric background, and how it relates spatially to its immediate neighbors.
**Fourth**: The objective fraction of the particle found between the partition line through event **c** in Fig. 2a and a partition line through event $\textbf{c}'$ is equal to $f_{cc'}$. In the limit as $\textbf{cc}'$ = $ 2\Delta$ goes to zero, the fraction of the particle between differentially close partition lines goes to $df$. Normalization of $\varphi(\textbf{a})$ is strictly ‘local’ and requires $$\varphi^*(\textbf{a})\varphi(\textbf{a}) = \lim_{ \Delta \rightarrow\ 0}\frac{f_{cc'}}{2\Delta}$$
It follows that $$\varphi^*(\textbf{a})\varphi(\textbf{a}) = \varphi^*(\textbf{b})\varphi(\textbf{b}) = \varphi^*(\textbf{b}')\varphi(\textbf{b}')$$ because the fractional difference between any two of the partition lines is the same over any perpendicular. Therefore, the square modular flow will be *constant in time* between any two partition lines as previously claimed.
These four auxiliary conditions must be satisfied when taken together with the initially given partition lines, but there is no guarantee that there exists a wave function that qualifies. *Finding a solution* therefore consists of varying the partition lines (i.e., the given conditions) until a wave function exists that satisfies these conditions.
The choice of a world line based on partition lines is not a coordinate choice, nor is the limiting procedure that follows. So these definitions are not just coordinate invariant, they are fully *coordinate free*. They allow us to find physically credible derivatives of any continuous function in a way that is independent of the curvature of the surrounding space, and to found a physics on that basis.
One Particle {#one-particle .unnumbered}
============
Partition lines do not extend beyond the particle, so in the absence of ‘external’ coordinates that do extend beyond the particle (in an otherwise empty space) there is no basis for claiming that the particle has a *net velocity, kinetic energy, or net momentum*. This will be true of both zero and non-zero mass particles. It is a consequence of a trans-coordinate physics that particles take on these dynamic properties only in interaction with other particles.
A massive particle has an ‘internal’ energy defined at each event in its wave packet, but since that may differ from one event to another there is no single internal energy representing the particle as a whole. Similarly, each part of the particle’s wave packet follows its own world line, so there is no single world line for the particle as a whole as shown in Fig. 1. It is our claim that nature attends to the particle as a whole by dealing separately with each part. One exception is that the particle as a whole does produce a gravitational disturbance in the background invariant metric that has its origin in the regional distribution of the particle’s internal mass/energy.
Two Particles {#two-particles .unnumbered}
=============
Figure 3 shows the partition lines of two separated massive particles where each has its own definition of a grid that is different from the other particle. It is a consequence of the trans-coordinate picture that these particles in isolation will seem to have nothing to do with one another. However, the positional relationship of one to the other is objectively defined in the metric space in the background of both. Every event in the wave packet of each particle has a definite location in the metric space, and that fixes the positional relationship of each part of each particle with other parts of itself and with other particles. In addition, each massive particle produces a gravitational disturbance that has an invariant influence on the other. That influence is a function of the relative velocity between the two, even though kinetic energy is not defined for either one. Kinetic energy is a coordinate-based idea as has been said, whereas metrical positions and gravitational disturbances in the metric are invariant. We assume that the latter are based solely on graviton activity.
![image](tcphysfig3.eps)
A Radiation Photon {#a-radiation-photon .unnumbered}
==================
The pack of four lines that rise along the light line in Fig. 3 is intended to represent the partition lines of a radiation photon that has a group velocity equal to the velocity of light. Photons can have partition lines as do massive particles. They separate the photon into its fractional parts, which is a separation by phase differences. The photon in Fig. 3 is confined to the packet that is distributed over the perpendicular (dashed) light path $l$.
Normally in physics we do not hesitate to use coordinates in empty space, so a photon by itself will be given a period and wavelength relative to that coordinate frame, and hence an energy and momentum. But if coordinates in empty space have no legitimate place in physics, then, like any other particle, a photon by itself will lack translational variables (e.g., energy and momentum); and since it has no internal energy (i.e., rest mass/energy), the gravitational perturbation of its light line will be zero. There is no photon mass/energy to perturb it. It should also be clear from the diagram in Fig. 3 that the photon bundle has *no definable* wavelength or frequency at event **k**.
Vacuum fluctuations exist in the ‘empty’ space between massive particles and their polarizing effects are physically significant. But if vacuum fluctuation particles are not themselves polarized they will not interact with a passing photon (an interaction that would result in a scattering of the photon). So the photon cannot use these particle grids to define its period and wavelength. Fluctuation particles do not contribute in any other way to the discussion, so their presence is ignored.
Information Transfer {#information-transfer .unnumbered}
====================
It is the photon’s phases that effect a transfer of energy and momentum from one particle to another. This is shown in Fig. 4 where two particles are narrowly defined to be moving over world lines $w_1$ and $w_2$. The two dashed lines represent the partition lines of a passing photon with ‘relative’ phase differences given by $\delta\pi$. If the photon wave is a superposition of two different frequencies 1 and 2, then $\delta\pi = \delta\pi_1 + \delta\pi_2$.
![image](tcphysfig4.eps)
A photon interacting with the first particle at event $\bf{a}$ will have a local energy and momentum given by $e_\gamma(\bf{a})$, $p_\gamma(\textbf{a}),$ and as it interacts with the second particle at event $\bf{b}$ it will have a local energy and momentum given by $e_\gamma(\bf{b})$, $p_\gamma(\textbf{b})$. These quantities are related through the phase relationships that are transmitted between particles, and are articulated in the local grid of the interacting particle. $$\begin{aligned}
a \hspace{.1cm} photon \hspace{.1cm} at \hspace{.1cm} event &\textbf{a}:& e_\gamma(\textbf{a}) =
\hbar\Sigma_i\omega_i(\textbf{a}) \hspace{.5cm} p_\gamma(\textbf{a}) = \hbar\Sigma_ik_i(\textbf{a}) \\
a \hspace{.1cm} photon \hspace{.1cm} at \hspace{.1cm} event &\textbf{b}:& e_\gamma(\textbf{b}) =
\hbar\Sigma_i\omega_i(\textbf{b}) \hspace{.5cm} p_\gamma(\textbf{b}) = \hbar\Sigma_ik_i(\textbf{b}) \nonumber\end{aligned}$$ where $\omega_i(\bf{a})$ = $\partial_t\pi_i(\bf{a})$ and $k_i(\bf{a})$ = $\partial_x\pi_i(\bf{a})$. These derivatives refer to the local grid of each event in each particle, and are defined like those in Eq. 2.
Electromagnetic Variables {#electromagnetic-variables .unnumbered}
=========================
The parallel lines passing by event **k** in Fig. 3 are lines of constant ‘relative’ phase of the photon. Differential phase changes $\delta \pi$ over a light line like $l$ are preserved across the length of the photon wave packet. However, since the photon in flight between two particles does not have its own local grid, components cannot be defined for the electromagnetic field any more than they can for energy and momentum.
In empty space the *electromagnetic potential* of a radiation photon is normally given by a fourvector $A^\mu(\textbf{a})$, where the d’Alembertian operating on $A^\mu(\textbf{a})$ is equal to zero. However, trans-coordinate physics cannot use the d’Alembertian in empty space although the photon’s behavior there is lawful – it follows a dynamic principle of some kind. Where a grid exists we can give analytic expression to the dynamic principle; but where there is no grid we must settle for another kind of description. All we can do in this case is notice the physical manifestations of the dynamic principle, and there are just four in 3 + 1 space. First, different relative phases appear on different parallel layers along a light line as in Figs. 3 and 4. There is a definite phase relationship between any two of these layers. Second, the probability that a photon goes into a particular solid angle from an emission site **a** depends on the distribution given by an atomic decay at **a**, or by the interaction of $A_\mu(\textbf{a})$ with the current $j(\textbf{a})$ at that site. The only mid-flight indication of the strength of a signal in a given solid angle is the probability of a photon emission in that direction. Third, we say that the magnitude of $A_\mu$ arriving at a material target is *determined by* that probability – rather than probability determined by magnitude. In the case of a single photon (or for any definite number of photons) the components of $A_\mu$ at a material destination are indeterminate, and the magnitude of the transmission diminishes with square distance from the source by virtue of the constancy of photon number in a solid angle. The fourth property provides for Huygens’ wavelets. So far we have considered a photon as moving undeflected in an outward direction from a source along a light cone. We now say that an event such as **k** in Fig. 3 acts as a point source of radiation in all directions. The wavelet from **k** has the same (relative) phase as event **k**, and it reradiates the “probability intensity” at **k** uniformly in all directions with a velocity $c$. Two wavelets that arrive at a third event **m** have a definite phase difference that produces interference there.
Notice that a Huygens’ electromagnetic wavelet is a ‘scalar’ like the primary wave that gives rise to it. The vector nature of an EM wave does not appear until it interacts with matter, and only then when an indefinite number of photons are phased in such a way as to make that happen.
Photon Scattering {#photon-scattering .unnumbered}
=================
If a photon scatters at an event **a** inside the wave packet of a particle, the grid for that purpose will be the particle’s grid at **a**. There will be no quantum jump or wave collapse in a scattering of this kind. Instead, some fraction of the particle $p$ and photon $\gamma$ will evolve continuously into a scattered wave that consists of a correlated particle $p'$ and a photon $\gamma'$. Energy and momentum will be defined for each of the four particles $p$, $p'$, $\gamma$, and $\gamma'$ that are mapped together on that common grid of $p$ at event **a**, and the dynamic principles of these particles (plus their interaction) will insure that total conservation applies to all four. Each component of the scattered wave of $p'$ will also have a grid that is well defined at event **a**, and is a Lorentz transformation away from the grid of $p$. Energy and momentum will be conserved on the grid of each component of $p'$. The velocity of any component of $p'$ relative to $p$ is not explicitly given in the trans-coordinate case; however, it is implicit in the Lorentz transformation that is required to go from the locally evolving grid of $p$ to that of $p'$.
Virtual Photons {#virtual-photons .unnumbered}
===============
So far we have talked about *radiation* photons that travel at the velocity of light. *Virtual* photons (in a Coulomb field) do not bundle themselves into wave packets, so they do not have a ‘group’ velocity that requires the identification of a world line over which the group travels. It makes no sense to say that they travel over light lines. It may therefore be possible to give the virtual photon a local grid in the same way that we created a grid for particles with non-zero mass. Its vector nature would then be more evident. However, we choose not to do that. It is unnecessary and would put the virtual photon grid in competition with the particle grid during an interaction between the two. That would necessitate a choice between one or the other in any case; so *all* photons will be considered gridless in this treatment – just like radiation photons. They all lack internal energies. They also lack translational variables such as energy and momentum when in transit between particles; and they acquire these values only when they overlap the charged particles with which they interact. We say in effect that there is no fundamental difference between ‘near’ field photons and ‘far’ field photons in an electromagnetic disturbance.
Gravity {#gravity .unnumbered}
=======
If a photon in transit (radiation or virtual) has no frequency or translational energy $h\nu$, it will not have a weight in the presence of a gravitating body or create a curvature in the surrounding metric space. However, massive objects having rest energy *do* create curvatures in their vicinity in which *light line geodesics* are well defined. We claim that radiation photons follow these geodesics without themselves contributing to the curvature of space. Although photons in transit are massless and hence weightless, they nonetheless behave as though they are attracted to gravitational masses.
This does not mean that current photon trajectories are in error, or that particle masses have to be adjusted. The mass of an electron found from the oil drop experiment is currently assumed to include the mass of the accompanying electromagnetic field. From a trans-coordinate point of view the electric field surrounding a charged particle is not defined, so this experiment reveals the ‘bare’ mass of the electron. The mass of the Sun obtained from the period of a planet is normally assumed to include the mass of the radiation field surrounding the sun. From a trans-coordinate point of view the radiation field is not defined, so this calculation reveals the ‘bare’ mass of the sun – that is, the total number of each kind of solar particle times its mass. These changes will not result in observational anomalies in particle theory or astronomy, for we have no way to separately weigh the electromagnetic field of a charged particle, or to count the number of particles in the Sun.
Binding Energy {#binding-energy .unnumbered}
==============
Even in coordinate language we are able to give up the idea of electromagnetic field energy, so the binding energy of particles in a nucleus can be considered a property of the particles themselves. Imagine two positive particles of rest mass $m_0$ that approach one another in the center-of-mass system with kinetic energy $T$. The momentum of one of these particles decreases as a result of virtual photon exchange; however, its energy will not change. A virtual photon leaving one particle will carry away a certain amount of energy, but that energy is restored in equal amount by the virtual photon that is received from the other particle. This means that the net energy of the advancing particle will be unchanged during the trip. When the particle reaches the point at which it has lost all of its kinetic energy and has combined with the other particle due to nuclear forces, we would say that the initial kinetic energy of one of them has become its binding energy $BE$, where $$E = BE + m_0c^2 = T + m_0c^2$$ As the particle moves inward its energy square $E^2 = P^2c^2 + m_0^2c^4$ remains constant while $P^2$ goes to zero. Therefore $E^2$ becomes identified with an increased mass through $E^2 = M^2c^4$, giving $E = Mc^2 = BE + m_0c^2$. Then $$Binding \hspace{.2cm}energy = Mc^2 - m_0c^2$$ In relativity theory a particle’s (relativistic) mass is a function of kinetic energy. We can also say it is a function of an interaction with other particles, thereby avoiding any notion of ‘field’ energy.
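Collecting the steps of this argument into one line (using the standard dispersion relation, with $T$ the initial kinetic energy in the center-of-mass frame): $$E^2 = P^2c^2 + m_0^2c^4 = \mathrm{const}, \qquad E = T + m_0c^2,$$ so that when $P \rightarrow 0$ the whole of $E$ is carried by the rest term, $E = Mc^2$, and $$BE = Mc^2 - m_0c^2 = T.$$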
These ideas are peculiar to the center-of-mass coordinate system but are not correct from a trans-coordinate point of view. Fundamentally there is no energy associated with the particle as a whole. There is only the time derivative of $\varphi$ at each separate event inside the particle’s wave packet. There is also no kinetic energy of the particle or binding energy of a captured particle. The ‘correct’ trans-coordinate account of a coulomb interaction is given below.
Virtual Interaction {#virtual-interaction .unnumbered}
===================
The virtual (Coulomb) interaction cannot be thought of as a single virtual photon interacting with a single charged particle because that is not energetically possible. However, the interaction is *continuous* like Compton scattering; so in spite of the fact that the theory is based on photons the interaction does not manifest itself as discrete quantum jumps. A particle in a Coulomb interaction is therefore continuously receiving and transmitting equal amounts of energy, which means that it undergoes a change of momentum with no change of energy. The resulting behavior of the charged particle is given by a continuum of particle grids along its partition line that are related by infinitesimal Lorentz transformations. Energy and momentum are conserved on any one of these grids. Since each particle is well localized in the background metric space, predictable continuous transformations of the world line of each event in the packet are *all that is necessary* to determine the packet’s complete behavior. Nature is not concerned with the coordinate-based energies of the previous section and does not need to be.
Regional Coordinates and Conservation {#regional-coordinates-and-conservation .unnumbered}
=====================================
Trans-coordinate physics does not provide for energy and momentum conservation in the region between particles. We cannot assign frequency or wavelength to a radiation photon in an otherwise empty space as we have seen, so we cannot say that it carries energy $h\nu$ or momentum $h/\lambda$ from one part of space to another. Also, a massive particle has no velocity or acceleration when it is considered in isolation. It moves into its future time cone over the invariant metric background following its dynamic principle, but that path does not break down into spatial and temporal directions relative to which the wave packet can be said to be moving with a kinetic energy or velocity $v$. So it cannot be said to carry a net momentum $mv$.
Regional conservation of these quantities is therefore related to the possibility of system-wide coordinates that *we* construct. Having done that we can define a metric tensor throughout the region. That is, from the background invariant metric it is generally possible to find the continuous metric tensor $g_{\mu\nu}$ that goes with the chosen coordinates. If that tensor is time independent then *energy* will be conserved in the region covered by those coordinates. If it is independent of a spatial coordinate such as $x$, then *momentum in the $x$-direction* will be conserved in the region covered by the coordinates. If the metric is symmetric about some axis (in 3 + 1 space) then *angular momentum* will be conserved about that axis \[[@RM2]\]. It is therefore useful for us to construct system-wide coordinates in order to take advantage of these regional conservation principles. It is important to remember however that we do this, not nature. Nature has no need to analyze as we do over extended regions. For the most part it only *performs* on a local platform.
If there is a difference in energy between $e_\gamma(\textbf{a})$ and $e_\gamma(\textbf{b})$ in Eq. 4, it is possible that the photon in Fig. 4 is *Doppler shifted* because of a relative velocity between the two particles, or that particle \#2 is at a different *gravitational potential* than particle \#1. When a coordinate system is chosen the velocity of one particle is decided relative to the other particle, and only then will the extent of the Doppler influence be determined. Only then will it be clear how the organizing power of a coordinate system makes use of gravity to explain the non-Doppler difference between $e_\gamma(\textbf{a})$ and $e_\gamma(\textbf{b})$.
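For reference, a sketch of the coordinate-language expressions that such a choice brings into play (first-order formulas, assuming a small recession velocity $v$ along the line of sight and a weak, static gravitational field with Newtonian potential $\Phi$; these assumptions are ours, not the text's): $$\frac{e_\gamma(\textbf{b})}{e_\gamma(\textbf{a})} \approx \left(1 - \frac{v}{c}\right)\left(1 + \frac{\Phi_a - \Phi_b}{c^2}\right),$$ the first factor being the Doppler contribution and the second the gravitational one.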
Trans-Coordinate Tensors {#trans-coordinate-tensors .unnumbered}
========================
Every event in a massive particle wave packet has a grid associated with it. In 3 + 1 space the spatial part is a three dimensional grid. When this is combined with the metric background the metric tensor $g_{\mu\nu}$ is determined at each event. One can therefore raise and lower indices of vectors inside the wave packet in the trans-coordinate case.
However, we do not assign derivatives to $g_{\mu\nu}$ because it is not a uniquely continuous function. For a given $g_{\mu\nu}(\textbf{a})$ at event **a** there are an infinite number of ways that a continuous $g_{\mu\nu}$ field ‘might’ be applied in the region around **a**, corresponding to the infinite number of coordinate systems that ‘might’ be employed in that region. But if we do not attach physical significance to coordinates, then physical significance cannot be attached to a continuous metric tensor. Derivatives of that tensor are therefore not defined in trans-coordinate physics. This applies to the derivatives in Eq. 2 as well as to the covariant derivatives of Riemannian geometry. Therefore, Christoffel symbols are not defined in trans-coordinate physics.
It follows that the Riemann and Ricci tensors and the field equation of general relativity are also not fundamentally defined. Like energy and momentum conservation from which it is derived, the gravitational field equation is a regional creation of ours that is analytically useful and that gives us a satisfying big picture – but that is all. Of course the ‘curvature’ is objectively defined everywhere because it follows directly from the invariant metric background in which everything is embedded.
We can be guided by our experience with general relativity when choosing the most useful coordinate system in a given region of interest. A metric tensor can then be defined; and from the symmetry of its components, energy, momentum, and angular momentum conservation can be established over the region. However, there is no assurance that one can always find an ‘agreeable’ system, for general relativity does not guarantee that the chosen coordinates will conserve energy, momentum, or angular momentum without introducing special pseudo-tensors that are devised for that purpose \[[@LL]\].
Gravitons {#gravitons .unnumbered}
=========
If general relativity is not fundamental then gravitons must be the exclusive cause of gravitational effects. The geodesics that result from graviton interactions between massive particles are not the smooth curves of general relativity, but are quantized by discrete graviton interactions. The wider effect of gravitons is to bend the background metric space between geodesics. Their influence will spread through the invariant metric space; and as a result, the curvature produced by gravitons will follow the average curvature of general relativity except that it will have the jagged edge of quantization. General relativity is therefore a science that only approximates the underlying reality. It is a science we initiate when we introduce the coordinates that permit the definition of metric tensor derivatives and allow the formulation of Einstein’s field equation.
Internal Coordinates {#internal-coordinates .unnumbered}
====================
In addition to regional coordinates that cover the space between particles, we want to give ourselves an *internal picture* of the particle. We want the wave function $\varphi(\textbf{a})$ in Eq. 1 in a form that permits analysis. To do this starting at event **a**, integrate the minus square root of the metric along the partition line going through **a** and assign a time coordinate $t_a$ with an origin at **a**. Then integrate the square root of the metric over the perpendicular going through event **a** and assign a space coordinate $x_a$ with an origin at **a**. The coordinates $x$ and $t$ may be extended over the entire object yielding a wave function that can be written in the conventional way $\varphi(x, t)$. These internal coordinates will have the same status as external coordinates. They are only created by us for the purpose of analysis.
With internal coordinates we can integrate across one of the perpendiculars to find the *width* of the wave packet. It should also be possible to integrate the square modulus over a perpendicular to find the *total normalization*. That total will be equal to 1.0 if $df$ is equal to the fraction of the particle sandwiched between two differentially close partition lines as claimed. We can also use internal coordinates to give expression to the internal variables of a particle, such as its total internal energy and net momentum.
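With these internal coordinates the normalization claim can be written out compactly (a restatement of the fourth condition, not an additional assumption): integrating the square modulus over any one perpendicular, $$\int \varphi^*(x,t)\,\varphi(x,t)\,dx = \int df = 1,$$ since each element $dx$ of the perpendicular contributes exactly the fraction $df$ of the particle sandwiched between the corresponding differentially close partition lines.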
Three and Four Dimensions {#three-and-four-dimensions .unnumbered}
=========================
Imagine that a particle’s wave packet occupies the two-dimensional area shown on the space-like surface in Fig. 5. The surface is divided into a patchwork of squares, each of which is made to contain a given fraction of the particle, like 1/100th of the particle.
![image](tcphysfig5.eps)
Each of these squares has four distinguishable crossing points or corners. A similar two-dimensional scaffold is constructed on all of the space-like surfaces through which the particle passes in time, thereby creating a continuous 2 + 1 scaffold. Each of the enclosed areas generated in this way is required to contain $1/100$ of the particle, and its corners will constitute the partition lines of the particle. As in the 1 + 1 case, these lines may be thought of as streamlines of the square modular flow of the particle through time. In the limit as this fraction goes to zero, partition lines pass through each event on the space-like surface in the figure and they do not cross one another.
It is possible to find the direction of the partition line through an event **a** without having to erect a system-wide scaffolding like that of Fig. 5. Any small neighborhood of **a** has a probability that the particle will be found within it; and that probability will be ‘minimal’ when the partition line going through **a** coincides with the preferred direction of time for that neighborhood.
Space-time directions are chosen for a given partition line in a way that is similar to the procedure in Fig. 2. Starting with an event **a** in Fig. 6a, move up its partition line a metrical distance $-\Delta$ to event **b**. Then find $\textbf{b}'$ by proceeding down the partition line the same invariant interval $-\Delta$. Construct a backward time cone with **b** at its vertex and a forward time cone with $\textbf{b}'$ at its vertex and identify the closed two-dimensional loop intersection shown in Fig. 6a in the limit as $\Delta$ goes to zero. In the local inertial system, two perpendicular unit vectors $\hat{x}$ and $\hat{y}$ are chosen along the radius of the circle of radius $\Delta$ that spans the spatial part of the local grid at **a**. For any $\Delta$, choose a space-like line beginning at **a** that is aligned with $\hat{x}$ and extends to the circumference of the circle in Fig. 6a. It intercepts the circle at the event we call $\textbf{c}'$. The space-like line that begins with -$\hat{x}$ intercepts the circle at the event we call **c**. These space-like lines do not have to be ‘straight’, so long as they are initially aligned with the unit vector and intercept the circle in only one place.
![image](tcphysfig6.eps)
The spatial grids of nearby events such as **a** and $\textbf{a}'$ in Fig. 6b do not have to line up in any particular way. Even if they are in each other’s spatial neighborhood for some value of $\Delta$, $\hat{x}$ and $\hat{x}'$ will generally point in different directions.
In 3 + 1 space the intersection of a backward and forward time cone will produce a spherical surface like the one pictured in Fig. 6c. In this case choose four mutually perpendicular unit vectors $\hat{x}$, $\hat{y}$, $\hat{z}$, and $\hat{t}$ to form the local grid at event **a**. As before, the orientation of the spatial part of these grids is of no importance. They may be arbitrarily directed because their only purpose is to locally define all three spatial derivatives of the function $\varphi$. That function is continuous throughout the wave packet in any direction; therefore, it does not matter which grid orientation is chosen at any event for the purpose of specifying the function and its derivatives there. The Dirac solution has four components $\varphi_\mu$ where each satisfies all of the above conditions in the 3 + 1 directions.
Since every event on the surface of the sphere in Fig. 6c locates a partition line, the event **a** is enclosed by a sphere with a differential volume $d\Omega$ that contains a differential fraction $df$ of the entire particle, where $$\varphi^*(\textbf{a})\varphi(\textbf{a})d\Omega= df$$ which normalizes the 3 + 1 wave function.
Applying the Dynamic Principle (3 + 1) {#applying-the-dynamic-principle-3-1 .unnumbered}
======================================
The third condition on a wave function $\varphi (\textbf{a})$ in Eq. 1 requires that the dynamic principle applies throughout the space. This can be done in the 3 + 1 space of an event **a** by using the grid defined in Fig. 6c. Since we can do this at any event and for any orientation of the grid, we state the more general form of the third condition:
> *The wave function $\varphi (\textbf{a})$ of a particle at any event **a** is subject to a dynamic principle that is applied locally to any four mutually perpendicular space-time directions centered at **a**, where time is directed along the partition line through **a**. This principle determines how $\varphi (\textbf{a})$ evolves relative to its own time against the metric background, and how it relates spatially to its immediate neighbors.*
The continuity condition applies to the function $\varphi$ along any finite segment of line emanating from any event.
Atoms and Solids {#atoms-and-solids .unnumbered}
================
Consider how all this might apply to a hydrogen atom. Each massive particle carries a local grid that is independently defined at each event in its wave packet. This insures separate normalization at each event for each particle. The proton and electron grids may overlap but they need not be aligned because the particles do not directly interact. They are connected through the Coulomb field by virtual photons that carry no grid of their own. There are two interactions, one involving a virtual photon and the event grids of the proton, and one involving a virtual photon and the event grids of the electron. These are described in the section “Virtual Interaction".
In the non-relativistic case both particles can be covered by a single *common inertial frame* in which the total energy and momentum is conserved. It does no harm and it facilitates analysis to imagine that each grid in the system is aligned with this common coordinate frame. The time $t$ assigned to each proton grid and the time $t'$ assigned to each electron grid are then set equal to each other and to the time of the common inertial frame. The retarded interaction $j_\mu A^\mu$ at each end of the interaction will then give the Coulomb intensity of $(e^2/4\pi r)\delta(t - t')$ where $r$ is the distance between the particles in the common frame \[[@FMW]\]. Relativistic corrections to this occur when the spatial components of the current fourvectors are taken into account.
The above inertial system is one that we impose on the atom. By itself, the system operates on the basis of individual event grids alone. A photon passing over the atom will interact with each separate event in the proton wave function throughout its volume, and with each separate event of the electron throughout its volume. Energy and momentum conservation is required at each site, but the system will not support conservation unless the interaction Hamiltonian \[[@RM1]\] includes the entire system in a ‘single’ interaction. It is the interaction Hamiltonian that makes the difference between particles in a single interaction that conserves energy and momentum, and particles in separate interactions that may or may not conserve these quantities. In the atomic case the dynamic principle for the entire atom provides the unity that can give rise to a quantum jump that carries the product $pe\gamma$ of the proton, the electron, and the photon, into a new product $p'e'\gamma'$, conserving energy and momentum in the process.
In the case of macroscopic crystals, metals, and other stationary solid forms in a flat space, each event in each particle wave packet has its own space-time grid and is separately normalized. However, they are all interactively aligned to such an extent that we can usually impose a single common coordinate system. We require the coordinates of this system to co-move with the average density of matter in the solid. If that system has the right symmetry properties it will insure macroscopic energy, momentum, and angular momentum conservation.
Containers {#containers .unnumbered}
==========
![image](tcphysfig7.eps)
Let the central region of the hollow spherical container in Fig. 7 be a general relativistic space of unknown curvature. The center of the sphere is initially empty (suppressing vacuum fluctuations). A massive object leaves event **a** and at some later time arrives at event **b**. At each event along the way it is propelled by its dynamic principle into its forward time cone; and since the resulting path of the packet cannot be broken down into spatial and temporal parts, its velocity, energy, momentum, and distance traveled on that path are not determined. The particle will have ‘internal’ energy and momentum that are derived from internal coordinates, but these will not be its ‘translational’ energy and momentum in the usual sense going from **a** to **b**. A radiation photon will not even have these internal properties over its path; for it will only acquire the energy and momentum in Eq. 4 when it encounters a particle in the container wall.
We can certainly construct a common coordinate system over this system, extending the co-moving coordinates of the solid into the center of the sphere. We will then know how far the object goes and its velocity along the way. If the metric of that system is time independent, then total energy will be conserved throughout the trip from event **a** to event **b**. Although we can usually cover the system with extended coordinates and a metric, there is no guarantee that the resulting system will conserve total energy and momentum without introducing the pseudo-potentials of [@LL].
A Gaseous System {#a-gaseous-system .unnumbered}
================
The introduction of many gas particles in the space of Fig. 7 does not change anything of substance. Molecular collisions occurring on the inside surface of the container and between molecules are distinct physical events. But we still do not have a natural basis for ascribing a numerical distance between any of these collisions or the molecular velocities between them.
Molecular collisions are here assumed to be electromagnetic in nature. Parts of the colliding molecules may or may not overlap, but they each (i.e., the internal parts of each) maintain their separate grids for the purpose of normalization. These grids do not compete with one another during a collision because the interaction between them is conducted through virtual photons, and these are declared to be gridless.
States {#states .unnumbered}
======
In coordinate physics we normally define a physical ‘state’ across a horizontal plane at some given time. This definition identifies an origin of coordinates relative to which the system’s particles are located at that time. That scheme will not work in the trans-coordinate case because the “same time" for separated particles is undefined. Indeed, the time of a single particle at a single location is undefined. The meaning of *state* must therefore be revised.
The state of a system of three particles is now given by $$\Psi(\textbf{a}, \textbf{b}, \textbf{c}) = \phi_1(\textbf{a})\phi_2(\textbf{b})\phi_3(\textbf{c})$$ where **a**, **b**, and **c** are events anywhere within each of the given wave functions, subject only to the constraint that each event has a *space-like* relationship to the others. Each of these three functions is defined relative to its own local grid and is related to its time-like successors through its dynamic principle. These events are connected by the space-like line in Fig. 8, thereby defining the state $\Psi$ of the particles that are specified along their separate world lines $w_1$, $w_2$, and $w_3$.
![image](tcphysfig8.eps)
A *successor state* can be written $$\Psi'(\textbf{a}', \textbf{b}', \textbf{c}') = \phi_1(\textbf{a}')\phi_2(\textbf{b}')\phi_3(\textbf{c}')$$ where events $\textbf{a}'$, $\textbf{b}'$, and $\textbf{c}'$ in the new state must also have space-like relationships to each other; and in addition, they are required to be in the forward time cones of events **a**, **b**, and **c** respectively. These events lie along a space-like line in Fig. 8 giving the state function $\Psi'$. Equation 5 does not say that each event has advanced by the same amount of time. It says only that each particle has advanced continuously along its own world line (i.e., along its own partition line) under its own dynamic principle, and has reached the designated ‘primed’ events.
We might also let $\textbf{b}''$ replace event $\textbf{b}'$, where $\textbf{b}''$ has space-like relationships to $\textbf{a}'$ and $\textbf{c}'$ and is in the forward time cone of event **b**. The resulting state $\Psi''(\textbf{a}', \textbf{b}'', \textbf{c}')$ is not the same as $\Psi'(\textbf{a}', \textbf{b}', \textbf{c}')$, but it is just as much a successor of the initial state $\Psi(\textbf{a}, \textbf{b}, \textbf{c})$. Also, $\Psi''$ is a successor of $\Psi'$ because $\textbf{b}''$ is a successor of $\textbf{b}'$. This definition of state is far more general than the coordinate-based (planar) definition, giving us an important degree of flexibility as will be demonstrated below and in another paper \[[@RM1]\]. The Hamiltonian for this kind of state can be defined in such a way as to establish the *conservation of probability current* flow, as is also shown in this reference.
An Application {#an-application .unnumbered}
==============
Consider the case of an atom emitting a photon that is captured by a distant detector. The initial spontaneous decay of the atom can be written in the form $$\varphi = (a_1 + a_0\gamma)D$$ where $a_1$ is the initial state of the atom, $a_0$ is its ground state, $\gamma$ is the emitted photon, and $D$ is a distant detector that is not involved in the decay. At this point we do not specify specific events or use the new definition of state. In response to the dynamic principle, the probability current flows from the first component in Eq. 6 to the second component inside the bracket, so the first component decreases in time and the second component increases in such a way as to conserve square modulus as shown in Ref. 3. At some moment of time a stochastic choice occurs and the state undergoes a quantum jump from $\varphi$ to $\varphi'$ conserving energy and momentum and giving $$\varphi' = a_0\gamma D$$ that describes the state of the system during the time the photon is in flight from the atom to the detector. When the photon interacts with the detector the equation of state becomes $$\varphi'' = a_0(\gamma D + D'')$$ where $D''$ is the detector after capture. The atom $a_0$ is not a participant in this interaction. Again, probability current flows from $\gamma D$ to $D''$ and this, we assume, results in another stochastic hit conserving energy and momentum and yielding $$\varphi''' = a_0D''$$
When the *new definition* of state is applied to this case Eq. 6 is written $$\varphi(\textbf{a},\textbf{c}) = [a_1(\textbf{a}) +a_0(\textbf{a})\gamma(\textbf{a})]D(\textbf{c})$$ where the atom and the photon overlap at event **a**. The photon uses the grid of the atom at event **a** to evaluate its frequency and wavelength, whereas the detector uses its own grid. Nonetheless, the dynamic principle in the form of the Hamiltonian defined in Ref. 3 applies to this interaction equation that is local to event **a**.
Equation 7 for the photon in flight is then $$\varphi'(\textbf{a},\textbf{k},\textbf{c}) = a_0(\textbf{a})\gamma(\textbf{k})D(\textbf{c})$$ where the energy of the atom and the detector are given by their time derivatives at events **a** and **c**, but there is no energy associated with the independent photon in this equation. The function $\gamma(\textbf{k})$ is of the form exp$[i\theta(\textbf{k})$\] where **k** is the event appearing in Fig. 3, so frequency and wavelength are not given. The photon’s Hamiltonian applied to this equation equals zero. Equation 9 applies so long as the photon is located on a definite partition line of the atom; but the moment the photon event appears apart from the atom, Eq. 10 will apply.
Equation 8 using the above state definition is $$\varphi''(\textbf{a},\textbf{c}) = a_0(\textbf{a})[\gamma(\textbf{c})D(\textbf{c}) + D''(\textbf{c})]$$ where the photon overlaps the detector at event **c**. In this case the photon uses the grid of the detector at event **c** to evaluate its frequency and wavelength, and the energy of the atom is given by its time derivative on the grid of the atom at event **a**. Here again the dynamic principle applies to this interaction equation that is local to event **c**.
Actually the atom should be written as a product of the proton $p$ and the electron $e$ giving $a = pe$. In the parts of the atom where the proton and the electron *do not* overlap, Eq. 9 could be written as either $$\varphi(\textbf{a},\textbf{b},\textbf{c}) = [p_1(\textbf{a})e_1(\textbf{b}) + p_0(\textbf{a})e_0(\textbf{b})\gamma(\textbf{a})]D(\textbf{c})$$ or $$\varphi(\textbf{a},\textbf{b},\textbf{c}) = [p_1(\textbf{a})e_1(\textbf{b}) + p_0(\textbf{a})e_0(\textbf{b})\gamma(\textbf{b})]D(\textbf{c})$$ Both equations are correct. They both describe the interaction of the photon on different grids associated with different parts of the atom, where the dynamic principle applies in each case.
Equations of this kind are used more extensively in Ref. 3, and the rules that govern them are given in the Appendix of that reference.
Unifying Features {#unifying-features .unnumbered}
=================
The most important non-local unifying feature of a trans-coordinate system is the *invariant metric space* in which everything is embedded. Another important unifying feature is the *dynamic principle* applied to each particle by itself and to any system of particles as a whole.
*Non-local correlations* are another unifying feature of the functions generated by the dynamic principle. These qualify the location of one particle relative to the location of another particle; so the equation of state of two particles is written $\Phi = p_1p_2(\textbf{a}, \textbf{b})$, rather than $\Phi= p_1(\textbf{a})p_2(\textbf{b})$. These particles have their separate grids as always, to which the dynamic principle separately applies as always. The difference is that the range of **b** depends on the value of **a** and vice versa, and their joint values determine $\Phi$. This function is local to both events **a** and **b**, so it is a bi-local function.
The fourth unifying feature is the *collapse of the wave function* over finite regions of space.
Modified Hellwig-Kraus Collapse {#modified-hellwig-kraus-collapse .unnumbered}
===============================
A local quantum mechanical measurement can have regional consequences through the collapse of a wave function. The question is: How can that superluminal influence be invariantly transmitted over a relativistic metric space?
Hellwig and Kraus answered this question by saying that the collapse takes place across the surface of the backward time cone of the triggering event \[[@HK]\]. The Hellwig-Kraus collapse has been criticized because it appears to result in causal loops \[[@AA]\], but the situation changes dramatically with the new trans-coordinate definition of state. We keep the idea that the influence of a collapse is communicated along the backward time cone; however, the state of the system that survives a collapse (i.e., the finally realized eigenstate) is not defined along a “simultaneous" surface. The increased flexibility of the new state definition allows the remaining (uncollapsed) state to retain its original relationship with the event that initiates the collapse. When this program is consistently carried out causal loops are eliminated, even in a system of two correlated particles. I will not elaborate on this idea in this paper but it is demonstrated in detail in \[[@RM1]\].
Another Approach {#another-approach .unnumbered}
================
Invariance under coordinate transformation is not discussed at any length in this paper because coordinates are not introduced in the first place; but it should be noted that the idea of coordinate invariance is limited. General relativity is not truly independent of coordinates because it does not include *all possible* coordinates in its transformation group. It does not include ‘discontinuous’ coordinate systems, many of which are capable of uniquely identifying all the events in a space-time continuum – as is claimed to be the purpose of a space-time coordinate system. For example, imagine Minkowski coordinates in which the number 1.0 is added to all irrational numbers but not to rational numbers. This system is perfectly capable of systematically and uniquely identifying all of the events in a space-time continuum, but it is thoroughly discontinuous in a way that prevents it from being useful to general relativity. It only takes one example of unfit coordinates to disqualify invariance as a fundamental requirement in physics, and there are many discontinuous coordinates like this one. Of course one can always reject coordinates that don’t work in the desired way on the basis of the fact that they don’t work in the desired way. But that avoids the issue. The point is that the influence of unnatural identification labels cannot be eliminated from physics through an invariance principle that affects only a sub-set of unnatural identification labels. Another approach is indicated.
[99]{}
Aharonov, Y., Albert, D. Z. (1981) *Phys. Rev. D* **24**, 359
Feynman, R. P., Morinigo, F. B., Wagner, W. G. (1995) *Feynman Lectures on Gravitation*, B. Hatfield, ed., Addison-Wesley, New York, 33
Hellwig, K. E., Kraus, K. (1970) *Phys. Rev. D* **1**, 566
Landau, L., Lifshitz, E. (1971) *The Classical Theory of Fields*, Pergamon Press, New York, p. 316
Maldacena, J. (2005) “The Illusion of Gravity”, *Sci. Am.* Nov, 56
Mould, R. A. (2002) *Basic Relativity*, Springer, New York, Eq. 8.66
Mould, R. A. (2008) “Trans-Coordinate States", arXiv:0812.1937
’t Hooft, G. (2008) “A Grand View of Physics", *Int’l J. Mod. Phys.* **A23** 3755, sect 3; arXiv:0707.4572
[^1]: Department of Physics and Astronomy, State University of New York, Stony Brook, 11794-3800; [email protected]; http://ms.cc.sunysb.edu/\~rmould
[^2]: A region surrounded by flat space will not conserve energy and momentum if **no** coordinates are chosen in the region, or if certain discontinuous coordinates are chosen in the region. Here again conservation depends on a coordinate choice or on the choice of a transformation group.
| {
"pile_set_name": "ArXiv"
} |
Ivanka Trump was caught meeting with the Russian who was trying to set up a get-together between Trump and Putin on Trump’s Moscow project.
Buzzfeed reported, “There is no evidence that Ivanka Trump’s contact with the athlete — the former Olympic weightlifter Dmitry Klokov — was illegal or that it had anything to do with the election. Nor is it clear that Klokov could even have introduced Trump to the Russian president. But congressional investigators have reviewed emails and questioned witnesses about the interaction, according to two of the sources, and so has special counsel Robert Mueller’s team, according to the other two. The contacts reveal that even as her father was campaigning to become president of the United States, Ivanka Trump connected Michael Cohen with a Russian who offered to arrange a meeting with one of the US’s adversaries — in order to help close a business deal that could have made the Trump family millions.”
The reason the report of the meeting is bad for the Trumps is not a matter of legality or even the 2016 election. The meeting is important because Donald Trump has claimed that he does not have, and never had, any business in Russia. It is clear that the Trumps did have business in Russia and were looking to build a Trump Moscow. For the Special Counsel to potentially argue that the Trump campaign illegally conspired with Russia, he has to demonstrate that there was a relationship between the two parties.
Ivanka Trump’s meeting is a direct indication of that relationship. The Russians were not only involved with members of the Trump campaign, but also with the Trump family itself. If Ivanka and Don Jr. were talking to Russians during the presidential campaign, it is impossible to believe that Trump wasn’t doing the same. | {
"pile_set_name": "Pile-CC"
} |
The minimal data set is published on figshare DOI: [10.6084/m9.figshare.12174354](https://doi.org/10.6084/m9.figshare.12174354)
Introduction {#sec005}
============
Percutaneous coronary intervention (PCI) represents the most important treatment modality of coronary artery stenosis. However, the occurrence of in-stent restenosis (ISR) remains a limitation for the long-term outcome despite the introduction of drug-eluting stents (DES).\[[@pone.0232483.ref001]\] Several contributory factors have been identified in recent years, but the overall underlying mechanisms are still unclear.
ISR occurs at different points in time (early and late restenosis) after implantation of a DES and involves numerous cellular and molecular mechanisms.\[[@pone.0232483.ref002],[@pone.0232483.ref003]\] The pathophysiology of restenosis involves accumulation of new tissue within the arterial wall. Smooth muscle cell (SMC) migration and extracellular matrix (ECM) secretion play a central role in neointimal hyperplasia (NIH), which is nowadays seen as pathognomonic of ISR.\[[@pone.0232483.ref004]\] ECM synthesis by these SMCs is responsible for the increasing volume of intimal tissue, which is composed of ECM proteoglycans and collagens.\[[@pone.0232483.ref005]\] Over the months after the implantation of a DES there is a shift towards greater ECM synthesis rather than SMC proliferative activity.\[[@pone.0232483.ref006],[@pone.0232483.ref007]\] Inhibition of the initial SMC migration from the media could therefore play a key role in later NIH. But unlike the proliferative aspect of the SMCs, little is known about their “motile” activity after stent implantation, which allows them to migrate from the media into the neointima. The use of DES reduced the incidence of restenosis, but the cytostatic agents also delayed endothelialization of the implanted stent, which plays a key role for the long-term outcome. Incomplete endothelialization can lead to stent thrombosis and acute myocardial infarction after discontinuation of antiplatelet therapy.
Vaspin is an adipocytokine that has been isolated from the visceral adipose tissue of Otsuka Long-Evans Tokushima Fatty (OLETF) rats, a rat model of diabetes.\[[@pone.0232483.ref008]\] Because vaspin is characterized by the presence of a core domain consisting of three β-sheets and nine α-helices, it is likely that vaspin belongs to the serine protease inhibitor (serpin) family.\[[@pone.0232483.ref009],[@pone.0232483.ref010]\] However, nothing is yet known about the physiological inhibitory function of vaspin. Several studies in OLETF rats demonstrated that vaspin production decreased as diabetes worsened but increased with insulin or pioglitazone treatment.\[[@pone.0232483.ref011]\] This suggests that the up-regulation of vaspin may have a defensive action against insulin resistance. A recent study clearly showed that vaspin is also produced by periadventitial adipose tissue, which may play an important paracrine role during the development of ISR.\[[@pone.0232483.ref012]\]
We therefore tested whether plasma levels of vaspin are related to the clinical manifestation of ISR in patients with stable coronary artery disease who had received a DES, and whether vaspin inhibits the migration of smooth muscle cells and endothelial cells *in vitro*. Further, we wanted to know whether vaspin has any negative effects on the healing process (endothelialization) of the stented vessel.
Methods {#sec006}
=======
Patients {#sec007}
--------
Blood samples were taken prospectively from all patients with stable coronary artery disease who were scheduled for elective PCI. The type, number, length, and size of the stent(s) implanted were left to the discretion of the interventionalist. All patients with DES only (n = 107) were asked to participate in this study, and we included all 85 patients who gave their informed consent for follow-up angiography.
Paclitaxel-eluting stents (Taxus; Boston Scientific, Boston, MA, USA) were implanted in 62 patients (72.9%); sirolimus-eluting stents (Cypher; Cordis, Johnson & Johnson, Miami Lakes, FL, USA) were used in 23 patients (27.1%). The study was approved by the institutional ethics committee (Ethikkommission der Medizinischen Universität Wien). Patients with concurrent severe illness, acute coronary syndrome within three months before angioplasty, PCI for restenosis and unsuccessful procedure were excluded. Clopidogrel therapy was either started on the day before angiography (n = 56) or immediately after stent implantation (n = 29) with four tablets (300 mg). During the intervention, all patients were treated with unfractionated heparin. After the procedure, patients were maintained on 100 mg aspirin indefinitely, 75 mg clopidogrel for at least 6 months. Other medications were given in accordance with the relevant guidelines and regulations.
Angiographic definitions {#sec008}
------------------------
Quantitative coronary angiographic analysis was performed by a single, experienced researcher who was blinded to clinical characteristics and laboratory measurements. The modified American College of Cardiology/American Heart Association grading system (types A, B1, B2, and C) was used to characterize lesion morphology. The off-line quantitative coronary angiographic analysis was performed with an automated edge-detection system (QCA-CMS V 6.0; Medis, Medical Imaging Systems). A contrast-filled nontapered catheter tip was used for calibration. The reference diameter was measured by interpolation. The angiographic parameters measured were vessel size, maximal balloon pressure (atm), balloon-to-vessel ratio, lesion length, and length of the stented segment. Minimal luminal diameter (before and after the procedure) and diameter stenosis (before and after the procedure) were measured in-stent and in-segment (including the stented segment as well as both 5 mm margins proximal and distal to the stent). Angiographic restenosis was evaluated at six- to eight-month follow-up, or earlier if clinically indicated. The primary end point of the study was angiographic restenosis (diameter stenosis of at least 50% based on in-segment analysis) at follow-up angiography. The secondary end points were in-stent late lumen loss (LL) and the need for target lesion revascularization due to restenosis in the presence of symptoms or objective signs of ischemia during follow-up.
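For completeness, the conventional quantitative coronary angiography definitions behind these end points, with MLD the minimal luminal diameter and RD the interpolated reference diameter (standard usage, not restated in the text), are $$\text{diameter stenosis (\%)} = \left(1 - \frac{\text{MLD}}{\text{RD}}\right) \times 100, \qquad \text{late lumen loss} = \text{MLD}_{\text{post}} - \text{MLD}_{\text{follow-up}}.$$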
Blood sampling {#sec009}
--------------
After informed consent was obtained, blood samples were taken under fasting conditions. Samples were taken in the morning before PCI (before treatment with unfractionated heparin). Therefore, venous blood was drawn from the antecubital vein with minimal tourniquet pressure into EDTA tubes. After centrifugation (4°C; 660g for 25 min) plasma samples were stored at -80°C until use.
Cell culture experiments {#sec010}
------------------------
Human coronary artery smooth muscle cells (HCASMC) were isolated from pieces of coronary arteries obtained from patients undergoing heart transplantation. Such smooth muscle cells were cultured and characterized as already described.\[[@pone.0232483.ref013]\] Human umbilical vein endothelial cells (HUVEC) were isolated from fresh umbilical cords by mild collagenase treatment, and cultivated as described.\[[@pone.0232483.ref014]\]
Cell migration assay {#sec011}
--------------------
The migration of HCASMC and HUVEC was examined using a colorimetric cell migration assay (Millipore, Billerica, USA) based on the Boyden chamber principle using inserts with a pore size of 8 μm.
HCASMC and HUVEC were trypsinized, washed 2x with PBS, resuspended in 1% FBS in M199 in the presence or absence of vaspin at different concentrations (0.004 ng/mL up to 4 ng/mL), and afterwards added to the upper tray (2.5x10^4^ cells/300 μL). M199 with 10% FBS and the same concentration of vaspin as in the upper tray was added to the bottom chamber. As a negative control we used serum-free M199 in both chambers. After 6 hours at 37°C, nonmigrating cells were scraped from the upper surface of the filter. Cells on the bottom surface were incubated with Cell Stain Solution (Millipore, Billerica, USA), then extracted and detected by spectrophotometry (absorbance at 560 nm).
Scratch assay {#sec012}
-------------
HCASMC and HUVEC were trypsinized, washed 2x with PBS and resuspended in 1% FBS in M199. Cells were seeded into 6-well plates at a concentration of 5\*10\^5 cells / well. After 24h the cell monolayer was "scratched" in a straight line with a 200μL pipet tip. The debris of the scratch was removed by washing the cells once with growth medium, and then cells were either stimulated with 4ng/ml vaspin or PBS as a control. After 24h remigration into the scratched area was analyzed under the microscope. Each experiment was repeated 3 times and a representative sample is shown.
Measurement of proliferation {#sec013}
----------------------------
Cells (HUVECs and HCASMCs) were seeded at a concentration of 2000 cells/well in the wells of an electronic microtiter plate (E-Plate®) of the xCELLigence System (ACEA Biosciences Inc., San Diego, USA). After 4h of adhesion non-adherent cells were washed away with PBS and vaspin was added at the indicated concentrations. Adhesion of cells to the gold microelectrodes impedes the flow of electric current between electrodes. This impedance value increases as cells proliferate. After 24h differences in impedance were measured.
Determination of vaspin plasma levels {#sec014}
-------------------------------------
Vaspin antigen was determined by a specific enzyme-linked immunosorbent assay (ELISA) (AdipoGene, Incheon, Korea; sensitivity 12 pg/ml).
Statistical analysis {#sec015}
--------------------
Sample size calculation was based on the hypothesis that vaspin plasma levels show a difference of at least 50% in patients with and without ISR. Calculation of sample size revealed that, with an expected "real world" restenosis rate of 10%, 80 patients were needed to detect a 50% difference in vaspin levels between patients with and without ISR with a power of 80% and significance level (2-tailed) of 0.05 (15). In patients with multiple lesion interventions, only the lesion with the highest late lumen loss was included. Continuous variables are expressed as mean ± SD. Demographic data of patients with and without restenosis were compared by the unpaired Student t test. Categorical variables are summarized as counts and percentages and were compared by the chi-square or by Fisher exact test. Plasma levels of vaspin were compared by Mann-Whitney U test. Spearman's correlation was used to correlate vaspin levels with late lumen loss. Multivariate analysis was performed with the logistic regression model in which restenosis was used as the dependent variable and vaspin levels as well as potentially confounding baseline variables were used as independent variables. Baseline variables were selected for the model if they appeared to be imbalanced between patients with and without restenosis, as indicated by a p-value \<0.20. HCASMC and HUVEC migration and proliferation were analyzed by ANOVA. A value of p \< 0.05 (2-tailed) was considered statistically significant. All statistical analyses were performed with the statistical software package SPSS version 11.0 (SPSS, Inc., Chicago, Illinois). The authors had full access to the data and take responsibility for its integrity. All authors have read and agree to the manuscript as written.
Results {#sec016}
=======
Association of vaspin plasma levels with instent-restenosis {#sec017}
-----------------------------------------------------------
Baseline characteristics of the study population are given in [Table 1](#pone.0232483.t001){ref-type="table"}. The mean age was 64±10 years and 77.6% of patients were male. In-stent restenosis occurred in 14 (16.5%) patients. Angiographic and interventional characteristics of patients are given in [Table 2](#pone.0232483.t002){ref-type="table"}. Patients with ISR at follow-up angiography had significantly lower vaspin plasma levels before stent implantation compared with patients without ISR (0.213 ng/ml vs 0.382 ng/ml; p = 0.001; [Fig 1A](#pone.0232483.g001){ref-type="fig"}). By dividing plasma vaspin levels into tertiles we could show that the restenosis rate is also strongly related to levels of vaspin ([Fig 1B](#pone.0232483.g001){ref-type="fig"}). In patients with plasma vaspin levels in the third tertile before intervention we did not observe any restenosis (0%; p\<0.05). Multivariate regression analysis ([Table 3](#pone.0232483.t003){ref-type="table"}) revealed that a decrease of vaspin plasma levels was associated with ISR independently of clinical variables (hypertension, BMI) and procedural variables (number, diameter and type of stents). By investigating late lumen loss in the stented coronary segment we found a significant correlation with plasma vaspin levels before intervention (R = 0.3, p\<0.05; [Fig 2](#pone.0232483.g002){ref-type="fig"}).
![Plasma levels of vaspin before percutaneous coronary intervention (PCI) with implantation of drug-eluting stents.\
(A) Box plots indicate median, interquartile range (range from the 25th to the 75th percentile), and total range. p\<0.001 no restenosis vs. restenosis. Restenosis rates according to tertiles of vaspin plasma levels before PCI, p\<0.001 (B).](pone.0232483.g001){#pone.0232483.g001}
![Correlation of late lumen loss with vaspin plasma levels before PCI; R = 0.3, p\<0.05.](pone.0232483.g002){#pone.0232483.g002}
10.1371/journal.pone.0232483.t001
###### Baseline characteristics of patients with and without restenosis.
![](pone.0232483.t001){#pone.0232483.t001g}
                          Total       Restenosis   No restenosis   p-value
----------------------- ----------- ------------ --------------- ------
Age (years) 64±10 66±7 63±10 0.32
Sex (male) 66 (79) 9 (75) 57 (80) 0.7
Hypertension 63 (76) 7 (58) 56 (78) 0.12
Diabetes 25 (30) 3 (25) 22 (31) 0.48
Family History of CHD 46 (55) 7 (58) 39 (54) 0.54
Smoker 28 (34) 4 (33) 24 (34) 1.0
BMI (kg/m^2^) 27.9±3.7 26.6±3.6 28.1±3.7 0.19
Triglycerides (mg/dl) 175±99 159±70 177±103 0.55
TC (mg/dl) 189±43 186±27 190±45 0.81
HbA1c 6.02±0.74 6.07±0.63 6.06±0.77 0.83
Leukocytes 6.69±1.51 6.49±1.83 6.73±1.46 0.6
ACE-Inhibitors 35 (42) 7 (58) 28 (39) 0.34
ARB 10 (12) 1 (8) 9 (13) 1.0
Beta Blocker 44 (53) 8 (67) 36 (51) 0.36
Statin 64 (77) 9 (75) 55 (77) 1.0
Values are given as mean ± SD or n (%).
ACE = angiotensin-converting enzyme; ARB = angiotensin receptor blocker; BMI = body mass index; CRP = C-reactive protein; HbA1c = glycosylated hemoglobin; TC = total cholesterol.
10.1371/journal.pone.0232483.t002
###### Angiographic and procedural characteristics of patients with and without restenosis.
![](pone.0232483.t002){#pone.0232483.t002g}
  Angiography Target vessel            Restenosis   No restenosis   p-value
------------------------------------ ------------ --------------- ------
LAD 8 (57) 40 (56.3) 0.69
LCx 1 (7) 13 (18.3)
RCA 5 (36) 18 (25.4)
Lesion Type (A/B/C) 2/10/2 17/45/7 0.42
Vessel Size 3.23±0.39 3.26±0.37 0.79
MLD (mm) 0.68±0.21 0.67±0.23 0.99
DS (%) 78.70±7.05 80.03±7.14 0.55
Number of stents per procedure 2.33±1.37 1.78±0.92 0.08
Number of stents per vessel 1.58±0.79 1.30±0.57 0.13
**Type of Stent**
Taxus 13 (93%) 48 (69) 0.16
Cypher 1 (7%) 22 (31)
**Length of stented segment (mm)** 22.41±0.41 20.79±6.40 0.4
MLD after procedure
In-stent (mm) 2.64±0.41 2.68±0.36 0.74
In-segment (mm) 2.63±0.41 2.66±0.35 0.78
LAD = left anterior descending coronary artery; LCx = left circumflex coronary artery; MLD = minimal lumen diameter; RCA = right coronary artery.
10.1371/journal.pone.0232483.t003
###### Logistic regression model assessing the risk for in-stent restenosis after implantation of drug-eluting stents according to a decrease of plasma levels of vaspin.
![](pone.0232483.t003){#pone.0232483.t003g}
Hazard ratio for 1 SD decrease of vaspin plasma levels 95% Confidence interval p-value
--------------------------------------------------------------- -------------------------------------------------------- ------------------------- ---------
Unadjusted 4.4 1.6--11.4 0.003
Adjusted for hypertension, BMI 4.2 1.5--11.6 0.005
Adjusted for stent diameter, type of stents, number of stents 6.6 1.9--23.0 0.003
BMI body mass index; SD standard deviation.
Effect of vaspin on migration of human coronary smooth muscle cells (HCASMC) {#sec018}
----------------------------------------------------------------------------
Treatment of HCASMC with vaspin decreased migration towards serum in a dose-dependent manner with a maximum inhibitory effect at 4 ng/mL (100±9.26% vs. 9±5.96%; p\<0.001). Interestingly, the inhibitory effect of vaspin was decreased at the higher dose of 40 ng/mL resulting in a U-shaped curve ([Fig 3A](#pone.0232483.g003){ref-type="fig"}). In addition, vaspin (4 ng/mL) completely abolished repopulation with HCASMC in a scratch assay ([Fig 3C](#pone.0232483.g003){ref-type="fig"}).
![Panel A & B: Effect of vaspin on migration of human coronary smooth muscle cells (A), and human umbilical vein endothelial cells *in vitro* (B); \* p\<0.001. Panel C & D: Scratch assay to determine the effect of vaspin on wound healing migration of human coronary smooth muscle cells (C) and human umbilical vein endothelial cells (D) *in vitro*. Cells were added at a concentration of 5\*10\^5 cells / well. After 24h the cell monolayer was scratched in a straight line with a 200μL pipet tip. The debris of the scratch was removed by washing the cells once with growth medium and then cells were either treated with 4ng/ml vaspin (Panel A + B) or PBS as a control (Panel C + D). After 24h remigration into the scratched area was analyzed under the microscope. Each experiment was repeated 3 times and a representative sample is shown.](pone.0232483.g003){#pone.0232483.g003}
Effect of vaspin on migration of human umbilical vein endothelial cells (HUVEC) {#sec019}
-------------------------------------------------------------------------------
Treatment of HUVEC with the same concentrations of vaspin did not decrease migration towards serum; no inhibitory effect on HUVEC migration could be demonstrated at all (100±5.65% vs. 98±7.67%; p\>0.05) ([Fig 3B](#pone.0232483.g003){ref-type="fig"}). Also, vaspin (4 ng/mL) did not abolish repopulation with HUVEC in a scratch assay ([Fig 3D](#pone.0232483.g003){ref-type="fig"}).
Effect of vaspin on the proliferation of HUVEC and HCASMC {#sec020}
----------------------------------------------------------
Treatment of HUVEC and HCASMC with vaspin at a concentration of 4 ng/ml for 24h had no significant effect on the proliferation of these cells compared to cells which were treated with 0 ng/ml (HUVEC: 100±11% vs 103±9%, p\>0.05; HCASMC: 100±10% vs 101±8.6%, p\>0.05) ([Fig 4](#pone.0232483.g004){ref-type="fig"}).
![Effect of vaspin on proliferation *in vitro*.\
Human coronary smooth muscle cells and human umbilical vein endothelial cells were seeded at a concentration of 2000 cells/well in the wells of an electronic microtiter plate. After 24h differences in impedance, which represents proliferation, were measured.](pone.0232483.g004){#pone.0232483.g004}
Discussion {#sec021}
==========
Locally produced adipokines, especially those produced by periadventitial adipose tissue, may affect vascular physiology and pathology.\[[@pone.0232483.ref012]\] A correlation of vaspin plasma levels with carotid intima-media thickness, independent of insulin resistance, has already been described.\[[@pone.0232483.ref015]\] This indicates that vaspin may play a role in the development of atherosclerosis also in lean or non-diabetic patients. However, little is still known about the association of vaspin with atherosclerosis.\[[@pone.0232483.ref016],[@pone.0232483.ref017]\] Because intima hyperplasia induced by smooth muscle cell migration plays a key role in the development of atherosclerosis\[[@pone.0232483.ref018]\] as well as in the development of in-stent restenosis,\[[@pone.0232483.ref019]\] the effects of vaspin on smooth muscle cell migration and the occurrence of ISR in relation to the baseline plasma levels of patients with stable coronary artery disease were of special interest in our study.
We were able to show for the first time that plasma levels of vaspin can predict the occurrence of ISR in patients with stable CAD, independently of established CV risk factors. In addition, we could show a correlation of late lumen loss in the stented segment of the coronary artery with plasma vaspin levels before PCI. These findings are in line with our *in-vitro* data showing a dose-dependent effect on smooth muscle cell migration.
Vaspin is thought to belong to the serpin family, \[[@pone.0232483.ref009],[@pone.0232483.ref010]\] but its molecular target and mode of action are not fully understood. Heiker et al. were able to identify kallikrein 7 as a first target of vaspin, which could be the physiological mechanism for its compensatory actions on obesity-induced insulin resistance \[[@pone.0232483.ref020]\]. The effects of vaspin on smooth muscle cell migration have already been studied in a rat animal model.\[[@pone.0232483.ref021]\] However, given the well-described differences in gene expression patterns between different types of smooth muscle cells within one organism,\[[@pone.0232483.ref022]\] we reinvestigated the described effect on smooth muscle cell migration in cells derived from human coronary arteries. We were able to reproduce the described inhibitory effect of vaspin on serum-induced smooth muscle cell migration in a human *in vitro* model for the first time. A recent study in rat smooth muscle cells has shown that vaspin attenuates high glucose-induced proliferation and chemokinesis. In addition, it has been demonstrated that this effect was mediated by the PI3K/Akt pathway, as vaspin significantly attenuated Akt phosphorylation *in vitro* \[[@pone.0232483.ref023]\]. This is in line with previous studies that demonstrated that the PI3K/Akt pathway is central in the development of intima hyperplasia after vascular injury *in vivo* \[[@pone.0232483.ref024]\]\[[@pone.0232483.ref025]\]. Further, we could show that vaspin has no effect on the migration or the proliferation of HUVEC. In contrast to the inhibitory effect of vaspin on the PI3K/Akt pathway in smooth muscle cells, vaspin has been shown to induce Akt activation in human aortic endothelial cells, thereby protecting them from free fatty acid-induced apoptosis \[[@pone.0232483.ref026]\]. This is of vital importance for the endothelialization of the stent after implantation, which determines the long-term outcome and the risk for acute stent thrombosis and myocardial infarction. In combination with our clinical data, this gives us strong evidence for the observed correlation between the development of in-stent restenosis and the pre-interventional plasma levels of vaspin in an individual patient, as does the observed strong inhibitory effect on HCASMC but not on HUVEC *in vitro*. The almost complete abolition of serum-induced HCASMC migration could explain why no in-stent restenosis occurred in patients with the highest plasma vaspin levels. Interestingly, it has been shown recently that vaspin also has direct effects on human macrophages: treatment with vaspin reduces the inflammatory phenotype of human macrophages by nuclear factor κB down-regulation and significantly suppresses oxidized low-density lipoprotein-induced foam cell formation. This was associated with significantly reduced intraplaque inflammation in an *in vivo* model that employed chronic infusion of vaspin into Apoe−/− mice (PMID 29891806).
The source of circulating vaspin in plasma is not fully understood. The highest tissue expression levels of vaspin were found in liver, brain and skin, compared with only modest expression in adipose tissue and spleen and low or non-detectable expression in bone marrow, muscle and kidney\[[@pone.0232483.ref027]\]. Interestingly, Sato et al. were able to demonstrate that vaspin is expressed at high levels in atheromatous plaques, in particular in foam cells, in human coronary arteries. In contrast, vaspin was not observed in normal human coronary arteries \[[@pone.0232483.ref028]\]. However, no data are available on the expression of vaspin in perivascular adipose tissue or vascular cells in vessels with intima hyperplasia.
Some limitations of the present study have to be acknowledged. As the primary endpoint of this study was angiographic restenosis, we included only patients who gave their informed consent for control angiography. However, we do not believe that selection bias plays a major role, since these patients did not differ with respect to baseline characteristics and outcome from the total cohort of patients who had PCI with implantation of DES. Further, our study is necessarily of an observational nature. Accordingly, our results may be explained by unmeasured confounding factors. Therefore, we tried to control for baseline imbalances by multivariate modeling. The possibility of residual or undetected confounding is small but cannot be ruled out completely. In addition, only first-generation DES were analyzed in our study; however, the biological mechanisms of ISR are similar between first and newer generations of DES. In addition, in our cohort ISR rates after implantation of Cypher or TAXUS stents were remarkably different. We therefore included stent type in the multivariate analysis and could demonstrate that the association of vaspin plasma levels with ISR was independent of the stent used.
In conclusion, determination of vaspin plasma levels before PCI might be helpful in identifying patients at high risk for the development of ISR after stent implantation. In addition, the selective effect of vaspin on smooth muscle cell migration could potentially be used to reduce ISR without inhibiting re-endothelialization of the stented segment.
PCI
: Percutaneous coronary intervention
ISR
: In-stent restenosis
SMC
: Smooth muscle cells
HCASMC
: Human coronary artery smooth muscle cell
DES
: Drug eluting stents
Vaspin
: Visceral adipose tissue-derived serpin
ELISA
: Enzyme-linked immunosorbent assays
NIH
: Neointimal hyperplasia
ECM
: Extracellular matrix
[^1]: **Competing Interests:**The authors have declared that no competing interests exist.
| {
"pile_set_name": "PubMed Central"
} |
{
"id": "overlay_male_worn_glasses_eye",
"fg": ["glasses_eye_m"],
"bg": []
}
| {
"pile_set_name": "Github"
} |
Q:
Bound on surface gradient in terms of gradient
Let $S \subset \mathbb{R}^n$ be a hypersurface and define the surface gradient of a function $u:S \to \mathbb{R}$ by
$$\nabla_S u = \nabla u - (\nabla u \cdot N)N$$
where $N$ is the normal vector.
Is it possible to obtain a bound of the form
$$|\nabla u |_{L^2(S)} \leq C|\nabla_S u|_{L^2(S)}$$
where $C$ doesn't depend on $u$? Assume whatever smoothness of $N$ is needed.
A:
This is not true, as the following counterexample shows. We note first of all that
$\nabla_{S}u = \nabla u - (\nabla u \cdot N)N$
is simply the tangential component of $\nabla u$ on the surface. This assumes that $N$ is a unit vector. To verify that this is the case we can check that
$(\nabla_{S} u, N) = 0.$
Now consider that $\nabla u$ points in the direction of the unit normal. Then there will be no tangential component of $\nabla u$ on the surface $S$, i.e. $\nabla_{S} u = 0$. Then, assuming $\nabla u \neq 0$, we have that
$| \nabla u|_{L^{2}(S)} > |\nabla_{S} u|_{L^{2}(S)} = 0.$
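For a concrete instance of this counterexample, one illustrative choice (among many) is the unit sphere with a radial function:
$S = S^{n-1} = \{x \in \mathbb{R}^{n} : |x| = 1\}, \quad u(x) = \tfrac{1}{2}|x|^{2}.$
Then $\nabla u = x$, which on $S$ is exactly the outward unit normal $N$, so $\nabla_{S} u = \nabla u - (\nabla u \cdot N)N = 0$ on $S$, while $|\nabla u| = 1$ there. Hence
$|\nabla u|_{L^{2}(S)} = |S^{n-1}|^{1/2} > 0 = C\,|\nabla_{S} u|_{L^{2}(S)}$
for every constant $C$, so no bound of the desired form can hold.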
| {
"pile_set_name": "StackExchange"
} |
The present invention relates to a tubing pipe sizing tool.
In plumbing, sizing tubes is an important job that the plumber does frequently and routinely.
The above could not be more true when the work environment is a narrow or crowded place. In such a limited space, the plumber cannot use two hands or bulky tools. Therefore, it would be very helpful if the plumber could do his or her work with a single hand, or with an inventive tool that enables the user to get the job done with a single hand.
Accordingly, a need for a tubing pipe sizing tool has long been present, considering the extensive demands of everyday life. This invention is directed to solving these problems and satisfying the long-felt need. | {
"pile_set_name": "USPTO Backgrounds"
} |
Love Bombing
Can’t wait to see the amount of emails I am going to get today about this tip…
I am always reading the Peaceful Parenting stuff because I really believe LOVE is the answer to most of our problems. I have a 6 and an 8 year old. I recently came across this article on Love Bombing. I think it can and probably should apply to adults as well as children.
It works like this…if you have a kid who is acting out or even being aggressive, instead of punishing him or giving him a time out, engulf him with love and give him all the control. Plan a Love Bomb day. Love Bomb time entails doing anything the kid wants to do (as long as it is safe of course) but yes, that even includes watching back to back episodes of Sponge Bob. Basically they get to choose whatever they want to do. They are in control. But with that control they are also smothered in LOVE.
How could this apply to adults? In my opinion we are all just kids in big bodies. But let's say your husband is being grumpy and angry. Why not give HIM a Love Bomb day?
"pile_set_name": "Pile-CC"
} |
Q:
UITextField : testing text property for empty string or nil or null doesn't work
I've got this weird problem testing for an empty (or null) text property.
Here's my setup: I have a view with 6 text fields in it, and here's the code I use to go through those fields (loaded in an NSMutableArray)...
NSEnumerator *portsEnumerator = [appliancePorts objectEnumerator];
UITextField *tmpField;
newSite.port = [NSMutableArray array];
while (tmpField =[portsEnumerator nextObject]) {
NSLog(@"value:%@",tmpField.text);
if (![tmpField.text isEqualToString:nil]) {
[newSite.port addObject:(NSString *)tmpField.text];
}
}
When I'm in this interface and type some text in the first two fields, "just" tab through the remaining fields and hit the "Done" button, here's what I get from the GDB output:
2010-08-10 20:16:54.489 myApp[4883:207] value:Value 1
2010-08-10 20:16:58.115 myApp[4883:207] value:Value 2
2010-08-10 20:17:02.002 myApp[4883:207] value:
2010-08-10 20:17:13.034 myApp[4883:207] value:
2010-08-10 20:17:15.854 myApp[4883:207] value:
2010-08-10 20:17:17.762 myApp[4883:207] value:
I know that if I test for an empty string it should work, because the text property, when dumped to the console, shows this:
UITextField: 0x5d552a0; frame = (20 8; 260 30); text = ''; clipsToBounds = YES; opaque = NO; tag = 1; layer = CALayer: 0x5d54f20
But the REAL problem begins when I go back to the view, enter some text in the same first two fields and hit the "Done" button right after (not going through the other fields so they don't get any focus). This is again the GDB output...
2010-08-10 20:23:27.902 myApp[4914:207] value:Value 1
2010-08-10 20:23:31.739 myApp[4914:207] value:Value 2
2010-08-10 20:23:34.523 myApp[4914:207] value:(null)
2010-08-10 20:23:56.443 myApp[4914:207] * Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '* -[NSMutableArray insertObject:atIndex:]: attempt to insert nil object at 2'
So, the obvious problems are that, first, isEqualToString:nil doesn't work and, second, how come this text changes from '' to null just as a matter of whether the field got focus or not?
So, is there a better way to test for an empty field?
Thanks!
A:
I tend to use
if (![tmpField.text length])
This will skip it if it is either nil or @"", i.e. it finds the length of the string, and if it is 0 (which would be the case if the string were empty or nil) it does not execute the body of the if statement.
A:
How about
if (![tmpField.text isEqualToString:@""])
that tests for an empty string, not a nil string.
There are many other ways including testing the length of the string. Note you can send a message to a nil object with no problems, so if tmpField.text was nil then the test would succeed (and you would crash when trying to add a nil object to your array) - but the test you want is to send a message to the NSString object querying whether it is empty or not.
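For instance, your original loop could be rewritten with the length check along these lines (just an untested sketch, reusing the variable names from the question):
NSEnumerator *portsEnumerator = [appliancePorts objectEnumerator];
UITextField *tmpField;
newSite.port = [NSMutableArray array];
while ((tmpField = [portsEnumerator nextObject])) {
    // text may be @"" for fields that were tabbed through, or nil for fields that never got focus
    NSString *text = tmpField.text;
    // [text length] returns 0 for both nil and @"", so empty fields are skipped safely
    if ([text length] > 0) {
        [newSite.port addObject:text];
    }
}
This way nothing nil is ever passed to addObject:, which avoids the NSInvalidArgumentException shown above.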
| {
"pile_set_name": "StackExchange"
} |
Using multiple sequence alignment editors and formatters.
Sequence alignment editors enable the user to manually edit a multiple sequence alignment (msa) in order to obtain a more reasonable or expected alignment. Editors allow sequences to be reordered and/or modified using the computer's cut and paste commands. They are designed to accept various msa formats and to provide the output file in a suitable user-designated format. Sequence formatters provide various output formatting options, such as color and shading schemes to enhance visualization of residue alignments. The formatters can output files in Postscript, EPS, RTF, and other widely recognized formats, while accepting the standard input formats, such as MSF, ALN, and FASTA. This article introduces a number of sequence alignment editors and formatters, and provides links to sites where they can be found. | {
"pile_set_name": "PubMed Abstracts"
} |
Cutest 101 Dalmatians with Cruella de Vil Group Halloween Costume
My four roommates and I decided to be the 101 Dalmatians accompanied with Cruella de Vil. We went to the fabric store and found the Dalmatian print fabric. We made our dresses by cutting slits that line up with each other and tying knots up the entire back (make sure to leave extra length because the dress shortens when you tie it tight). We used ribbon for our collars and wrote our names on them.
We all picked names of our dogs and it was my birthday that night so I stood out by putting a spot over my eye. We used extra fabric to make little bows for our hair to make ourselves look like lady pups. Our costumes did not fully come together until Cruella was complete; she was the icing on the cake. We teased her whole head and pinned it up to make it look crazy and then dyed her hair white. Her attire was created by digging through our five closets to get all the pieces necessary (fur coat, black dress, pearls).
Her boyfriend didn’t know what to be, so last minute we made him into a dog. We took fabric and made a poncho then gave him a red bandanna to make the costume more masculine. We went out together and saw other puppies but no other Cruella. Everyone told us that ours was the most well thought out and that we looked cute especially with “Cruella” calling for her puppies through the bar, and us barking back. We had a great time doing this costume and really got into character. | {
"pile_set_name": "Pile-CC"
} |
Complexation of Np(V) with oxalate at 283-343 K: spectroscopic and microcalorimetric studies.
Thermodynamic parameters including the equilibrium constants and enthalpy of complexation of Np(V) with oxalate at variable temperatures (T = 283-343 K, ionic strength = 1.05 mol kg(-1) NaClO(4)) were determined by spectrophotometric and microcalorimetric titrations. The results show that the complexation of Np(V) with oxalate is moderately strong and becomes weaker at higher temperatures. The complexation is exothermic and driven by both enthalpy (negative) and entropy (positive) in the temperature range from 283 K to 343 K. As the temperature is increased, both the enthalpy and entropy of complexation increase (ΔH becomes less negative and ΔS becomes more positive), having opposing effects on the complexation. Because the increase in the enthalpy (ΔH) exceeds that of the entropy term (TΔS), the complexation of Np(V) with oxalate becomes weaker at higher temperatures. The effect of temperature on the complexation is discussed in terms of the energetics of ion solvation and hydrogen bonding involved in the complexation. | {
"pile_set_name": "PubMed Abstracts"
} |
Individualized managing strategies of aggressive angiomyxoma of female genital tract and pelvis.
To investigate and evaluate the clinical management strategies of aggressive angiomyxoma (AA) in the female genital tract and pelvis. A cohort of 13 patients with AA diagnosed and treated in Peking Union Medical College Hospital in the last 12 years was reported, focusing on the results of management and prognosis. The mean age at initial presentation was 36.9 years. The commonest site of tumor was the perineum. Only two cases were accurately diagnosed as AA preoperatively, by biopsy and fine needle aspiration of the tumors respectively. MRI helpfully reveals the location, relationship and degree of infiltration between tumors and pelvic organs. Surgery is the mainstay of treatment. Eleven of 12 patients had complete resection, and the majority of the operations were completed successfully through trans-perineal and trans-vaginal approaches. Three cases with positive expression of ERs and PRs in the tumors received GnRHa injections, which were useful preoperatively but not postoperatively. One repeatedly recurrent case was treated effectively with radiotherapy. The recurrence rate in our study was 41.7% (5/12), with a median recurrence interval of 20.9 months. No patient developed distant metastases or died of the disease. AA preferentially involves the pelvic and perineal regions of women of reproductive age. Tumor biopsy and fine-needle aspiration cytology are conducive to the preoperative diagnosis. An individualized operative strategy and awareness of the need to protect and rebuild the structure and function of the organs should be emphasized during the management of AA. Long-term follow-up is mandatory because of the high rate of recurrence.
"pile_set_name": "PubMed Abstracts"
} |
Diffusion furnaces have been widely used for thermal processing of semiconductor device materials (such as semiconductor wafers or other semiconductor substrates). The furnaces typically have a large thermal mass that provides a relatively uniform and stable temperature for processing. However, in order to achieve uniform results, it is necessary for the conditions in the furnace to reach thermal equilibrium after a batch of wafers is inserted into the furnace. Therefore, the heating time for wafers in a diffusion furnace is relatively long, typically exceeding ten minutes.
As integrated circuit dimensions have decreased, shorter thermal processing steps for some processes, such as rapid thermal anneal, are desirable to reduce the lateral diffusion of dopants and the associated broadening of feature dimensions. Thermal process duration may also be limited to reduce forward diffusion so the semiconductor junction in the wafer does not shift. As a result, the longer processing times inherent in conventional diffusion furnaces have become undesirable for many processes. In addition, increasingly stringent requirements for process control and repeatability have made batch processing undesirable for many applications.
As an alternative to diffusion furnaces, single wafer rapid thermal processing (RTP) systems have been developed for rapidly heating and cooling wafers. Most RTP systems use high intensity lamps (usually tungsten-halogen lamps or arc lamps) to selectively heat a wafer within a cold wall clear quartz furnace. Since the lamps have very low thermal mass, the wafer can be heated rapidly. Rapid wafer cooling is also easily achieved since the heat source may be turned off instantly without requiring a slow temperature ramp down. Lamp heating of the wafer minimizes the thermal mass effects of the process chamber and allows rapid real time control over the wafer temperature. While single wafer RTP reactors provide enhanced process control, their throughput is substantially less than batch furnace systems.
While RTP systems allow rapid heating and cooling, it is difficult to achieve repeatable, uniform wafer processing temperatures using RTP, particularly for larger wafers (200 mm and greater). The temperature uniformity is sensitive to the uniformity of the optical energy absorption as well as the radiative and convective heat losses of the wafer. Wafer temperature nonuniformities usually appear near wafer edges because radiative heat losses are greatest at the edges. During RTP the wafer edges may, at times, be several degrees (or even tens of degrees) cooler than the center of the wafer. At high temperatures, generally greater than eight hundred degrees Celsius (800.degree. C.), this nonuniformity may produce crystal slip lines on the wafer (particularly near the edge). To minimize the formation of slip lines, insulating rings are often placed around the perimeter of the wafer to shield the wafer from the cold chamber walls. Nonuniformity is also undesirable since it may lead to nonuniform material properties such as alloy content, grain size, and dopant concentration. These nonuniform material properties may degrade the circuitry and decrease yield even at low temperatures (generally less than 800.degree. C.). For instance, temperature uniformity is critical to the formation of titanium silicide by post deposition annealing. In fact, the uniformity of the sheet resistance of the resulting titanium silicide is regarded as a standard measure for evaluating temperature uniformity in RTP systems.
Temperature levels and uniformity must therefore be carefully monitored and controlled in RTP systems. Optical pyrometry is typically used due to its noninvasive nature and relatively fast measurement speed which are critical in controlling the rapid heating and cooling in RTP. Increasingly complex systems have been developed for measuring emissivity and for compensating for reflected radiation.
While these systems have enhanced wafer temperature uniformity, their complexity has increased cost and maintenance requirements. In addition, other problems must be addressed in lamp heated RTP systems. For instance, many lamps use linear filaments which provide heat in linear segments and as a result are ineffective or inefficient at providing uniform heat to a round wafer even when multi-zone lamps are used. Furthermore, lamp systems tend to degrade with use which inhibits process repeatability and individual lamps may degrade at different rates which reduces uniformity. In addition, replacing degraded lamps increases cost and maintenance requirements.
In order to overcome the disadvantages of lamp heated RTP systems, a few systems have been proposed which use a resistively heated plate. Such heated plates provide a relatively large thermal mass with a stable temperature.
While heated plate rapid thermal processors provide a stable temperature on the heated plate that may be measured using a thermocouple, problems may be encountered with wafer temperature nonuniformities. Wafers may be heated by placing them near the heated plate rather than on the plate. In such systems, the edges of the wafer may have large heat losses which lead to nonuniformities as in lamp heated RTP systems. Even when a wafer is placed in contact with a heated plate, there may be nonuniformities. The heated plate itself may have large edge losses, because: 1) the corners and edges of the plate may radiate across a wider range of angles into the chamber; 2) vertical chimney effects may cause larger convective heat losses at the edges of the heated plate; and 3) the edges of the heated plate may be close to cold chamber walls. These edge losses on the plate may, in turn, impose temperature nonuniformities upon a wafer placed on the plate. In addition, heat loss and temperature uniformity across the wafer surface varies with temperature and pressure.
As a result of the problems associated with conventional heated plate rapid thermal processors, they have not been adopted in the industry as a viable alternative to lamp heated RTP systems. A 1993 survey of RTP equipment covering twenty two different vendors' products indicates that, at the time of the survey, only one non-lamp system was available. See Roozeboom, "Manufacturing Equipment Issues in Rapid Thermal Processing," Rapid Thermal Processing at 349-423 (Academic Press 1993). The only non-lamp system listed uses a resistively heated bell jar with two temperature zones and is not a heated plate reactor. See U.S. Pat. No. 4,857,689 to Lee. Currently, the RTP market is dominated by lamp based systems and despite the many problems associated with such systems, they have been widely accepted over proposed heated plate approaches. Despite the potential that heated plate approaches offer for a stable and repeatable heat source, it is believed that problems with energy efficiency, uniformity, temperature and heating rate control, and the deployment of fragile, noncontaminating resistive heaters have made such systems unacceptable in the marketplace.
A system which overcomes many of the disadvantages of the prior art is described in U.S. pat. application Ser. No. 08/499,986 filed Jul. 10, 1995, which is hereby incorporated herein by reference in its entirety. The system described in application Ser. No. 08/499,986 provides good temperature uniformity and high throughput using a large thermal mass resistive heater and an insulated processing region at low pressure to control heat transfer.
What is desired is an improved method and apparatus for providing insulation and controlling heat transfer in a rapid thermal processing system. Preferably, such improvements may be used in a system such as that described in application Ser. No. 08/499,986 while providing better insulation, higher thermal uniformity in the processing region, and reduced potential for slip as substrates are placed into the processing region for heating and removed for cooling.
"pile_set_name": "USPTO Backgrounds"
} |
Direct evidence for antioxidant effect of Bcl-2 in PC12 rat pheochromocytoma cells.
Mock-transfected PC12 rat pheochomocytoma cells and PC12 cells transfected with the bcl-2 gene, a gene associated with inhibition of apoptosis, were subjected to oxidative stress by incubation in the presence of the azo-initiator of lipid peroxyl radicals, 2,2'-azobis(2,4-dimethylvaleronitrile) (AMVN). Extraction and chromatographic analysis by two-dimensional TLC of the major phospholipid classes showed no differences in the phospholipid composition between the mock- and bcl-2-transfected cell lines after incubation in the presence of 0.5 mM AMVN for 2 h at 37 degrees C. A method consisting of incorporation of cis-parinaric acid into the constituent membrane phospholipids before exposure to AMVN was developed to improve the sensitivity of detecting lipid peroxidation in PC12 cells. Analysis of the pattern of changes in parinaric acid-labeled phospholipids after exposure to 0.25 and 0.5 mM AMVN by HPLC showed significant oxidation of phosphatidylcholine (PC), phosphatidylethanolamine (PEA), phosphatidylserine (PS), phosphatidylinositol (PI), and sphingomyelin (SPH) during a 2-h incubation. The extent of oxidation of each phospholipid class was dependent on the concentration of AMVN present up to 1 mM. Based on phospholipid fractional composition, the specific rates of PnA peroxidation in phospholipid classes were estimated. In mock-transfected PC12 cells, the order of AMVN-induced oxidation effectiveness was the same for both specific rates and relative rates: PC >> PEA > PS > SPH > PI. While a dramatic decrease in both relative and specific oxidation rates was observed for all phospholipid classes in bcl-2-transfected PC12 cells, the specific oxidation rates were higher for aminophospholipids (PEA and PS) than for other phospholipids. This suggests that antioxidant protection by bcl-2-related product(s) may be phospholipid-specific and that aminophospholipids are relatively less protected than the other phospholipids. The vitamin E analogue, 2,2,5,7,8-pentamethyl-6-hydrochromane, acted as an effective antioxidant in preventing oxidation of parinaric acid-labeled membrane phospholipids during incubation in the presence of AMVN and the extent of protection was approximately the same in both cell lines. Since, unlike the agents used to generate oxidative stress in other studies, temperature-driven generation of peroxyl radicals by AMVN is not dependent on intracellular metabolism, the results presented provide proof for antioxidant protection, rather than abrogation of radical generation afforded by bcl-2 transfection of PC12 cells. | {
"pile_set_name": "PubMed Abstracts"
} |
Toward automatic phenotyping of developing embryos from videos.
We describe a trainable system for analyzing videos of developing C. elegans embryos. The system automatically detects, segments, and locates cells and nuclei in microscopic images. The system was designed as the central component of a fully automated phenotyping system. The system contains three modules 1) a convolutional network trained to classify each pixel into five categories: cell wall, cytoplasm, nucleus membrane, nucleus, outside medium; 2) an energy-based model, which cleans up the output of the convolutional network by learning local consistency constraints that must be satisfied by label images; 3) a set of elastic models of the embryo at various stages of development that are matched to the label images. | {
"pile_set_name": "PubMed Abstracts"
} |
Mexico frees 61 kidnap victims held near U.S. border
MEXICO CITY (Reuters) - Mexican authorities freed 61 kidnapping victims in the northern border city of Reynosa, the government said on Friday, liberating a mix of foreign nationals that included at least nine minors and one American.
The raid, which took place on Thursday in four separate buildings in Reynosa, freed captives from Honduras, El Salvador, Guatemala, Nicaragua, Mexico and the United States, government spokesman Eduardo Sanchez said at a news conference in the capital.
He said the captives, many of whom had been trying to cross the border into the United States, had been held there for at least a week in "inhuman conditions," although he did not go into details. Reynosa is across the border from McAllen, Texas.
The captives included children ages 2, 7 and 11, he added.
Authorities arrested four suspects, who could face sentences of up to 36 years if found guilty, Sanchez said.
Mexican criminal gangs have long been involved in trafficking migrants from Mexico and Central America north into the United States in a side business that complements their drug smuggling efforts.
Although the murder rate in Mexico has fallen somewhat, kidnapping has risen since President Enrique Pena Nieto took office in December vowing to stamp out such crimes.
A recent Pew Research Center report found that as of March 2012, 11.7 million illegal immigrants were living in the United States, according to a preliminary estimate based on U.S. government data.
The number of such immigrants in the country peaked at 12.2 million in 2007 and fell to 11.3 million in 2009, bucking an upward trend that had held for decades, the study found.
Immigration reform is one of U.S. President Barack Obama's main objectives following his re-election last year. The White House hopes to push through a broad bill to reform immigration rules and provide a pathway to citizenship for the undocumented, but the effort has stalled in the House of Representatives after passing with bipartisan support in the Senate. | {
"pile_set_name": "Pile-CC"
} |
Battle of Rastarkalv
The Battle of Rastarkalv () took place in 955 on the southern part of the island of Frei in the present-day Kristiansund Municipality in Møre og Romsdal county, Norway.
This was one of several battles between the forces of King Haakon the Good and those of the sons of Eirik Bloodaxe (Eiriksønnene). After their father's death, Harald Greycloak and his brothers were allied against King Haakon with King Harald Bluetooth of Denmark. Haakon had put up a warning system with cairns that would be lighted to tell of approaching war fleets. Therefore, the king at Nordmøre was first alerted by messengers from Stadlandet. By placing ten standards far apart along a low ridge, Haakon gave the impression that his army was bigger than it actually was. Haakon managed to fool Eirik's sons into believing they were outnumbered. The Danes fled, but when they came to the beach, they discovered that their ships had been pushed out to sea. Haakon gained the victory and the Danish forces were slaughtered by Haakon's army.
Egil Ullserk, who was Haakon's leading man, died in the battle. Gamle Eirikssen, one of the sons of Eirik Bloodaxe, also died in the conflict. Haakon buried Egil Ullserk in a ship together with the people who had died in the battle. In 1955 (on the 1000 year anniversary), King Haakon VII visited the area and commemorated the battle. There is a stone monument located near Frei Church in Nedre Frei. It consists of an Obelisk Memorial for Egil Ullserk and his men who died at the Battle of Rastarkalv.
References
Category:955
Category:History of Møre og Romsdal
Category:Kristiansund | {
"pile_set_name": "Wikipedia (en)"
} |
Computed tomographic evaluation of primary osseous malignant neoplasms.
A total of 128 patients with pathologically confirmed primary osseous malignant lesions was examined by computed tomography (CT). In each case, the CT findings were compared with those from the standard radiographs, tomograms, and isotope bone scans as well as with the clinical findings, in regard to tumor detection, diagnosis, and extent. Even though CT demonstrated all lesions, 96% were seen on radiographs, with only 4% of tumors identified solely by CT. In 7% of cases, CT provided unique diagnostic information not obtainable by other means. In 77% of cases, CT gave a better indication of tumor location, extent, and relationships than did any of the other methods. After treatment, CT was efficacious in the detection or ruling out of recurrences and in patient follow-up after chemotherapy or radiation therapy. | {
"pile_set_name": "PubMed Abstracts"
} |
The present invention relates generally to vehicles and, more particularly, to cargo management apparatus for use within vehicles.
In sport/utility and mini-van vehicles, generally there are two or more rows of seating. Conventionally, behind the last row of seating is a cargo storage area. Unfortunately, in automotive vehicles such as sport/utility vehicles and mini-vans, cargo storage space may be somewhat limited. Accordingly, a need exists to maximize the efficiency and utilization of existing cargo storage space without intruding on passenger space.
In view of the above discussion, a storage apparatus is provided that attaches to a vehicle seat backrest and that has a compartment for storing items therein. According to embodiments of the present invention a storage apparatus includes opposite upper and lower walls, opposite side walls, and a rear wall that collectively define a compartment for receiving items for storage therewithin. A door is pivotally mounted to the storage apparatus and is movable between a closed position for covering an opening and an open position for allowing access to the storage apparatus via the opening. The elongated storage apparatus is secured to the rear portion of a vehicle seat backrest via a pair of elongated support members. Each elongated support member includes a hanger that is configured to removably attach the respective support member to one or more headrest support posts of a respective headrest. The elongated support members include connectors that are configured to removably interconnect with corresponding connectors on the rear wall of the elongated storage apparatus. The elongated storage apparatus can be removably secured to the elongated support members in multiple vertically spaced-apart positions.
According to embodiments of the present invention, two or more storage apparatus may be removably secured to the elongated support members in adjacent, vertically spaced-apart relationship.
Apparatus according to embodiments of the present invention may be lightweight and are designed for quick and easy installation and removal. Moreover, apparatus according to embodiments of the present invention can be interchangeably installed within various different vehicles. Apparatus according to embodiments of the present invention can be inexpensive to manufacture and do not require special brackets and/or attachments, and do not require vehicle modifications. | {
"pile_set_name": "USPTO Backgrounds"
} |
Just when you thought you had Twitter down, a basic understanding of LinkedIn and you finally know how to upload a video to YouTube, I present my take on social trends that are on the horizon next year. From plug-ins to location-based marketing initiatives to the move to higher bandwidth, there's change afoot in the world of social media.
I've compiled these 10 trends from my own observations, the opinions of my fan base online and other experts. Consider this a starting point for what to incorporate in your own social-media strategies in the New Year.
1. Location-Based Marketing: Location services will grow in popularity as people get more comfortable checking in to a business. This will be the result of enhanced safety features -- such as privacy options that block your location from public view -- and more enticing brand offers.
It's time to acquaint yourself with sites and applications such as Foursquare, Facebook Places and Gowalla. These sites will help you better target prospects' likes and interests, pique interest and influence purchase decisions by offering discounts, promotions or giveaways when they “check in” to your business.
2. Video Platforms: YouTube might be one of the largest user-generated video-sharing sites on the Internet today, but other platforms will begin to surface that are more business-focused, easier to market videos on and not as crowded. Sites like Viddler, Vimeo and Dailymotion will gain momentum with a stronger focus on live streaming on sites such as interactive broadcast platform Ustream, or streaming straight from blogs.
3. Text Campaigns: As the mobile arena grows, small businesses will continue to move marketing campaigns to mobile phones.
MessageBuzz and Enowit offer tiered levels of service and free demo accounts that allow you to test a mobile-marketing campaign. Start by incorporating a mobile widget onto your site to help collect mobile numbers through an automated system and create campaigns specific to area codes or regions.
4. For Cause Tweet-a-thons: A Tweet-a-thon is a fundraising campaign on Twitter for which users encourage their followers to tweet about and donate to a particular charitable cause over a specific period of time. These initiatives will gain in popularity as savvy entrepreneurs capitalize on the relationship-building advantages of social media and the good publicity that comes with giving back.
5. The Move to Higher Bandwidth (4G): Say goodbye to 3G. The use of higher bandwidth will not only be a necessity but a demand from busy consumers.
Widely becoming recognized with carriers such as Sprint and soon to be Verizon, 4G speed allows marketers to get the message out faster with quicker download times. Jump on this bandwidth wagon early and incorporate it into your 2011 marketing plan.
6. Wordpress-Based Websites: Open source publishing application Wordpress will become the platform of choice. Why? It makes it easier for websites to implement search-engine optimization at little to no cost with plug-ins, which add specific capabilities to software applications.
To stay competitive, I recommend that you consider moving your website to a Wordpress platform. These sites are user-friendly and do not require knowledge of HTML code.
7. Plug-ins: The use of plug-ins will proliferate, with thousands of new options surfacing monthly.
An increasingly popular plug-in is Scribe which monitors and scores your keywords and updates to your site automatically to increase results with your search-engine optimization.
8. Review Sites: Websites dedicated to customer reviews will dominate the social media landscape. Consumers want to be heard, and more importantly they want answers. Sites such as Groubal.com will achieve this by consolidating common user-submitted complaints and presenting those petitions to businesses, demanding answers for their wrongdoings. Consumers are even creating blogs specifically about teaching people how to complain effectively.
This is another reason to monitor conversations about your products and services online. Moving into 2011, make sure you have a plan for how to respond to positive and negative reviews. Remember to respond immediately. Reviews will start to spread like wildfire with these sites.
9. Monitoring Conversations: Show your customers you're listening and responding. Make it a New Year's resolution to set up online monitors for your company.
You can choose social-media dashboards or keyword-alert services. There are paid dashboards such as Radian6 and ObjectiveMarketer as well as free service sites Hootsuite and Tweetdeck. These dashboards not only allow you to monitor but participate in live conversations.
10. Presentation Platforms: As teleseminars become overused and tired, interactive web seminar platforms will step in to fill that market need.
New presentation platforms such as SlideRocket.com and Prezi.com are incorporating easy-to-build presentation tools with social media, live feeds and video. People online want to see the presenter, not just listen over a bridge line.
Choose your presentation platform for 2011 and make sure that it not only streams well but also allows for immediate interaction and cutting-edge presentation tools.
| {
"pile_set_name": "Pile-CC"
} |
Markets around the globe are taking a beating today, with gold and silver prices bearing the brunt of the selloff as investors position themselves for the eventual curtailing of the Fed’s bond-buying program.
“It is hard to know where to start this morning with a synchronized collapse taking place in emerging market currencies bonds and equities, gold breaking down to a new 2013 low and an ugly session taking place in Europe,” Michael Shaoul, chairman of New York-based Marketfield Asset Management, wrote to clients this morning.
Gold prices fell by more than 5% and traded below $1,300 an ounce for the first time in nearly three years. The SPDR Gold Trust (GLD) is down 4% amid heavy trading volume.
Gold's ugly five-day chart (source: FactSet)
Silver prices plunged more than 8% and fell below $20 an ounce for the first time in 32 months. By comparison, silver had been above $30 earlier this year. The iShares Silver Trust (SLV) is down more than 6% on Thursday.
Silver prices over the past five days
FactSet
Emerging markets are getting hit hard as the Vanguard FTSE Emerging Markets ETF (ticker symbol VWO) is down 3.5% on heavy volume. Bond yields are spiking, with yields on benchmark 10-year Treasury notes jumping as high as 2.469%, according to Tradeweb. And U.S. stocks are getting hit, with the Dow down about 200 points after falling 206 points in yesterday’s session.
As we noted in today’s Morning MoneyBeat, the renewed volatility across markets may just be a knee-jerk reaction. The problem now is trying to decipher when that knee-jerk reaction is going to dissipate.
“We continue to believe that the market is likely to take matters into its own hands well before any genuine clarity over timing emerges from the Federal Reserve,” Shaoul says. | {
"pile_set_name": "Pile-CC"
} |
What's New News Summary (Bernard Middle School, Mehlville School District)
2018-2019 Elective Courses (7 Feb 2018): Students will be enrolled in two elective classes. 2018-2019 Elective Course offerings information and request forms are now available.
Message From The Principal (26 Jan 2018): Bernard Middle School has been selected as a 2018 Model School by the International Center for Leadership in Education.
Middle Schools Get More Computers (14 Nov 2017): On November 2, 2017 the Mehlville Board of Education unanimously approved the recommendation to expand Chromebook availability to support student learning in our four middle schools.
Board Works To Improve Communications With Public: The Mehlville Board of Education will begin hosting a series of Open Dialogue Sessions this fall in an effort to better communicate with members of the Mehlville and Oakville communities. | {
"pile_set_name": "Pile-CC"
} |
Validation of a new whole-body cryotherapy chamber based on forced convection.
Whole-body cryotherapy (WBC) and partial-body cryotherapy (PBC) are two methods of cold exposure (from -110 to -195°C according to the manufacturers). However, temperature measurement in the cold chamber during a PBC exposure revealed temperatures ranging from -25 to -50°C next to the skin of the subjects (using an isolating layer placed between the sensor and the skin). This discrepancy is due to the human body heat transfer. Moreover, on the surface of the body, an air layer called the boundary layer is created during the exposure and limits heat transfer from the body to the cabin air. Incorporating forced convection in a chamber with a participant inside could reduce this boundary layer. The aim of this study was to explore the use of a new WBC technology based on forced convection (frontal unilateral wind) through the measurement of skin temperature. Fifteen individuals performed a 3-min WBC exposure at -40°C with an average wind speed of 2.3 m/s. The subjects wore a headband, a surgical mask, underwear, gloves and slippers. The skin temperature of the participants was measured with a thermal camera just before exposure, just after exposure and at 1, 3, 5, 10, 15 and 20 min after exposure. Mean skin temperature significantly dropped by 11°C just after exposure (p<0.001) and then significantly increased during the 20-min post exposure period (p<0.001). No critically low skin temperature was observed at the end of the cold exposure. This decrease was greater than the mean decreases in all the cryosauna devices with reported exposures between -140°C and -160°C and those in two other WBC devices with reported exposures between -60°C and -110°C. The use of this new technology provides the ability to reach decreases in skin temperature similar to other technologies. The new chamber is suitable and relevant for use as a WBC device. | {
"pile_set_name": "PubMed Abstracts"
} |
I finally caught up with Terminator: Genisys this weekend and it’s fair to say that I viewed it with the same sense of disappointment and rising disinterest that the rest of you probably experienced—and not just because Netflix seemed to be working on dial-up speeds (oh, the irony!) at the time.
I expect I have the same history with the Terminator franchise as almost anyone else: the first two are classics (I personally view T2 as the Perfect Blockbuster Movie); the third one is mostly bearable, but also occasionally painful; the fourth one is simply dull. I only say this to emphasise that I had no major stake in Terminator: Genisys. I hoped to be entertained, while recognising that the franchise was several decades past its prime.
Even before the credits rolled I was developing a perplexing sense that the filmmakers had achieved something remarkable: they had actually come very, very close to reinvigorating the franchise, but had instead ended up giving us a super-sized serving of disappointment because of some basic errors of judgement (DO YOU SEE WHAT I DID THERE??? DO YOU?!) that could have easily been avoided.
Let’s go through some of those mistakes.
And, since this post has ended up being way longer that it truly has any right to be, let’s also have some jump links so that you can pick and choose and/or come back later:
Exposition
Exposition is a killer (cue ironic, extended three-page explanation of what exposition is).
Needless to say, if there’s one thing you don’t want in the sort of sci-fi action thriller that the Terminator films are supposed to be, it’s people standing around in rooms and talking to each other. And if there’s one thing that Terminator: Genisys has, it’s lots of scenes of people standing around in rooms and talking to each other.
Let’s take just one example: the moments leading up to Kyle and Sarah travelling forward in time. We have a scene of Kyle and Sarah talking while they get undressed (yes, this film actually manages to make a scene of two sexy people getting undressed seem boring). We then have a scene of Kyle and Sarah debating whether to go to 1997 or 2017. We also have a scene with Pops! (I’m going to keep adding the exclamation point because it’s ridiculous and makes me laugh) explaining why he has to stay behind so that he can age enough to look like actual Arnold Schwarzenegger. You see, even that sentence took forever…
All of this may only have taken a few minutes, but it felt like twenty. Not only is this boring, but it also gives us plenty of time to think about how nonsensical the plot is (“hey – why don’t we give ourselves less than two days to stop Skynet instead of two whole decades!”).
Let’s look back and compare with the first Terminator film. Sure Kyle has a scene where he has to explain to Sarah where he’s come from, but that’s done in the middle of a frikkin’ high-speed car chase while they’re trying not to get killed by Evil Arnie. Think about how relentless that first film is. Think about how efficiently all the time travel mechanics are explained. Think about how many dozens of times you had to watch it before you caught yourself picking out the plot holes.
Lesson: when exposition really has to happen, try and wrap it inside something much more exciting. Under no circumstances should you let your audience get bored.
Telling, not showing
This is related to exposition, but has more to do with character development. One of the worst examples of this in cinematic history happens in Star Wars Episode II. George Lucas goes to great pains to get Anakin and Padme to tell us all about how much in love they are. Unfortunately he doesn’t show us a single reason why they should feel that way. The result is quite probably the greatest crime against cinema romance ever committed to celluloid (yes, I know it was shot digitally but you can have my clichés when you take them from my cold, dead, writer hands!).
This sort of thing happens multiple times in Terminator: Genisys. For example, the movie has a pretty fresh take on Sarah Connor to play with—the idea of her being trapped by her destiny (admittedly this is largely borrowed from T2)—but wastes it by doing little more than have Sarah complain occasionally instead of showing us moments where she visibly struggles against her role in the future. (Coincidentally I watched Minority Report the night before, which is a bit on the nose concerning predestination and choice, but still does a lot more with it).
As a particularly bad example I’d select one of the film’s earliest scenes: the moment that we first see Kyle and John Connor talking. Now, supposedly these two are seasoned soldiers who play critical roles in saving humanity. We assume they’ve bonded on the battlefield, saved each other’s lives countless times, earned their mutual trust through pain and sacrifice.
So the first time we see them together is the two of them talking in an empty corridor.
Grrreat.
This is meant to show us that they’re great friends, but all the scriptwriters are really doing is telling us: “Hey, look at what great friends these guys are! Are you on board with that? They’re talking about beer, dudes. Is this all we need to do on this one? Yeah? Awesome…”
Sure. Leader of the resistance, saviour of all humanity, on the eve of a major assault against the enemy and he’s got time to chat in a corridor? Nope. Nope nope nope.
(Yes, I know the writers hedge their bets by showing us young Kyle being rescued by somehow-already-much-older John Connor, but all this gives us is literally the first minute of their lifelong(ish) relationship. There’s no meat on them bones yet.)
So let’s have a counter example: the relationship between Sarah and John in T2, almost the entirety of which is beautifully illustrated within a single scene as they’re driving away from the T-1000. Sarah reaches out to John. John thinks his mother is about to hug him, only to realise that she’s just checking him for wounds. To her he’s the Leader Of The Resistance and must be protected. But all he wants is his mother.
Lesson: dialogue isn’t showing; it’s telling. Actions are showing. Use your character’s behaviour, actions, and reactions to give us meaningful insights into who they are.
Fan service
Terminator: Genisys is chock full of fan service: those moments that repeat, rework or otherwise harken back to the first two films (and that only fans are supposed to notice).
There are too many instances to pick out just one, and in isolation there’s nothing especially bad about any of them.
In the right hands, fan service can be a lot of fun—especially when it plays on the prior knowledge of those moments. Let’s look, once again, at T2 for a good example: the moment when Arnie’s terminator invites Sarah to: “Come with me if you want to live.” It works because it’s a new take on an iconic line. It adds both irony and drama. We know Arnie is there to rescue Sarah but she’s going to assume the exact opposite because he spent the entire the last movie relentlessly trying to kill her. It also works whether or not you’ve seen the first film. Fans get to feel a little bit smart. Non-fans lose nothing from not getting the reference.
In the wrong hands, fan service can be laboured and obvious. In the wrong film it does nothing more than draw attention to how much better the other films were than the one you’re watching now.
Terminator: Genisys makes a bold play: it places its entire stock on using two iconic movies as the supposed launchpad for an Exciting New Franchise. Unfortunately it misfires. It opens a door for cynical reviewers to say the franchise has long ago run out of ideas, and for cynical fans to accuse Genisys of trashing the memory of the original movies in order to establish its own continuity. It’s likely that one or two references would have pulled through, but the references continue well into the third act and leave the impression that the film lacks the confidence to assert its own identity on the franchise.
It’s a shame because within the limits of the Terminator universe (exactly how many times can we send killing machines back in time in order to fail at murdering the Connor family?) Genisys did manage to find a new way of using time travel and multiverse theory to shake things up a bit. There was an interesting story to be told from this foundation.
Unfortunately, Terminator: Genisys didn’t tell that story.
Lesson: a little fan service can be a treat. Too much and you’re as likely to turn those same fans against you.
Casting
While we’re on the subject of fan service we can’t ignore casting. All the problems I’ve highlighted up to here could have—and should have—been fixed at the script stage. No one should have spent a penny on this movie until the script was solid. However, bad casting can undermine a film almost as surely as a bad script (I’m hesitant to point at the Star Wars prequels since they were poor in almost every respect, but we can look to Gods Of Egypt for a tangential example of how casting can help to stuff up a film).
Terminator has unusually strong form in recasting roles that have been established by other actors. If we include the TV series, we’ve already had at least two Sarah Connors, three Kyle Reeses (pieces … yup, went there ….) and four John Connors. Recasting really shouldn’t be a problem for us at this stage.
The key thing here is that the recasted characters have typically been presented at different times in their lives, or in different circumstances (including being on TV, instead of in a movie). They are essentially different people than the ones we’re already familiar with, so we have less problem with them being literally different people.
However, in Genisys we’re given a Sarah Connor and Kyle Reese who are intended to be mirrors of the characters we already know from the first film. Sure, Sarah Connor has already lived a very different life, but there’s no way we’re going to be looking at a circa 1984 Sarah Connor in a Terminator film without seeing Linda Hamilton.
Anyway, for my money, no one is especially bad in this film. They’re just not quite right.
Jai Courtney would have probably been excellent as a terminator. I’m not being facetious: look at what it did for Arnie. His best role to date (that I’ve seen) was as the bad guy in Jack Reacher, so we know he can be chilling without having much to work with. Here he’s condemned to leave us pining for Michael Biehn (who I unreservedly love, but is never going to win any acting awards if we’re honest).
Jason Clarke, on the other hand, is an excellent actor. Personally, I just can’t see him as either grizzled warrior John Connor or an unstoppable killing machine. Why didn’t they cast him as Kyle Reese? Use him to give us the tortured, human face to the war instead of the relatively impassive Courtney.
While we’re here, why not have Matt Smith as John Connor instead of wasting him in a tiny role that’s only there to set up the sequel? Given that he’s already pulled off Doctor Who, I reckon we could buy Smith as the saviour of humanity; having him later revealed as a terminator would be a superb way of subverting his well-established genre credentials.
Meanwhile, Emilia Clarke does well enough, but I can’t help feeling she’s fighting against a script and director that seem determined to force her into the ‘embittered, hard-as-nails, secretly vulnerable, female warrior’ trope. Consequently, there’s not enough here to separate her from Daenerys Targaryen and the film might have worked better with someone like Emily Blunt who can bring some of their own character to the role.
Or, maybe just a better script.
Lesson: don’t ask me, I’m not a casting agent.
Marketing fails
There are two final fails that I want to tack on here. Arguably they’re lesser fails that (inarguably) are more to do with the marketing of the film, but they still fall into that already overstuffed Coulda Been Avoided basket.
The first is this whole “Gen-eye-sys” thing.
In the context of the film the name actually makes perfect sense: it’s a wanky, must-have app-platform-whizmo-thingie that everyone simply must have. Of course it’s going to have a dumb-as-nails name.
The problem comes partly from using it as the film’s name. It’s a clumsy pun which only really makes sense after you’ve seen the film. The bigger problem comes from how it was revealed: simply as the film’s title, fait accompli, without any context whatsoever. If you come from an overwhelmingly Christian country and you make like you don’t know how to spell genesis then people are going to think you’re an idiot. And once people start thinking you’re an idiot—especially where the internet is involved—it’s very, very hard to claw your way back from that.
Second problem is frikkin’ spoiling the fact that John Connor ends up as a terminator! In the trailer!! And on the poster!!! It’s the big frikkin’ twist! It’d be like having a poster for The Sixth Sense that says ‘Look at Bruce Willis: he’s playing a dead guy!’.
I don’t have a huge problem with John Connor being a terminator. It’s a little bit of a dick move, for sure, but again it makes perfect sense in the context of the film: Skynet is trying to turn humanity’s greatest asset into a weapon (I’m less sure why they’d do all that only to send him back to 1997 to act as an IT consultant, but whatevs).
It’s like giving Terminator fans the double finger. First we’re going to ruin this mythological hero who underpins the whole franchise. Then we’re going to take away the potential fun of it being a huge twist.
The end
At this point the only question remaining is this: why have I just written almost 2,500 words on Terminator: Genisys (and did you even make it this far)?
I didn’t hate the film. I didn’t love it, or even like it much. It was a collection of interesting ideas that were undermined by some basic mistakes. The truth is that I find these near-misses often more fascinating than the ones you can tell are going to be a disaster right from the outset. They get so close and then let it blow up in their faces. I guess it’s a good dramatic principle if nothing else. Imagine The Great Escape if their entire plan was based around a jelly fight. You’d realise straight away it’s going to be a dismal failure, and most of the dramatic tension would be lost. Instead we have a classic, unforgettable movie where they so very nearly get away with it.
If it wasn’t for that. One. Stupid. Mistake.
So, please use the comments to share your thoughts about Terminator: Ineptitude. Which bits did you like? If any? Which parts made you want to drive a T-1000’s finger into your brainspace? Is there a future for this once-great franchise, or do we need to consign this one to the dustbin of history? | {
"pile_set_name": "Pile-CC"
} |
Wernberg
Wernberg () is a municipality in the district of Villach-Land in the Austrian state of Carinthia.
Geography
Wernberg lies on the Drava River at the foot of the Ossiach Tauern range, east of Villach, and between Lake Ossiach on the north, Wörthersee on the east, and Lake Faak in the southern part of the municipality. It is located at the northwestern rim of the traditional settlement area of Carinthian Slovenes.
The municipal area comprises the cadastral communities of Neudorf (Nova vas), Sand (Pešče), Trabenig (Trabenče), Umberg (Umbar), and Wernberg (Vernberk).
Neighboring municipalities
History
Archaeological findings indicate an early settlement of the area already in Roman times. A castle near the village of Sternberg, today a ruin, was first mentioned in a deed issued at Saint Paul's Abbey about 1170/80. Wernberg itself first appeared in a document dated 17 November 1227 determining the demolition of a Drava bridge and the transfer of Werdenberch Castle to the Carinthian estates of the Bishopric of Bamberg.
The castle, situated on a rock above the Drava River, was built under the rule of Duke Bernhard of Carinthia; it later passed to the Austrian Habsburgs. Held by the Khevenhüller noble family from 1519 onwards, the present-day Renaissance building was erected at the behest of the Carinthian governor George Khevenhüller (1533–1587). Part of the Ossiach Abbey estates from 1672, it has been a possession of the Catholic Missionary Sisters of the Precious Blood since 1935.
The municipality of Wernberg was established in 1850.
Politics
Seats in the municipal council (Gemeinderat) as of 2015 local elections:
Social Democratic Party of Austria (SPÖ): 12
Austrian People's Party (ÖVP): 4
Freedom Party of Austria (FPÖ): 4
The Greens: 2
Wählergemeinschaft Wernberg (independent): 1
References
External links
Municipal site
Category:Cities and towns in Villach-Land District | {
"pile_set_name": "Wikipedia (en)"
} |
1624 in music
The year 1624 in music involved some significant events.
Events
Antonio Bertali is employed as court musician in Vienna by Emperor Ferdinand II.
Classical music
Juan Arañés – Libro Segundo de tonos y villancicos
Girolamo Frescobaldi – Il primo libro di capricci
Claudio Monteverdi – Il Combattimento di Tancredi e Clorinda
Samuel Scheidt – Tabulatura nova, three books of organ music
Johann Ulrich Steigleder – Ricercar Tabulatura
Opera
Births
date unknown
Francesco Provenzale, composer (died 1704)
François Roberday, organist and composer (died 1680)
Deaths
February 4 – Vicente Espinel, writer and guitarist (born 1550)
October – Marcantonio Negri, composer, singer and musical director
November 14 – Costanzo Antegnati – Italian organ builder, organist, and composer
Music
Category:17th century in music
Category:Music by year | {
"pile_set_name": "Wikipedia (en)"
} |
Q:
How to pass dropdown value to state/get request? React
Hey react newbie here!
I am trying to use a select/option value from a drop down to be used in a get request { selectedOption }.
I am unsure how to pass the selectedOption into my main state/component to be used in the get request.
Can anybody point me in the right direction please? <3
Constructor/state:
public constructor(props) {
super(props);
this.state = {
documents: [],
selectedOption: null
};
}
Get request:
public getDocuments() {
axios
.get("https://bpk.sharepoint.com/_api/search/query?querytext='Colour:" + this.state.selectedOption + "'&trimduplicates=true&rowsperpage=100&rowlimit=1000",
{ params:{},
headers: { 'accept': 'application/json;odata=verbose' }
})
....
}
Render:
public render(): React.ReactElement<IKimProps> {
let { documents, selectedOption } = this.state;
return (
<div className={ styles.kim }>
<Selecter></Selecter>
<br/><br/>
{this.renderDocuments()}
</div>
);
}
}
Selector Component (not in the main app; in the main app it's used as a component called <Selecter></Selecter>):
import React from 'react';
import Select from 'react-select';
const options = [
{ value: 'red', label: 'red' },
{ value: 'blue', label: 'blue' },
{ value: 'green', label: 'green' }
];
class Selecter extends React.Component {
state = {
selectedOption: null,
};
handleChange = selectedOption => {
this.setState({ selectedOption });
console.log(`Option selected:`, selectedOption);
};
render() {
const { selectedOption } = this.state;
return (
<Select
value={selectedOption}
onChange={this.handleChange}
options={options}
/>
);
}
}
export default Selecter;
A:
The issue here is that you're setting the state of Selecter, but never bubbling that up to the parent class. The general way you do this is via a prop passed to Selecter that sets the parent state:
Parent.js:
...
// Store the child's selection in the parent's state so getDocuments() can use it
public setSelectedOption(selectedOption){
    this.setState({ selectedOption: selectedOption });
    // or this.setState({ selectedOption }); (whichever works)
}
public render(): React.ReactElement<IKimProps> {
    ...
    // Pass the handler down as a prop; bind it so `this` refers to the parent
    <Selecter onChange={this.setSelectedOption.bind(this)}></Selecter>
}
Then, handle the passed function in Selecter.js:
class Selecter extends React.Component {
...
handleChange = selectedOption => {
this.setState({ selectedOption });
if(this.props.onChange){
this.props.onChange(selectedOption);
}
};
}
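If you also want the GET request to fire as soon as a colour is picked, one option (just a sketch, reusing the getDocuments() method from your question) is to run it in the setState callback, since setState is asynchronous and the request should read the updated state:

public setSelectedOption(selectedOption){
    // The second argument to setState runs after the state has actually been updated
    this.setState({ selectedOption }, () => this.getDocuments());
}

Also note that react-select passes the whole option object ({ value: 'red', label: 'red' }), so inside getDocuments() you probably want this.state.selectedOption.value in the query string rather than the object itself.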
| {
"pile_set_name": "StackExchange"
} |
Introduction {#sec1}
============
The postcoital test (PCT) was first described by Marion Sims in 1866 \[[@ref1], [@ref2]\]. He examined a patient a few minutes after she had intercourse with her husband and observed active sperm in her cervical mucus. He later used the PCT as a means of evaluating the cause of infertility in eight women \[[@ref3]\]. The procedure was further developed by Hühner in 1913 \[[@ref4]\], and the Sims-Hühner test to determine whether "cervical factors" played a role in infertility became standard practice in the evaluation of infertile couples for many decades.
The World Health Organization, in its publication *WHO Laboratory Manual for the Examination and Processing of Human Sperm*,[^1^](#fn1){ref-type="fn"} states that the aims of a PCT are to "determine the number of active spermatozoa in the cervical mucus and to evaluate sperm survival and sperm behaviour some hours after coitus." The manual describes important aspects of the procedure as shown in [Table 1](#TB1){ref-type="table"}. It can be seen that small changes were made between the fourth and fifth editions \[[@ref5], [@ref6]\] and the manual has been interpreted in different ways, depending on whether it is being applied to infertility evaluation or contraceptive research.
######
PCT procedures, *WHO Laboratory Manual for the Examination and Processing of Human Sperm*
[Table 1 is reproduced in the source as two images (ioaa099fx1a, ioaa099fx1b); the table content is not available as text here.]
Use of the PCT in the evaluation of infertile couples {#sec2}
=====================================================
As evidenced by changes in the WHO manual, standardization of the procedure was always a work in progress and many practitioners carried it out as they best saw fit. It was reported in 1995 that the PCT was used in 92% of obstetrics/gynecology departments with large fertility clinics in 16 European countries, but despite the existence of the WHO manual, there were "large differences in timing in relation to cycle and coitus, methodology used for the test, cut-off level of normality and treatments applied for abnormal test results." \[[@ref7]\].
Variations in PCT for infertility {#sec3}
---------------------------------
A review of research studies involving PCTs done in infertile women reveals the following variations:
- Length of time between coitus and collection of mucus for evaluation:
This has been variously reported as being set at, for example, 2.5, 2--8, 6--8, 8--16, and 15--20 h \[[@ref8]\].
- Method to determine when in the menstrual cycle to conduct the test:
Dates of previous menses and basal body temperature charting were the methods used by most researchers \[[@ref9]\]. Some later studies required a plasma progesterone level of \>3 ng/ml during the luteal phase to give retrospective evidence of ovulation \[[@ref8], [@ref12]\]. Urinary luteinizing hormone (LH) dipsticks and ultrasound were rarely used \[[@ref15], [@ref17], [@ref18], [@ref20]\].
- Scoring of cervical mucus to assess time in cycle:
Some of the five criteria cited in the WHO manual were used in most cases \[[@ref8], [@ref9], [@ref11], [@ref18], [@ref21]\], but all five were used only in later studies \[[@ref16], [@ref20]\]. Not all studies required a cervical mucus score of at least 10 to be considered valid, although some required repeat of negative tests \[[@ref13]\].
- Length of male abstinence
While some studies recommend that the couple abstain from sex for 2--3 days before the PCT sex act \[[@ref10], [@ref14], [@ref15], [@ref20]\], proscription against the male partner masturbating is rare.
- Method of counting sperm
Studies vary in the method of slide preparation (at least one investigator added saline to the vaginal preparation \[[@ref22]\]), magnification used (most used, 400× \[[@ref9], [@ref10], [@ref13], [@ref14], [@ref18]\]), number of fields examined (e.g., at least 3 \[[@ref13]\], at least 5 \[[@ref14]\], 10 \[[@ref9]\]), and categorization of motility (e.g., 0--3 \[[@ref10], [@ref15], [@ref17]\], not progressively motile vs. progressively motile \[[@ref13], [@ref14], [@ref16], [@ref19]\]).
The interpretation of the PCT---i.e., what may be considered a "normal" result associated with a higher risk of pregnancy---has also varied widely. The WHO manual fifth edition states: "The presence of any spermatozoa with progressive motility in endocervical mucus 9--14 hours after intercourse argues against significant cervical factors, and sperm autoimmunity in the male or female, as possible causes of infertility." However, a 1973 WHO publication stated "10 or more sperm/HPF \[high power field\] with directional motility may be considered satisfactory. Fewer than 5/HPF, especially when associated with sluggish or circular motion is an indication of oligo-asthenospermia or abnormal cervical mucus." \[[@ref23]\].
PCT prediction of pregnancy in infertility {#sec4}
------------------------------------------
A number of studies have been conducted in an effort to determine how well the PCT predicts subsequent pregnancy, with varying results. Using the WHO definition of any spermatozoa with progressive motility, some researchers found that a single sperm seen on the PCT was associated with an increased chance of pregnancy. Hull studied 80 women with at least 12 months infertility and found a five-fold higher pregnancy rate when at least one sperm with forward progression was seen in at least three HPFs in the cervical mucus 6--18 h after coitus \[[@ref13]\]. Glazier studied 318 infertile couples and found that the ratio of pregnancy within the subsequent 18 months in those with at least one forward-moving sperm in each of five HPFs examined vs. those with no forward-moving sperm was 3.73 \[[@ref14]\]. In a 2000 reanalysis of the same data, Glazener reported "the relative chance of conception in couples with a negative PCT was about a quarter of that when the PCT was positive." \[[@ref24]\]. Similarly, Snick studied 726 infertile women and defined an abnormal result as the presence of at most one forward-moving sperm in the entire mucus sample \[[@ref12]\]. Having this type of abnormal PCT was associated with a relative risk of live birth of 0.26. Dunphy found that among 94 infertile couples, those with at least one sperm per HPF showing at least sluggish motility had nearly five times the chance of conceiving compared with those with sperm with only in situ motility or no motility \[[@ref15]\]. Eimers found that among 996 infertile patients, those with more than one progressively motile sperm (PMS) in the entire mucus sample had a 330% chance of conception relative to women with no sperm \[[@ref16]\]. Similarly, Hessel found that the presence of one or more progressive forward-moving spermatozoa per HPF among 1624 newly referred infertile women was associated with spontaneous (meaning achieved without medical intervention) and overall ongoing pregnancy rates after 3 years of 37.7 and 77.5% compared with 26.9 and 68.8% after a negative test (*P* \< 0.001) \[[@ref25]\].
While it seems clear (and somewhat predictable) that having some sperm rather than no sperm in the cervical mucus is associated with a higher chance of subsequent pregnancy, it is more difficult to assess the likelihood of pregnancy associated with different quantities of sperm among women who have more than one sperm/HPF. Collins found that among 355 infertile couples, the pregnancy rate at 24 months was significantly higher in couples with at least five motile sperm/HPF vs. those with fewer (46.9 vs. 31.6% *P* = 0.05) \[[@ref21]\]. Jette found a statistically significant increase in pregnancy rates among 205 infertile patients in those who had \>20 motile sperm/HPF \[[@ref11]\]. And Moghissi found an average of 16.8 sperm in the endocervix of 58 infertile women who became pregnant compared with 7.1 in 143 women who did not \[[@ref26]\]. In a review article, Blasco stated that between the two extremes of \>10 sperm/HPF with 50% having purposeful motility and \<5 with \>50% that do not move, the prognostic value of the PCT is limited \[[@ref27]\].
An important factor to consider is the population from which participants in these studies were drawn. For obvious reasons, since the PCT was being used to evaluate infertility, these were generally populations of women being seen for infertility. However, infertility has many causes besides those involving sperm--mucus interaction. So even if certain causes were ruled out before the PCT, such as anovulation or tubal occlusion, there may have been other unidentified causes preventing pregnancy. The failure to conceive despite the presence of sperm on a PCT does not necessarily invalidate the test---it may indicate that a problem other than one involving sperm--mucus interaction is likely the chief cause.
PCT prediction of pregnancy in fertile couples {#sec5}
----------------------------------------------
A better test of the PCT as a predictor of fertility would be a study done in women of proven recent fertility with the same partner, in whom good mucus is seen, and who engage in no other coital acts in that cycle. However, even in this population, important factors must be considered, length of time of follow-up being probably the most important. In any population of women attempting pregnancy, the pregnancy rate falls over time since the most fertile women achieve pregnancy first. This phenomenon affects the Pearl pregnancy rate, which is being replaced by life-table analysis; the latter provides the pregnancy rate for each month of follow-up and can give a cumulative rate for any length of follow-up.
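For orientation (these are standard definitions rather than figures drawn from the studies discussed here), the Pearl index expresses failures per 100 woman-years of exposure, whereas the life-table approach accumulates the per-month conception probabilities among women still at risk:

$$\text{Pearl index} = \frac{\text{number of pregnancies}}{\text{total woman-months of exposure}} \times 1200, \qquad P_k = 1 - \prod_{i=1}^{k}\left(1 - p_i\right),$$

where $p_i$ is the proportion of women still under observation in month $i$ who conceive during that month and $P_k$ is the cumulative pregnancy probability after $k$ months. Because the most fertile women conceive and leave the cohort first, $p_i$ tends to decline over time, which is why a single Pearl rate depends on how long the cohort was followed, while the life-table estimate can be reported for any chosen duration.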
To date, no perfect study has been carried out. Giner allowed only one coital act per cycle, and that act was studied in a PCT \[[@ref9]\]. He did not find a correlation between pregnancy and sperm number or motility in the PCT, but the study was done in a population of women who had experienced recurrent spontaneous abortions, likely due to reasons other than problems with sperm/cervical mucus interactions.
Beltsos studied 200 couples who had discontinued contraception to become pregnant up to 3 months earlier \[[@ref17]\]. They had no history of infertility and no known risk factors for infertility or recurrent pregnancy loss. They underwent monthly PCTs based on their menses dates with daily urines collected for retrospective urinary LH testing in the lab (presumably because home LH test kits did not yet exist). Pregnancy occurred in 163 couples within 12 months. The PCT values for each woman were averaged and there was a small, but significant, difference in the number of sperm with purposeful forward motility per HPF among the women who became pregnant vs. those who did not (2.5 vs. 1.4, *P* = 0.03). However, 42% of cycles were found to have been mistimed and results were not recalculated using only correctly timed cycles.
Decline of PCT for infertility evaluation {#sec6}
-----------------------------------------
In 1990, Griffith and Grimes published a review of the PCT and concluded that the PCT "lacks validity as a test for infertility." \[[@ref28]\]. And in 1998, Oei published a paper showing that use of the PCT among infertile women did not affect pregnancy rate \[[@ref18]\]. However, both papers were heavily criticized because, among other things, the couples had varying lengths of follow-up, there may have been other causes for infertility, and the PCT was not used to determine treatment \[[@ref19], [@ref24], [@ref29]\].
In a comprehensive 2002 review of methods used to predict conception, Glazener concluded "in a population of infertile couples with otherwise normal results after complete investigations, the chance of conception could be predicted by their duration of infertility at first presentation and the result of the PCT, but not by semen parameters or the woman's age." \[[@ref29]\]. Nevertheless, the PCT was gradually replaced in the infertility work-up by more modern tests and procedures, and the wide use of in vitro fertilization, which bypasses the cervical mucus--sperm interaction.
Use of the PCT in the evaluation of new vaginal contraceptives {#sec7}
==============================================================
Historical Use of PCT for development of vaginal contraceptive agents {#sec8}
---------------------------------------------------------------------
However, the PCT continues to be used in the evaluation of vaginal contraceptives, both chemical products (e.g., spermicides) and mechanical barriers (e.g., diaphragms), as recently as 2017 \[[@ref30]\]. The first published report of the PCT used in the evaluation of a contraceptive was a 1953 study of an experimental contraceptive jelly ("Jelly P") \[[@ref22]\]. In it, 289 PCTs were done in 158 mostly postnatal women 2--72 h post coitus. Spinnbarkeit and time since last menses were used to estimate time in cycle. Motile sperm were found in six (2.1%) of the PCTs. In three cases, product use instructions had not been followed correctly. There were seven pregnancies among the 83 women who were followed for 3 months, yielding a pregnancy rate of 17.5 per 100 woman-years. Correlation between PCT and pregnancy was not attempted, but the authors concluded that they were "satisfied that the PCT is accurate and should be utilized more widely in the evaluation of spermicidal preparations used as contraceptives."
The PCT has subsequently been used to evaluate other vaginal chemical barriers, i.e., Advantage 24 gel \[[@ref31]\], benzalkonium chloride (BZK) films \[[@ref32]\], nonoxynol-9 (N-9) films \[[@ref33]\], C31G gel \[[@ref34]\], and ACIDFORM (later Amphora and Phexxi) gel \[[@ref35]\], as well as several mechanical vaginal barriers, i.e., Lea's Shield \[[@ref36]\], FemCap \[[@ref37]\], Ovaprene vaginal ring \[[@ref38]\], and the SILCS (later Caya) diaphragm \[[@ref30], [@ref39]\]. A review of the literature since 1953 identified 10 PCT studies of vaginal contraceptives involving these 9 test products and 3 control products (Ortho diaphragm \[[@ref36], [@ref37]\], Vaginal Contraceptive Film (VCF) N-9 film \[[@ref32], [@ref33]\], and Conceptrol N-9 gel \[[@ref31]\]). They are summarized in the first three columns of [Table 2](#TB2){ref-type="table"}.
######
Vaginal chemical and mechanical barrier studies using PCTs carried out in tubally sterilized women
Test product and year of PCT study publication Type of cycle PCT studies: mean number of PMS/HPF. SD and range shown, if available Contraceptive effectiveness study: 6-month typical use pregnancy rate, if available
------------------------------------------------------- ------------------------------------------------------------------------- --------------------------------------------------------------------------------------- ------------------------------------------------------------------------------------------------------------------------------
Lea's Shield, 1995 Baseline \[[@ref35]\] \>5[^a^](#tblfn1){ref-type="table-fn"} (*n* = 10)
Lea's Shield + N-9 \[[@ref35]\] 0 (*n* = 10) 8.7% \[[@ref40]\] (*n* = 146)
Ortho diaphragm + N-9 \[[@ref35]\] 0 (*n* = 10)
Advantage 24 gel (N-9, 52.5 mg), 1996 Advantage 24, applied 15--30 min before coitus \[[@ref22]\] 0.5 (*n* = 120) 2% of PCTs had ≥10 PMS/HPF
Advantage 24, applied 12 h before coitus \[[@ref22]\] 2.5 (*n* = 111) 9% of PCTs had ≥10 PMS/HPF
Advantage 24, applied 24 h before coitus \[[@ref22]\] 4.4 (*n* = 139) 14% of PCTs had ≥10 PMS/HPF
Conceptrol (N-9, 100 mg), applied 15--30 min before coitus \[[@ref22]\] 0.1 (*n* = 127) All values \< 10
FemCap, 1997 Baseline \[[@ref36]\] Baseline cycle \#1: 18.0 (*n* = 7) SD 20.5 Baseline cycle \#2: 17.8 (*n* = 7) SD 17.8
FemCap + N-9 \[[@ref36]\] 0.2 (*n* = 7) SD 0.4 13.5% \[[@ref41]\] (*n* = 327)
Ortho diaphragm + N-9 \[[@ref36]\] 0 (*n* = 7) 7.9% \[[@ref41]\] (*n* = 372)
BZK film, 1997 Baseline \[[@ref31]\] Baseline \#1: 22.2 (*n* = 10) SD 20.2 Baseline \#2: 21.2 (*n* = 10) SD 20.2
BZK film, 19 mg \[[@ref31]\] 0.2 (*n* = 10) SD 0.6
BZK film, 25 mg \[[@ref31]\] 0 (*n* = 10)
VCF film (N-9, 70 mg) \[[@ref31]\] 0 (*n* = 10)
N-9 film, 1997 Baseline \[[@ref32]\] Baseline \#1: 23.7 (*n* = 10) SD 26.7 Baseline \#2: 15.0 (*n* = 10) SD 14.6
N-9 film, 100 mg \[[@ref32]\] 0.6 (*n* = 10) SD 0.9
N-9 film, 130 mg \[[@ref32]\] 0.9 (*n* = 10) SD 2.3
VCF film (N-9, 70 mg) \[[@ref32]\] 0.5 (*n* = 10) SD 0.8
ACIDFORM gel (later Amphora and Phexxi), 2004 Baseline \[[@ref34]\] 17.94 (*n* = 20) SD 19.91
ACIDFORM applied 0--30 min before coitus \[[@ref34]\] 0.19 (*n* = 20) SD 0.52 13.7% \[[@ref42]\] (*n* = 1183) (7-cycle cumulative typical-use pregnancy rate). Amphora was applied 0--60 min before coitus
ACIDFORM applied 8--10 h before coitus \[[@ref34]\] 0.75 (*n* = 20) SD 1.37
C31G gel, 2004 Baseline \[[@ref33]\] 14.6 (*n* = 22) SD 9.0 Range: 5.0--36.3
C31G 0.5% \[[@ref33]\] 0.3 (*n* = 13) SD 0.6 Range: 0--2.0
C31G 1.0% \[[@ref33]\] 0.5 (*n* = 18) SD 2.0 Range: 0--8.3 12.0% \[[@ref43]\] (*n* = 932)
C31G 1.7% \[[@ref33]\] 0.4 (*n* = 15) SD 1.6 Range: 0--6.1
Ovaprene vaginal ring, 2009[^2^](#fn2){ref-type="fn"} Ovaprene vaginal ring \[[@ref37]\] (no baseline cycle conducted) 0 (*n* = 20)
Caya (SILCS) diaphragm + N-9, 2008 Baseline \[[@ref38]\] 12.5 Range: 5.9--35.6 (*n* = 14) SD 8.8 12.5% \[[@ref44]\] (*n* = 128)
Caya + N-9 \[[@ref38]\] 0 (*n* = 8)
Caya (SILCS) diaphragm + N-9, 2017 Baseline \[[@ref39]\] 22.5 (*n* = 9) SD 33.4
Caya + N-9 \[[@ref39]\] 0 (*n* = 9)
^a^In baseline cycles, all participants were required to have at least five PMS/HPF to continue in the study. No further details about the average number of PMS/HPF in the baseline cycles were provided in the publication of this study.
BZK, benzalkonium chloride; HPF, high power field; PCT, postcoital test; PMS, progressively motile sperm; SD, standard deviation.
Similarities in PCTs for vaginal contraception testing {#sec9}
------------------------------------------------------
Unlike PCTs done to evaluate infertility, these trials were done with similar methodology in most respects. Similarities in these studies are as follows:
- Inclusion criteria:
Female participants had to have regular menstrual cycles, be protected from pregnancy by female tubal sterilization, and have no history of infertility involving themselves or their partner.
- Pre-PCT activities:
With the exception of the Ovaprene vaginal ring and one or two others, participants were advised to use condoms from the first day of the menses in the cycle in which the PCT would be performed. No intercourse or male ejaculation was allowed starting on about Day 10 of the cycle.
- Ovulation predictor kits:
Home urinary test kits that assessed LH, and in some cases estradiol, were used in most studies to indicate the best time to find midcycle cervical mucus.
- Adequate mucus:
The cervical mucus score, using the five WHO criteria, had to be at least 10 for the evaluation of sperm with this technique to be considered valid ([Table 1](#TB1){ref-type="table"}). The absence of sperm in the mucus prior to the coital act being tested had to be documented in order to ensure that sperm seen after the coital act came only from that act. If no sperm were seen in the cervical mucus after coitus with the product, sperm had to be seen in the vagina to provide evidence that sex had actually taken place.
- Interval between coitus and mucus assessment:
An interval of 2--3 h between the coital act and evaluation for sperm was usually used, based on Moghissi's assertion that "after ejaculation, sperm reach the level of the internal os rapidly. Their numbers increase gradually and reach a peak approximately 2 to 3 hours later." \[[@ref10]\]. This interval is the most likely to detect any sperm that has made it through or around the chemical or mechanical barrier being tested. In addition, according to the WHO fifth edition manual, "Spermatozoa are usually killed in the vagina within 2 hours," thus this interval should be long enough to minimize the chance that motile vaginal sperm could contaminate the cervical sampling.
- Method of assessment of motile sperm:
Sperm were counted in nine HPFs in a set pattern in an area representative of the distribution of sperm on the slide. In later studies, a gridded slide was used to facilitate sperm counting, and jelly containing microbeads was placed between the corners of the coverslip and the slide to standardize the height of the mucus sample.
Variations in PCTs for vaginal contraceptive testing {#sec10}
----------------------------------------------------
Likely due to the wide range of motile sperm associated with higher rates of pregnancy in infertility studies (from \>1 to \>20), these PCT studies varied in two important ways, both involving the average number of PMS/HPF.
First, in order to be eligible for the study, women had to have an average of ≥1, ≥5, or ≥10 PMS/HPF in the baseline cycle, depending on the study. Only the Advantage 24 study \[[@ref31]\] required ≥10 PMS/HPF. The Lea's Shield study \[[@ref36]\] required ≥5 PMS/HPF, the figure supported by Collins \[[@ref21]\]. Subsequent studies (FemCap, BZK film, N-9 film, C31G, and Acidform) required only ≥1 PMS/HPF until the two Caya studies returned to Collins' standard of ≥5 PMS/HPF (these are the two most recent studies performed as part of a registration package with the FDA).
Second, the primary endpoint, or definition of a satisfactory result in test cycles, meaning a decrease in PMS/HPF after product use, was variously set at \<1, \<5, or \<10 PMS/HPF. The cut-off was \<5 PMS/HPF in the Lea study and then \<10 in the Advantage-24, FemCap, BZK film, and N-9 film studies, before being lowered to the cut-off of less than one PMS/HPF in the later Acidform study and less than five PMS/HPF in the C31G and two Caya studies.
A standardized method for PCT testing, including set baseline and post-product parameters, along with clinical trials determining actual pregnancy rates with product use will continue to improve understanding of the correlation between PCT outcomes and product contraceptive effectiveness.
PCT prediction of pregnancy prevention {#sec11}
--------------------------------------
Not all products went to contraceptive effectiveness trials, but some did. The first three columns in [Table 2](#TB2){ref-type="table"} show results of PCTs of vaginal contraceptives, including both the results of baseline cycles done without a product, and cycles in which a product was used. The pregnancy rates seen in contraceptive effectiveness trials, if carried out, are also shown in the fourth column.
It may be seen that the average number of PMS/HPF in baseline PCT cycles falls within the range of 12.5--23.7. With the exception of the low-dose N-9 gel Advantage-24 when it was applied 12--24 h before coitus, average values with a product in place are uniformly below 5 PMS/HPF---all are actually below 1.0 PMS/HPF and some are 0, although outliers with values of over 8 PMS/HPF exist. Standard deviations are somewhat wide due to the small number of subjects, but Glatstein found that among observers of identical slides, there was fair reproducibility (kappa statistic 0.40--0.75) for sperm number and motility \[[@ref20]\]. Six-month typical use effectiveness rates vary, but all correspond to at least 86% effectiveness. It appears that a product that performs well in a PCT study goes on to demonstrate contraceptive effectiveness at a level *predictive* of a highly effective product, although not predictive of the exact effectiveness rate. For instance, Lea's Shield and the Ortho and Caya diaphragms, all had PCTs with an average of 0 PMS/HPF and typical use failure rates were 8.7, 7.9, and 12.5%, respectively. The ultimate contraceptive effectiveness is influenced by the ease and convenience of use of the product, along with patient compliance. Lea's Shield, FemCap, the Caya diaphragm, and Phexxi received FDA approval based on their contraceptive effectiveness studies.
Feasibility of PCT in clinical trials {#sec12}
-------------------------------------
Studies of products other than vaginal contraceptives often use correlates of protection as endpoints in Phase II studies. In evaluating vaginal contraceptives, the PCT is the closest thing we have to a correlate of protection---that is, something that gives an indication of whether the product works before it is tested in subjects at risk for the condition the product is supposed to prevent. However, PCT studies are extremely challenging in terms of scheduling---the woman and her partner must be able and willing to engage in intercourse on short notice and at a time that may not be at all conducive to it. In addition, the woman and the site staff must be available in the evening and on weekends for collecting the test samples, also on short notice. For these reasons, conducting a PCT study of the size expected for a typical larger Phase II study has not been deemed feasible in the development of any vaginal contraceptive to date.
Considerations for addressing current challenges {#sec13}
------------------------------------------------
Because of these challenges, methods to facilitate PCT studies or use of surrogate markers are being considered. As previously mentioned, standardization of the PCT for baseline and post-use parameters will improve ability to predict pregnancy rates.
Other methods to facilitate the PCT could be beneficial. With respect to determining midcycle, levels of mucins in the cervical mucus, particularly Mucin 5b, and O-glycosylation of mucins have been shown to change during midcycle \[[@ref45], [@ref46]\]. The performance of mucins compared with ovulation predictor kits that assess both urinary LH and estradiol has not been studied, and due to the large increase in LH prior to ovulation, it is not likely that other methods would prove to be more accurate. However, for lab testing of cervical secretions, mucins could replace the WHO criteria for cervical mucus scoring if adequately studied.
Surrogate markers of the barrier properties of cervical mucus (e.g., pore size and microrheology) have also been studied. This would be helpful in the context of progestin-only contraceptive methods that thicken cervical mucus but do not consistently prevent ovulation. For products that make it more difficult for sperm to penetrate mucus, these markers, including particle tracking, could be useful preclinically to predict effectiveness of the method \[[@ref47], [@ref48]\]. In vitro testing could precede clinical studies by use of collected cervical secretions and sperm with particle tracking analysis to aid in product development. However, ultimately clinical trials will need to be performed, and the PCT would still be the best predictor.
Conclusion {#sec14}
==========
Sangi-Haghpeykar wrote of the PCT that "this test is currently the best method available for estimating the performance of a spermicide in humans other than a full-fledged efficacy trial." \[[@ref31]\]. It can be concluded that a PCT study of a test product, carried out in the same manner as recent PCTs before it, can be predictive of contraceptive efficacy. PCT results similar to results seen with products that later showed satisfactory performance in efficacy trials is currently the best indicator we have of likely success of the test product.
The first edition of the WHO manual was published in 1980, with second, third, fourth, and fifth editions published in 1987, 1992, 1999, and 2010, respectively. With publication of the fifth edition, the title changed from *WHO Laboratory Manual for the Examination of Human Semen and Sperm-Cervical Mucus Interaction* to *WHO Laboratory Manual for the Examination and Processing of Human Semen*. The fourth and fifth editions are available online \[5, 6\].
This table does not include the unpublished results of a new, recently completed PCT study on Ovaprene. According to a press release dated 12 November 2019 from its new developer, Daré Bioscience, Inc., "The study enrolled 38 participants who completed a 'baseline PCT cycle' in which at least five PMS/HPF were observed in the woman's cervical mucus after intercourse with no contraceptive device in place... Twenty-three participants completed a total of approximately 21 visits each... The PCT clinical study met its primary endpoint---Ovaprene prevented the requisite number of sperm from reaching the cervix across all women and all cycles evaluated. Specifically, in 100% of women and cycles, an average of less than five (\<5) PMS per HPF were present in the midcycle cervical mucus collected two to three hours after intercourse with Ovaprene in place." <https://darebioscience.gcs-web.com/news-releases/news-release-details/dare-bioscience-announces-positive-findings-postcoital-test>, accessed 26 January 2020.
^†^ **Grant Support:** Financial support has been provided in part by NIH SBIR grant number 4R44HD095724-02 and by Daré Bioscience, Inc.
Conflict of interest {#sec15}
====================
Christine Mauck is employed by Daré Bioscience, Inc., the developer of Ovaprene.
| {
"pile_set_name": "PubMed Central"
} |
United States Court of Appeals
for the Federal Circuit
______________________
MARION ALDRIDGE,
Claimant-Appellant
v.
ROBERT A. MCDONALD, SECRETARY OF
VETERANS AFFAIRS,
Respondent-Appellee
______________________
2015-7115
______________________
Appeal from the United States Court of Appeals for
Veterans Claims in No. 14-3656, Judge Alan G. Lance,
Sr., Judge Robert N. Davis, Judge William Greenberg.
______________________
Decided: September 9, 2016
______________________
NATALIE A. BENNETT, McDermott, Will & Emery LLP,
Washington, DC, argued for claimant-appellant. Also
represented by LEIGH J. MARTINSON, Boston, MA.
IGOR HELMAN, Commercial Litigation Branch, Civil
Division, United States Department of Justice, Washing-
ton, DC, argued for respondent-appellee. Also represented
by BENJAMIN C. MIZER, ROBERT E. KIRSCHMAN, JR.,
MARTIN F. HOCKEY, JR; BRIAN D. GRIFFIN, JONATHAN
KRISCH, Office of General Counsel, United States De-
partment of Veterans Affairs, Washington, DC.
______________________
Before NEWMAN, SCHALL, and TARANTO, Circuit Judges.
Opinion for the court filed by Circuit Judge SCHALL.
Dissenting opinion filed by Circuit Judge NEWMAN.
SCHALL, Circuit Judge.
Marion Aldridge appeals the final decision of the
United States Court of Appeals for Veterans Claims
(“Veterans Court”) that dismissed as untimely his appeal
from a final decision of the Board of Veterans Appeals
(“Board”). Aldridge v. McDonald, 27 Vet. App. 392, 394
(Vet. App. 2015). We affirm.
BACKGROUND
Mr. Aldridge served on active duty in the United
States Marine Corps from January of 1984 to May of
1992. On December 24, 2013, the Board denied his claim
for a disability rating higher than 10% for his right-knee
patellofemoral syndrome and his claim for a disability
rating higher than 10% for his left-knee patellofemoral
syndrome. J.A. 59–60. The Board informed Mr. Aldridge
that, if he wished to challenge its decision, he had 120
days to file a notice of appeal with the Veterans Court.
J.A. 69; see also 38 U.S.C. § 7266(a) (providing that a
person adversely affected by a final decision of the Board
“shall file a notice of appeal with the Court within 120
days after the date on which notice of the decision is
mailed”). Any appeal by Mr. Aldridge thus was required
to be filed by April 23, 2014.
The Veterans Court received a notice of appeal from
Mr. Aldridge on October 27, 2014, more than six months
after it was due. J.A. 75. After the Secretary filed a
motion to dismiss the appeal, the Veterans Court ordered
Mr. Aldridge to explain why his appeal should not be
dismissed as untimely. Responding to the Veterans
Court’s order, Mr. Aldridge acknowledged that his appeal
was late under § 7266(a). He stated, however, that deaths
in his family and his resulting depressive state had pre-
vented him from timely filing his notice of appeal. Specif-
ically, Mr. Aldridge recounted in an affidavit that his
mother died on September 27, 2013; that his daughter
gave birth to a stillborn child on December 16, 2013; and
that his sister passed away on January 14, 2014. J.A. 34–
35. Mr. Aldridge averred that he was “severely depressed
for at least nine months” following the death of his moth-
er and that, because of his depressive state and his focus
on his family, he did not appreciate that he was required
to file a notice of appeal by April 23, 2014. J.A. 37.
Stating that it was “around the summer of 2014” that he
recovered from his depressive state and was able to
consider the need to file his appeal, J.A. 37, he asked the
Veterans Court to apply the doctrine of equitable tolling
and thereby deem his October 27 notice of appeal timely,
see J.A. 31.
The Veterans Court began its consideration of Mr. Al-
dridge’s request by noting that the Supreme Court has
determined that equitable tolling is appropriate when an
appellant demonstrates “‘(1) that he has been pursuing
his rights diligently, and (2) that some extraordinary
circumstance stood in his way’ and prevented timely
filing.” Aldridge, 27 Vet. App. at 393 (quoting Holland v.
Florida, 560 U.S. 631, 649 (2010) (quoting Pace v. DiGug-
lielmo, 544 U.S. 408, 418 (2005))). Focusing on the second
prong of the Holland test, the Veterans Court determined
that Mr. Aldridge had failed to demonstrate that the
deaths of his mother and sister and the stillborn birth of
his grandchild “themselves directly or indirectly affected
the timely filing of his appeal.” Aldridge, 27 Vet. App. at
393. The court arrived at this determination after noting
that Mr. Aldridge stated that, during the period of his
depression, he closed the estates of his deceased mother
and sister, became his elderly father’s primary caregiver,
maintained his job as a desk clerk at a Veterans Affairs
hospital, and attempted to hire a law firm to represent
him in his appeal. Id. “Given these facts,” the court
stated, it was “unconvinced that Mr. Aldridge’s depression
rendered him incapable of handling his affairs or other-
wise directly or indirectly prevented his appeal from being
timely filed.” Id. Having concluded that Mr. Aldridge
had failed to demonstrate “facts sufficient to justify equi-
table tolling,” the court dismissed his appeal. Id. at 393,
394. One judge dissented on the ground that, in his view,
the facts presented by Mr. Aldridge justified equitable
tolling. Id. at 396 (Greenberg, J., dissenting). Mr. Al-
dridge has timely appealed from the dismissal of his
appeal.
DISCUSSION
Our ability to review a decision of the Veterans Court
is limited. Pursuant to 38 U.S.C. § 7292(a), we may
review “the validity of a decision of the Court on a rule of
law or of any statute or regulation . . . or any interpreta-
tion thereof (other than a determination as to a factual
matter) that was relied on by the Court in making the
decision.” We have exclusive jurisdiction “to review and
decide any challenge to the validity of any statute or
regulation or any interpretation thereof brought under
[38 U.S.C. § 7292], and to interpret constitutional and
statutory provisions, to the extent presented and neces-
sary to a decision.” 38 U.S.C. § 7292(c). However, except
to the extent that an appeal presents a constitutional
issue, we “may not review (A) a challenge to a factual
determination, or (B) a challenge to a law or regulation as
applied to the facts of a particular case.” Id. § 7292(d)(2).
I.
Mr. Aldridge makes three arguments on appeal.
First, he contends that, in denying him equitable tolling,
the Veterans Court applied a legal standard that is incon-
sistent with the decision of the Supreme Court in Hol-
land. See Appellant Opening Br. 17–18. Second, he
argues that application of the correct legal standard to
what he characterizes as “the undisputed facts” of the
case establishes that he is entitled to equitable tolling.
See id. at 18–19. And third, he urges that, even if the
Veterans Court did not apply an incorrect legal standard,
it still erred as a matter of law when it determined that
no “extraordinary circumstance stood in his way” so as to
prevent timely filing of his notice of appeal. Id. at 19–20.
Specifically, Mr. Aldridge argues that the court necessari-
ly took an incorrectly narrow view of what constitutes an
“extraordinary circumstance” when it determined that,
because he was able to address certain matters in his life
when he claimed he was in a depressive state, an “ex-
traordinary circumstance” did not exist. Id. According to
Mr. Aldridge, “[n]othing in the case law forecloses the
possibility that [his] circumstances qualify as a basis for
equitable tolling, even if he was not fully incapacitated by
his grief.” Id. at 41.
We have jurisdiction to consider Mr. Aldridge’s first
argument—that the Veterans Court applied a legal
standard that is inconsistent with Supreme Court prece-
dent—because it represents a challenge to the Veterans
Court’s interpretation of a rule of law; namely, the rule as
to what must be shown to establish equitable tolling. We
do not reach Mr. Aldridge’s second argument because, as
set forth below, we conclude that the Veterans Court did
not apply an incorrect legal standard when it denied his
request for equitable tolling. Mr. Aldridge’s third argu-
ment is beyond our jurisdiction. Although Mr. Aldridge
couches this argument in legal terms, urging that the
Veterans Court took an incorrectly narrow view of what
constitutes an “extraordinary circumstance,” the argu-
ment ultimately seeks a fact-based analysis that we may
not undertake. Cook v. Principi, 353 F.3d 937, 937–38
(Fed. Cir. 2003) (dismissing for lack of jurisdiction be-
cause the requested review “ultimately reduce[d] to an
application of the law to facts,” where the veteran “pre-
sent[ed] his argument as a legal premise couched in terms
of statutory interpretation”). What the Veterans Court
did was simply look at the various tasks that Mr. Aldridge
said he performed during the period he was depressed
and conclude that his ability to perform those tasks
indicated that he was not confronted with a Holland-like
“extraordinary circumstance.” In other words, contrary to
Mr. Aldridge’s assertion, the court did not impose a per se
requirement of full incapacitation. The court merely
applied law to fact, and review of that decision is not
within our jurisdiction. See Leonard v. Gober, 223 F.3d
1374, 1375–76 (Fed. Cir. 2000) (dismissing appeal because
“we lack[ed] jurisdiction to consider the application of
equitable tolling” to the facts of the case, which included
determining whether untimely filing of the veteran’s
appeal was “not due to neglect but rather to events be-
yond her control”); Sullivan v. McDonald, 815 F.3d 786,
789 (Fed. Cir. 2016) (explaining that “[w]e may not review
factual determinations or application of law to fact”
(citing 38 U.S.C. § 7292(d)(2))).
II.
We turn now to Mr. Aldridge’s argument that we have
jurisdiction to consider: his contention that, in denying
him equitable tolling, the Veterans Court applied a legal
standard that is inconsistent with the decision of the
Supreme Court in Holland. As noted, after citing the two-
pronged test set forth in Holland and examining the facts
before it, the Veterans Court determined that Mr. Al-
dridge had failed to demonstrate that the deaths of his
mother and sister and the stillborn birth of his grandchild
“themselves directly or indirectly affected the timely filing
of his appeal.” Aldridge, 27 Vet. App. at 393. On this
basis, the court concluded that Mr. Aldridge had failed to
demonstrate that he was confronted with an “extraordi-
nary circumstance,” as required by Holland, and it denied
him equitable tolling. Id. at 394. Mr. Aldridge argues
that the Veterans Court’s use of a causation analysis (i.e.,
“directly or indirectly affected”) was contrary to Holland.
He states:
Instead of requiring the party petitioning for
equitable relief to show that the missed deadline
was a “but for” consequence of the extraordinary
circumstances, the Supreme Court [in Holland]
imposed a simpler paradigm. The legal standard
that was adopted, “some extraordinary circum-
stance stood in [the] way and prevented timely fil-
ing,” focuses on whether the extraordinary
circumstances created a roadblock to timely filing
as opposed to a metaphorical chain of causation
that links events through time. This distinction is
critical in this case, where Mr. Aldridge faces a
serious roadblock, or impediment, to timely filing.
Appellant Opening Br. 28 (second alteration in original).
Mr. Aldridge elaborates that the Veterans Court’s use of
what he refers to as “a standalone ‘causation’ prong”
placed “a heavier burden on the veteran than showing
some threshold connection between extraordinary circum-
stances and the untimely filing,” which, he says, is all
that Holland requires. See Appellant Reply Br. 3–4. Mr.
Aldridge states that he is “entitled to have the undisputed
evidence evaluated under the correct standard.” Appel-
lant Opening Br. 31. He concludes by asking us to re-
mand his case to the Veterans Court, adding that, on
remand, the court “should adhere to the language in
Holland and ask, simply, whether the deaths in [his]
family and his ensuing depression stood in his way and
prevented timely filing.” Id.
Having considered Mr. Aldridge’s arguments, we are
unable to agree that, in denying his request for equitable
tolling, the Veterans Court applied an incorrect legal
standard. The requirement of prong two of Holland—that
an appellant demonstrate that “‘some extraordinary
circumstance stood in his way’ and prevented timely
filing,” 560 U.S. at 649 (quoting Pace v. DiGuglielmo, 544
U.S. 408, 418 (2005))—necessarily carries with it an
element of causation. That is because when something
“stands in the way” and “prevents” another thing from
happening, it is “causing” that other thing not to happen.
In fact, this is precisely what the Supreme Court made
clear this year in Menominee Indian Tribe of Wisconsin v.
United States, 136 S. Ct. 750 (2016). The Court stated:
“We . . . reaffirm that the second prong of the equitable
tolling test is met only where the circumstances that
caused a litigant’s delay are both extraordinary and
beyond its control.” Id. at 756 (first emphasis added).
Moreover, decisions of this court are consistent with what
the Supreme Court said in Menominee. See, e.g., Toomer
v. McDonald, 783 F.3d 1229, 1238 (Fed. Cir. 2015) (“[T]his
court has made clear that ‘to benefit from equitable
tolling, . . . a claimant [must] demonstrate three elements:
(1) extraordinary circumstance; (2) due diligence; and
(3) causation.’” (second alteration in original) (quoting
Checo v. Shinseki, 743 F.3d 1373, 1378 (Fed. Cir. 2014))).
In sum, the Veterans Court did not apply an incorrect
legal standard when it determined that Mr. Aldridge had
failed to demonstrate that the deaths in his family “them-
selves directly or indirectly affected the timely filing of his
appeal.”
CONCLUSION
For the foregoing reasons, the decision of the Veterans
Court dismissing Mr. Aldridge’s appeal as untimely is
affirmed.
AFFIRMED
COSTS
Each party shall bear its own costs.
United States Court of Appeals
for the Federal Circuit
______________________
MARION ALDRIDGE,
Claimant-Appellant
v.
ROBERT A. MCDONALD, SECRETARY OF
VETERANS AFFAIRS,
Respondent-Appellee
______________________
2015-7115
______________________
Appeal from the United States Court of Appeals for
Veterans Claims in No. 14-3656, Judge Alan G. Lance,
Sr., Judge Robert N. Davis, Judge William Greenberg.
______________________
NEWMAN, Circuit Judge, dissenting.
This case puts judicial humanity to the test; the Fed-
eral Circuit and the Court of Appeals for Veterans
Claims1 fail the test.
Mr. Aldridge was six months late in filing a notice of
appeal to the Veterans Court from a decision of the Board
of Veterans Appeals. He explained the deaths of his
mother, sister, and grandchild, all within four months.
He explained his grief, his depression, and his focus on
1 Aldridge v. McDonald, 27 Vet. App. 392 (Vet. App.
2015) (“Vet. Ct. Op.”).
the needs of his family as well as the legal obligations he
bore. He explained his role as caretaker for his elderly
father, his emotional support for his daughter after the
stillbirth of his grandchild, and his employment obliga-
tions. He explained that his attention to the needs of
others overcame important matters in his own life, includ-
ing the timely filing of this notice of appeal.
The Veterans Court (by split decision) concluded that
the veteran was indeed capable of filing a timely notice of
appeal, stating that there is “no support in the jurispru-
dence of either this Court, the U.S. Court of Appeals for
the Federal Circuit, or the Supreme Court that would
counsel the application of equitable tolling to the facts of
this case as they have been presented.” Vet. Ct. Op. at
394. The Veterans Court held that equitable tolling is not
available because Mr. Aldridge was not “rendered incapa-
ble of handling his affairs.” Id. at 393.
My colleagues on this panel agree, explaining that
“Mr. Aldridge had failed to demonstrate that the deaths
in his family ‘themselves directly or indirectly affected the
timely filing of his appeal.’” Maj. Op. at 8. That is not the
correct standard. Equity requires not only justice and
fairness, but a realistic and humane perspective on how
the facts of life and death can affect human behavior.
Equity is “flexible jurisdiction . . . to protect all rights and
do justice to all concerned.” Providence Rubber Co. v.
Goodyear, 76 U.S. 805, 807 (1869).
Federal Circuit precedent has recognized that equita-
ble tolling is available in “extraordinary circumstances,”
and we have rejected the “suggestion that equitable
tolling is limited to a small and closed set of factual
patterns and that equitable tolling is precluded if a veter-
an’s case does not fall within those patterns.” Mapu v.
Nicholson, 397 F.3d 1375, 1380 (Fed. Cir. 2005). In Sneed
v. Shinseki, 737 F.3d 719 (Fed. Cir. 2013), the court stated
that there are no “exclusive parameters of equitable
tolling,” id. at 726, and held that “the Veterans Court’s
analysis focused too narrowly on whether [the] case fell
into one of the factual patterns of past cases considering
§ 7266(a),” id. at 724.
The pattern-seeking analysis that is here imposed
against Mr. Aldridge is exactly the kind of “improperly
narrow standard for equitable tolling” that was dis-
claimed in Sneed. Id. at 724. Yet the court now rejects
this flexibility, instead stating that the “rule of law”
controls whether to “establish equitable tolling.” Maj. Op.
at 5. Equity is not controlled by the rules of law. Equity
includes not only what the law tells judges we may do, but
is the “power to moderate and temper the written law,
[subject] only to the law of nature and reason.” Samuel
Johnson, Dictionary of the English Language (1756).
Although the time limit for appeal from the BVA to
the Veterans Court is not “jurisdictional,” the VA argues
that only incapacity of the veteran is an acceptable
ground of equitable tolling. Precedent recognizes, but
does not require, incapacity. The Court instructs that
equity is adaptable to the circumstances:
[C]ourts of equity can and do draw upon decisions
made in other similar cases for guidance. Such
courts exercise judgment in light of prior prece-
dent, but with awareness of the fact that specific
circumstances, often hard to predict in advance,
could warrant special treatment in an appropriate
case.
Holland v. Florida, 560 U.S. 631, 650 (2010). Such “spe-
cial treatment” must take account of all of the circum-
stances confronting the veteran, particularly in light of
the statutory (as well as equitable) requirements of spe-
cial consideration to veterans. It cannot be that because
Mr. Aldridge was not hospitalized for his depression, or
other manifestation of incapacity, equitable tolling is not
available.
Consideration of the circumstances includes consider-
ing what is sought to be tolled, and the consequences of
tolling in the particular case:
“[Equitable relief] is not a matter of right in either
party; but is a matter of discretion in the Court;
not of arbitrary or capricious discretion, depend-
ent upon the mere pleasure of the Judge, but of
that sound, and reasonable discretion, which gov-
erns itself, as far as it may, by general rules and
principles; but at the same time, which withholds
or grants relief, according to the circumstances of
each particular case, when these rules and princi-
ples will not furnish any exact measure of justice
between the parties.”
Joseph Story, Equity Jurisprudence § 742 (1st ed. 1836).
The circumstances affecting Mr. Aldridge must be consid-
ered, along with the consequences to the government. It
is relevant that no government or other entity was preju-
diced by this delay in appeal from the BVA; no records
were lost or destroyed; no witness departed; no military or
civilian action prejudiced. There is no monetary conse-
quence, no extra draw on governmental resources.
The government argues, and the panel majority
agrees, that since equitable tolling depends on the partic-
ular facts, this court has no jurisdiction to review the
denial, no matter how strong the draw on equity. Howev-
er, “this court has jurisdiction to consider whether the
Veterans Court employed an improperly narrow standard
for equitable tolling under § 7266(a).” Sneed, 737 F.3d at
724. The court has also recognized that veterans are
“vulnerable litigants” who are typically unrepresented by
counsel. Dixon v. Shinseki, 741 F.3d 1367, 1376 (Fed. Cir.
2014). Mr. Aldridge was not represented by counsel at
the BVA, for counsel would routinely have filed a timely
notice of appeal.
This court has been assigned the responsibility for as-
suring that the legislative purpose of establishing a
veteran-friendly regime is implemented. This case should
never have come this far. On the undisputed circum-
stances that existed in this veteran’s family, the VA could
readily have allowed the tardy appeal from the BVA to
the Veterans Court. Instead, we see the government in
uncompromising litigation to prevent this veteran from
appealing the BVA decision on his percentage disability,
straining precedent to its equivocal limits. What hap-
pened to the recognition that “the veterans benefit system
is designed to award ‘entitlements to a special class of
citizens, those who risked harm to serve and defend their
country. This entire scheme is imbued with special benefi-
cence from a grateful sovereign.’” Bailey v. West, 160 F.3d
1360, 1370 (Fed. Cir. 1998).
The question before the court is whether the circum-
stances excuse the untimely filing. Equity is no more
confined to a few narrow categories than are humanity,
reason, and justice. I respectfully dissent.
| {
"pile_set_name": "FreeLaw"
} |
# ===========================================================================
# http://www.gnu.org/software/autoconf-archive/ax_python_devel.html
# ===========================================================================
#
# SYNOPSIS
#
# AX_PYTHON_DEVEL([version])
#
# DESCRIPTION
#
# Note: Defines as a precious variable "PYTHON_VERSION". Don't override it
# in your configure.ac.
#
# This macro checks for Python and tries to get the include path to
# 'Python.h'. It provides the $(PYTHON_CPPFLAGS) and $(PYTHON_LDFLAGS)
# output variables. It also exports $(PYTHON_EXTRA_LIBS) and
# $(PYTHON_EXTRA_LDFLAGS) for embedding Python in your code.
#
# You can search for some particular version of Python by passing a
# parameter to this macro, for example ">= '2.3.1'", or "== '2.4'". Please
# note that you *have* to pass also an operator along with the version to
# match, and pay special attention to the single quotes surrounding the
# version number. Don't use "PYTHON_VERSION" for this: that environment
# variable is declared as precious and thus reserved for the end-user.
#
# This macro should work for all versions of Python >= 2.1.0. As an end
# user, you can disable the check for the python version by setting the
# PYTHON_NOVERSIONCHECK environment variable to something else than the
# empty string.
#
# If you need to use this macro for an older Python version, please
# contact the authors. We're always open for feedback.
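#
#   As a purely illustrative sketch (this fragment is not part of the
#   original macro, and the project name "myprog" is made up), a
#   configure.ac that uses this macro might look roughly like:
#
#     AC_INIT([myprog], [1.0])
#     AC_PROG_CC
#     AX_PYTHON_DEVEL([>= '2.6'])
#     AC_CONFIG_FILES([Makefile])
#     AC_OUTPUT
#
#   with the exported variables then consumed in Makefile.am, e.g.:
#
#     myprog_CPPFLAGS = $(PYTHON_CPPFLAGS)
#     myprog_LDADD = $(PYTHON_LDFLAGS) $(PYTHON_EXTRA_LIBS)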
#
# LICENSE
#
# Copyright (c) 2009 Sebastian Huber <[email protected]>
# Copyright (c) 2009 Alan W. Irwin
# Copyright (c) 2009 Rafael Laboissiere <[email protected]>
# Copyright (c) 2009 Andrew Collier
# Copyright (c) 2009 Matteo Settenvini <[email protected]>
# Copyright (c) 2009 Horst Knorr <[email protected]>
# Copyright (c) 2013 Daniel Mullner <[email protected]>
#
# This program is free software: you can redistribute it and/or modify it
# under the terms of the GNU General Public License as published by the
# Free Software Foundation, either version 3 of the License, or (at your
# option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
# Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program. If not, see <http://www.gnu.org/licenses/>.
#
# As a special exception, the respective Autoconf Macro's copyright owner
# gives unlimited permission to copy, distribute and modify the configure
# scripts that are the output of Autoconf when processing the Macro. You
# need not follow the terms of the GNU General Public License when using
# or distributing such scripts, even though portions of the text of the
# Macro appear in them. The GNU General Public License (GPL) does govern
# all other use of the material that constitutes the Autoconf Macro.
#
# This special exception to the GPL applies to versions of the Autoconf
# Macro released by the Autoconf Archive. When you make and distribute a
# modified version of the Autoconf Macro, you may extend this special
# exception to the GPL to apply to your modified version as well.
#serial 17
AU_ALIAS([AC_PYTHON_DEVEL], [AX_PYTHON_DEVEL])
AC_DEFUN([AX_PYTHON_DEVEL],[
#
# Allow the use of a (user set) custom python version
#
AC_ARG_VAR([PYTHON_VERSION],[The installed Python
version to use, for example '2.3'. This string
will be appended to the Python interpreter
canonical name.])
AC_PATH_PROG([PYTHON],[python[$PYTHON_VERSION]])
if test -z "$PYTHON"; then
AC_MSG_ERROR([Cannot find python$PYTHON_VERSION in your system path])
PYTHON_VERSION=""
fi
#
# Check for a version of Python >= 2.1.0
#
AC_MSG_CHECKING([for a version of Python >= '2.1.0'])
ac_supports_python_ver=`$PYTHON -c "import sys; \
ver = sys.version.split ()[[0]]; \
print (ver >= '2.1.0')"`
if test "$ac_supports_python_ver" != "True"; then
if test -z "$PYTHON_NOVERSIONCHECK"; then
AC_MSG_RESULT([no])
AC_MSG_FAILURE([
This version of the AC@&t@_PYTHON_DEVEL macro
doesn't work properly with versions of Python before
2.1.0. You may need to re-run configure, setting the
variables PYTHON_CPPFLAGS, PYTHON_LDFLAGS, PYTHON_SITE_PKG,
PYTHON_EXTRA_LIBS and PYTHON_EXTRA_LDFLAGS by hand.
Moreover, to disable this check, set PYTHON_NOVERSIONCHECK
to something else than an empty string.
])
else
AC_MSG_RESULT([skip at user request])
fi
else
AC_MSG_RESULT([yes])
fi
#
# if the macro parameter ``version'' is set, honour it
#
if test -n "$1"; then
AC_MSG_CHECKING([for a version of Python $1])
ac_supports_python_ver=`$PYTHON -c "import sys; \
ver = sys.version.split ()[[0]]; \
print (ver $1)"`
if test "$ac_supports_python_ver" = "True"; then
AC_MSG_RESULT([yes])
else
AC_MSG_RESULT([no])
AC_MSG_ERROR([this package requires Python $1.
If you have it installed, but it isn't the default Python
interpreter in your system path, please pass the PYTHON_VERSION
variable to configure. See ``configure --help'' for reference.
])
PYTHON_VERSION=""
fi
fi
#
# Check if you have distutils, else fail
#
AC_MSG_CHECKING([for the distutils Python package])
ac_distutils_result=`$PYTHON -c "import distutils" 2>&1`
if test -z "$ac_distutils_result"; then
AC_MSG_RESULT([yes])
else
AC_MSG_RESULT([no])
AC_MSG_ERROR([cannot import Python module "distutils".
Please check your Python installation. The error was:
$ac_distutils_result])
PYTHON_VERSION=""
fi
#
# Check for Python include path
#
AC_MSG_CHECKING([for Python include path])
if test -z "$PYTHON_CPPFLAGS"; then
python_path=`$PYTHON -c "import distutils.sysconfig; \
print (distutils.sysconfig.get_python_inc ());"`
plat_python_path=`$PYTHON -c "import distutils.sysconfig; \
print (distutils.sysconfig.get_python_inc (plat_specific=1));"`
if test -n "${python_path}"; then
if test "${plat_python_path}" != "${python_path}"; then
python_path="-I$python_path -I$plat_python_path"
else
python_path="-I$python_path"
fi
fi
PYTHON_CPPFLAGS=$python_path
fi
AC_MSG_RESULT([$PYTHON_CPPFLAGS])
AC_SUBST([PYTHON_CPPFLAGS])
#
# Check for Python library path
#
AC_MSG_CHECKING([for Python library path])
if test -z "$PYTHON_LDFLAGS"; then
# (makes two attempts to ensure we've got a version number
# from the interpreter)
ac_python_version=`cat<<EOD | $PYTHON -
# join all versioning strings, on some systems
# major/minor numbers could be in different list elements
from distutils.sysconfig import *
e = get_config_var('VERSION')
if e is not None:
print(e)
EOD`
if test -z "$ac_python_version"; then
if test -n "$PYTHON_VERSION"; then
ac_python_version=$PYTHON_VERSION
else
ac_python_version=`$PYTHON -c "import sys; \
print (sys.version[[:3]])"`
fi
fi
# Make the versioning information available to the compiler
AC_DEFINE_UNQUOTED([HAVE_PYTHON], ["$ac_python_version"],
[If available, contains the Python version number currently in use.])
# First, the library directory:
ac_python_libdir=`cat<<EOD | $PYTHON -
# There should be only one
import distutils.sysconfig
e = distutils.sysconfig.get_config_var('LIBDIR')
if e is not None:
print (e)
EOD`
# Now, for the library:
ac_python_library=`cat<<EOD | $PYTHON -
import distutils.sysconfig
c = distutils.sysconfig.get_config_vars()
if 'LDVERSION' in c:
print ('python'+c[['LDVERSION']])
else:
print ('python'+c[['VERSION']])
EOD`
# This small piece shamelessly adapted from PostgreSQL python macro;
# credits goes to momjian, I think. I'd like to put the right name
# in the credits, if someone can point me in the right direction... ?
#
if test -n "$ac_python_libdir" -a -n "$ac_python_library"
then
# use the official shared library
ac_python_library=`echo "$ac_python_library" | sed "s/^lib//"`
PYTHON_LDFLAGS="-L$ac_python_libdir -l$ac_python_library"
else
# old way: use libpython from python_configdir
ac_python_libdir=`$PYTHON -c \
"from distutils.sysconfig import get_python_lib as f; \
import os; \
print (os.path.join(f(plat_specific=1, standard_lib=1), 'config'));"`
PYTHON_LDFLAGS="-L$ac_python_libdir -lpython$ac_python_version"
fi
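# bail out below if we still could not determine any Python linker flags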
if test -z "PYTHON_LDFLAGS"; then
AC_MSG_ERROR([
Cannot determine location of your Python DSO. Please check it was installed with
dynamic libraries enabled, or try setting PYTHON_LDFLAGS by hand.
])
fi
fi
AC_MSG_RESULT([$PYTHON_LDFLAGS])
AC_SUBST([PYTHON_LDFLAGS])
#
# Check for site packages
#
AC_MSG_CHECKING([for Python site-packages path])
if test -z "$PYTHON_SITE_PKG"; then
PYTHON_SITE_PKG=`$PYTHON -c "import distutils.sysconfig; \
print (distutils.sysconfig.get_python_lib(0,0));"`
fi
AC_MSG_RESULT([$PYTHON_SITE_PKG])
AC_SUBST([PYTHON_SITE_PKG])
#
# libraries which must be linked in when embedding
#
AC_MSG_CHECKING([python extra libraries])
if test -z "$PYTHON_EXTRA_LIBS"; then
PYTHON_EXTRA_LIBS=`$PYTHON -c "import distutils.sysconfig; \
conf = distutils.sysconfig.get_config_var; \
print (conf('LIBS') + ' ' + conf('SYSLIBS'))"`
fi
AC_MSG_RESULT([$PYTHON_EXTRA_LIBS])
AC_SUBST([PYTHON_EXTRA_LIBS])
#
# linking flags needed when embedding
#
AC_MSG_CHECKING([python extra linking flags])
if test -z "$PYTHON_EXTRA_LDFLAGS"; then
PYTHON_EXTRA_LDFLAGS=`$PYTHON -c "import distutils.sysconfig; \
conf = distutils.sysconfig.get_config_var; \
print (conf('LINKFORSHARED'))"`
fi
AC_MSG_RESULT([$PYTHON_EXTRA_LDFLAGS])
AC_SUBST([PYTHON_EXTRA_LDFLAGS])
#
# final check to see if everything compiles alright
#
AC_MSG_CHECKING([consistency of all components of python development environment])
# save current global flags
ac_save_LIBS="$LIBS"
ac_save_CPPFLAGS="$CPPFLAGS"
LIBS="$ac_save_LIBS $PYTHON_LDFLAGS $PYTHON_EXTRA_LDFLAGS $PYTHON_EXTRA_LIBS"
CPPFLAGS="$ac_save_CPPFLAGS $PYTHON_CPPFLAGS"
AC_LANG_PUSH([C])
AC_LINK_IFELSE([
AC_LANG_PROGRAM([[#include <Python.h>]],
[[Py_Initialize();]])
],[pythonexists=yes],[pythonexists=no])
AC_LANG_POP([C])
# turn back to default flags
CPPFLAGS="$ac_save_CPPFLAGS"
LIBS="$ac_save_LIBS"
AC_MSG_RESULT([$pythonexists])
if test ! "x$pythonexists" = "xyes"; then
AC_MSG_FAILURE([
Could not link test program to Python. Maybe the main Python library has been
installed in some non-standard library path. If so, pass it to configure,
via the LDFLAGS environment variable.
Example: ./configure LDFLAGS="-L/usr/non-standard-path/python/lib"
============================================================================
ERROR!
You probably have to install the development version of the Python package
for your distribution. The exact name of this package varies among them.
============================================================================
])
PYTHON_VERSION=""
fi
#
# all done!
#
])
| {
"pile_set_name": "Github"
} |
Thursday, December 10, 2009
Friday, December 4, 2009
I have played organized basketball since I was six years old and was first eligible to play through a YMCA team. Because of the zoning rules that the YMCA used to create the teams, I was placed on a team that consisted of two Caucasian players and ten African American players. This was the beginning of my lifelong education in African American Vernacular English (AAVE) (p.195). I continued to be one of only a few white players on my basketball teams throughout my life. I have noticed that Black English has the unique ability to remake itself completely in a short amount of time. The AAVE that I observed when I first joined the ACU basketball team is completely different from the AAVE that I observe now. These changes seem to be fueled by Caucasians and by which AAVE slang they choose to adopt into their own slang. The story of the ACU basketball team demonstrates the quick-changing nature of AAVE as well as the motivating factors behind its changes.
Despite the changing nature of Black English, it does have certain constants. Over the past four years the basketball team has seen well over thirty different players enter and exit. Of those thirty, nearly twenty of them were African American. Each player brought with him a unique set of phrases and words that were common to the area in which he was raised. The team has seen players from California, Alabama, Texas, Mississippi, and Oklahoma. The introduction of each new player brought about a complete change and revamp of the team’s AAVE lingo. The teammates adopted certain phrases from each player and created a hybrid of all the different regional dialects. Of course there were certain constants, such as the deletion of coupling verbs in instances where a contraction can be made, as in the phrase “They busy,” which is the AAVE equivalent of “They’re busy” (p.198).
Another commonality was the cadence of the language. In AAVE the first word or short phrase that is spoken is drawn out longer than it would be in Standard American English (SAE) and is usually an exclamation or interjection spoken with heavy emphasis. The rest of the sentence is expressed in a normal or slightly faster than normal rhythm. In the phrase “Yo, who dat is” the word “yo” is drawn out much longer than normal and is followed by a pause that adds even more emphasis, and “who dat is” is spoken very fast. This is the most common archetype for an AAVE sentence and even works with short phrases as the opening exclamation, such as in the phrase “My nigga, lemme tell you bout las night.” This and a few other grammatical and pronunciation differences, such as the absence of interdental fricatives and the consonant cluster reduction rule (p. 196), were shared commonalities between all the players on the ACU team over the years.
The hybrid language that my African American and even a few African teammates spoke changed over the years, not only at the beginning of each summer when the new recruits were brought in for workouts, but even within the school year and basketball season itself. Phrases and words were adopted and dropped throughout the year. The phrases and pronunciation came from a variety of sources; some phrases seemed to be carryovers from the differing regions, but the new jargon that sprouted during the season was often adopted from popular rap/hip-hop songs and “black” movies such as ATL and All About the Benjamins. Often in rap/hip-hop songs an artist will alter the pronunciation or emphasis of a word or phrase to help it better fit in the rhyme scheme of the song, and the fact that many of the players on my team adopted their lingo from songs facilitated them adopting their unique pronunciations of words. Lil’ Wayne is one of the most popular and influential hip-hop artists when it comes to the generation of new phrases and pronunciations. In the song “Swagga Like Us” he says, “No one on the corner has swagger like moi (French pronunciation mw-ah), Church/ But I’m too clean for these boys.” The word “boys” is pronounced “bois” (bow-ahs) like the French word moi. This song was quite popular this year, and to this day the black members and a few of the white members of the ACU basketball team pronounce the word boy as /bow-ah/.
Another recently acquired element of language among the members of the ACU team is a couple of different phrases that involve the word subliminal. The context where the word was first heard was after one member of the team made a subtle jab at two-year teammate Ian Wagner. Ian responded by saying, “I see you hitting me with that subliminal.” The phrase and its derivatives, such as “Brooks, I see you getting subliminal,” then became a common expression to refer to situations when subtlety was used in language or action. It is most used in situations where humor is in play in the conversation, but it is not bound to that constraint. It replaced the phrase “I see you coming at me on the sly,” which was directed at a person who was subtly making fun of the speaker.
The main reason that the phrase “I see you coming at me on the sly” fell from common usage with my teammates was that it had been adopted and commonly used by a number of white teammates on my team. They had taken that element of language that was formerly reserved solely for use by the Black members of the team, learned how to use it, and adopted it into their own language. AAVE has its roots in the black church that was formed on slave plantations (Baldwin). He said they did not just acquire a new language, but “transformed ancient elements into a new language” (Baldwin). Another quote from James Baldwin that helps explain the elimination of words and phrases from AAVE is that “A language comes into existence by means of brutal necessity, and the rules of the language are dictated by what the language must convey” (Baldwin). He explains that slaves often needed to explain in English that one of their fellow slaves was in danger in such a way that the white man could not understand. The laws of Black English allow for quick changes in meanings and pronunciation because they allow the language to remain foreign to white SAE speakers. My teammates dropped the phrases that my other white teammates and I adopted, because they were no longer solely their language. The heritage of their language had taught them that their AAVE needed to be only for African Americans because its origins were based on self-defense from white people. The youtube.com video “black slang” makes fun of this fact in the instance of white Americans’ adoption of the AAVE use of “brother” to refer to someone who is not family. The comedian jokes that African Americans had to change to using “cousin” and “son” in the same context because white people had stolen the word “brother” (topgal).
The ACU basketball team is a perfect example of the way that Black English, or AAVE, is able to change and adapt to the point of completely revamping itself in a short period of time. Rap/hip-hop artists as well as popular “black” movies aid in the generation of new phrases, words, and pronunciations. As white culture continues to adopt elements of AAVE into mainstream usage, black culture will have to continue to remake its language to keep it solely theirs. The ACU basketball team has managed to do so despite the increasing ease with which American culture continues to pick up their language.
Thursday, December 3, 2009
Every family has their stories; if they didn't, there would be nothing to talk about during Christmas dinner. These stories serve a much greater purpose than merely supplying stories to be told around the dinner table. They give meaning, create identity, and change or uphold the way we look at things. The complete list of purposes of family stories is to remember, to create identity, to entertain, to reinforce values, and to connect generations. There are six types of stories that accomplish those purposes: creation stories, coming of age stories, crisis stories, decision stories, family unity stories, and family identity stories.
Family identity is passed from generation to generation through stories that usually begin with “My father once told me that his father once told him….” and then go on to give some account of the family history or name. This one story can fill multiple roles: it can entertain, it can connect generations, and it can create identity.
The Fall
One story that my family is particularly fond of telling is about a rock-climbing incident that occurred a little more than four years ago. I had been an avid rock climber since my freshman year of high school. I owned my own equipment and had a membership at a local climbing gym. I had a habit of occasionally taking girlfriends with me when I went climbing. I did this partially because I enjoyed their company and partially because it was an opportunity for me to show off my climbing prowess, and the method had worked well for me until that fateful day.
The day started off as normally as any other climbing day. I woke up and had a good, but light, breakfast, gathered my gear, and set off to the gym to climb. I stopped along the way to pick up my girlfriend at the time, who had been climbing with me several times before. We planned to make a day of it by climbing for a few hours before grabbing some lunch, and heading back to get a few more climbs in before closing time. We arrived at the climbing gym around ten o’clock.
The gym was inside of a complex of four concrete silos. The top of the silos was painted a rainbow of colors and had the words “General Electric” in big bold letters. I am not entirely sure what purpose the building served before its days as a gym, and was not entirely sure I cared. The gym was separated into four parts. There was the bouldering area, which consisted of hundreds of holds spread over a wall no taller than twenty feet; this was where people practiced and trained their grips for the various holds. The next portion was the skill walls upstairs; these walls were not particularly tall at all, most averaging around 50 to 60 feet, but were the most difficult to climb in the whole gym. Next was the novice area; these were easy walls that were a good place for beginners and were about another 20 feet taller than the skill walls. The final area of the gym was the endurance walls. These walls were an intermediate level as far as the spacing and size of the holds, but were difficult because of the sheer immensity of their size. The walls averaged a height of 100 feet. The tallest wall at the gym, which was actually the tallest indoor wall in Texas, was 121 feet high. It was at this wall that my incident occurred.
My girlfriend was the first to attempt the “121 wall” that day. I took my position as her “belayer,” not even bothering to secure myself into the ground because I outweighed her by so much. She proceeded to climb about ¾ of the wall before growing too weary to continue. After slowly feeding the rope through the carabiner and belay device, which provided me the mechanical advantage to easily suspend her 130-pound frame using only my thumb and pinkie if needed, and allowing her to rappel down, it was my turn to climb. We switched all the necessary equipment and double-checked all of our knots and harnesses to make sure they were correct, and then to be sure we checked one another’s. After we were sure everything was in good order, I began my ascent.
The climb was not particularly difficult for me, for I had made it many times before. At this point it had become a conditioning climb for me; it was a good way to make sure that I stayed in excellent rock climbing condition. Because of the rehearsed ease with which I made this climb, I did not require any rest or assistance from my belayer. Fifteen minutes later I reached the top of the wall and was prepared to make my slow and controlled descent.
The trip down was much faster than I anticipated. After I yelled down that I was ready to make my descent, my girlfriend took all the slack out of the rope so I could let go of the wall and lean back away from it. I took my hands away and leaned back with no problem or incident. I then gave a firm kick with my legs to push myself away from the wall and allow my belayer to let some of the rope slip through the carabiner, allowing me to drop. She did allow some rope to slip through, and after I dropped a few yards I expected the rope to go tight and for me to be forced back to the wall, but this never happened. I began to descend faster and faster, which was worrisome, but I was not truly scared until I heard my belayer scream. I was picking up speed and the ground was screaming towards me now. Apparently my belayer’s device had malfunctioned and was no longer giving her the advantage needed to control the descent of my 230 lb. body. The rope started to rush through her fingers, burning them badly and forcing her to release the rope. There was nothing stopping the rope now and, by extension, there was nothing stopping me. The rope, as it was being whipped up off the floor, tripped my belayer, sending her crashing to the ground, where she was of no use to me.
I thought my life was over. 120 feet is a long way, and I had plenty of time to think things over. I had decided how I was going to land, what I wanted to go on my tombstone; everything had been thought of. Then I hit. I do not remember anything after that. I woke up in a car on the way to the hospital. I do not remember what happened at the hospital either. My parents were informed that I had no serious damage to my back or any major organs, and that the extent of my damage was numerous stress fractures in my feet and legs. They asked what happened, and when my parents told them, the doctors said that I should be dead.
There is a reason that my parents like to tell that story. They, and I, believe that it means there is a purpose for this family. I heard that half of all falls from over thirty feet end up being fatal, but I fell from 120 feet and have almost no lasting damage. There is an identity to be found in this story for me and for my family. God miraculously allowed me to live when common sense would have dictated that I die. I feel that he saved me for a purpose and that my family was allowed to continue to exist for a purpose. That purpose is still not entirely known to me or my family, but we are all very conscious of the fact that we will one day know that purpose. One day I will know why God chose to save me. I am reminded of the identity that this story gives me every time that I hear or tell it. The identity that this story gives me is that I am a man saved by God, twice.
Wednesday, December 2, 2009
Located within systems theory is the punctuation theory. This theory explores the way that actions serve both as responses to former actions and as stimuli for future responses. Modern lyrical poet Brandon Boyd had this to say on this subject:
“Hey what would it mean to you?
To know that it’ll come back around again
Hey whatever it means to you
Know that everything moves in circles
Round and round we go
We could know when it ends so well
We fall on and we fall off
Existential carousel”
The song is called “Circles,” and it deals with reciprocity and how every person’s actions and every action directed at them bring future responses, which keeps the “existential carousel” spinning. The message Boyd is attempting to communicate is that in every situation one must evaluate their own responses to negative outside stimuli in order to shift the circle of reciprocity to an ongoing cycle of positive responses and avoid an ongoing vicious cycle of negative responses. This paper will explore the theory behind the punctuation theory as well as the ways that it manifests itself within my family and relationships.
Period is Not the End
Punctuation theory operates around the concept of interactive complexity. Interactive complexity’s central tenet is that every “act triggers new behavior as well as responds to previous behaviors.” This cycle of responses develops into patterns of behavior that, once fully formed, are not only hard to break but hard to examine. Punctuation theory provides stops, or, at the risk of sounding redundant, punctuation, which allows the behavior cycles to be broken into pieces and more carefully examined. The problem that accompanies the punctuation of these behavior cycles is that different individuals within one system of behavior will punctuate the sequence differently. It is easy to use the different punctuations to assign blame to one of the members within the system. Each person would feel justified assigning the blame to one of the actions of another. This is counterproductive and an easy pitfall of assigning punctuation. The aim is to deal with and solve the entire behavioral cycle. To pick out one behavior and state that it is the cause of the problem accomplishes nothing and merely alienates the accused. The ideal way to work through the problems is to use an “illness-free” lens where no one member is to blame for starting or continuing the cycle. This removes the scapegoat member from the situation and allows a freedom to view the situation objectively, because there is no blame being assigned.
The Turnover Cycle
The ACU Men’s Basketball team has a problem, and that is turnovers. We are in the bottom three of the Lonestar conference in assist-to-turnover ratio. During our last game, which was a loss to Tarleton State University, we had thirteen turnovers in one half. This is a cyclical problem, which may be the only reason that we are not at the bottom of the league in this statistic. The cycle progresses like this: we have a high-turnover game; Coach gets angry and we put enormous focus on taking care of the ball during the three days until the next game; we have a game with low turnovers (which usually means a win); Coach is content with our effort in that aspect of the game and moves the focus to another facet of our game; and the next game we have another high-turnover game.
The Player’s Perspective
It is easy for the players on the team to view Coach’s shifting focus as the root of the problem. The complaint most often heard thrown about in the locker room is that if Coach wants us to take care of the ball better, then we need to have a stronger and more consistent focus on it throughout the year and not only after a high-turnover game. The idea is that you practice how you play, and that if he has a set way that he wants the team to play, then he should make sure that the practices we have before the game reflect that image.
The Coach’s Perspective
Coach sees things differently and sets his own punctuation to the problem. He sees turnovers as mental unpreparedness and a sign of being mentally weak. He feels that the players, being college players, should be able to carry over the basics of the game, such as taking care of the ball, from week to week. The idea of having to drill on this one facet of the game from practice to practice seems like a large waste of valuable time. He has other things that he wants to go over and wants to progress beyond this point into more complicated aspects of the game. He does not want to leave the team unprepared by focusing on turnovers at the expense of other things.
Breaking the Cycle
To break this vicious circle of poor games, both the players and Coach will need to compromise on their viewpoints and the way that practice is run. The players will need to take responsibility for making turnovers a personal focus every practice and every game. They need to stop waiting on Coach to do something to fix the problem and take initiative themselves. They need accountability amongst one another and not exclusively to Coach.
Coach needs to bend as well. He needs to recognize that this is a chronic problem and that the players are not exclusively to blame. Players are a reflection of their coaching. He can find creative ways to incorporate valuing the ball into the drills that he is using to teach other concepts. This way he is not focusing on the turnovers at the expense of all else, but is working on both at once.
If both parties are able to bend, then neither will have to break and the season can be saved. It takes maturity to stop trying to assign blame and attempt to find what both can do to work together to stop this behavioral pattern. If both will commit to doing this, then ACU has a chance to go to the conference tournament.
Monday, November 30, 2009
Hello dear readers,
I just finished playing waterball and it was pathetic. Our team won with such ease that it wasn't even fun, so I was left with an empty feeling in my soul.
In other news, Barack Obama made himself a bowl of cereal this morning and the media has not ceased in their praising. But seriously, GQ named him leader of the year, and he still hasn't done a damn thing. It really bakes my beans too.
Sunday, November 29, 2009
Ok, there was no internet at my parents' house, so that explains the gap in the blog posts. I actually wrote during that time, but did not post online. I will make sure to post those writings at a later time, but right now I feel much too lazy.
Guess what! I'm engaged. As in engaged to be married. As in about to enter into a lifelong love covenant with another human being. I'm freaking pumped. Like seriously ecstatic. Should I post the story up on the blog?
Tuesday, November 24, 2009
There is nothing quite like going home to the parents' house. I say the parents' house because, in my mind, once you move out it is no longer your home. There is a familiarity and comfort that comes with going home to see Mommsie and Pops. You walk in and you are no longer the responsible adult you are outside of their house. Once your foot is in that door, you become the child once again. It's time to sit back, relax, and be a kid for a few days.
Bonus Material: Fangst- what every emo girl that goes to watch Twilight suffers from
About Me
I'm Norse; if you haven't figured that out, then heaven help you. More specifically, I'm a Viking. I embrace the violent stereotype of my people and hope that by chronicling my exploits on this page, I will grant all who read a more balanced view of me and my brethren. For although we seem heartless and barbaric, we have good and decent souls. | {
"pile_set_name": "Pile-CC"
} |
1. Fields of the Invention
The present invention relates to a light assembly, and more particularly, to a light assembly with adjustable reflection members.
2. Descriptions of Related Art
The conventional light assembly generally comprises a support unit and a rotary unit, wherein the support unit has a body and a Light Emitting Diode (LED) on the support unit. The body has a first lip on the outside thereof, and threads. The rotary unit has a tube and a convex lens in the tube. The tube has a first cover on one end thereof, a room, and two second lips formed in the inside of the tube. One of the second lips has inner threads, and the other second lip is formed to the first cover, which has a hole enclosed by the second lip. The rotary unit is axially movable between a first position and a second position relative to the support unit. When the rotary unit is moved to the first position, the first lip contacts one of the second lips, and the convex lens is located close to the LED. When the rotary unit is moved to the second position, the first lip contacts the other second lip, and the convex lens is moved away from the LED. However, due to the length of the LED, the adjustable distance between the first and second positions is limited, so that the adjustable distance of the focus is limited and a satisfactory effect cannot be obtained. Similarly, due to the length of the LED, the adjustable distance between the first and second positions is limited, so that the adjustable distance of the focus between the convex lens and the LED is limited. Besides, because the LED is movable relative to the convex lens, the LED cannot be installed on the base of the convex lens, and therefore the light beams cannot be projected precisely.
The present invention intends to provide a light assembly with a focus adjustable device to eliminate the shortcomings mentioned above. | {
"pile_set_name": "USPTO Backgrounds"
} |
1. Field of the Invention
This invention relates to a display apparatus for a double speed video signal wherein an input video signal is converted into another double speed image signal and a standard television signal is displayed together with a child picture on a screen of a television receiver, for example, for the high definition television system.
2. Description of the Related Art
When it is attempted to display a television signal of a standard system such as the NTSC system, PAL system or SECAM system on a high definition television receiver, the video signal is preferably processed, according to the compatibility of the deflection system, at a double speed so as to display the image at a double field frequency or a double line frequency.
For example, the PAL/SECAM television system adopted in Europe employs a 2:1 interlace system of 625 lines/50 Hz, and accordingly, when a video signal of high brightness is displayed, a large screen flicker is usually seen. In order to eliminate this large screen flicker, it has been proposed to use double speed field display means which performs double speed processing to double the field frequency of a video signal to make a parent picture and repetitively display odd-/even-numbered field signals of 312.5 H and 312.5 H twice in different fields like odd-/odd-/even-/even-numbered fields of 312 H, 312.5 H, 313 H and 312.5 H.
Similarly, for the NTSC system, a superimposed line double speed system has been proposed wherein signals of odd-numbered and even-numbered fields of 262.5 H and 262.5 H are processed at a line double speed to produce field signals of 525 H and 525 H and upper and lower instances of horizontal lines which display the same signal are superimposed with each other so as to perform scanning equivalent to that of a normal interlace system. Such technique is disclosed, for example, in EP 0,482,894, A2.
A similar technique is disclosed in EP 0,551,168, A1 wherein a television signal of a standard television system is displayed together with a child picture on a screen of a television receiver of, for example, the high definition television system.
However, where it is tried to simultaneously display a child picture in a superimposed condition on a parent picture processed by double speed signal processing in such a manner as described above, the following problems are encountered.
A. Where the parent picture is a field double speed picture:
1. Also a child picture to be inserted into the parent picture must necessarily be processed by field double speed processing, and to this end, it has been proposed to determine a sequence of read-out areas of a four field sequence memory using a normal speed vertical synchronizing signal as a clock signal for the four field sequence memory for displaying a child picture at a field double speed, latch the child picture signal in response to a vertical synchronizing pulse signal of a double speed to that of the deflection system to delay the child picture by a time corresponding to one field of the double speed and use the thus delayed child picture signal as a control signal for a read-out memory area for the child picture. However, it sometimes occurs that a control signal for a write memory area exhibits overlapping by a time equal to the delay time, and there is a problem in that the memory area which undergoes writing and the memory area which undergoes reading out coincide with each other to cause passing of a memory address for a child picture with the probability of 1/8.
2. One of simple methods of processing a parent picture by double speed processing by four times for odd-numbered, odd-numbered, even-numbered and even-numbered fields is a field display mode wherein only one of the odd-numbered and even-numbered fields of a child picture video signal written in a memory is repetitively read out and displayed by four times. The method, however, has a problem in that motion of the child picture is skipped by one field and the vertical resolution of the child picture is reduced to one half or less than that of ordinary frames.
B. When a parent picture is converted into a line double speed video signal and a display picture of the interlace system is produced by superimposition:
1. Since the superimposed line double speed conversion system basically involves non-interlace conversion timings, there is a problem in that discrimination between odd- and even-numbered fields is impossible after conversion into picture of a double speed and, when a child picture video signal is converted, after such double speed conversion, into a signal of a double speed and then displayed, the interlace of child picture images is reversed with the probability of 1/2.
2. It may be possible to shift, in a zoom mode wherein a 4:3 video signal is displayed fully on a 16:9 video screen by over-scanning of the upper and lower portions of the 4:3 video signal by a vertical deflection system, a video signal of a child picture prior to double speed conversion so as to be superimposed on a video signal of a parent picture to effect line double speed processing. In this case, however, such complicated scanning is involved that, when a zoomed display picture of the parent picture is scrolled, it is shifted, for example, in the opposite direction so that the position of the child picture may not be varied.
3. When a parent picture is processed by line double speed processing and the signal obtained by the line double speed processing is superimposed to obtain an image of the interlace system, the upper and lower lines of same signal portions of the video signal after double speed conversion are superimposed in the opposite directions to each other between odd- and even-numbered fields by a deflection system. In this case, there is a problem in that, if a child picture processed by field double speed processing is displayed in a superimposed relationship on the double speed image signal, when the odd-numbered field is normal, the superimposition of the upper and lower lines of the child picture in the even-numbered fields is reversed. | {
"pile_set_name": "USPTO Backgrounds"
} |
1. Field of the Invention
The present invention relates to a medium identification device, an image forming apparatus, a method of identifying a medium, and a computer program product.
2. Description of the Related Art
Regarding an image forming apparatus that forms an image on a recording medium, known is a technique for automatically identifying a type of a recording medium to be used and changing a condition for an image forming process according to the identified type of the recording medium to perform optimum image formation on the recording medium to be used.
For example, Japanese Patent Application Laid-open No. 2002-182518 and Japanese Patent Application Laid-open No. 2003-302885 disclose an image forming apparatus that identifies a type of a recording medium using a method of detecting surface smoothness of the recording medium by using an image of the recording medium captured with a CMOS sensor, and variably controls a developing condition, a transferring condition, or a fixing condition. Japanese Patent Application Laid-open No. 2007-55814 discloses an example for identifying the type of the recording medium, in which surface smoothness and reflectivity of the recording medium is obtained by detecting reflected light from the recording medium with a CMOS sensor, and a thickness of the recording medium is obtained by detecting transmitted light that transmits through the recording medium with the CMOS sensor.
However, types of recording media that can be used for image formation have been increasing in recent years, and there are many types that cannot be identified according to the related art. Thus, desired is a novel technique for identifying more variety of recording media. | {
"pile_set_name": "USPTO Backgrounds"
} |
Locoregional cutaneous metastases of malignant melanoma and their management.
The correct classification of locoregional metastases of malignant melanoma to skin is central to the planning of treatment. Local recurrence means persistence of neoplastic cells at the local site by virtue of incomplete excision of the primary melanoma. Standard treatment is excisional surgery. In contrast, locoregional metastases of malignant melanoma (satellites, in-transit metastases) are metastases around a primary melanoma or between a primary melanoma and regional lymph nodes. They represent intralymphatic or hematogenous spread of neoplastic cells. We present a variety of available treatment options and discuss especially topical imiquimod as a novel approach for the palliative treatment of locoregional cutaneous melanoma metastases in selected patients. | {
"pile_set_name": "PubMed Abstracts"
} |
Suits (season 5)
The fifth season of the American legal comedy-drama Suits was ordered on August 11, 2014. The fifth season originally aired on USA Network in the United States between June 24, 2015 and March 2, 2016. The season was produced by Hypnotic Films & Television and Universal Cable Productions, and the executive producers were Doug Liman, David Bartis and series creator Aaron Korsh. The season had six series regulars playing employees at the fictional Pearson Specter Litt law firm in Manhattan: Gabriel Macht, Patrick J. Adams, Rick Hoffman, Meghan Markle, Sarah Rafferty, and Gina Torres.
Overview
The series revolves around corporate lawyer Harvey Specter and his associate attorney Mike Ross who, between the two of them, have only one law degree.
Cast
Regular cast
Gabriel Macht as Harvey Specter
Patrick J. Adams as Mike Ross
Rick Hoffman as Louis Litt
Meghan Markle as Rachel Zane
Sarah Rafferty as Donna Paulsen
Gina Torres as Jessica Pearson
Special Guest Cast
David Costabile as Daniel Hardman
Recurring Cast
Wendell Pierce as Robert Zane
Eric Roberts as Charles Forstman
Amanda Schull as Katrina Bennett
Rachael Harris as Sheila Sazs
Leslie Hope as Anita Gibbs
John Pyper-Ferguson as Jack Soloff
Farid Yazdani as David Green
Christina Cole as Dr. Paula Agard
Guest Cast
Megan Gallagher as Laura Zane
Six actors received star billing in the show's first season. Each character works at the fictional Pearson Specter Litt law firm in Manhattan. Gabriel Macht plays corporate lawyer Harvey Specter, who is promoted to senior partner and is forced to hire an associate attorney. Patrick J. Adams plays college dropout Mike Ross, who wins the associate position with his eidetic memory and genuine desire to be a good lawyer. Rick Hoffman plays Louis Litt, Harvey's jealous rival and the direct supervisor of the firm's first-year associates. Meghan Markle plays Rachel Zane, a paralegal who aspires to be an attorney but her test anxiety prevents her from attending Harvard Law School. Sarah Rafferty plays Donna Paulsen, Harvey's long-time legal secretary and confidant. Gina Torres plays Jessica Pearson, the co-founder and managing partner of the firm.
Episodes
Ratings
References
External links
Suits episodes at USA Network
List of Suits season 1 episodes at Internet Movie Database
List of Suits season 1 episodes at TV.com
05
Category:2015 American television seasons
Category:2016 American television seasons | {
"pile_set_name": "Wikipedia (en)"
} |
Franklin Local Board
The Franklin Local Board is one of the 21 local boards of the Auckland Council. It is overseen by the Franklin ward councillor.
The Franklin Local Board area spans the full width of the North Island, from the Hauraki Gulf to the Manukau Harbour. It includes the Hunua Ranges.
Angela Fulljames is the current chair of the board.
2019–2022 term
The board members, elected at the 2019 local body elections, in election order, are:
Alan Cole, Team Franklin, (5633 votes)
Andy Baker, Team Franklin, (5166 votes)
Amanda Kinzett, Team Franklin, (3803 votes)
Angela Fulljames, Team Franklin, (3546 votes)
Logan Soole, Team Franklin, (3093 votes)
Sharlene Druyven, Team Franklin, (3048 votes)
Malcolm Bell, not affiliated, (2971 votes)
Lance Gedge, Independent, (2886 votes)
Matthew Murphy, Waiuku First, (1640 votes)
2016–2019 term
The 2016–2019 board consisted of:
Angela Fulljames (chair)
Andy Baker (deputy chair)
Malcolm Bell
Alan Cole
Brendon Crompton
Sharlene Druyven
Amanda Hopkins
Murray Kay
Niko Kloeten
References
Category:Local boards of the Auckland Region | {
"pile_set_name": "Wikipedia (en)"
} |
Investment loan
ID: 1YXRW940ZQ
This loan is to help build reputation here on Bitbond and will be used for investing in part here on the site (approximately 0.6 btc), and the remaining funds I will use to place trade orders on Coinspot for different cryptocurrencies and currencies (AUD$) and at Crypsty to hold DOGE and different coins. It will be repaid early just like my previous loans, certainly within 3 months; I find the 6 months too long for the investor, and completing payment within the first 3 months will help build a good reputation. I plan on a larger loan tied to USD later in the year of 2015 for a business project, so this will help me get there. Thank you in advance for investing. | {
"pile_set_name": "Pile-CC"
} |
CREATE TABLE fruits (
id int,
items jsonb
);
INSERT INTO fruits VALUES (1, '{"apple": true}');
INSERT INTO fruits VALUES (2, '{"banana": false}');
INSERT INTO fruits VALUES (3, '{"peach": true}');
SET enable_seqscan = on;
SET enable_indexscan = off;
SET enable_bitmapscan = off;
SELECT id, items
FROM fruits
WHERE items &` 'boolean == true'
ORDER BY id;
id | items
----+-----------------
1 | {"apple": true}
3 | {"peach": true}
(2 rows)
DROP TABLE fruits;
| {
"pile_set_name": "Github"
} |
Geography of Evans County, Georgia
The geography of Evans County describes a county in the state of Georgia in the Southeastern United States in North America. According to the 2010 census, the county has a total area of , of which is land and is water. The major body of water is the Canoochee River, which flows through Evans County.
Evans County lies on the coastal plain region of Georgia, an area consisting mostly of sedimentary rocks. Other rocks in the area include sandstone and claystone, and sand and gravel. Some of the sand in Evans County, especially lying near the Canoochee River, is white quartz of a medium to coarse grain.
Evans County has a mild climate, averaging 49.8 degrees in January and 82.7 degrees in July. The average annual rainfall is 48 inches, and the county has a minimum altitude of above sea level and a maximum altitude of above sea level.
There are four cities in Evans County with 60.2 people per square mile in the area. The county is ranked 145th out of 159 counties in the state in size and is led by a board of six popularly elected commissioners with a chairman elected by the commissioners. There are 199 acres of farmland and 796 acres in orchards in Evans County, with an average value of $92,983 in agricultural products sold by farms. Two major crops in the county are corn and soybeans.
Physical geography
Geological development
Geologically, Evans County lies in the coastal plain region of Georgia, an area consisting mostly of sedimentary rocks. The coastal plain is divided from the Piedmont by the Fall Line, which passes through Georgia from Augusta, Georgia in the east, then southwestward to Macon, Georgia, then to Columbus, Georgia and finally westward to Montgomery, Alabama.
Outcrops of sandstone and claystone from the Neogene era, and known as Neogene undifferentiated, cover 62% of the area in Evans County. The Neogene era has lasted approximately 23 million years, beginning 23.03 ± 0.05 million years ago. Sands and gravels from the Pliocene to the Pleistocene eras are the next most widespread geologic units, covering 26% of Evans County, followed by unconsolidated deposits of rock and sand in marsh and lagoonal facies, covering 10% of the area, and finally dune sand at 0.87% of the area.
Rocks and soils
Sedimentary rocks are not the only geological features in Evans County. The county is mostly covered by thin sand and red and yellow clay. The amount of sand in the area can vary from a few inches to a several feet. As in Tattnall County along the Ohoopee River, the sand in Evans County that lies along the Canoochee River is white quartz of a medium to coarse grain. There is exploitable medium-grain sand covering about 50 acres of land along the railroad above Bull Creek. The pure white sand along the Canoochee could be made into bottle glass, but is expensive to recover.
Much of Evans County is located in the Southern Coastal Plain Major Land Resource Area, with the southernmost portion of the county located in the Atlantic Coastal Flatwoods Major Land Resource Area.
River, creeks and ponds
The major body of water in Evans County is the Canoochee River, a 108-mile-long (174 km) river in southeastern Georgia. Other bodies of water are Cypress Pond; Dyess Pond; Beasley Pond; Tippins Lake; Bernard Smith Pond; I.W. DeLoach Ponds; Big Beasley Pond; and DeLoach Pond. Creeks that flow in Evans County are Grice Creek, Billy Fork Creek, Thick Creek, Mill Branch, Barnard Mill, Rocky Branch, Scott Creek, Cedar Creek, and Dry Creek.
Climate
Evans County has a mild climate, averaging 49.8 degrees in January and 82.7 degrees in July. The average annual rainfall is 48 inches, and the county has a minimum altitude of above sea level and a maximum altitude of above sea level. The county is 1.4 times below the U.S. average in historical area-adjusted tornado activity. From 1950 to 2004, only 2 injuries have been caused by tornadoes in the county; this occurred on March 29, 1974 when an F1 tornado hit the county, causing between $5,000 and $50,000 in damages.
Political and human geography
Evans County is made up of four cities: Claxton, the county seat; Bellville, Daisy, and Hagan. With a total area of the county is ranked 145th, in size, out of 159 counties in Georgia. There are 60.2 people per square mile in the area. The county is led by a board of six commissioners elected by the people and a chairman elected by the commissioners. Serving members wield both executive and legislative power in the county. The map to the right shows Evans County's location in Georgia and the location of the county's four cities, with Claxton highlighted.
Natural resources
Agriculture and water
Evans County has 199 acres of farmland and 796 acres in orchards. The average value of the agricultural products sold by farms in Evans County is $92,983. Corn and cotton are major crops in the county. Other crops planted and harvested in Evans County include soybeans, wheat, and vegetables, with additional land set aside for orchards. Evans Countians consume 269,420 gallons of water a day out of a plant capacity of 3,720,000 gallons a day. There is an elevated storage capacity of 700,000 gallons.
References | {
"pile_set_name": "Wikipedia (en)"
} |
Quantitative proteomics for identification of cancer biomarkers.
Quantitative proteomics can be used for the identification of cancer biomarkers that could be used for early detection, serve as therapeutic targets, or monitor response to treatment. Several quantitative proteomics tools are currently available to study differential expression of proteins in samples ranging from cancer cell lines to tissues to body fluids. 2-DE, which was classically used for proteomic profiling, has been coupled to fluorescence labeling for differential proteomics. Isotope labeling methods such as stable isotope labeling with amino acids in cell culture (SILAC), isotope-coded affinity tagging (ICAT), isobaric tags for relative and absolute quantitation (iTRAQ), and (18) O labeling have all been used in quantitative approaches for identification of cancer biomarkers. In addition, heavy isotope labeled peptides can be used to obtain absolute quantitative data. Most recently, label-free methods for quantitative proteomics, which have the potential of replacing isotope-labeling strategies, are becoming popular. Other emerging technologies such as protein microarrays have the potential for providing additional opportunities for biomarker identification. This review highlights commonly used methods for quantitative proteomic analysis and their advantages and limitations for cancer biomarker analysis. | {
"pile_set_name": "PubMed Abstracts"
} |
Bcl2L12 mediates effects of protease-activated receptor-2 on the pathogenesis of Th2-dominated responses of patients with ulcerative colitis.
The immune dysregulation plays an important role in the pathogenesis of ulcerative colitis (UC). Bcl2 like protein-12 (Bcl2L12) and mast cells are involved in immune dysregulation of UC. This study aims to elucidate the role of Bcl2L12 in the contribution to the pathogenesis of T helper (Th)2-biased inflammation in UC patients. The results showed that Bcl2L12 was expressed by peripheral CD4+ T cells that was associated with Th2 polarization in UC patients. Bcl2L12 mediated the protease-activated receptor-2 (PAR2)-induced IL-4 expression in CD4+ cells. Activation of PAR2 increased expression of Bcl2L12 in CD4+ T cells. Bcl2L12 mRNA decayed spontaneously in CD4+ T cells after separated from UC patients which was prevented by activating PAR2. Bcl2L12 mediated the binding between GATA3 and the Il4 promoter in CD4+ T cells. Mice with Bcl2L12 deficiency failed to induce Th2-biased inflammation in the colon mucosa. We conclude that CD4+ T cells from UC patients expressed high levels of Bcl2L12; the latter plays an important role in the development of Th2-biased inflammation in the intestine. Bcl2L12 may be a novel therapeutic target in the treatment of Th2-biased inflammation. | {
"pile_set_name": "PubMed Abstracts"
} |
Term paper help: to buy or not to buy?
The thing about term papers is that they are often so much work. Students don’t like writing their term papers because they’re in the middle of a semester with so many other midterms and classes that they don’t have time for another paper to write. Besides the work that it takes to write it, sometimes it’s more a matter of stress than time. If you just have too much to do, that makes your options clear: using a writing solution might be just the right thing for you. If you know that it’s going to be too hard to do yourself, or you can’t handle another project on your plate, you can opt to buy your term paper.
On the other hand, doing the term paper yourself has some obvious benefits. You end up with something that’s written exactly the way you want it, and you don’t have to worry about getting scammed online or worry about the writer finishing on time. Also if you choose to buy your term paper, it might not be 100% the way you want it, but you’re very likely to get a better grade. A professional writer can certainly do the paper better than a student can, even if you are good at writing. It also saves you from having to do the work.
Buying a Term Paper Online
It really depends on how busy you are, if you want to go with a writing service or not. Look at your schedule and your priorities before deciding. A few key things can change your mind one way or another, such as your other assignments, any tests coming up, or if you have access to a vehicle to go do library research for this paper or not.
Buying a term paper is usually the way most students go, because they think it’s worth the money and saves them a lot of time. As well as time, it comes back to a matter of stress; if you are in need of a break to do a hobby or just relax with some ‘me time’ then buying your paper is the way to go. Keep searching for a good writer until you find one that suits your needs perfectly, because there are so many out there and you don’t have to settle for a writer that doesn’t offer the kind of service you want. | {
"pile_set_name": "Pile-CC"
} |
Q:
PHP - Concatenate object class name
Is it possible to concatenate an object's property name?
The below doesn't seem to work..
Trying to call $node->field_presenter_en;
$lang = 'en';
$node->field_presenter_.$lang;
${$node->field_presenter_.$lang};
Thanks!
A:
Try:
$field_presenter = 'field_presenter_'.$lang;
$node->$field_presenter;
This is called variable variables. More information here:
http://php.net/manual/en/language.variables.variable.php
Edit:
The user nickb has suggested a much more elegant solution below, and I will incorporate it into this answer for easier reading (nickb: please let me know if you want me to remove this):
$node->{'field_presenter_'.$lang}
A:
$field_presenter = 'field_presenter_'.$lang;
$node->$field_presenter;
| {
"pile_set_name": "StackExchange"
} |
Salomatin
Salomatin () is a rural locality (a khutor) in Novoanninsky District, Volgograd Oblast, Russia. The population was 374 as of 2010. There are 9 streets.
References
Category:Rural localities in Volgograd Oblast | {
"pile_set_name": "Wikipedia (en)"
} |
Solution-processed flexible ZnO transparent thin-film transistors with a polymer gate dielectric fabricated by microwave heating.
We report the development of solution-processed zinc oxide (ZnO) transparent thin-film transistors (TFTs) with a poly(2-hydroxyethyl methacrylate) (PHEMA) gate dielectric on a plastic substrate. The ZnO nanorod film active layer, prepared by microwave heating, showed a highly uniform and densely packed array of large crystal size (58 nm) in the [002] direction of ZnO nanorods on the plasma-treated PHEMA. The flexible ZnO TFTs with the plasma-treated PHEMA gate dielectric exhibited an electron mobility of 1.1 cm(2) V(-1) s(-1), which was higher by a factor of approximately 8.5 than that of ZnO TFTs based on the bare PHEMA gate dielectric. | {
"pile_set_name": "PubMed Abstracts"
} |
App-Ed: Digit Will Trick You Into Saving Your Own Money
Passive-save your way to solvency
This week I learned a lot about passive saving.
Passive saving is “an alternative to being frugal,” writes Modest Money. Or, a “saving and earning [opportunity] which you can set in motion, but which continues on [its] own without your effort.” Confusing? Modest Money suggests an example of passive saving would be to “change where you shop” or “owning a rental agency” — the idea being you do something once and money just… kind of appears?
But if the concept of passive saving (free money? maybe?) seems cool to you, but you really like the Whole Foods near your apartment, there’s another way. An app called Digit has taken up the term, describing its practice of passive saving in a more practical manner: “Every few days, Digit checks your spending patterns and moves a few dollars from your checking account to your Digit account, if you can afford it.”
Is Digit paying you? No. Is Digit investing anything for you? No. So what is Digit doing?
Digit, essentially, is creating a savings account for you. Which is great if you don’t already have one. But even if you do, it’s claiming it knows better than you on how exactly to save money: “Digit automatically figures out when and how much is safe to save based on your lifestyle. Digit doesn’t require you to figure out an arbitrary amount to transfer every month.” My lifestyle, eh? Digit is claiming that by watching my spending patterns, they know how to better save me some of my own money.
Which is not totally crazy if you think about it. Like many people, if I have money in my bank account, I will try to spend it. By keeping this Digit-themed savings money out of my bank account, I can’t actually spend it (unless, of course, I choose to have it transferred back into my account, which Digit does for free and at any time).
But Digit is not your bank, so this bit is important: “All funds held within Digit are FDIC insured up to a balance of $250,000.” Meaning this isn’t some ploy to steal your money. For now, this venture capital-funded app is free to use as you please and as a place to “store” bits and pieces of your earnings in a makeshift, passively earned savings account.
People seem to really like it. I only got started a week ago and have already saved $64. A friend of mine recently posted on Facebook that she had saved $3,000 in 7 months — this, of course, is based off her earnings and what she can afford to save. It’s different for everyone! But what’s great about Digit is that it proves that everyone has the ability to save money and everyone should be doing so. | {
"pile_set_name": "Pile-CC"
} |
At the main menu or while playing the game, press Up, Up, Down, Down, Left, Right, Left, Right to unlock the "Relive Episode" option, which allows you to select any level including those that have not yet been played. If you entered the code correctly, you will hear a sound.
Infinite ammunition
At the main menu or while playing the game, press Up, Down, Left, Right, Up, Down, Left, Right. If you entered the code correctly, you will hear a sound.
Watermark 20
At the main menu or while playing the game, press Z, Z, Z, C, Z, Z, C, C, Z. If you entered the code correctly, you will hear a sound and "Watermark 20" text will be displayed. The full effect of this code remains unknown.
Relive Scene option
Successfully complete the game to unlock the "Relive Scene" option. This option allows any level to be replayed, including the "Release Therapy" bonus level.
Bonus level
Successfully complete the game to unlock a bonus level with Leo.
Familiar face
Go to the door where the guard tells you he will not let you in because you are not a familiar face. Turn around, go off the stage, and climb up the steps on your left. Then, go up the stairs on that side. Move up the stairs until reaching the top. On the right side down the hallway is a case on the wall. Inside the case is an axe. Use the axe on a person or body. Executions immediately detach a head, whereas hacking at a corpse requires a few swings. Pick up the head. Run back to the door, and equip the head as a weapon. Go to the door, and press Action. The guard will now let you in. | {
"pile_set_name": "Pile-CC"
} |
(1) Field of the Invention
The present invention relates to a culvert joint which can follow an expansion and contraction of distance or an uneven subsidence between the opposing culvert sections, more specifically to a culvert joint designed to regulate the distance between its strengthening members which bear the earth pressure.
(2) Description of the Prior Art
In the conventional joint for concrete culvert, an elastic material like rubber or synthetic resin is employed to constitute the junction of the culvert; the two ends of a short tubular flexible member are anchored around the inside surfaces of opposing culvert sections or the flexible member is bolted at the rear to suspend a part of said member, thereby minimizing the distortion in the flexible member due to difference between internal and external water pressure in the culvert. In both cases, the earth pressure acting at the junction of culvert sections is borne by a concrete mass poured behind the whole flexible member so that the flexible member simply bears the internal water pressure or the underground water pressure.
Thus, in the event of a heavy uneven subsidence of the ground or a displacement of the culvert sections in a longitudinal direction or in a direction perpendicular to the longitudinal direction during an earthquake, with the result that the gap at the junction widens remarkably, the mud and sand located around the culvert come into direct contact with the flexible member, causing a heavy earth pressure to act directly on the flexible member, whereupon a harmful distortion develops in the flexible member; not only is the water flow impeded, but the flexible member also loses its durability or is often broken.
To eliminate these troubles, various measures are taken. For instance, anchor members which have a seat for fitting the flexible member and cavities at the inside and outside of the seat are respectively provided at the opposed open ends of culvert sections to be joined together. A short tubular flexible member made of rubber or synthetic resin to smoothly absorb a heavy uneven subsidence in a state of maintaining watertightness of joint block is provided extending over the seats of the opposed anchor members, while a large number of strengthening members to withstand the water pressure and earth pressure at the internal and external positions of the flexible member and transmit these forces to said anchor members, are provided in such an arrangement that both ends of the strengthening members are held displaceable to a certain extent within the cavities of said anchor members. Thus said short tubular flexible member is wholly protected, thereby preventing it from being damaged from inside and outside by rolling stones, mud and sand, wood, iron piece or water.
In a joint having such a strengthening member, however, numerous parallel gaps must be left between the numerous strengthening members so that the displacement in a direction perpendicular to the longitudinal direction of culvert due to an uneven subsidence of adjacent culvert sections can be smoothly absorbed. Thereby said gap between the strengthening members should be appropriately maintained without moving to one side so that mud and sand around the outside of the joint blocks may not enter to the flexible member through the gaps between the strengthening members.
It is conceivable to provide a padding at the gap for the purpose of preventing entry of mud and sand through the gap, but such a padding must be a soft, elastic material like sponge rubber with a relatively low resilience, or a filler such as a plastic one like asphalt or putty, so that the resistance to deformation of the junction under progressing uneven subsidence can be minimized. However, in the junction of an underground culvert, the low resilience and plastic deformability of such fillers are likely to mean that, as the uneven subsidence progresses, the mud and sand around the junction make the distances between the strengthening members uneven and consequently cause the strengthening members to move to one side, thereby developing a gap between them which permits penetration of mud and sand and the consequent decrease in durability or damage of the flexible member as mentioned before. Thus, even when a padding is provided between the strengthening members, the regulation of their distance is important.
The penetration of mud and sand through the gap caused by displaced strengthening members restrains the free displacement of the flexible member which is required to follow the uneven subsidence of the culvert, thereby causing damage to the flexible member; when water is flowing in the culvert, the resistance to water flow increases because the smoothness of the internal surface at the junction is impaired, leading to a drop in the joint performance and in the joint durability. | {
"pile_set_name": "USPTO Backgrounds"
} |
<component name="libraryTable">
<library name="Maven: org.springframework:spring-beans:4.1.7.RELEASE">
<CLASSES>
<root url="jar://$MAVEN_REPOSITORY$/org/springframework/spring-beans/4.1.7.RELEASE/spring-beans-4.1.7.RELEASE.jar!/" />
</CLASSES>
<JAVADOC>
<root url="jar://$MAVEN_REPOSITORY$/org/springframework/spring-beans/4.1.7.RELEASE/spring-beans-4.1.7.RELEASE-javadoc.jar!/" />
</JAVADOC>
<SOURCES>
<root url="jar://$MAVEN_REPOSITORY$/org/springframework/spring-beans/4.1.7.RELEASE/spring-beans-4.1.7.RELEASE-sources.jar!/" />
</SOURCES>
</library>
</component> | {
"pile_set_name": "Github"
} |
1. Introduction {#sec1}
===============
Nonalcoholic fatty liver disease (NAFLD), the most common chronic liver disease worldwide, is one of the major causes of the fatty liver, occurring when fat is deposited in the liver in the absence of excessive alcohol intake \[[@B1], [@B2]\]. Currently, its prevalence in Asia is estimated to be 25%, similar to the incidence in many western countries (20--30%), and is even as high as 40% in westernized Asian populations \[[@B3]\]. The development of NAFLD is directly associated with enhancement in prooxidant status \[[@B4]\], proinflammatory status \[[@B5]\], and lipid content \[[@B4], [@B6]\] of the liver in mice fed with a high-fat diet (HFD) \[[@B7]\].
Lifestyle modification, including dietary changes, weight loss, and physical activity, is the initial treatment option for patients with NAFLD \[[@B8]\]. On the other hand, dietary modification may benefit the treatment of NAFLD without significant weight loss \[[@B9]\]. Accumulating clinical evidence has revealed that low levels of n-3 polyunsaturated fatty acids (n-3 PUFAs), including *α*-linolenic acid (ALA), in serum and liver tissue samples are common characteristics of patients with alcoholic disease and NAFLD \[[@B10], [@B11]\], which may be attributed to impaired bioavailability of liver n-6 and n-3 PUFAs \[[@B11]--[@B13]\]. Jump et al. \[[@B14]\] provided an in-depth rationale for the use of dietary n-3 PUFA supplements as a treatment option for NAFLD. Experimental and clinical data on n-3 PUFAs have also demonstrated that dietary supplementation with eicosapentaenoic acid (EPA, C20:5) and docosahexaenoic acid (DHA, C22:6) prevents or alleviates NAFLD \[[@B15]\]. Additionally, a recent transcriptomic study showed that fish oil protected against HFD- and high-cholesterol diet-induced NALFD by improving lipid metabolism and ameliorating hepatic inflammation in Sprague-Dawley rats \[[@B16]\]. We also reported that diet rich in DHA and/or EPA improved lipid metabolism and had anti-inflammatory effects in HFD-induced NALFD in C57BL/6J mice \[[@B17]\]. Thus, daily intake of DHA and EPA for healthy adults as well as those with coronary artery diseases and hypertriglyceridemia is strongly recommended by authority organizations. However, the precise requirement for marine n-3 PUFAs is not known \[[@B9]\].
Recently, the effects of different dietary n-6/n-3 ratio on health and disease have drawn close attention. A higher intake of n-6 FA and higher dietary n-6/n-3 FA ratio were reported in NAFLD subjects \[[@B18]\]. On the other hand, additional evidence also highlighted the role of ratios of DHA and EPA in the prevention and treatment of chronic disease in rat models \[[@B19]--[@B22]\], indicating the importance of both n-6/n-3 ratios and DHA/EPA ratios. It has been known that the intake of dietary fat alters the FA composition of plasma and various organs, including the liver \[[@B12], [@B18]\]. Lipidomics analysis has also revealed the role of different EPA/DHA ratios in the modulation of inflammation and oxidative markers in genetically obese hypertensive rats through the downregulation of the production of proinflammatory n-6 eicosanoids \[[@B23]\]. We previously showed that an oral administration of n-6/n-3 PUFAs with varying DHA/EPA ratios for 12 weeks ameliorated atherosclerosis lesions \[[@B24]\] and liver damage \[[@B17]\] in mice fed with an HFD. Data from aforementioned studies suggested the positive effects of supplementation with varying DHA/EPA ratios on the metabolic parameters of HFD-fed animals. However, there have been few studies on the protective role of n-6/n-3 PUFA supplementation with varying DHA/EPA ratios against HFD-induced liver damage and its correlation with hepatic FA composition.
Therefore, the focus of this study was to evaluate the positive effects of n-6/n-3 PUFA supplementation with varying DHA/EPA ratios on liver disease induced by an HFD as well as the associated alterations of FA composition of the liver.
2. Materials and Methods {#sec2}
========================
2.1. Animals and Diets {#sec2.1}
----------------------
Male apolipoprotein E knockout (*ApoE*^−/−^) mice at weaning (C57/BL6 background, 6 weeks old, 20 ± 2 g) were obtained from Vital River Laboratories (Beijing, China). All of the mice were housed in a humidity and temperature controlled room (relative humidity, 65--75%; temperature, 20--24°C) with a 12 h : 12 h light/dark cycle and were given *ad libitum* access to their specific diets and water. After a 1-week acclimation, the mice were randomly divided into the following five groups: (1) normal diet (ND) group (control group received an ND of basic feed 86%, casein 4%, and yolk powder 10%), (2) HFD group received HFD I (basic feed 70%, 15% lard, 1% cholesterol, casein 4%, and yolk powder 10%), and (3--5) DHA/EPA groups (2 : 1, 1 : 1, and 1 : 2) received HFD II (basic feed 75%, 10% lard, 1% cholesterol, casein 4%, and yolk powder 10%) plus mixed oil. The mixed oil (including sunflower seed, perilla, fish, and algal oils) was formulated by the previous method \[[@B24]\] for partial replacement of 5% lard, with adjustment of the n-6/n-3 ratio to 4 : 1 and with variation in the DHA/EPA ratios (2 : 1, 1 : 1, and 1 : 2). The diets were prepared according to the previous method \[[@B17], [@B24]\]. The FA profiles of oils, basic feed, and HFDs were quantified by gas chromatography \[[@B24]\]. The FA compositions of oils, basic feed, control diet, and HFDs are shown in [Table 1](#tab1){ref-type="table"}. The lipids were administered orally (1 g/kg body weight (BW)) for 12 weeks. The ND and HFD groups were given the same dose of physiological saline via intragastric administration. Their BWs were recorded once a week. The *Guide for the Care and Use of Laboratory Animals* by the National Institutes of Health (Bethesda, MD, USA) was followed during the experiments \[[@B25]\]. The animal protocol was approved by the Tongji Medical College Council on Animal Care Committee (Wuhan, China). At the end of the experiments, mice after 12 h of fasting were anesthetized with isoflurane before blood and tissue sample collections. Serum was collected from blood after agglutination and centrifugation at 4000 ×g at 4°C for 10 min and then stored at −80°C. Fresh tissue samples were fixed for histopathology determinations or were quick-frozen in liquid nitrogen for quantitative PCR (qPCR) and western blot analyses.
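As a sanity check on the blend arithmetic behind these target ratios, the short Python sketch below computes the n-6/n-3 and DHA/EPA ratios that a candidate four-oil mixture would give. It is illustrative only: the per-oil fatty-acid fractions and the trial blend weights are hypothetical placeholders, not the measured values of Table 1 or the formulation method of the cited reference.

``` python
# Minimal sketch (not the authors' formulation code): checking the n-6/n-3 and
# DHA/EPA ratios of a candidate oil blend. The per-oil fatty-acid fractions
# below are hypothetical placeholders, not the measured values from Table 1.

OILS = {
    # fraction of total fatty acids: {"n6": ..., "ALA": ..., "EPA": ..., "DHA": ...}
    "sunflower": {"n6": 0.62, "ALA": 0.00, "EPA": 0.00, "DHA": 0.00},  # hypothetical
    "perilla":   {"n6": 0.14, "ALA": 0.58, "EPA": 0.00, "DHA": 0.00},  # hypothetical
    "fish":      {"n6": 0.03, "ALA": 0.01, "EPA": 0.18, "DHA": 0.12},  # hypothetical
    "algal":     {"n6": 0.02, "ALA": 0.00, "EPA": 0.01, "DHA": 0.40},  # hypothetical
}

def blend_ratios(weights):
    """weights: {oil_name: grams per 100 g blend}; returns (n-6/n-3, DHA/EPA)."""
    totals = {"n6": 0.0, "n3": 0.0, "EPA": 0.0, "DHA": 0.0}
    for oil, grams in weights.items():
        fa = OILS[oil]
        totals["n6"] += grams * fa["n6"]
        totals["n3"] += grams * (fa["ALA"] + fa["EPA"] + fa["DHA"])
        totals["EPA"] += grams * fa["EPA"]
        totals["DHA"] += grams * fa["DHA"]
    return totals["n6"] / totals["n3"], totals["DHA"] / totals["EPA"]

# Example: a trial blend (grams of each oil per 100 g of mixed oil)
n6_n3, dha_epa = blend_ratios({"sunflower": 55, "perilla": 20, "fish": 15, "algal": 10})
print(f"n-6/n-3 = {n6_n3:.2f}, DHA/EPA = {dha_epa:.2f}")
```

In practice the blend weights would be adjusted until the printed ratios match the intended 4 : 1 n-6/n-3 target and the 2 : 1, 1 : 1, or 1 : 2 DHA/EPA target.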
2.2. Lipid Extraction and FA Analysis {#sec2.2}
-------------------------------------
Total lipid from serum or liver tissue homogenates was extracted with ice-cold chloroform/methanol (2 : 1 *v*/*v*) with 0.01% butylated hydroxytoluene. After centrifugation, the phase interface was washed with chloroform/methanol/water (3 : 48 : 47 *v*/*v*/*v*). Methyl esterification of the lipids was conducted according to the previous report \[[@B26]\]. Fatty acid methyl esters (FAMEs) were quantified using the Agilent Technologies 6890 Gas Chromatograph (Agilent Technologies Inc., Savage, MD, USA) with a flame ionization detector. Separation of the FAMEs was performed on the HP-INNOWax capillary column (30 m × 0.32 mm, 0.25 *μ*m film thickness; Agilent) using helium as the carrier gas at a constant flow of 1.5 mL/min. The samples were injected at a starting oven temperature of 50°C; the injector and detector temperatures were 250°C. The oven temperature was programmed as follows: held at 50°C for 1 min, ramped at 15°C/min to 175°C, held for 5 min, and then ramped at 1°C/min to 250°C. The FAMEs were identified by comparison with authentic standards (Nu-Chek-Prep) and were calculated as the percent area of total FAs.
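The final quantification step, expressing each FAME as a percentage of the total peak area, amounts to a simple normalization. The Python sketch below illustrates it with made-up peak areas; the numbers are not real chromatogram output.

``` python
# Minimal sketch of the percent-area calculation described above: each fatty
# acid is reported as its peak area divided by the summed area of all
# identified peaks. The peak areas below are made-up numbers, not GC output.

peak_areas = {          # arbitrary detector-area units (hypothetical)
    "C16:0": 152_300,
    "C18:1": 240_800,
    "C18:2 (n-6)": 98_500,
    "C20:5 (EPA)": 21_400,
    "C22:6 (DHA)": 33_900,
}

total_area = sum(peak_areas.values())
percent_of_total = {fa: 100.0 * area / total_area for fa, area in peak_areas.items()}

for fa, pct in percent_of_total.items():
    print(f"{fa}: {pct:.2f}% of total fatty acids")
```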
2.3. Histopathological Analysis {#sec2.3}
-------------------------------
Fresh liver slices were processed by hematoxylin and eosin (H&E) staining. Briefly, liver tissues were cut into slices and fixed, and then, the samples were dehydrated and embedded with paraffin. Paraffin-embedded tissue sections (5 *μ*m) were stained with H&E and observed under the Olympus BX50 light microscope (Olympus, Tokyo, Japan).
2.4. Measurements of Serum Parameters and Fat Liver Content {#sec2.4}
-----------------------------------------------------------
Serum total cholesterol (TC, mM), triglyceride (TG, mM), low-density lipoprotein cholesterol (LDL-C, mM), high-density lipoprotein cholesterol (HDL-C, mM) levels, and hepatic TC (mM/g protein) and TG (mM/g protein) were determined by spectrophotometric methods using the respective kits (Biosino Biotechnology Co. Ltd., Beijing, China) according to the manufacturer\'s instructions. Serum aspartate transaminase (AST, U/L), alanine transaminase (ALT, U/L), and alkaline phosphatase activities (AKP, U/L) were measured using specific diagnostic kits (Nanjing Jiancheng Corporation, Nanjing, China). Enzyme-linked immunoassay (ELISA) kits were used to assess the serum levels of tumor necrosis factor alpha (TNF-*α*, pg/mL), interleukin-1*β* (IL-1*β*, pg/mL), and adiponectin (Cloud-Clone Corp., Wuhan, China).
2.5. Analysis of Hepatic Malondialdehyde, Superoxide Dismutase, and Glutathione {#sec2.5}
-------------------------------------------------------------------------------
Hepatic malondialdehyde (MDA, *μ*M/g protein), glutathione (GSH, *μ*M/g protein), and superoxide dismutase (SOD, U/mg protein) were determined using the respective kits (Nanjing Jiancheng Corporation, Nanjing, China).
2.6. qPCR Analysis {#sec2.6}
------------------
Total RNA of mouse liver samples was extracted using the TRIzol reagent (Ambion®, Life Technologies, Austin, TX, USA) according to the manufacturer\'s instructions. Messenger RNA (mRNA) expression levels of the target genes were quantified using the SYBR Green-based Kit (Takara Bio Inc., Dalian, China) with specific primers and a real-time PCR machine for qPCR (IQ5; Bio-Rad, Hercules, CA, USA). The mRNA level of *β*-actin was used as the invariable control for quantification, and the results were calculated by the comparative 2^−ΔΔCt^ method. The sequences of the forward and reverse primers used for the detection of the target genes are listed in [Table 2](#tab2){ref-type="table"}.
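For reference, the comparative 2^−ΔΔCt^ calculation used here can be written in a few lines. The sketch below uses *β*-actin as the normalizer; the Ct values are entirely hypothetical and do not correspond to any measurement reported in this study.

``` python
# Minimal sketch of the comparative 2^(-delta delta Ct) calculation referred to
# above, with beta-actin as the invariable control. Ct values are hypothetical.

def relative_expression(ct_target, ct_actin, ct_target_ref, ct_actin_ref):
    """Fold change of a target gene in a sample versus the reference (e.g., ND) group."""
    delta_ct_sample = ct_target - ct_actin          # normalize to beta-actin
    delta_ct_ref = ct_target_ref - ct_actin_ref     # same normalization in the reference group
    delta_delta_ct = delta_ct_sample - delta_ct_ref
    return 2 ** (-delta_delta_ct)

# Example with made-up Ct values: a target gene in an HFD liver vs. the ND group
fold = relative_expression(ct_target=24.1, ct_actin=17.3,
                           ct_target_ref=26.0, ct_actin_ref=17.5)
print(f"Fold change vs. ND: {fold:.2f}")
```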
2.7. Western Blot Analysis {#sec2.7}
--------------------------
The liver tissues were homogenized in radioimmunoprecipitation assay lysis buffer (1% Triton X-100, 1% deoxycholate, and 0.1% sodium dodecyl sulfate (SDS)), and protein concentration was measured. Equal amounts of protein extracts were mixed (3 : 1, *v*/*v*) and processed in loading buffer for electrophoresis in 10% acrylamide SDS gels and subsequently electroblotted to a nitrocellulose transfer membrane (Merck Millipore, Burlington, MA, USA) using a Trans-Blot SD semidry electrophoretic transfer cell (Bio-Rad). Target proteins were probed with specific primary antibodies, and then, the bound primary antibodies were recognized with species-specific secondary antibodies. The chemiluminescence intensity of the specific proteins on the membrane was subsequently detected using the SuperSignal West Pico Chemiluminescent Substrate (Thermo Fisher Scientific, Waltham, MA, USA) and a western blotting detection system (Bio-Rad). The optical densities (OD) of the bands were quantified using the Gel-Pro 3.0 software (Biometra, Goettingen, Germany). The density of the specific protein band was corrected to eliminate background noise and normalized to that of GAPDH (Boster Biological Technology Ltd., Wuhan, China) as OD/mm^2^.
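The densitometry normalization described above reduces to background subtraction followed by division by the GAPDH signal from the same lane. A minimal sketch, using hypothetical OD readings rather than actual Gel-Pro output, is shown below.

``` python
# Minimal sketch of the densitometry normalization described above: subtract a
# background reading from each band, then express the target band relative to
# GAPDH from the same lane. All OD values below are hypothetical.

def normalized_od(target_od, gapdh_od, background_od):
    target = target_od - background_od   # background-corrected target band
    gapdh = gapdh_od - background_od     # background-corrected loading control
    return target / gapdh

# Example lane with made-up OD/mm^2 readings
print(f"target / GAPDH = {normalized_od(0.86, 1.12, 0.08):.2f}")
```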
2.8. Statistical Analysis {#sec2.8}
-------------------------
Statistical analysis was performed with the GraphPad Prism 4.0.3 software (GraphPad Prism Software Inc., San Diego, USA). Data were presented as mean ± standard error of the mean (SEM). One-way analysis of variance was performed with Fisher\'s least significant difference multiple comparison post hoc test. A *P* \< 0.05 was considered statistically significant.
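A minimal sketch of this analysis pipeline, assuming one-way ANOVA followed by Fisher's LSD implemented as pairwise t-tests on the pooled within-group variance, is shown below. The group values are made up and do not reproduce any result reported here; the study itself used GraphPad Prism rather than this code.

``` python
# Minimal sketch of the analysis described above: one-way ANOVA across the five
# diet groups followed by Fisher's LSD pairwise comparisons (pairwise t-tests
# using the pooled within-group variance). All measurements are hypothetical.

import itertools
import numpy as np
from scipy import stats

groups = {  # e.g., hepatic TG (mM/g protein); made-up values
    "ND":          [1.1, 1.3, 1.2, 1.0, 1.2],
    "HFD":         [2.9, 3.1, 2.7, 3.3, 3.0],
    "DHA/EPA 2:1": [1.8, 1.6, 1.9, 1.7, 1.8],
    "DHA/EPA 1:1": [1.9, 1.7, 2.0, 1.8, 1.9],
    "DHA/EPA 1:2": [1.7, 1.9, 1.8, 1.6, 1.8],
}

f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, P = {p_anova:.4f}")

# Pooled within-group variance (the ANOVA mean square error) for the LSD tests
n_total = sum(len(v) for v in groups.values())
k = len(groups)
mse = sum((len(v) - 1) * np.var(v, ddof=1) for v in groups.values()) / (n_total - k)

for (name_a, a), (name_b, b) in itertools.combinations(groups.items(), 2):
    diff = np.mean(a) - np.mean(b)
    se = np.sqrt(mse * (1 / len(a) + 1 / len(b)))
    t = diff / se
    p = 2 * stats.t.sf(abs(t), df=n_total - k)
    print(f"{name_a} vs {name_b}: P = {p:.4f}")
```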
3. Results {#sec3}
==========
3.1. Dietary DHA/EPA Reduces HFD-Induced Liver Injury {#sec3.1}
-----------------------------------------------------
Treatment with DHA/EPA did not change the BWs and liver weights in the study. The mice in the five dietary groups showed similar initial BWs, final BWs, and liver/BW ratio ([Table 3](#tab3){ref-type="table"}). The hepatic histological changes were observed by light microscopy of H&E-stained tissue sections ([Figure 1](#fig1){ref-type="fig"}). The main change that occurred in the liver from the HFD group was macrovesicular steatosis, as determined by the observation of lipid vesicles in the cytosolic compartment, along with neutrophil and lymphocyte infiltration. However, DHA/EPA-supplemented mice had much fewer and smaller hepatic fatty vesicles than the HFD group mice.
As illustrated in [Table 4](#tab4){ref-type="table"}, compared with ND-fed mice, serum AST, ALT, and AKP levels were higher (*P* \< 0.05) in HFD-fed mice. However, DHA/EPA supplementation at the various ratios significantly alleviated HFD-induced liver injury by reducing serum levels of AST (ranging from 71.6% to 86.9%), ALT (ranging from 66.6% to 80.7%), and AKP (ranging from 22.4% to 53.6%). No significant change was observed in the activities of serum aminotransferases among the DHA/EPA groups; AST, ALT, and AKP levels were highest in the DHA/EPA 1 : 2 group.
Hepatic MDA was significantly boosted in HFD-fed mice compared to that in the ND-fed mice ([Table 4](#tab4){ref-type="table"}). The MDA production was markedly decreased by DHA/EPA supplementation. However, the inhibitory effects of different DHA/EPA ratios on MDA production were not significantly different ([Table 4](#tab4){ref-type="table"}). In contrast to that in the HFD group, serum levels of GSH (increased more than 2-fold) and SOD (increased by 18.5%) were notably elevated in DHA/EPA-treated mice ([Table 4](#tab4){ref-type="table"}). However, no significant differences of MDA, SOD, and GSH among the three DHA/EPA ratios were observed.
3.2. Dietary DHA/EPA Changes FA Composition of the Serum and Liver {#sec3.2}
------------------------------------------------------------------
FA compositions of the serum and liver samples in mice after the 12-week feeding of the HFD are shown in Tables [5](#tab5){ref-type="table"} and [6](#tab6){ref-type="table"}, respectively. When the FA compositions of total liver lipids were compared, a significant decrease (*P* \< 0.05) of total saturated fatty acids (SFAs) was observed in the HFD group compared with that in the ND group. This trend occurred in the abundance of total PUFAs (26.7% difference) (*P* \< 0.001), including total n-6 and n-3 PUFAs (16.6% and 54.7% difference, respectively) with an 84.9% increase in the ratio of n-6/n-3. Also, the content of total MUFAs was significantly increased (*P* \< 0.01) due to significant increases in 16 : 1 (palmitoleic acid) and C18:1 (oleic acid; 130% difference).
Among the varying ratios of DHA/EPA groups, we found an increase in SFAs (DHA/EPA 2 : 1 group, 19.6%; DHA/EPA 1 : 1 group, 14.5%), PUFAs n-6 series (DHA/EPA 2 : 1 group, 11.1%; DHA/EPA 1 : 1 group, 9.1%; and DHA/EPA 1 : 2 group, 17.9%), and PUFA n-3 series (DHA/EPA 2 : 1 group, 166.4%; DHA/EPA 1 : 1 group, 151.7%; and DHA/EPA 1 : 2 group, 126.3%) in the liver compared to the HFD group. Also, the amount of MUFAs (DHA/EPA 2 : 1 group, 49.7%; DHA/EPA 1 : 1 group, 41.8%; and DHA/EPA 1 : 2 group, 35.3%) and the ratio of n-6/n-3 (DHA/EPA 2 : 1 group, 58.3%; DHA/EPA 1 : 1 group, 55.9%; and DHA/EPA 1 : 2 group, 48.1%) showed a marked decrease after DHA/EPA supplementation. Among the three DHA/EPA groups, DHA/EPA 1 : 2 group had the lowest C18:0 and C20:1 concentration and the highest C18:2 and n-6 PUFA concentration. The DHA/EPA 2 : 1 group showed a tendency to raise n-3 PUFA concentration and lower SFAs, C20:5 and C22:0 concentrations, and n-6/n-3 ratio.
Concerning serum FA composition, the same trend was observed for the amount of MUFAs, PUFAs n-6 series, PUFAs n-3 series, and the ratio of n-6/n-3 in the three DHA/EPA groups compared with the HFD group. However, no significant difference among the three DHA/EPA ratios was found for the amount of SFAs, MUFAs, PUFAs n-6 series, and the ratio of n-6/n-3.
3.3. Dietary DHA/EPA Ameliorates HFD-Induced Hepatic Inflammation {#sec3.3}
-----------------------------------------------------------------
The serum concentrations of both IL-1*β* and TNF-*α* were significantly lower in the three DHA/EPA-treated groups than those in the HFD group ([Figure 2](#fig2){ref-type="fig"}). In DHA/EPA-treated mice, the TNF-*α* level decreased by more than 30%. A similar trend was observed for serum levels of IL-1*β*. Consistent with findings for serum levels of proinflammatory cytokines, the data of qPCR analysis demonstrated significantly reduced hepatic expression levels of IL-6, IL-1*β*, TNF-*α*, monocyte chemoattractant protein-1 (MCP-1), vascular cell adhesion molecule-1 (VCAM-1), and intercellular adhesion molecule-1 (ICAM-1) in DHA/EPA-treated mice compared to those in the HFD-treated mice ([Figure 2](#fig2){ref-type="fig"}). The mRNA expression levels of the anti-inflammatory cytokine IL-10 were increased by 51.0%, 47.8%, and 38.0% in mice treated with DHA/EPA ratios of 1 : 2, 1 : 1, and 2 : 1, respectively.
3.4. Dietary DHA/EPA Improves HFD-Induced Lipid Dyshomeostasis in Liver Tissue {#sec3.4}
------------------------------------------------------------------------------
DHA/EPA treatment for 12 weeks resulted in a significant reduction in serum levels of TC (reduced by 46.9--72%), TG (reduced by 45.4--75.6%), LDL-C (reduced by 38.3--63.7%), and ox-LDL (reduced by 36.2--38.3%) compared to the HFD group ([Table 4](#tab4){ref-type="table"}). Although the lipid-lowering effects of DHA/EPA on the hepatic lipid level did not differ significantly among the three DHA/EPA groups, daily DHA/EPA treatment alleviated hepatic fat accumulation. Moreover, the three groups treated with DHA/EPA had higher serum levels of HDL-C (increased by 61.5--169.2%) and adiponectin (increased by 27.4--141%) than the HFD group did. In particular, the DHA/EPA 1 : 2 group had the lowest serum TC, TG, and LDL levels and the highest adiponectin level among the three DHA/EPA groups.
As illustrated in [Figure 3](#fig3){ref-type="fig"}, 66.5%, 69.7%, and 58.0% increases in the mRNA expression of ATP-binding cassette transporter A1 (ABCA1) were, respectively, observed in the DHA/EPA 1 : 2, DHA/EPA 1 : 1, and DHA/EPA 2 : 1 groups, compared with that in the HFD-treated mice ([Figure 3](#fig3){ref-type="fig"}). No significant difference in the ABCA1 expression level was found among the DHA/EPA groups. Compared to that in the HFD group, the same trend was observed in ATP-binding cassette transporter G1 (ABCG1) and acyl-coenzyme A:cholesterol acyltransferase (ACAT-1) in the DHA/EPA groups, although only the DHA/EPA 1 : 1 group showed a significant increase in lysosomal acid lipase (LAL) (*P* \< 0.05). In liver tissue, cluster of differentiation 36 (CD36), macrophage scavenger receptor 1 (MSR-1), and lectin-like oxidized low-density lipoprotein receptor 1 (LOX-1) expression levels were significantly downregulated at both the mRNA and protein levels in DHA/EPA-treated mice compared to that in the HFD-treated mice. Additionally, the feeding of the HFD significantly downregulated the protein levels of peroxisome proliferator-activated receptor alpha (PPAR*α*) and adenosine monophosphate-activated protein kinase (AMPK) and upregulated the protein levels of sterol regulatory element-binding protein 1c (SREBP-1c), compared with that of the ND, which were partially reversed with the supplementation of dietary DHA/EPA ([Figure 3](#fig3){ref-type="fig"}).
4. Discussion {#sec4}
=============
Dietary n-3 PUFAs can reduce hepatic inflammation, fibrosis, and steatosis, decrease plasma TG concentrations, and regulate hepatic fatty acid and TG metabolism in NAFLD. We previously created a mouse model in which NAFLD, lipid disorder, oxidative stress, and inflammation were induced by an HFD in C57BL/6J mice \[[@B17]\]. Our findings showed that the consumption of diets with various ratios of DHA/EPA (2 : 1, 1 : 1, and 1 : 2) ameliorated liver steatosis in mice. This is probably due to the repletion of hepatic total n-3 PUFA content and decrease of the n-6/n-3 ratio, concomitant with a reduction of oxidative stress, proinflammatory cytokine secretion, and hepatic lipid content. ApoE is a class of proteins involved in the metabolism of fats in humans and mice. Its absence predisposes to metabolic syndrome (e.g., Alzheimer\'s disease, atherosclerosis, and obesity) and might be associated with NAFLD \[[@B27]\]. Therefore, ApoE^−/−^ mice have been extensively employed as models for metabolic syndrome and NAFLD in recent years \[[@B28], [@B29]\].
It has been reported that consuming DHA and EPA directly from foods and/or dietary supplements is the only practical way to increase the levels of these FAs in the body. The contents of DHA and EPA in the serum and liver tissue of DHA/EPA-treated mice were notably increased in our study. It is also well known that dietary fat, including DHA and EPA, alters the FA composition of various organs \[[@B12], [@B18]\]. Our results showed that increased MUFAs and decreased SFAs, n-6 PUFAs, and n-3 PUFAs, with an increase in the n-6/n-3 ratio, were observed in the liver tissue of HFD-fed mice compared to that of ND-fed mice. This phenomenon is most likely due to the increased Δ-9 desaturase activity \[[@B30], [@B31]\] and the defective pathway for desaturation and elongation of the essential precursors linoleic acid and ALA \[[@B32]\]. Our findings are in agreement with the observations of other authors \[[@B13], [@B33]\]. Interestingly, these changes were either reversed or normalized to the control levels in mice fed the diets supplemented with DHA/EPA (2 : 1, 1 : 1, and 1 : 2). In our study, the DHA/EPA 2 : 1 group showed a tendency to raise DHA and n-3 PUFA concentration and lower the n-6/n-3 ratio in the liver. On the other hand, the DHA/EPA 1 : 2 group showed a tendency to raise EPA, n-6 PUFA concentration, and the n-6/n-3 ratio in the liver. The results suggest that DHA/EPA supplementation moderately attenuated the HFD-induced NAFLD, at least partly due to the alteration of FA composition of serum and liver tissue.
The impairment of normal redox homeostasis and the consequent accumulation of oxidized biomolecules have been linked to the onset and/or development of a large variety of diet-induced diseases. An established source of oxidative stress is reactive oxygen species (ROS), which are generated by free FA metabolism and can attack PUFAs and initiate lipid peroxidation within cells. The formation of aldehyde by-products during lipid peroxidation, including MDA, activates the inflammatory response, propagating tissue injury and activating cellular stress signaling pathways. We previously found that the supplementation of various DHA/EPA ratios with an n-6/n-3 ratio of 4 : 1 reversed HFD-induced oxidative stress, as evidenced by the lower content of MDA. These effects are correlated with the induction of serum SOD activity and enhancement in serum levels of GSH and serum total antioxidant capacity, although no significant differences were observed among the DHA/EPA groups (2 : 1, 1 : 1, and 1 : 2) \[[@B24]\]. However, Mendez et al. \[[@B21]\] revealed significant differences in the carbonylation status of albumin in plasma among the DHA/EPA dietary groups, and the EPA : DHA 1 : 1 ratio exhibited the lowest protein oxidation scores. In this study, the general changes in hepatic MDA, SOD, and GSH levels were similar to those observed in our previous report \[[@B24]\]. The difference between the results of our study and those of Mendez et al. may lie in the different FA compositions in the diets. HFD-induced liver oxidative stress is associated with progressively increasing availability and oxidation of FAs in the liver \[[@B34]\] and/or TNF-*α*-induced enhancement in mitochondrial ROS production \[[@B35]\], while the DHA/EPA-reversed liver oxidative stress is possibly related to liver n-6 PUFAs and n-3 PUFA repletion with a decreased n-6/n-3 ratio \[[@B36]\].
Dysfunction of fat storage in adipose tissue may increase adipocyte lipolysis, subsequently causing excessive adipose-derived fatty acid influx into the liver, eventually resulting in hepatic steatosis \[[@B37]\]. By upregulating genes encoding proteins involved in FA oxidation and downregulating genes encoding proteins involved in lipid synthesis, n-3 PUFAs provide their protective effects on NAFLD. SREBP-1c, the key lipogenic transcription factor that is highly expressed in the liver, increases the expression of genes connected with fatty acid and TG synthesis. Our recent study showed that the treatment of C57BL/6J mice with various DHA/EPA ratios repressed SREBP-1c-mediated downregulation of FA synthase, stearoyl desaturase-1, and acetyl-CoA carboxylase with a concomitant reduction in *de novo* lipogenesis and activated PPAR*α*-mediated upregulation of carnitine palmitoyl transferase-1 and acyl-CoA oxidase expression with a parallel enhancement in FA oxidation \[[@B17]\]. As one of the critical adipokines secreted by endocrine organs, adiponectin modulates hepatic lipid homeostasis towards a reduction of lipid content \[[@B10]\]. Activated adiponectin signaling leads to the activation of the AMPK pathway, which modulates hepatic lipid metabolism by simultaneously inhibiting *de novo* lipogenesis and stimulating FA *β*-oxidation \[[@B38]\]. In this study, the reduction of hepatic lipid accumulation in DHA/EPA-treated mice may be attributed to the elevated serum levels of adiponectin. Additionally, mice treated with DHA/EPA showed significant diminution in total liver fat content compared to untreated animals, a finding that may be related to changes in the pattern of lipid metabolism in the liver. To explain the potential mechanism causing the changes, proteins involved in cholesterol efflux (ABCA1 and ABCG1), cholesterol esterification (ACAT1), cholesterol lipolysis (LAL), and cholesterol uptake (CD36, MSR-1, and LOX-1) were examined. This is supported by the higher mRNA expression of the ABCA1, ABCG1, LAL, and ACAT-1 and the lower expression of CD36, MSR-1, and LOX-1. We demonstrated that diets lacking DHA and EPA have no effects on the expression of ABCA1, ABCG1, and LAL, which indicated that DHA and EPA are much more likely to regulate cholesterol homeostasis by increasing cholesterol efflux and lipolysis \[[@B24]\].
In both NAFLD patients and animals subjected to HFD, hepatic proinflammatory status is characterized by Kupffer cell activation, an increased number of hepatic neutrophils, and higher levels of serum transaminases, TNF-*α*, IL-1*β*, and IL-6 \[[@B39]\]. Our recent study showed that serum levels of ALT, AST, TNF-*α*, IL-1*β*, and IL-6 in C57BL/6J mice were all significantly lower in the DHA/EPA groups compared to those in the HFD group \[[@B17]\]. In agreement with these findings, the data presented here show that transaminase activity, TNF-*α*, and IL-1 *β* levels in serum and TNF-*α*, IL-1 *β*, IL-6, MCP-1, VCAM-1, and ICAM-1 mRNA expression in the liver were higher in HFD-fed ApoE^−/−^ mice compared to the controls, a condition that was reverted upon supplementation with various DHA/EPA ratios. Furthermore, mRNA expression of the anti-inflammatory cytokine IL-10 was significantly upregulated by DHA/EPA supplementation. Activating protein-1, including c-Jun and c-Fos, is an important signal transduction pathway component of proinflammatory mediator expression and is independent of NF-*κ*B. We previously found that the consumption of DHA/EPA significantly suppressed the expression of c-Jun and c-Fos protein and their respective genes. Additionally, the critical role of PPAR*α* in preventing fat-induced nonalcoholic steatohepatitis by alleviating liver steatosis, oxidative stress, and inflammation has been proven \[[@B40]\]. The underlying mechanisms by which n-3 PUFAs protected against HFD-induced liver steatosis are probably that n-3 PUFA-activated PPAR*α* interact with proinflammatory factor NF-*κ*B p65 with the formation of inactive PPAR*α*/NF-*κ*B p65 complexes \[[@B41]\] and the suppression of proinflammatory cytokine formation and secretion \[[@B7]\]. Moreover, DHA had a greater suppressive effect than EPA on an alcohol/high-fat diet-induced hepatic inflammation and ROS generation by increasing adiponectin production and secretion \[[@B42], [@B43]\], which has strong cellular protective properties, acting through the AMPK-activated mechanism \[[@B44]\]. In this study, DHA/EPA supplementation reversed the decrease of hepatic PPAR*α* expression in HFD-fed mice. Although only the DHA/EPA 2 : 1 group had significantly increased PPAR*α* expression, the DHA/EPA 2 : 1 group had the highest serum levels of adiponectin, the lowest hepatic mRNA expression of proinflammatory cytokines, and the highest protein levels of PPAR*α* and AMPK, which may be due to the higher ratio of DHA in this group. These results suggest that the alleviation of inflammatory responses in DHA/EPA-treated mice may correlate with an increase in serum levels of adiponectin and hepatic protein levels of PPAR*α* and AMPK.
5. Conclusion {#sec5}
=============
In addition to reducing oxidative stress, decreasing proinflammatory cytokine secretion, and improving hepatic lipid metabolism, a DHA/EPA-enriched diet with an n-6/n-3 ratio of 4 : 1 may reverse HFD-induced NAFLD to some extent by increasing n-6 and n-3 PUFAs and decreasing the amount of MUFAs and the n-6/n-3 ratio. Although no significant difference was found in the expression of inflammation- and hepatic lipid metabolism-related genes in the three DHA/EPA groups, the DHA/EPA 2 : 1 group showed the highest DHA and n-3 PUFA concentration and the DHA/EPA 1 : 2 group showed the highest EPA, n-6 PUFA concentration, and n-6/n-3 ratio.
Acknowledgments
===============

This work was supported by the National Key Research and Development Program of China (no. 2017YFC1600500), the National High-Tech Research and Development Projects (no. 2010AA023003), the National Natural Science Foundation of China (no. 31201351), the Young Elite Scientists Sponsorship Program by CAST (China Association for Science and Technology) (no. YESS20160164), and the 2015 Chinese Nutrition Society DSM Research Fund. We would like to thank LetPub for English language editing.
Data Availability
=================
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
=====================
The authors declare that there are no conflicts of interests regarding the publication of this paper.
![Effects of the supplementation of various DHA/EPA ratios on hepatic lipid metabolism. H&E staining of liver sections in each group, followed by observation under a light microscope (magnification 200x). Notice the fatty vesicles (black arrow) and lymphocyte infiltration (red arrow). (a) Normal diet (ND) group, (b) high-fat diet (HFD) group, (c) DHA/EPA 2 : 1 (DHA/EPA = 2 : 1) group, (d) DHA/EPA 1 : 1 (DHA/EPA = 1 : 1) group, and (e) DHA/EPA 1 : 2 (DHA/EPA = 1 : 2) group.](OMCL2018-6256802.001){#fig1}
![Effects of the supplementation of various DHA/EPA ratios on serum and hepatic inflammatory cytokine expression. (a) Serum inflammatory cytokines (*n* = 8). (b) Hepatic inflammatory cytokine expression (*n* = 6). The mRNA expression of *β*-actin was quantified as the endogenous control. (A) *P* \< 0.05 versus the ND group; (B) *P* \< 0.05 versus the HFD group; (C) *P* \< 0.05 versus the DHA/EPA = 2 : 1 group; (D) *P* \< 0.05 versus the DHA/EPA = 1 : 1 group.](OMCL2018-6256802.002){#fig2}
![Effects of the supplementation of various DHA/EPA ratios on hepatic lipid metabolism. (a) The mRNA expression of ABCA1, ABCG1, ACAT-1, LAL, CD36, MSR-1, and LOX-1 in liver tissues, as measured by qPCR (*n* = 6). (b, c) Protein expression of PPAR*α*, SREBP-1c, AMPK, CD36, MSR-1, and LOX-1 in liver tissues, as measured by western blotting (*n* = 3--4). (A) *P* \< 0.05 versus the ND group; (B) *P* \< 0.05 versus the HFD group; (C) *P* \< 0.05 versus the DHA/EPA = 2 : 1 group; (D) *P* \< 0.05 versus the DHA/EPA = 1 : 1 group.](OMCL2018-6256802.003){#fig3}
######
Fatty acid composition of oils and feed supplemented to mice.
Fatty acid mg per 100 mg total fatty acid
------------ -------------------------------- ------ ------ ------ ------ ------- ------- ------ ------ ------
C14:0 0.1 0 0.3 8.7 0 0 0 1.4 0.8 0.1
C14:1 0 0 0.2 0.3 0 0 0 0.1 0.1 0.1
C15:0 0 0 0.5 0.2 0 0 0 0.1 0.1 0.1
C16:0 6.5 0.4 10.1 15.8 27.0 42.0 43.7 8.6 8.1 8.0
C16:1 0.1 0.1 4.7 3.5 0 1.8 1.7 1.1 1.2 1.5
C17:0 0 0 0.1 0.3 0 0 0 0.1 0.1 0
C17:1 0 0 0.1 0 0 0 0 0 0 0
C18:0 5.2 0 2.8 0 4.7 0.2 0.3 4.4 4.6 4.9
C18:1 24.9 10.1 7.9 13.5 21.2 16.5 15.6 22.2 22.4 22.2
C18:2 60.7 15.4 1.9 1.6 36.9 21.5 21.7 48.4 49.3 49.9
C18:3 0 73.3 2.0 0 0 16.0 15.3 0.2 0.4 0.2
C20:1 0 0 0.1 0.3 0 0.13 0.12 0.1 0.1 0
C20:2 0 0 0.2 0 0 0 0 0 0 0.1
C20:4 0 0 1.5 0.5 0.5 0.19 0.20 0.2 0.3 0.4
C20:5 0.2 0 35.1 0.0 0 0 0 4.3 6.2 8.1
C22:1 0 0 0.3 3.5 0 0 0 0.5 0.3 0.1
C22:5 0 0 2.4 0.0 0 0 0 0.3 0.4 0.6
C22:6 0 0 18.2 40.1 0 0 0 8.0 6.0 4.3
∑SATs 11.8 0.4 13.8 25 31.7 42.2 44 14.6 13.7 13.1
∑MUFAs 25 10.2 13.3 21.1 21.2 18.43 17.42 24 24.1 23.9
∑PUFAs 60.9 88.7 61.3 42.2 37.4 37.69 37.2 61.4 62.6 63.6
∑n-6 60.7 15.4 3.4 2.1 36.9 21.5 21.7 48.6 49.6 50.3
∑n-3 0.2 73.3 57.7 40.1 0 16.0 15.3 12.8 13.0 13.2
n-6/n-3 1.3 1.4 3.9 3.8 3.8
EPA/DHA 0 0 1.9 0 0 0 0 1.9 1.0 0.5
######
Quantitative PCR primer sequences.
Gene Forward primer 5′--3′ Reverse primer 5′--3′
----------- ----------------------- -----------------------
IL-6 TCCAGTTGCCTTCTTGGGAC AGTCTCCTCTCCGGACTTGT
IL-10 GCTGCCTGCTCTTACTGACT CTGGGAAGTGGGTGCAGTTA
IL-1*β* TGCCACCTTTTGACAGTGATG TGATGTGCTGCTGCGAGATT
TNF-*α* ATGGCCTCCCTCTCATCAGT TTTGCTACGACGTGGGCTAC
MCP-1 TATTGGCTGGACCAGATGCG CCGGACGTGAATCTTCTGCT
VCAM-1 CTGGGAAGCTGGAACGAAGT GCCAAACACTTGACCGTGAC
ICAM-1 TATGGCAACGACTCCTTCT CATTCAGCGTCACCTTGG
CD36 CGGGCCACGTAGAAAACACT CAGCCAGGACTGCACCAATA
MSR-1 GACTTCGTCATCCTGCTCAAT GCTGTCGTTCTTCTCATCCTC
LOX-1 TCACCTGCTCCCTGTCCTT GGTTCTTTGCCTCAATGCC
ABCA-1 CGACCATGAAAGTGACACGC AGCACATAGGTCAGCTCGTG
ABCG-1 AGAGCTGTGTGCTGTCAGTC AGCAGGTCTCAGGGTCTAGG
LAL CCCACCAAGTAGGTGTAGGC GAGTTGCATCGGGAGTGGTC
ACAT-1 CCAATGCCAGCACACTGAAC TCTACGGCAGCATCAGCAAA
*β*-Actin TTCGTTGCCGGTCCACACCC GCTTTGCACATGCCGGAGCC
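The figure legends above note that *β*-actin expression was quantified as the endogenous control for these qPCR panels. As an illustration only (the excerpt does not state the quantification formula, so the conventional 2^−ΔΔCt^ approach is assumed and the Ct values below are invented placeholders), the relative-expression step could be sketched as follows:

    # Hedged sketch of relative qPCR quantification by the 2^(-ddCt) method.
    # Assumption: each target gene is normalized to beta-actin and then to the
    # ND (normal diet) group; all Ct values here are made-up placeholders.

    def relative_expression(ct_target, ct_control, ct_target_ref, ct_control_ref):
        """Fold change of a target gene versus the reference (ND) group."""
        d_ct_sample = ct_target - ct_control            # normalize to beta-actin
        d_ct_reference = ct_target_ref - ct_control_ref
        dd_ct = d_ct_sample - d_ct_reference            # normalize to ND group
        return 2 ** (-dd_ct)

    if __name__ == "__main__":
        # e.g. a TNF-alpha measurement in an HFD liver sample vs. the ND group
        fold = relative_expression(ct_target=24.1, ct_control=17.8,
                                   ct_target_ref=26.0, ct_control_ref=17.9)
        print(f"TNF-alpha fold change vs. ND: {fold:.2f}")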
######
Effects of DHA/EPA supplementation on body and liver weights in each group.
Initial weight (g) Final weight (g) Weight gain (g) Liver weight (g) Liver ratio to weight (%)
----------------- -------------------- ------------------ ----------------- ------------------ ---------------------------
ND 20.6 ± 1.2 25.4 ± 2.8 4.8 ± 3.0 1.03 ± 0.23 4.2 ± 0.4
HFD 20.5 ± 1.1 27.0 ± 2.5 6.5 ± 2.7 1.15 ± 0.17 4.5 ± 0.4
DHA/EPA = 2 : 1 20.8 ± 1.0 26.8 ± 2.5 6.0 ± 2.5 1.10 ± 0.21 4.3 ± 0.4
DHA/EPA = 1 : 1 20.7 ± 1.2 25.8 ± 2.6 4.9 ± 2.6 1.04 ± 0.20 4.2 ± 0.4
DHA/EPA = 1 : 2 20.3 ± 1.5 25.6 ± 2.4 5.3 ± 2.5 1.06 ± 0.15 4.3 ± 0.3
Data are given as the mean ± SEM, *n* = 10.
######
General and biochemical parameters in serum and liver tissues.
ND HFD DHA/EPA = 2 : 1 DHA/EPA = 1 : 1 DHA/EPA = 1 : 2
----------------------- ---------------- ------------------- --------------------- --------------------- ----------------------
Serum parameters
TC (mM) 9.50 ± 0.46 19.38 ± 0.66^a^ 5.43 ± 0.52^a,b^ 7.82 ± 0.84^b,c^ 10.29 ± 0.31^b,c,d^
TG (mM) 1.19 ± 0.05 2.38 ± 0.24^a^ 0.58 ± 0.05^a,b^ 0.75 ± 0.08^a,b^ 1.30 ± 0.07^b,c,d^
LDL (mM) 3.51 ± 0.19 7.03 ± 0.46^a^ 2.55 ± 0.43^b^ 4.17 ± 0.40^b,c^ 4.34 ± 0.17^a,b,c^
HDL (mM) 0.30 ± 0.03 0.13 ± 0.03^a^ 0.26 ± 0.03^a,b^ 0.35 ± 0.05^b^ 0.21 ± 0.01^a,b,d^
Adiponectin (pg/mg) 159.76 ± 23.19 81.64 ± 8.36^a^ 196.77 ± 18.68^b^ 114.91 ± 14.16^c^ 103.97 ± 7.43^b,c^
OX-LDL (*μ*g/L) 223.46 ± 25.32 269.00 ± 14.73 171.58 ± 8.58^b^ 165.90 ± 8.29^b^ 165.89 ± 10.76^a,b^
AST (U/L) 143.79 ± 21.97 487.5 ± 95.19^a^ 63.62 ± 7.36^a,b^ 110.23 ± 13.31^b,c^ 138.32 ± 28.99^b,c^
ALT (U/L) 72.92 ± 9.06 210.82 ± 23.72^a^ 40.69 ± 4.88^a,b^ 51.67 ± 3.05^a,b^ 93.06 ± 13.03^b,c,d^
AKP (U/L) 48.94 ± 5.18 90.66 ± 7.22^a^ 42.08 ± 3.50^b^ 58.93 ± 5.92^b,c^ 70.34 ± 4.55^a,b,c^
Liver parameters
TC (mM/g protein) 74.94 ± 3.62 144.57 ± 4.72^a^ 73.81 ± 4.22^b^ 83.72 ± 3.30^b,c^ 77.59 ± 6.03^b^
TG (mM/g protein) 204.01 ± 25.74 231.19 ± 14.54 125.21 ± 14.26^a,b^ 114.81 ± 7.32^a,b^ 160.34 ± 15.76
MDA (*μ*M/g protein) 1.75 ± 0.17 2.31 ± 0.18^a^ 1.81 ± 0.14^b^ 1.73 ± 0.07^b^ 1.90 ± 0.08
SOD (U/mg protein) 6.2 ± 0.32 5.79 ± 0.16 6.9 ± 0.14^b^ 6.85 ± 0.03^a,b^ 6.86 ± 0.26^b^
GSH (*μ*M/g protein) 19.46 ± 2.37 10.8 ± 1.57^a^ 21.9 ± 2.59^b^ 23.81 ± 1.86^b^ 24.33 ± 2.69^b^
Data are given as mean ± SEM, *n* = 8. ^a^*P* \< 0.05 versus the ND group; ^b^*P* \< 0.05 versus the HFD group; ^c^*P* \< 0.05 versus the DHA/EPA = 2 : 1 group; ^d^*P* \< 0.05 versus the DHA/EPA = 1 : 1 group.
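The footnote above describes the convention used throughout these tables: values are reported as mean ± SEM and superscript letters flag pairwise differences at *P* \< 0.05 against the ND, HFD, and DHA/EPA groups. The excerpt does not name the exact statistical test, so the short sketch below assumes a one-way ANOVA followed by Welch pairwise comparisons on invented numbers, purely to illustrate how such annotations could be produced:

    # Illustrative only: made-up measurements and an assumed test procedure
    # (one-way ANOVA, then Welch t-tests versus the ND group) to show how the
    # "mean +/- SEM" values and the superscript significance flags might arise.
    import numpy as np
    from scipy import stats

    groups = {
        "ND":          [9.1, 9.8, 9.4, 9.6, 9.3, 9.7, 9.5, 9.6],
        "HFD":         [19.0, 19.8, 19.2, 19.6, 19.1, 19.5, 19.4, 19.3],
        "DHA/EPA 2:1": [5.2, 5.6, 5.3, 5.5, 5.4, 5.5, 5.3, 5.6],
    }

    # Mean +/- SEM, as reported in the tables
    for name, values in groups.items():
        arr = np.asarray(values, dtype=float)
        print(f"{name}: {arr.mean():.2f} +/- {stats.sem(arr):.2f}")

    # Overall one-way ANOVA across groups
    f_stat, p_overall = stats.f_oneway(*groups.values())
    print(f"one-way ANOVA: P = {p_overall:.3g}")

    # Pairwise comparisons versus the ND group (superscript 'a' in the tables)
    nd = groups["ND"]
    for name, values in groups.items():
        if name == "ND":
            continue
        t, p = stats.ttest_ind(values, nd, equal_var=False)
        flag = "a" if p < 0.05 else ""   # 'a' marks P < 0.05 versus the ND group
        print(f"{name} vs ND: P = {p:.3g} {flag}")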
######
Fatty acid composition (%) of the serum of mice during the experimental period.
Serum fatty acid ND (*n* = 5) HFD (*n* = 5) DHA/EPA = 2 : 1 (*n* = 4) DHA/EPA = 1 : 1 (*n* = 5) DHA/EPA = 1 : 2 (*n* = 5)
------------------ ---------------- ------------------- --------------------------- --------------------------- ---------------------------
C16:0 22.881 ± 0.863 23.293 ± 0.271 22.759 ± 0.762 22.068 ± 0.763 20.838 ± 1.006
C16:1 1.069 ± 0.117 1.473 ± 0.145^a^ 0.773 ± 0.287^b^ 0.65 ± 0.168^b^ 0.666 ± 0.174^b^
C18:0 8.151 ± 0.345 11.729 ± 0.440^a^ 9.163 ± 0.510^b^ 9.071 ± 0.336^b^ 8.871 ± 0.576^b^
C18:1 16.136 ± 0.603 26.315 ± 0.857^a^ 17.374 ± 1.279^b^ 16.886 ± 1.21^b^ 16.174 ± 1.967^b^
C18:2 30.819 ± 1.416 20.912 ± 0.605^a^ 30.538 ± 1.028^b^ 30.515 ± 1.039^b^ 28.062 ± 1.046^a,b^
C18:3 0.00 ± 0.00 0.00 ± 0.00 0.04 ± 0.08 0.00 ± 0.00 0.00 ± 0.00
C19:0 0.836 ± 0.040 0.538 ± 0.137 0.500 ± 0.168 0.266 ± 0.1646^a^ 0.260 ± 0.162^a^
C20:0 0.112 ± 0.112 0.082 ± 0.082 0.133 ± 0.133 0.098 ± 0.098 0.00 ± 0.00
C20:1 0.106 ± 0.106 0.228 ± 0.140 0.748 ± 0.329 0.416 ± 0.289 0.424 ± 0.309
C20:4 4.265 ± 0.272 7.177 ± 0.458^a^ 3.67 ± 0.132^b^ 3.946 ± 0.126^b^ 4.242 ± 0.184^b^
C22:0 2.01 ± 0.059 0.660 ± 0.093 4.018 ± 1.264^a,b^ 4.383 ± 0.518^a,b^ 5.472 ± 0.775^a,b^
C20:5 0.752 ± 0.313 0.00 ± 0.00^a^ 0.275 ± 0.166 0.102 ± 0.102^a^ 0.130 ± 0.130^a^
C22:6 7.028 ± 0.305 3.555 ± 0.221^a^ 8.389 ± 0.371^a,b^ 8.347 ± 0.265^a,b^ 7.358 ± 0.429^b,c,d^
∑SFAs 33.987 ± 1.153 36.305 ± 0.321 36.575 ± 0.697 35.885 ± 0.986 35.439 ± 0.969
∑MUFAs 17.314 ± 0.598 28.018 ± 0.805^a^ 18.895 ± 1.131^b^ 17.952 ± 1.090^b^ 17.266 ± 1.845^b^
∑PUFAs 42.866 ± 1.969 31.648 ± 0.510^a^ 42.878 ± 1.189^b^ 42.908 ± 0.721^b^ 39.794 ± 1.124^b^
∑n-6 35.086 ± 1.659 28.092 ± 0.339^a^ 34.213 ± 1.021^b^ 34.460 ± 0.927^b^ 32.306 ± 1.134^b^
∑n-3 7.780 ± 0.393 3.556 ± 0.222^a^ 8.665 ± 0.493^b^ 8.448 ± 0.253^b^ 7.488 ± 0.388^b,c^
n-6/n-3 4.522 ± 0.162 8.016 ± 0.469^a^ 3.983 ± 0.241^b^ 4.105 ± 0.224^b^ 4.372 ± 0.320^b^
Data are given as the mean ± SEM. ^a^*P* \< 0.05 versus the ND group; ^b^*P* \< 0.05 versus the HFD group; ^c^*P* \< 0.05 versus the DHA/EPA = 2 : 1 group; ^d^*P* \< 0.05 versus the DHA/EPA = 1 : 1 group.
######
Fatty acid composition (%) of the liver of mice during the experimental period.
Hepatic fatty acid ND (*n* = 5) HFD (*n* = 4) DHA/EPA = 2 : 1 (*n* = 5) DHA/EPA = 1 : 1 (*n* = 4) DHA/EPA = 1 : 2 (*n* = 3)
-------------------- ------------------ ------------------- --------------------------- --------------------------- ---------------------------
C16:0 26.228 ± 0.799 20.211 ± 0.217^a^ 22.596 ± 0.689^a^ 21.261 ± 1.272^a^ 20.590 ± 0.116^a^
C16:1 0.00 ± 0.000 1.640 ± 0.247^a^ 0.48 ± 0.045^a,b^ 0.489 ± 0.104^a,b^ 0.577 ± 0.044^a,b^
C18:0 10.905 ± 0.797 8.772 ± 0.399 11.234 ± 0.922 10.545 ± 1.303 7.260 ± 0.297^a,c,d^
C18:1 13.451 ± 0.814 30.939 ± 0.911^a^ 15.386 ± 1.691^b^ 17.971 ± 3.113^b^ 20.368 ± 1.028^a,b^
C18:2 25.962 ± 0.722 19.456 ± 0.555^a^ 24.647 ± 0.708^b^ 23.941 ± 0.966^b^ 26.889 ± 0.404^b,d^
C18:3 0.452 ± 0.029 0.546 ± 0.059 0.099 ± 0.011^a,b^ 0.231 ± 0.049^a,b,c^ 0.133 ± 0.009^a,b^
C19:0 0.397 ± 0.049 0.293 ± 0.093 0.284 ± 0.057 0.188 ± 0.069^a^ 0.334 ± 0.024
C20:0 0.358 ± 0.019 0.617 ± 0.208 0.333 ± 0.026^b^ 0.431 ± 0.043 0.391 ± 0.032
C20:1 0.710 ± 0.074 0.689 ± 0.021 0.879 ± 0.099 0.894 ± 0.118 0.567 ± 0.041^c,d^
C20:4 6.371 ± 0.435 7.525 ± 0.381 5.332 ± 0.388^b^ 5.489 ± 0.738^b^ 4.932 ± 0.335^b^
C22:0 0.880 ± 0.041 0.156 ± 0.012^a^ 1.506 ± 0.046^a,b^ 1.98 ± 0.081^a,b,c^ 1.800 ± 0.138^a,b,c^
C20:5 0.804 ± 0.064 0.451 ± 0.076^a^ 1.086 ± 0.056^a,b^ 1.315 ± 0.074^a,b,c^ 1.527 ± 0.096^a,b,c^
C22:6 10.425 ± 0.223 4.289 ± 0.281^a^ 12.895 ± 0.530^a,b^ 11.761 ± 1.000^b^ 10.301 ± 0.483^b,c^
∑SFAs 38.770 ± 1.345 30.053 ± 0.432^a^ 35.948 ± 1.489^b^ 34.403 ± 2.263^a^ 30.377 ± 0.126^a,c^
∑MUFAs 14.166 ± 0.782 33.269 ± 1.066^a^ 16.746 ± 1.644^b^ 19.358 ± 3.044^a,b^ 21.517 ± 1.032^a,b^
∑PUFAs 44.016 ± 0.467 32.268 ± 0.757^a^ 44.064 ± 0.578^b^ 42.738 ± 0.826^b^ 43.787 ± 0.492^b^
∑n-6 32.334 ± 0.643 26.983 ± 0.484^a^ 29.982 ± 0.383^a,b^ 29.43 ± 0.249^a,b^ 31.823 ± 0.143^b,c,d^
∑n-3 11.682 ± 0.255 5.285 ± 0.293^a^ 14.082 ± 0.519^a,b^ 13.308 ± 1.069^b^ 11.963 ± 0.387^b,c^
n-6/n-3 2.778 ± 0.116 5.138 ± 0.202^a^ 2.141 ± 0.081^a,b^ 2.266 ± 0.228^a,b^ 2.665 ± 0.082^b,c^
Data are given as mean ± SEM. ^a^*P* \< 0.05 versus the ND group; ^b^*P* \< 0.05 versus the HFD group; ^c^*P* \< 0.05 versus the DHA/EPA = 2 : 1 group; ^d^*P* \< 0.05 versus the DHA/EPA = 1 : 1 group.
[^1]: Academic Editor: Joan Roselló-Catafau
| {
"pile_set_name": "PubMed Central"
} |
Ask HN: Good resources on the Business / Legal side of a UK Startup - JohnLBevan
Having read a number of articles on how start-ups come about, I've decided to knock up some code and get something online soon. I have a few big ideas, but reading these articles has suggested to me it's best to start with something simple and quick, learn a few lessons from that, and then pursue my real ambitions. However, I'm anxious about one thing - once I start making money from a site, do I become a business / do I have to contact someone to request a self assessment tax form (even if I only make a couple of pounds from the venture) / what are my legal responsibilities?

I'm guessing there are lots of people in my situation all over the world, and each country will have its own rules and regulations around this sort of thing. I'm based in the UK, so am particularly interested in the rules there, but if you know of good resources for other countries, please post those also, as there's bound to be a few others in my position dotted around the globe who can benefit.

Thanks in advance,

JB
======
WorldMover
The HMRC is a decent starting place <http://www.hmrc.gov.uk/startingup/>, you
may also find <http://www.businesslink.gov.uk> useful. In addition you should
check with an accountant (perhaps someone you know).
~~~
JohnLBevan
Those are great links - thanks for the info :) I'm thinking of looking into
accountants after I start to make money; up 'til then I'll just keep records,
since I'm assuming that I won't be making enough to cover the costs of
accountants for a long time during my experimental phase. Thanks again, JB
------
mmahemoff
Another good thing to know would be any UK bank recommendations, e.g. with a
normal website and statements that aren't PDFs and go back to day one.
(Amazing in this day that bank websites won't provide a full history to the
account holder!)
There's no BankSimple in the UK, but some decent options would be welcome.
Heard good things about FirstDirect, but I don't think they do business
accounts.
~~~
JohnLBevan
Hmm, looks like a gap in the market. . . though I guess banks wouldn't be too
willing to provide an API to third parties who'd offer this service, so it
would require a bank to fill this gap, rather than an independent.
~~~
mmahemoff
Big gap if you ask me, because a clean UI would be a desirable and obvious
differentiator in what mostly looks like a commodity market.
Yeah, an API would be huge but I think we're a long way from that. Retail
banking in 2011==Mobile telephony in 2005.
| {
"pile_set_name": "HackerNews"
} |
SINGAPORE/SEOUL, July 11 (Reuters) - Korea East-West Power Co Ltd (EWP), one of South Korea’s state-run utilities, is seeking cleaner burning fuel oil for its power plants for the first time in five years to comply with stricter emissions regulations, according to two industry sources.
EWP issued a tender on Tuesday seeking to import 30,000 tonnes of low-sulphur fuel oil (LSFO) with a maximum sulphur content of 0.3 percent for late July arrival, according to the utility’s website.
“We are seeking to buy low sulphur fuel oil preemptively to meet the government’s emission standards and would keep buying low-sulphur fuel oil,” said an EWP source who declined to be identified as he was not authorised to speak to the media.
EWP plans to buy a total of 80,000 tonnes of low-sulphur fuel oil in August, the source added.
The utility last purchased a cargo of high-sulphur fuel oil (HSFO) in early April. The latest LSFO import requirement comes amid increased cooling demand during the summer.
Korea Western Power Co, which also runs a fuel oil power plant, has no plans so far to lower the specifications for their fuel oil purchases or to issue a new fuel oil tender, said a source at the utility who asked not to be named since they are not authorised to talk to the media.
South Korea mainly generates electricity with coal and nuclear power and fuel oil supplies only a small fraction of the country’s total electricity needs. (Reporting by Roslan Khasawneh in SINGAPORE and Jane Chung in SEOUL; Editing by Christian Schmollinger) | {
"pile_set_name": "Pile-CC"
} |
A steering apparatus of an automobile is configured as shown in FIG. 12. The motion of a steering wheel 1 operated by a driver is transmitted to an input shaft 6 of a steering gear unit 5 through a steering shaft 2, a universal joint 3a, an intermediate shaft 4 and another universal joint 3b. A pair of tie rods 7, 7 is pushed or pulled by a rack and pinion mechanism installed in the steering gear unit 5, so that an appropriate steering angle is applied to a pair of left and right steering wheels, in conformity to an operation amount of the steering wheel 1.
FIG. 13 illustrates an example of the intermediate shaft 4 that is mounted to the steering apparatus as described above. In this example, the intermediate shaft 4 is configured to expand and contract so as to prevent the steering wheel 1 from being pushed towards the driver upon a collision accident. The intermediate shaft 4 includes an inner shaft 9 having a male spline part 8 provided on an outer periphery of a tip portion (a left end portion in FIG. 13) thereof, and a circular tube-shaped outer tube 11 having a female spline part 10 formed on an inner periphery thereof to which the male spline part 8 can be inserted. The male spline part 8 and the female spline part 10 are spline-engaged with each other, so that the inner shaft 9 and the outer tube 11 are combined to be expandable and contractible. Also, base end portions of yokes 12a, 12b configuring the universal joints 3a, 3b are welded and fixed to base end portions of the inner shaft 9 and the outer tube 11, respectively.
FIGS. 14 and 15 illustrate a first example of a known universal joint, which can be used as the universal joints 3a, 3b and is illustrated in Patent Documents 1 and 2. In the meantime, the structure shown in FIGS. 14 and 15 is a so-called vibration preventing joint configured to prevent vibration transmission. However, a universal joint, which is a subject of the present invention, is not necessarily required to have a vibration preventing structure. Therefore, in the below, the vibration preventing structure is omitted and a main body structure of the universal joint 3 is described.
The universal joint 3 is configured by coupling a pair of bifurcated yokes 12a, 12b made of a metal material having sufficient stiffness through a cross shaft 13 made of hard metal such as alloy steel such as bearing steel so that torque can be transmitted. Both yokes 12a, 12b have base parts 14, 14, respectively, and each of yokes 12a, 12b has a pair of coupling arm parts 15, 15. Both base parts 14, 14 are configured to support and fix the base end portion of the inner shaft 9 or outer tube 11 (which is a rotary shaft) (or a front end portion of the steering shaft 2 or a rear end portion of the input shaft 6, refer to FIG. 12) so that the torque can be transmitted. Tips of the coupling arm parts 15, 15 are respectively formed with circular holes 16, 16 to be concentric each other, for each of the yokes 12a, 12b. Cylindrical bearing cups 17, 17 made of a plate material of hard metal such as bearing steel and case-hardening steel to have a bottom are fastened and fitted with openings thereof facing each other to the respective circular holes 16, 16, so that they are internally fitted. The cross shaft 13 has such a shape that intermediate parts of a pair of column parts are orthogonal to each other, and has four shaft parts 18, 18 each of which has a cylindrical shape. That is, base end portions of the respective shaft parts 18, 18 are coupled and fixed to four positions (at a state where center axes of the adjacent shaft parts 18, 18 are orthogonal to each other) equally spaced in a circumferential direction of a coupling base part 19 provided at a center part of the cross shaft. The center axes of the respective shaft parts 18, 18 exist on the same plane.
The shaft parts 18, 18 are inserted from axial intermediate parts to tip portions thereof in the respective bearing cups 17, 17. A plurality of needles 20, 20 each of which is a rolling body are arranged between inner peripheries of the respective bearing cups 17, 17 and outer peripheries of the tip portions of the respective shaft parts 18, 18, so that radial bearings 21, 21 are configured and both the yokes 12a, 12b can be pivotally displaced relative to the cross shaft 13 by small force. With this configuration, even when the center axes of both the yokes 12a, 12b do not coincide with each other, the rotational force can be transmitted between both the yokes 12a, 12b with transmission loss being suppressed.
According to the universal joint 3 as described above, center parts of the respective shaft parts 18, 18 are formed with bottomed insertion holes 22, 22 with being opened towards end surfaces of the respective shaft parts 18, 18 in axial directions of the respective shaft parts 18, 18. In the respective insertion holes 22, 22, pins 23, 23 made of a synthetic resin are inserted, respectively. The respective pins 23, 23 are supported between the respective bearing cups 17, 17 and the respective shaft parts 18, 18 to prevent the respective bearing cups 17, 17 from rattling relative to the respective shaft parts 18, 18 and both the yokes 12a, 12b from rattling relative to the cross shaft 13 and to prevent distances between the opening end portions of the respective bearing cups 17, 17 and the coupling base part 19 from being excessively narrowed. That is, in the case of the universal joint 3b, which is mounted at an outside (at a lower side in FIG. 12) of a vehicle interior, of the universal joints 3a, 3b configuring the steering apparatus shown in FIG. 12, seal rings 24, 24 are respectively provided between the base end portions of the respective shaft parts 18, 18 configuring the cross shaft 13 and the openings of the respective bearing cups 17, 17. In this example, the respective pins 23, 23 are provided to prevent the endurance of the respective seal rings 24, 24 from being lowered due to the excessive compression of the respective seal rings 24, 24 and to prevent the sealing characteristics of the respective seal rings 24, 24 from being deteriorated due to the excessive lowering of the compression amounts of the respective seal rings 24, 24.
FIGS. 16, 17A and 17B illustrate a second example of the structure of the known universal joint, which is disclosed in Patent Document 3. In the second example, a thrust piece 25 having a substantially disc shape and made of an elastic synthetic resin is interposed between a bottom inner surface of the bearing cup 17 configuring the radial bearing 21 and an end surface of a shaft part 18a configuring a cross shaft 13a, as shown in FIGS. 17A and 17B. In the second example, the thrust piece 25 is supported between the bottom inner surface of the bearing cup 17 and the end surface of the shaft part 18a, so that the rattling of the yoke 12 relative to the cross shaft 13a can be prevented.
In either structure, the pin 23 or thrust piece 25 is supported between the bearing cup 17 and the shaft parts 18, 18a, so that the rattling of the yokes 12, 12a, 12b relative to the cross shafts 13, 13a can be prevented. However, there is still room for improvement in suppressing the rattling of the pair of yokes relative to the cross shaft while holding down the manufacturing cost. That is, in order to suppress the rattling of both yokes relative to the cross shaft, it is conceivable to enlarge the fitting margin of the thrust piece with respect to the bearing cup and the shaft part of the cross shaft. However, when the fitting margin is simply enlarged, the rotational resistance (pivotal resistance) of both yokes relative to the respective shaft parts increases. Therefore, in order to prevent the increase in rotational resistance while suppressing the rattling, it is necessary to form the thrust piece (or pin and insertion hole) with high precision and to enhance the assembling precision (the insertion amount of the shaft part into the bearing cup) of the cross shaft and the yoke, which increases the manufacturing cost of the cross shaft universal joint.
"pile_set_name": "USPTO Backgrounds"
} |
// Copyright (c) .NET Foundation and contributors. All rights reserved.
// Licensed under the MIT license. See LICENSE file in the project root for full license information.
using Microsoft.DotNet.Cli.CommandLine;
using Microsoft.DotNet.Tools;
using LocalizableStrings = Microsoft.DotNet.Tools.Sln.LocalizableStrings;
namespace Microsoft.DotNet.Cli
{
internal static class SlnCommandParser
{
public static Command Sln() =>
Create.Command(
"sln",
LocalizableStrings.AppFullName,
Accept.ExactlyOneArgument()
.DefaultToCurrentDirectory()
.With(name: LocalizableStrings.SolutionArgumentName,
description: LocalizableStrings.SolutionArgumentDescription),
CommonOptions.HelpOption(),
SlnAddParser.SlnAdd(),
SlnListParser.SlnList(),
SlnRemoveParser.SlnRemove());
}
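// Illustrative usage note (not part of the original source): the command
// assembled above surfaces as `dotnet sln` with add/list/remove subcommands,
// so typical invocations would look roughly like
//
//   dotnet sln MySolution.sln add src/App/App.csproj
//   dotnet sln list
//
// (MySolution.sln and the project path are placeholders). When the solution
// argument is omitted it defaults to the current directory, per the
// DefaultToCurrentDirectory() rule above.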
} | {
"pile_set_name": "Github"
} |
The rank and file FLDS are expecting Warren and Lyle Jeffs to break out of prison/jail on Wednesday, April 6:
But if a certain group of polygamous religious extremists in a lonely corner of southern Utah are to be believed, this Wednesday the walls will split open and fall when one of their leaders, Lyle Jeffs, appears before the judge in a major fraud case, according to former followers of his sect.
Simultaneously, an earthquake will apparently cause the walls of a prison in Texas to crumble and Lyle’s brother, Warren Jeffs, the group’s “prophet” and supreme leader, will also walk free – despite the fact he has been serving a sentence of life plus 20 years in that state since 2011, convicted of having sex with underage girls as young as 12 that he took as polygamous wives.
By divine coincidence, perhaps, Wednesday is 6 April, the date most Mormons – and the outlawed, rejected offshoot sect of that religion known as the Fundamentalist Latter-day Saints (FLDS) – proclaim is the actual birthday of Jesus Christ.
“I am hearing from people inside the FLDS that on April 6 there is going to be a kind of apocalypse,” said Elissa Wall, who escaped from the repressive FLDS community after being forced by Warren Jeffs to marry her cousin when she was just 14. “It is prophesied.”
jtmunkus wrote:Really?! When you're testifying and there's an earthquake, your charges are immediately dropped? Also two if an earthquake hits your prison, all the prisoners get to go home, scot-free?
I often wonder why such a large amount of prisoners turn to "God", when He's clearly so bad at getting people out of prison.
Yeah but God is busy punishing other folk, such as giving America a Muslim president because gayz and fires in wildlife refuges because rents and ranchers or summat. He hasn't enough free time to help minority churches when he has all the Orthodox and Catholic churches to brief on his ever-changing yet eternally immutable wishes.
Nothing in the news confirming the rumors of apocalypse ahead of FLDS leader's detention hearing.
FLDS Bishop Lyle Jeffs asks judge to free him until trial
Polygamy » Judge has already released three co-defendants .
In a hearing that dealt with polygamy and child sex abuse as much as alleged food stamp fraud, a federal judge Wednesday considered whether Fundamentalist Church of Jesus Christ of Latter-Day Saints Bishop Lyle Jeffs should remain in jail until his trial.
U.S. District Judge Ted Stewart is expected to issue a ruling later Wednesday or perhaps later this week.
Prosecutors want Jeffs, 56, to remain in jail, where he has been held since indictments against 11 FLDS members were unsealed Feb. 23. Jeffs' lawyer, Kathryn Nester, asked Stewart to release her client to a home his family or supporters have in Provo and to be tracked by a GPS ankle monitor.
The hearing was supposed to be about whether Jeffs, if freed, would return to court for future proceedings, and whether he would tamper with witnesses or evidence. In the course of those discussions, the 90-minute hearing veered into whether Jeffs had married three underage girls and how much contact he has with his infamous older brother, FLDS President Warren Jeffs.
.......
Lyle Jeffs is the last of 11 food stamp fraud defendants still in jail.
......
A trial for all 11 food stamp scam defendants is scheduled for May 31.
Witnesses: Scallops for the bishop, toast for the kids
Story highlights
Former FLDS members reveal secrets in FBI documents in federal welfare fraud case
Witnesses describe a caste system with Warren Jeffs' brothers at the top
'We had little children that were starving, big people that were starving'
FBI: 'Women were being asked to engage in group sex' at Warren Jeffs' direction
Oddly enough, the US Geological Survey website on seismic disturbances shows absolutely no earthquakes in all of Texas for the entire week (perhaps longer) preceding April 7th, so the jail walls did not shake during Jeffs's testimony. This seems to have been an uncommonly quiet week.
Suranis wrote:For those that dont appreciate my snark, I should mention that I AM Catholic.
Off Topic
I think I mentioned this before. Two houses south of me is a Mormon family, and four houses north is a Catholic one. They're good friends with each other, but the Mormon one says Catholics are not Christians, and the Catholic one says Mormons aren't Christians.
One day the Mormons were explaining to me why Catholics aren't Christians, and I said that it didn't matter to me, they were all gentiles as far as I was concerned. But then the Mormon told me that wasn't true; Mormons are the real Jews, and Jews aren't.
vic wrote:I think I mentioned this before. Two houses south of me is a Mormon family, and four houses north is a Catholic one. They're good friends with each other, but the Mormon one says Catholics are not Christians, and the Catholic one says Mormons aren't Christians.
One day the Mormons were explaining to me why Catholics aren't Christians, and I said that it didn't matter to me, they were all gentiles as far as I was concerned. But then the Mormon told me that wasn't true; Mormons are the real Jews, and Jews aren't.
Off Topic
Mormons consider anyone that isn't Mormon a Gentile, even the Jews. That said, they claim a special kinship with the Jews because they consider themselves also of Israel, they of Ephraim and the Jews of Judah.
vic wrote:One day the Mormons were explaining to me ...... Mormons are the real Jews, and Jews aren't.
Lemmee tell you about the time Mormon missionaries -- "elders" age about 20 -- came to my apartment and I showed them my Hebrew Bible, which they said they had studied .... and they opened it and held it upside down.
| {
"pile_set_name": "Pile-CC"
} |
Ext.Loader.setConfig({enabled: true});
Ext.Loader.setPath('Ext.ux', '../ux/');
Ext.require([
'Ext.grid.*',
'Ext.data.*',
'Ext.util.*',
'Ext.grid.plugin.BufferedRenderer',
'Ext.ux.form.SearchField'
]);
Ext.onReady(function(){
Ext.define('ForumThread', {
extend: 'Ext.data.Model',
fields: [{
name: 'title',
mapping: 'topic_title'
}, {
name: 'forumtitle',
mapping: 'forum_title'
}, {
name: 'forumid',
type: 'int'
}, {
name: 'username',
mapping: 'author'
}, {
name: 'replycount',
mapping: 'reply_count',
type: 'int'
}, {
name: 'lastpost',
mapping: 'post_time',
type: 'date',
dateFormat: 'timestamp'
},
'lastposter', 'excerpt', 'topic_id'
],
idProperty: 'post_id'
});
// create the Data Store
var store = Ext.create('Ext.data.Store', {
id: 'store',
model: 'ForumThread',
// allow the grid to interact with the paging scroller by buffering
buffered: true,
// The topics-remote.php script appears to be hardcoded to use 50, and ignores this parameter, so we
// are forced to use 50 here instead of a possibly more efficient value.
pageSize: 50,
// This web service seems slow, so keep lots of data in the pipeline ahead!
leadingBufferZone: 1000,
proxy: {
// load using script tags for cross domain, if the data in on the same domain as
// this page, an HttpProxy would be better
type: 'jsonp',
url: 'http://www.sencha.com/forum/topics-remote.php',
reader: {
root: 'topics',
totalProperty: 'totalCount'
},
// sends single sort as multi parameter
simpleSortMode: true,
// Parameter name to send filtering information in
filterParam: 'query',
// The PHP script just use query=<whatever>
encodeFilters: function(filters) {
return filters[0].value;
}
},
listeners: {
totalcountchange: onStoreSizeChange
},
remoteFilter: true,
autoLoad: true
});
function onStoreSizeChange() {
grid.down('#status').update({count: store.getTotalCount()});
}
function renderTopic(value, p, record) {
return Ext.String.format(
'<a href="http://sencha.com/forum/showthread.php?p={1}" target="_blank">{0}</a>',
value,
record.getId()
);
}
var grid = Ext.create('Ext.grid.Panel', {
width: 700,
height: 500,
collapsible: true,
title: 'ExtJS.com - Browse Forums',
store: store,
loadMask: true,
dockedItems: [{
dock: 'top',
xtype: 'toolbar',
items: [{
width: 400,
fieldLabel: 'Search',
labelWidth: 50,
xtype: 'searchfield',
store: store
}, '->', {
xtype: 'component',
itemId: 'status',
tpl: 'Matching threads: {count}',
style: 'margin-right:5px'
}]
}],
selModel: {
pruneRemoved: false
},
multiSelect: true,
viewConfig: {
trackOver: false,
emptyText: '<h1 style="margin:20px">No matching results</h1>'
},
// grid columns
columns:[{
xtype: 'rownumberer',
width: 50,
sortable: false
},{
tdCls: 'x-grid-cell-topic',
text: "Topic",
dataIndex: 'title',
flex: 1,
renderer: renderTopic,
sortable: false
},{
text: "Author",
dataIndex: 'username',
width: 100,
hidden: true,
sortable: false
},{
text: "Replies",
dataIndex: 'replycount',
align: 'center',
width: 70,
sortable: false
},{
id: 'last',
text: "Last Post",
dataIndex: 'lastpost',
width: 130,
renderer: Ext.util.Format.dateRenderer('n/j/Y g:i A'),
sortable: false
}],
renderTo: Ext.getBody()
});
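// Illustrative note (not part of the original example): typing in the search
// field applies a remote filter to the buffered store; because remoteFilter is
// true and filterParam is 'query', the custom encodeFilters hook collapses the
// filter into a plain string, so each page request to topics-remote.php carries
// roughly ?query=<text>&page=N&start=M&limit=50 plus the JSONP callback
// parameter (parameter names assume the framework's default page/start/limit
// settings).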
}); | {
"pile_set_name": "Github"
} |
Combined oral toxicity of azaspiracid-1 and yessotoxin in female NMRI mice.
For many years, the presence of yessotoxins (YTXs) in shellfish has contributed to the outcome of the traditional mouse bioassay and has on many occasions caused closure of shellfisheries. Since YTXs do not appear to cause diarrhoea in man and exert low oral toxicity in animal experiments, it has been suggested that they should be removed from regulation. Before doing so, it is important to determine whether the oral toxicity of YTXs is enhanced when present together with shellfish toxins known to cause damage to the gastrointestinal tract. Consequently, mice were given high doses of YTX, at 1 or 5 mg/kg body weight, either alone or together with azaspiracid-1 (AZA1) at 200 μg/kg. The latter has been shown to induce damage to the small intestine at this level. The combined exposure caused no clinical effects, and no pathological changes were observed in internal organs. These results correspond well with the very low levels of YTX detected in internal organs by means of LC-MS/MS and ELISA after dosing. Indeed, the very low absorption of YTX when given alone remained largely unchanged when YTX was administered in combination with AZA1. Thus, the oral toxicity of YTX is not enhanced in the presence of sub-lethal levels of AZA1. | {
"pile_set_name": "PubMed Abstracts"
} |
New prognostic markers revealed by evaluation of genes correlated with clinical parameters in Wilms tumors.
Current treatment protocols for Wilms tumor achieve 90% cure rates, but relapse risk and side effects from therapy remain challenging. Over the last decade, numerous markers have been proposed for classification and/or prediction of outcome. However, cohort sizes were quite variable and often small. We now provide a large-scale reassessment by real-time RT-PCR of 40 markers in 102 Wilms tumors followed by validation of potentially relevant markers in an independent set of 74 tumors. In the first data set, individual comparison with clinical data combined with adjustment for multiple testing and multivariate analysis revealed potentially relevant alteration of CA9, DKK1, EGR1, HEY2, MYC, MYCN, TERT, TOP2A, TRIM22, and VEGF expression in association with CTNNB1 mutation status, histological risk, response to chemotherapy, metastasis, relapse, or mortality. To further validate these data, potentially relevant genes for specific outcomes were reanalyzed in a second, independent tumor set. Here, univariate analysis confirmed the association of HEY2 with high-risk tumors and of TRIM22 with mortality. Even where significance levels could not be reached, the direction and extent of differential expression were generally reproducible. Multivariate analysis verified a weak correlation of TOP2A expression with metastasis and of TRIM22 with fatal outcome. Although we could corroborate only some of the previously reported associations of expression changes with clinical parameters, our results indicate that real-time RT-PCR analysis can facilitate further classification of Wilms tumor and prediction of outcome to adjust treatment accordingly. This article contains Supplementary Material available at http://www.interscience.wiley.com/jpages/1045-2257/suppmat. | {
"pile_set_name": "PubMed Abstracts"
} |
A kinetic study of histopathological changes in the subcutis of cats injected with non-adjuvanted and adjuvanted multi-component vaccines.
The aim of this study was to investigate the subcutaneous tissue response to administration of a single dose of multi-component vaccine in the cat. Three groups of 15 cats were injected with one of three vaccine products with saline as a negative control. Cats in group A received non-adjuvanted vaccine; cats in group B received vaccine with a lipid-based adjuvant; whilst those in group C were vaccinated with a product adjuvanted with an alum-Quil A mixture. The vaccine and saline injection sites were sampled on days 7, 21 and 62 post-vaccination. Biopsies of these vaccine sites were examined qualitatively and scored semi-quantitatively for a series of parameters related to aspects of the inflammatory and tissue repair responses. These data were analysed statistically, including by principal component analysis. At all three time points of the experiment, there was significantly less inflammation associated with administration of non-adjuvanted vaccine (p=0.000). Although there was evidence of tissue repair by day 62 in all groups, those cats receiving adjuvanted vaccines had evidence of residual adjuvant material accumulated within macrophages at this late time point. The severity of tissue reactions may vary significantly in response to vaccines which include adjuvants or are non-adjuvanted. | {
"pile_set_name": "PubMed Abstracts"
} |
In the year 2000, an estimated 22 million people were suffering from cancer worldwide and 6.2 million deaths were attributed to this class of diseases. Every year, there are over 10 million new cases and this estimate is expected to grow by 50% over the next 15 years (WHO, World Cancer Report. Bernard W. Stewart and Paul Kleihues, eds. IARC Press, Lyon, 2003). Current cancer treatments are limited to invasive surgery, radiation therapy and chemotherapy, all of which cause either potentially severe side-effects, non-specific toxicity and/or traumatizing changes to one's body image and/or quality of life. Cancer can become refractory to chemotherapy, reducing further treatment options and likelihood of success. The prognosis for some cancers is worse than for others, and some, like lung or pancreatic cancer, are almost always fatal. In addition, some cancers with a relatively high treatment success rate, such as breast cancer, also have a very high incidence rate and, thus, remain major killers.
For instance, there are over 1 million new cases of breast cancer, worldwide, each year. Treatments consist of minimal to radical surgical removal of breast tissue and lymph nodes with radiation and chemotherapy for metastatic disease. Prognosis for localized disease is relatively good with a 5 years survival rate of around 50% but once the cancer has metastasized, it is incurable with an average survival of around 2 years. Despite improving treatment success rates, nearly 400,000 women die of breast cancer each year, the highest number of deaths to cancer in woman, ahead of deaths to lung cancer. Among the short and long term survivors, most will suffer the life-long trauma of the invasive and disfiguring surgical treatment.
Another example is liver cancer, with more than half a million new cases each year and nearly the same number of deaths due to poor treatment efficacy. Hepatocellular carcinomas represent around 80% of all liver cancers and are rarely curable. Five-year survival rate is only about 10% and survival after diagnosis often less than 6 months. Although surgical resection of diseased tissue can be effective, it is not an option for the majority of cases because of the presence of cirrhosis of the liver. Hepatocellular carcinomas are largely radiation resistant and response to chemotherapy is poor.
Yet another example is that of pancreatic cancer with around 200,000 new cases per year and a very poor prognosis. In fact, the majority of patients die within a year of diagnosis and only a few percent of patients survive five years. Surgery is the only available treatment but is associated with high morbidity and complication rates because it involves not only the resection of at least part of the pancreas, but also of all of the duodenum, part of the jejunum, bile duct and gallbladder and a distal gastrectomy. In some cases, the spleen and lymph nodes are also removed.
Bladder cancer is the 9th most common cancer worldwide with an estimated 330,000 new cases and 130,000 deaths each year. In Europe, this disease is the cause of death for approximately 50,000 people each year. Current treatment includes the intravesicular delivery of chemotherapy and immunotherapy with the bacille Calmette-Guerin (BCG) vaccine that involves the additional risk of systemic infection with the tuberculosis bacterium. Despite this aggressive treatment regime, 70% of these superficial papillary tumors will recur over a prolonged clinical course some will progress into invasive carcinomas. The high rate of recurrence of this disease and associated repeated course of treatment makes this form of cancer one of the most expensive to treat over a patient's lifetime. For patients with recurring disease, the only options are to undergo multiple anesthetic-requiring cystoscopy surgery or major, radical, life-altering surgery (usually cystectomy). Radical cystectomy consists of excision of the bladder, prostate and seminal vesicle in males and of the ovaries, uterus, urethra and part of the vagina in females.
There are many more examples of cancer where current treatments do not meet the needs of patients either due to their lack of efficacy and/or because they have high morbidity rates and severe side-effects. Those selected statistics and facts however, illustrate well the need for cancer treatments with better safety and efficacy profiles.
One of the causes for the inadequacy of current cancer treatments is their lack of selectivity for affected tissues and cells. Surgical resection always involves the removal of apparently normal tissue as a “safety margin” which can increase morbidity and risk of complications. It also always removes some of the healthy tissue that may be interspersed with tumor cells and that could potentially maintain or restore the function of the affected organ or tissue. Radiation and chemotherapy will kill or damage many normal cells due to their non-specific mode of action. This can result in serious side-effects such as severe nausea, weight loss and reduced stamina, loss of hair etc., as well as increasing the risk of developing secondary cancer later in life. Treatment with greater selectivity for cancer cells would leave normal cells unharmed thus improving outcome, side-effect profile and quality of life.
The selectivity of cancer treatment can be improved by using antibodies that are specific for molecules present only or mostly on cancer cells. Such antibodies can be used to modulate the immune system and enhance the recognition and destruction of the cancer by the patient's own immune system. They can also block or alter the function of the target molecule and, thus, of the cancer cells. They can also be used to target drugs, genes, toxins or other medically relevant molecules to the cancer cells. Such antibody-drug complexes are usually referred to as immunotoxins or immunoconjugates and a number of such compounds have been tested in recent year [Kreitman R J (1999) Immunotoxins in cancer therapy. Curr Opin Immunol 11:570-578; Kreitman R J (2000) Immunotoxins. Expert Opin Pharmacother 1:1117-1129; Wahl R L (1994) Experimental radioimmunotherapy. A brief overview. Cancer 73:989-992; Grossbard M L, Fidias P (1995) Prospects for immunotoxin therapy of non-Hodgkin's lymphoma. Clin Immunol Immunopathol 76:107-114; Jurcic J G, Caron P C, Scheinberg D A (1995) Monoclonal antibody therapy of leukemia and lymphoma. Adv Pharmacol 33:287-314; Lewis J P, DeNardo G L, DeNardo S J (1995) Radioimmunotherapy of lymphoma: a UC Davis experience. Hybridoma 14:115-120; Uckun F M, Reaman G H (1995) Immunotoxins for treatment of leukemia and lymphoma. Leuk Lymphoma 18:195-201; Kreitman R J, Wilson W H, Bergeron K, Raggio M, Stetler-Stevenson M, FitzGerald D J, Pastan I (2001) Efficacy of the anti-CD22 recombinant immunotoxin BL22 in chemotherapy-resistant hairy-cell leukemia. N Engl J Med 345:241-247]. Most antibodies tested to date have been raised against known cancer markers in the form of mouse monoclonal antibodies, sometimes “humanized” through molecular engineering. Unfortunately, their targets can also be present in significant quantities on a subset of normal cells thus raising the risk of non-specific toxic effects. Furthermore, these antibodies are basically mouse proteins that are being seen by the human patient's immune system as foreign proteins. The ensuing immune reaction and antibody response can result in a loss of efficacy or in side-effects.
The inventors have used a different approach in their development of antibodies for cancer treatment. Instead of immunizing experimental animals with cancer cells or isolated cancer cell markers, they have sought out only those markers that are recognized by the patient's own immune system or, in other words, that are seen by the immune system as a foreign molecule. This implies that the markers or antigens are usually substantially absent on normal cells and, thus, the risk of non-specific toxicity is further reduced. Hybridoma libraries are generated from cancer patient-derived lymphocytes and the antibodies they secrete are tested for binding to normal and tumor cells. Only antibodies showing high selectivity for cancer cells are retained for further evaluation and development as a cancer therapeutic or diagnostic agent. One such highly selective antibody is the subject of this patent application. In addition to being selective, this antibody is fully compatible with the patient's immune system by virtue of being a fully-human protein. The antibody of the invention can be used for diagnostic or therapeutic uses or as a basis for engineering other binding molecules for the target antigen.
The basic structure of an antibody molecule consists of four protein chains, two heavy chains and two light chains. These chains are inter-connected by disulfide bonds. Each light chain is comprised of a light chain variable region and a light chain constant region. Each heavy chain is comprised of a heavy chain variable region and a heavy chain constant region. The light chain and heavy chain variable regions can be further subdivided into framework regions and regions of hypervariability, termed complementarity determining regions (CDR). Each light chain and heavy chain variable region is composed of three CDRs and four framework regions.
CD44 represents a family of cell surface glycoproteins encoded by a single gene comprising a total of 20 exons. Exons 19 and 20 are expressed together as the cytoplasmic tail and therefore grouped as “exon 19” by most research groups (Liao et al. J. Immunol 151:6490-99, 1993). The term exon 19 will be used henceforth to designate genomic exons 19 and 20. Structural and functional diversity is achieved by alternative splicing of the messenger RNA involving 10 “variant” exons identified as exons 6-15 or, most often, as “variant exons” 1-10 (v1-v10). In human, variant exon 1 contains a stop codon and is not usually expressed. The longest potential CD44 variant is therefore CD44v2-10 (see Naor et al. Adv Cancer Res 71:241-319, 1997 for review of CD44).
Exons 1-5 and all variant exons are part of the extracellular domain and contain many potential sites for post-translational modifications. The transmembrane domain is highly conserved across species but the intracellular tail can be truncated leading to another type of variant. One such variant comprises variant exons 8-10 but lacks part of exon 19. Changes to the intracellular domain has been shown to change the function of CD44, in part with respect to binding and internalization of hyaluronic acid (HA). CD44 is not only involved in binding to the extracellular molecules but it also has cell signaling properties (see Turley et al. J Biol Chem 277(7):4589-4592, 2002 for review).
The “standard” CD44 (CD44s), the most commonly expressed form of CD44, contains exons 1-5 and 16-19 and none of the variant exons. The molecular weight for the core protein is 37-38 kDa but posttranslational modification can result in a molecule of 85-95 kDa or more (Drillenburg et al., Blood 95(6):1900, 2000). It binds hyaluronic acid (HA), an extracellular glycosaminoglycan, constitutively and CD44 is often referred to as the HA receptor. It is interesting that the presence of variant exons can reduce the binding of HA by CD44 such that CD44 variants cannot be said to constitutively bind HA but such binding can be inducible (reviewed in Naor et al. Adv Cancer Res 71:241-319, 1997). See FIG. 17 for some examples of variants.
CD44E, also called CD44v8-10, contains variant exons 8-10 in addition to the exons 1-5 and 16-19. Other variants include CD44v3-10, CD44v6, CD44v7-8 and many others. The variant exons are part of the extracellular domain of the CD44.
CD44E can be present on certain normal epithelial cells, particularly by generative cells of the basal cell of stratified squamous epithelium and of glandular epithelium (Mackay et al. J Cell Biol 124(1-2):71-82, 1994) and in the fetus at certain stages development. But importantly, it has been shown to be overexpressed on various types of cancer cells. Using RT-PCR, Iida & Bourguignon (J Cell Physiol 162(1):127-133, 1995) and Kalish et al. (Frontiers Bioscience 4(a):1-8, 1999) have shown that CD44E is present in normal breast tissue and is more abundant than CD44s. They have also shown that CD44, including CD44E and CD44s are overexpressed, and preferentially located in metastatic breast cancer tissues. Miyake et al. (J Urol 167(3):1282-87, 2002) reported that CD44v8-10 mRNA is strongly expressed in urothelial cancer and can even be detected in urinary exfoliated cells of patients with invasive vs superficial urothelial cancer. The ratio of CD44v8-10 to CD44v10 mRNA increases in cancer and was shown to have diagnostic value in breast, lung, laryngeal and bladder. The presence of CD44v8-10 was also confirmed by immunohistochemistry with a polyclonal antibody (Okamoto et al. J Natl Cancer Inst 90(4): 307-15, 1997). CD44v8-10 can also be overexpressed in gallbladder cancer (Yamaguchi et al. Oncol Rep 7(3):541-4, 2000), renal cell carcinoma (Hara et al. Urology 54(3):562-6, 1999), testicular germ cell tumors (Miyake et al. Am J Pathol 152(5):1157-60, 1998), non-small cell lung carcinomas (Sasaki et al. Int J Oncol 12(3):525-33, 1998), colorectal cancer (Yamaguchi et al. J Clin Oncol 14(4):1122-27, 1996) and gastric cancer (Yamaguchi et al. Jpn J Cancer Res 86(12): 1166-71, 1995). Overexpression of CD44v8-10 was also shown to have diagnostic value for prostate cancer (Martegani et al. Amer J Pathol 154(1): 291-300, 1999).
Alpha-fetoprotein (AFP) is a major serum protein synthesized during fetal life. Its presence in adults is usually indicative of carcinomas, particularly those of the liver and teratocarcinomas. It is part of the albuminoid gene family that also comprises serum and alpha albumins and vitamin D-binding protein. AFP comprises 590 amino acids for a molecular weight of about 69-70 kDa and has one site for glycosylation. (Morinaga et al., Proc Natl Acad Sci 80:4604-08, 1983; Mizejewski Exp Biol Med 226(5):377-408, 2002). Molecular variants have been studied and identified in rodents, but in humans there are no reports of variant proteins being detected. A recent report has identified a variant mRNA that, if expressed, would code for a 65 kDa protein. This protein is expected to remain in the cytoplasm (Fukusawa et al. J Soc Gynecol Investig May 20, e-publication, 2005). | {
"pile_set_name": "USPTO Backgrounds"
} |
25 Md. App. 458 (1975)
336 A.2d 145
WILLIAM EVERDELL
v.
JOHN LEE CARROLL ET UX.
No. 478, September Term, 1974.
Court of Special Appeals of Maryland.
Decided April 3, 1975.
*459 The cause was argued before MOYLAN, MENCHINE, DAVIDSON and MELVIN, JJ.
Roger D. Redden, with whom were Francis X. Wright and Robert R. Price, Jr., on the brief for appellant.
Howard Wood and James D. Wright for appellees.
MENCHINE, J., delivered the opinion of the Court.
Time was that the private lane commencing at Tilghman's Neck Public Road (now DeCoursey Thom Road) served without incident the 590 acre tract through which it ran to the farthest reaches of the land. Nor was its course impeded, save by farm buildings that forced in part a snake-like course, by intersecting lanes and by 90 degree turns. While the tract was singly owned, its users traversed the lanes in seeming harmony, their free passage unobstructed by gates, the speed of their travel limited only by the caution induced by its described design, by the screening effect of natural and planted trees and shrubs and by the signs posted by its owners to indicate its hazards. Unhappily, this harmony, this heaven's first law, changed to discord when the owners of the entire tract disposed of parts of the estate.
The entire tract, known as Blakeford Farm, had been in the ownership of Clarence W. Miles and wife (Miles). The southern and western borders of Blakeford Farm, in Queen Anne's County, Maryland, extended to the waters of Queenstown Creek and of the Chester River, respectively. By deed dated March 13, 1958, Miles conveyed to Potter a tract *460 of 18.113 acres situate and lying at the actual confluence of the two bodies of water at the southwest corner of the whole tract. That deed contained the following clauses:
"TOGETHER with the buildings and improvements thereupon erected, made and being, and all and every the rights, roads, ways, waters, privileges, appurtenances and advantages to the same belonging or in anywise appertaining; and TOGETHER WITH the right of ingress to and egress from the above described property in common with the said Clarence W. Miles and Eleanor A. Miles, his wife, their assigns, the survivor of them, his or her heirs and assigns, over the private lane leading from the Tilghman's Neck Public Road into and through the farm building area of Blakeford Farm, and thence by a farm lane to the northerly end of a twenty-foot Right-of-Way surveyed by Shew & Bartlett on July 29, 1957, and thence by said twenty-foot Right-of-Way to the land hereinabove described and hereby conveyed."[1]
On June 15, 1970 Potter conveyed the entirety of the 18.113 acre tract to the appellant, William Everdell (Everdell). The latter deed contained the following clause:
"TOGETHER with the buildings and improvements thereupon erected, made or being and all and every the rights, roads, ways, waters, privileges, appurtenances and advantages to the same belonging or in anywise appertaining; and especially together with the right of ingress to and egress from the above described property in common with Clarence W. Miles and Eleanor A. Miles, his wife, their assigns, the survivor of them, his or her heirs and assigns, over the private lane leading from the Tilghman's Neck Public Road by the route fully described in said deed; * * *"
*461 It was purchased at a cost of $185,000.00.
On June 30, 1970 Miles conveyed to appellee, John Lee Carroll[2] (Carroll) a 266.256 acre tract from the remaining acreage. That deed contained the following clauses:
"* * * TOGETHER with a perpetual easement of ingress and egress at all times by all means and for all purposes, upon, over and across the existing entrance lane thirty (30) feet wide leading from the public road now known as the DeCoursey Thom Road, formerly known as the Tilghman's Neck Public Road in a generally southerly direction to the real estate hereinabove described and hereby conveyed, in common with the said Clarence W. Miles and Eleanor A. Miles, his wife, their heirs and assigns.
SUBJECT NEVERTHELESS to the legal effect of the easements granted to Virginia B. Potter, her heirs and assigns, by the said Clarence W. Miles and Eleanor A. Miles, his wife, by deed dated March 13, 1958, and recorded among said land records in Liber T.S.P. No. 40, folio 23; * * *."
The Carroll tract enveloped all land boundaries of the Everdell tract, extending from Queenstown Creek to the Chester River.
The previously recited clause in the deed from Miles to Potter, coupled with that in the subsequent deed from Potter to Everdell, had the legal effect of making the remaining property of Miles and ergo, the property of Carroll, servient to the dominant right of Everdell to the extent of the interest thereby created and conveyed. Desch v. Knox, 253 Md. 307, 310, 252 A.2d 815, 817.
In the subject litigation Everdell, owner of the dominant estate, sought to enjoin Carroll, owner of the servient estate, from maintaining allegedly unlawful obstructions within *462 the right-of-way. The answer of Carroll admitted placement of "bumps" and barriers along the lane, but maintained in substance that they did not impinge upon Everdell's reasonable use of the lane and were within Carroll's dominion as reasonably necessary for his enjoyment of the fee through which the lane ran. The answer also alleged that Everdell was estopped to seek injunction because he had made "representations that he would join in such experimentation." Carroll also filed a counterclaim, alleging agreement by Everdell to relocate the 20 foot right-of-way leading from the Everdell property to the east-west leg of the farm lane. The trial court denied Everdell's claim for injunction and dismissed Carroll's counterclaim.
Although both Everdell and Carroll entered appeals from the decree of the trial court, the brief of Carroll declares: "The denial of the Counterclaim is not a subject of this appeal." Such denial will, accordingly, not be considered in this opinion.
We hold that the recited clause in the deed from Miles to Potter, supra, granted a right-of-way only. The deed evidenced a clear intent to retain in Miles such other rights or benefits of his fee simple estate as were not inconsistent with such grant.
In 1829 it was declared in Bosley v. Susquehanna Canal, 3 Bland 63, 67:
"A right of way, whether public or private, is essentially different from a fee simple right to the land itself over which the way passes. A right of way is nothing more than a special and limited right of use; and every other right or benefit derivable from the land, not essentially injurious to, or incompatible with the peculiar use called the right of way, belongs as absolutely and entirely to the holder of the fee simple as if no such right of way existed. He is, in fact, for every purpose considered as the absolute owner of the land, subject only to an easement or servitude; he may recover the land so charged by ejectment; he may *463 bring an action of trespass against any one who does any injury to it, not properly incident to an exercise of the right of way; he has a right to the trees growing upon it; to all minerals under its surface; he may carry water in pipes under it; and the freehold with all its profits, not inconsistent with the right of way, belong to him."
There has been no departure from that rule of law. Desch v. Knox, supra.
The deed from Miles to Carroll granted and conveyed by metes and bounds description all the right, title, interest and estate in the 266.256 acre tract, subject only to the right-of-way previously granted by the Potter deed. Contained within that description was that part of the bed of the farm lane involved in the subject proceeding. Thus Carroll became seized and possessed of the fee simple estate therein except to the extent that the same had become servient to the right-of-way previously granted by Miles to Potter. All rights of Potter, of course, by mesne conveyance had been granted and conveyed to Everdell.
We find that our decision is controlled by the rules of law laid down in Baker v. Frick, 45 Md. 337. Although an action at law,[3] Baker v. Frick is a leading case, applicable to and cited with approval in injunction cases. This case first enunciated in Maryland the rules of law under which the respective rights of the owners of dominant and servient estates are to be determined in disputes relating to modification of rights-of-way. At page 340, et seq., it was said:
"The road in question is a private way over the defendant's lands. `Nothing passes as incident to such a grant, but that which is necessary for its reasonable and proper enjoyment.' 3 Kent, 419, 420.
"What is necessary for such reasonable and *464 proper enjoyment of the way granted, and the limitations thereby imposed on the use of the land by the proprietor, depends upon the terms of the grant, the purposes for which it was made, the nature and situation of the property subject to the easement, and the manner in which it has been used and occupied.
"As said by Marshall, C.J., in Maxwell v. McAtee, 9 B. Mon. 21, `Notwithstanding such a grant, there remains with the grantor the right of full dominion and use of the land, except so far as a limitation to his right is essential to the fair enjoyment of the right of way which he has granted. It is not necessary that the grantor should expressly reserve any right which he may exercise consistently with a fair enjoyment of the grant. Such rights remain with him, because they are not granted. And for the same reason, the exercise of any of them cannot be complained of by the grantee, who can claim no other limitation upon the rights of the grantor, but such as are expressed in the grant, or necessarily implied in the right of reasonable enjoyment.'
"In that case it was decided that `the grant of a right of way over or through the lands of an individual, does not imply that the grantor may not erect gates at the points, where the way enters and terminates.' That decision has been approved by courts of high authority in other States. Bean v. Coleman, 44 N.H. 539; Garland v. Farber, 47 N.H. 301; Hoopes v. Alderson, 22 Iowa, 161; Bakeman v. Talbot, 31 N.Y. 366, 370, 371; Huron v. Young, 4 Lansing, 63."
The Court added at 343:
"The questions whether under all the circumstances of the case, as disclosed by the testimony, the gates were necessary to the defendant for the useful and beneficial occupation *465 of his land, looking to the situation of his property; and whether the particular gates complained of, were usual and proper under the circumstances, and the further question whether their existence upon the road interfered with the reasonable use of the right of way by the plaintiff, considering the situation of his property and the manner in which it was occupied, and the interest of the parties as to the mode in which the right of way was to be used; these were all questions proper to be decided by the jury, upon the evidence in the case. On all these questions testimony was offered, legally sufficient to be submitted to the jury.
"`The doctrine that the facilities for passage where a private right of way exists, are to be regulated by the nature of the case, and the circumstances of the time and place, is very well settled by authority. Hemphill v. Boston, 8 Cush. 195; Cowling v. Higinson, 4 M. & W. 245. The last case determines in effect, that the extent of the privilege created by the dedication of a private right of passage, depends upon the circumstances and raises a question for the determination of the jury.' Bakeman v. Talbot, 31 N.Y. 370. We refer also to Hawkins v. Carbine, 3 Exch. 914, and Huron v. Young, 4 Lansing, 64."
In substance, Baker v. Frick had stated the proposition that, unless the terms of the grant itself prohibited such a course, or the purposes for which the grant was made, and the nature and situation of the property subject to the easement and the manner in which it has been used and occupied, implied such prohibition, the installation of a gate at the termini of the right-of-way would be permissible if:
1. Its installation was necessary for the useful and beneficial occupation of the land of the servient estate; and
2. The particular gates complained of, were usual and proper under the circumstances; and
*466 3. The installation did not interfere with the reasonable use of the right-of-way by the dominant estate.
We address ourselves to the threshold question, namely, whether the right of the servient owner to make any modification of the right-of-way is prohibited either by the specific terms of the grant or by necessary implication.
The granted right-of-way did not in express terms deprive Carroll of the right to erect gates and thus did not expressly grant to Everdell an open road without gates. The purpose for which the grant was made is apparent from its very terms, namely, "the right of ingress to and egress from the [Everdell] property in common with the [fee owner]."
The nature, situation, use and occupation of the property subject to the easement at the time of its creation, however, is not capable of such easy definition. That part of the total right-of-way with which the present litigation is concerned was not the subject of a metes and bounds description, the document of its creation delineating it merely as a "private lane leading from the Tilghman's Neck Public Road into and through the farm building area of Blakeford Farm and thence by a farm lane to [a fully described twenty foot right-of-way].[4] The lane, replete with 90° turns, literally divided the Miles (now Carroll) tract into segments of various shapes and sizes. Movement, either afoot or by vehicle, from segment to segment of the servient estate, compelled continuing passage along or across its irregular route by its owner in the daily use of the lands through which it passed. Numerous dwellings, garages, sheds, silos and other farm buildings in close proximity to the lane were in such positions along its serpentine course that their utilization compelled movements into, across and along its path.
We are persuaded that a necessary modification of the right-of-way by the owner of the servient estate was not *467 explicitly or implicitly forbidden by the grant in the subject case. This conclusion simply means that the threshold requirement of Baker v. Frick has been met and we are required to pass to the elements essential to the application of its doctrine, namely, whether the evidence shows: (a) that modification of the right-of-way was necessary for the useful and beneficial occupation of the servient estate; (b) that the installations were usual and proper under the circumstances; and (c) that the installations did not interfere with the reasonable use of the right-of-way by Everdell.
Both Everdell and Carroll had rented their respective homes for about two years prior to their purchase. No traffic devices had been installed during that period. There was testimony to the effect that speeding vehicles on the lane had been a problem since 1956 with three or four accidents. None of the accidents had occurred at the points where "bumps" or barricades had been placed by Carroll. Both Carroll and Everdell are seasonal users of their respective properties.
The installation by Carroll of a series of "bumps" was a forerunner to the subject litigation. Although their installation initiated the Carroll-Everdell dispute, no substantial role was played by the "bumps" in continuing complaints by Everdell against Carroll or in the testimony of the witnesses. Indeed, the record tends to suggest that a decrease in the elevation of the "bumps" had created conditions whereby they no longer presented objectionable deterrent to the movement of traffic.
While the dispute concerning the "bumps" was raging between Everdell and Carroll and their respective counsel, discussions turned to possible use of some type of barricade to accomplish speed reduction of vehicles using the lane, in lieu of the "bumps". Everdell acknowledged such discussions, but protested vehemently when Carroll, without further notice, caused to be constructed and installed six gate-like wooden barricades, each approximately of the same size and shape, with a length approximately one half the width of the lane. The six partial barricades were set up in three pairs. These pairs, described by the witnesses as *468 barricades 1, 2 and 3, were installed in the lane in that progressive order from the direction of the public road toward the Everdell property. Placed on opposite sides of the lane, each half-gate of the pair would be set up at varying distances from its companion. We have reproduced a portion of plaintiff's Exhibit D showing the passage of the lane through the Carroll property with locations of the "bumps" and barriers indicated upon it.[5]
Barricade No. 1 was set up at a point where the lane began its passage through the first turn in the farm building area, with its first barrier being set up on the north side of the lane and the other barrier set up at a point near the stable on the opposite side. The outer wings of the respective barriers were 24 feet apart.
Barricade No. 2 was installed at the southernmost end of the farm building area with its first barrier placed at a point in the lane near the side of a garage and the other barrier set up beyond it on the opposite side. The outer wings of the respective barriers were 25 feet apart.
Barricade No. 3 was set up along the east-west course of the lane with its first barrier placed on the south side of the lane in approximate line with a hedge row 6 feet high at the end of a field planted in corn. Both the hedge row and the growing corn served to restrict visibility at the point. Its second barrier was placed on the opposite side of the lane in front of a garage or tool shed and shop. The outer wings of these respective barriers were 20 feet apart. For reasons that will hereafter become manifest, barricade No. 3 is at once the principal vexation of Everdell and the indispensable requisite of Carroll. The separate wings of barricade 3 are the closest of any. We have reproduced, in part, plaintiff's Exhibit D-3, a plat showing the point of placement in the lane.[6]
The barricades initially were constructed in the form of half gates standing upon small attached platforms at either end. Later the first two barricades were hinged to metal *469 rods inserted in pipes driven into the ground at or near the edge of the road. The third barricade was hinged to wooden posts but installed in such a way that swinging movement was limited by copper tubing so fixed upon the barrier that it could be moved up or down into or out of a pipe sunk into approximately the center of the right-of-way.
Carroll's reasons for the placement of the "bumps" and barricades are substantially fully articulated in the following quotation from his testimony:
"We gave this a good deal of thought and decided there were three real bad places, one at this corner of the farm yard, at the south corner of the farm yard, and in this work shed area just north of our house, those seem to be the three main danger points. The reason for the latter was cars would come out here at the north end of Mr. Everdell's described right-of-way and they would build up whatever, according to the type of car they had, whatever speed they could and really come in this turn with quite a bunch of speed. All of a sudden there was a very straight road and a very settled area and we felt we had to have some way of slowing them down at this place."
In further reference to barricade No. 3 Carroll said: "In order to get from our house to any main enjoyment of our living area of the farm we have to cross this road, and we do it continually."
Motion picture films showing the movement of vehicles through the several barricades formed part of the evidence presented in the trial court. Those films were shown also in this Court during the argument on appeal. The films showed the movement of: (a) a passenger vehicle and (b) a truck through the barricades in each direction. Such movements could be made at slow speed through barricades 1 and 2 without leaving the established course of the lane, although those barricades did occupy a portion of the bed of the existing lane. The passage of the vehicles through barricade 3 could be accomplished only at slower speed and by leaving *470 the established course of the lane. The trial judge described this condition as follows:
"The film shown by Defendant of this area [Barricade No. 3] indicates a modest weave of a vehicle of less than 50 degrees for a distance of less than 25 feet."
Our view of the films indicated that the vehicles shown were compelled to leave the established lane and to encroach upon the lands of Carroll, the truck to a greater extent than the passenger vehicle.
Everdell summarized his basic objection to the barricades as follows: (1) they forced vehicles to leave the established way; (2) they operated to delay heavy equipment that could produce very serious harm or inconvenience; (3) they may not be movable in the event of snow; (4) they forced motor vehicles onto the wrong side of the lane, thereby placing them in the path of oncoming traffic, and (5) at night, people unfamiliar with their existence were endangered by their presence.
Leonard Yates, a realtor, testifying as an expert witness for Everdell, said that the "bumps" and barricades reduced the value of the Everdell property. He explained that prospective purchasers would ask themselves, "If this can be constructed what else can be done to this right-of-way?" He said that close inquiry normally is made by prospective buyers about rights-of-way. On the other hand, Robert Sharp, a real estate broker called by Carroll, testified that in his opinion the "bumps" and barricades had no depreciating effect upon the value of the Everdell property. The rhetorical question asked by Yates points up the dangers inherent in unilateral modification of a right-of-way, particularly here, where it is conceded that actual collisions have occurred on other parts of the lane but never within the areas of the subject modifications.
Two witnesses, one the Chief of Police of Centreville, called by Everdell, thought that barricade No. 3 would constitute an added danger because of its screening effect *471 upon the movement of small children across the lane. No other lane in Queen Anne's County had such barricades.
The garbage truck could pass through the barricades only by swinging them upon their hinges. The fire chief of the nearby Volunteer Fire Department testified that fire equipment consisting of an "aerial platform" and an "800 gallon pumper" went through the first two barricades "without much difficulty, the third one we did have to slow down and move the gate."[7] He doubted whether the tank wagon and the tractor trailer of the fire department "could get through that barn area * * * even without the gates there."
That the trial judge gave great weight to the testimony of Steven D. Peterson, an expert witness produced by Carroll, is apparent from this excerpt of the court's opinion:
*472 "The Court also places great weight and confidence in the testimony of Steven D. Peterson, a traffic engineer who qualified as an expert in the field, employed by the Carrolls to make a study of the right-of-way and recommend traffic control devices for the safety of those residing on the property and all who use the right-of-way.
"Mr. Peterson stated that Defendants were warranted in their fear of traffic accidents at two blind zigzag corners where traffic might go 25 m.p.h. if not slowed; and also there is a danger spot in the area of the machinery shed. This is a 3/10 mile straightway that traffic could approach at a high rate of speed unless controlled in some manner.
"Mr. Peterson recommended the traffic controls now in use and feels that they are proper to control the traffic in the areas considered dangerous on the right-of-way. He also recommended warnings to the public of the location of the barricades, as well as trimming of the shrubbery where it might obscure parts of the roadway for travelers.
"Mr. Peterson also stated that there had been no substantial number of accidents, but close calls; and that he estimated there were not over 100 vehicles a day on the property, around 30 in the farm building area, and around 5 to 7 go through and use the third barricade."
Although Baker v. Frick dealt with a case involving gates placed at the termini of the right-of-way, we do not limit application of its rule to such cases. Indeed, in Frank v. Benesch, 74 Md. 58, 21 A. 550, where the Court approved modification along the course of the right-of-way by the owner of the servient estate, Baker v. Frick was cited with approval. Nor is there any doubt that the rule of Baker v. Frick continues viable. It was cited with approval in Simon Distributing Corp. v. Bay Ridge Civic Association, 207 Md. 472, 114 A.2d 829, and in Reddick v. Williams, 260 Md. 678, 273 A.2d 153, although the right of the servient estate to *473 modify the right-of-way was denied in both. It was denied in Simon because the proof did not even meet the threshold question: the purpose of the grant itself implied a prohibition against gates. It was denied in Reddick because the installation of a gate interfered with the reasonable use of the right-of-way by the dominant estate.
In the subject case the trial judge, declaring that he was "impressed by the manner and seriousness with which Mr. Carroll * * * addressed himself to the dangerous traffic problem on the right-of-way" found that Carroll "has not done anything unreasonable in making an effort to control traffic on the right-of-way through his 266± acre property, for the protection of those living there as well as others using the right-of-way," and "that the offset barricades are not an unreasonable interference with plaintiff's right of common use of the right-of-way for ingress and egress and are necessary for the safe and normal use of the right-of-way by defendants and others." Determination of those questions where the facts are in dispute, is a matter for the trier of facts. Baker v. Frick, supra, at 343; Gillett v. Van Horne, 36 S.W.2d 305 (1931 Tex. Civ. App.); Annotation, 52 A.L.R.3d 9, et seq. We cannot say that the conclusion of the trial judge is clearly erroneous as to barricades no. 1 and no. 2. Maryland Rule 1086.
Barricade no. 3, however, stands in a different legal position. The testimony is uncontroverted that in the movement of vehicles through the barriers forming this barricade, they are compelled to leave the established right-of-way. A right-of-way may not be relocated without the consent of the owners of both the dominant and servient estates. Millson v. Laughlin, 217 Md. 576, 588, 142 A.2d 810, 816.
We hold accordingly, that the appellant is entitled to an injunction requiring removal of barricade no. 3 in its present form unless barred by estoppel.
Estoppel
Carroll urges that in any event, Everdell is estopped by conduct from obtaining the relief he seeks. Conversations *474 and correspondence concerning the right-of-way had been carried on between Carroll and Everdell and their respective counsel. Initially they related to protests by Everdell because Carroll had installed, without prior agreement, two "bumps" in the area of the farm buildings. Everdell's objections were disregarded and additional "bumps" were installed by Carroll in the east-west section of the lane after notice to, but again without the consent of Everdell. The latter installation infuriated Everdell because of the severe jolting effect upon vehicles and their passengers upon approaching and leaving his dwelling. The "bumps" were scaled down following meetings between Carroll and Everdell, but even then were left at an elevation regarded as too high by Everdell. Everdell continued to protest. Meetings followed at which discussions were had concerning introduction of a new right-of-way to eliminate the need for the "bumps." As heretofore stated, Carroll did not press his appeal from the decree rejecting his claim that agreement upon a new right-of-way had been reached by the parties. It was during such meeting that experimentation with barricades was discussed. The trial judge said that he could not find that the parties had reached a meeting of the minds on an agreed location for barricades. We cannot say that his conclusion was clearly erroneous. Rule 1086.
J.F. Johnson Lumber Co. v. Magruder, 218 Md. 440, 147 A.2d 208; Vogler v. Geiss, 51 Md. 407; and Millson v. Laughlin, supra, all cited by the appellees on the estoppel issue, do not aid them. In Johnson, the Court declared at 448 [212] that the doctrine "* was educed to prevent the unconscientious and inequitable assertion of rights or enforcement of claims which might have existed or been enforceable, had not the conduct of a party, including his spoken and written words, his positive acts and his silence or negative omission to do anything, rendered it inequitable and unconscionable to allow the rights or claims to be asserted or enforced." Everdell's protests, early and late, negate estoppel. He had expressed willingness to negotiate, but this seems to have been followed only by unsatisfactory unilateral action presenting him with a fait accompli. In *475 Vogler, the Court reversed because evidence tending to show agreement to an easement change had been ruled inadmissible. That case offers no guide to our decision. In Millson, (589 [817]) the record showed that "[t]here was evidence from which the conclusion could be drawn that the defendant had abandoned the old road when she never used it after acquiescing in the construction of the new straighter road and had never protested the closing of the old road." All are patently distinguishable from the subject case. Here there was a mere agreement to experiment. We urge continuation of experimentation in efforts to reach agreement. We can find in this record, however, no action or inaction by Everdell such as operated to bar his right to claim relief in the subject litigation.
Reversed in part and affirmed in part and case remanded for issuance of an injunction as to barricade No. 3.
Costs to be divided.
*476
*477
NOTES
[1] Detailed description of the last mentioned right-of-way is omitted because it does not affect, save in a quite collateral way, the present dispute between the parties.
[2] By deed of even date, John Lee Carroll conveyed an undivided one-half interest in the tract to his wife, Cornelia T. Carroll. She was joined as a party defendant and is an appellee in this Court.
[3] The precise nature of the proceedings is obscure, the opinion stating at page 338: "The case was docketed by consent, and all errors of pleadings and questions of jurisdiction were waived by agreement."
[4] The fully described right-of-way leading from the last mentioned farm lane to the Everdell residence has undergone no change. It thus plays no part in this aspect of the litigation. It does bear collaterally upon an alleged estoppel urged by Carroll. This will be discussed infra.
[5] Attached hereto in Appendix.
[6] Attached hereto in Appendix.
[7] The record shows the following testimony concerning barricade 3 as it existed at the time of trial:
"`Mr. Carroll has just modified the final set of barricades by swinging them on posts and hinges so that they swing very freely. Each of these gates is held in place in the roadway by a short piece of three-eighths inch copper tubing inserted through a staple on the gate into a pipe socket in the ground. It would take hardly any force to break any such tubing. Therefore, Mr. Carroll wanted me to notify you that the fire truck should just bump the gate open without any need to stop and lift it aside as formerly.'
Q. Now, assuming the description in my letter is accurate in this final modification this would mean, would it not, that your equipment could go all the way to Mr. Everdell's house without having to stop, is that right?
A. That is correct."
The trial court then inquired:
"THE COURT: What would be the effect of a snow on this Mr. Starkey?
THE WITNESS: I would say we would have to stop and open them up if it was any amount at all.
THE COURT: In the meantime, the Everdell's house could burn down?
THE WITNESS: If we couldn't get there, this is true, right."
The ancient manor house on the land had been destroyed by fire in 1970 during the ownership of Miles.
There was other evidence that it was the imposed duty of a tenant farmer engaged by both Carroll and Everdell, to keep the lane clear of snow and that the barriers furnished no impediment to its removal.
| {
"pile_set_name": "FreeLaw"
} |
Q:
Code first entity framework inheritance
I am using Entity Framework to build a data-driven app. I have a base class with lots of shared properties such as timestamp, Id, creator, etc., and I subclass it for all of my actual objects. Is this a good design? Is there a limit to the number of entities I can create like this?
A:
It is good design. It would be bad design, of course, if Entity Framework didn't support inheritance as an ORM feature. But it does!
As far as I know, there is no practical limit to the number of entities you can define and use in Entity Framework.
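For illustration only, here is a minimal code-first sketch of that pattern. The names AuditableEntity, Post, Comment and BlogContext are made up for this example and are not taken from the question. If the base class is abstract and has no DbSet of its own, Entity Framework simply folds the inherited columns into each derived entity's table; if you do add the base class to the model, EF defaults to table-per-hierarchy mapping, which you can change with data annotations or the fluent API.

using System;
using System.Data.Entity; // EF code-first (EF 5/6)

public abstract class AuditableEntity
{
    public int Id { get; set; }
    public DateTime Timestamp { get; set; }
    public string Creator { get; set; }
}

public class Post : AuditableEntity
{
    public string Title { get; set; }
    public string Body { get; set; }
}

public class Comment : AuditableEntity
{
    public string Text { get; set; }
}

public class BlogContext : DbContext
{
    // Only the concrete entities get DbSets; the base class just supplies the shared columns.
    public DbSet<Post> Posts { get; set; }
    public DbSet<Comment> Comments { get; set; }
}

However many subclasses you define this way, the practical costs are longer model-building time at startup and a larger schema, not a hard ceiling in Entity Framework itself.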
| {
"pile_set_name": "StackExchange"
} |
Electron image series reconstruction of twin interfaces in InP superlattice nanowires.
The twin interface structure in twinning superlattice InP nanowires with zincblende structure has been investigated using electron exit wavefunction restoration from focal series images recorded on an aberration-corrected transmission electron microscope. By comparing the exit wavefunction phase with simulations from model structures, it was possible to determine the twin structure to be the ortho type with preserved In-P bonding order across the interface. The bending of the thin nanowires away from the intended 110 axis could be estimated locally from the calculated diffraction pattern, and this parameter was successfully taken into account in the simulations. | {
"pile_set_name": "PubMed Abstracts"
} |
Named Entity Results, Robert Edward Lee
on Major James W. Thomson, who lost his life while leading a cavalry charge at High Bridge on General Lee's retreat from Petersburg.
Captain McDonald said:
The mighty throng of the living strewinfitting presence in which to real the memory of one who, among all the brave hearts that followed Lee and Jackson, was unsurpassed by none in a romantic devotion to the lost cause.
The mountains thaief and made him a conspicuous figure in that last drama of the war. On that memorable retreat of Lee to Appomattox, when disasters thickened and famine and the sword was destroying his gallant army,d in the arm, fought his last battle.
The Pitch field was near High Bridge, over which a part of Lee's army expected to cross the Appomattox.
A picked body of Federal cavalry and infantry under Colnd heroism of the Federal soldiers.
He paid a tribute to General Grant for refusing to allow General Lee to be indicted and imprisoned.
At the conclusion of General Hooker's address Captain Willi
other; while monuments to our heroes stand all over the land, yet we want a monument in which should be represented the mothers, wives, daughters, and sisters of R. E. Lee, Stonewall Jackson, Albert Sidney Johnston, Jubal A. Early, G. T. Beauregard, J. E. B. Stuart, George E. Pickett, Fitz Lee, and all the mothers, wives, sisters, sent, my dear comrades, the brave men and beauteous women who surround me, when I say that we should be unworthy of the banner we once followed and unworthy of Robert E. Lee if we were not, twenty-nine years after Appomattox, as loyal to the country and the Star-Spangled Banner as any northern man living or dead.
Brave men do not the West, coming across ocean and continent, passing over the city of the dead (Hollywood) and of the living (Richmond), light up the heroic forms in bronze of Robert E. Lee and George Washington, forming, as they reach the Confederate soldier and the Confederate woman, through the falling rain, a gorgeous rainbow, spanning the who
y fast and became wealthy.
Another who went to Egypt was General A. W. Reynolds.
He served awhile, dropped out of service, and then settled down in the country of his adoption.
The careers of Early and Beauregard are well known.
They lived and prospered in New Orleans, where they superintended the drawings of the Louisiana Lottery Company. General Early's death occurred in Virginia only a few months ago. He was one of the last of the great southern generals.
The latter days of General R. E. Lee's life were passed in the quiet at Lexington, in his native State, where he became an instructor of young men. The duties of a college president were faithfully carried out by him, although it was probable that the last years of his life were filled with infinite sadness.
Of the remaining brilliant leaders of the Lost Cause some dropped from sight and memory, others had a quiet and prosperous old age, but few fared worse than General Thomas Benton Smith.
He passed his later years i
The Confederate Navy.
What it accomplished during the Civil War. [from the Richmond, Va., times, April 15 and 22, 1894.]
A very interesting and valuable paper read before R. E. Lee Camp by Mr. Virginius Newton.
This valuable resume is from a corrected copy kindly furnished by Mr. Newton, a live citizen of Richmond, whose agency is felt, if not proclaimed.
His modesty would fain keep in the shade his merit.
His heart holds all of the memorable past, as the readers of the Papers, as well as the local press, warmly know—Ed.
Southern Historical Society papers.
Several weeks ago Mr. Virginius Newton, of this city, was requested by the members of Lee Camp to read before that body a paper relating to some of the numerous episodes during the late war. Mr. Newton responded with the promptness of a gallant soldier, and selected as his subject the Confederate Navy and its noble deeds
He succeeded in giving in the most condensed form a statement of the many noble deeds e
e in General Hancock's five regiments in great confusion and caused his guns rapidly to flee away, and indeed, would probably have captured them all had they not been ordered to halt and return, for these were the same Virginians of whom wrote General Lee on a late occasion: We tried very hard to stop Pickett's men from capturing the breastworks of the enemy, but could not.
It is this Virginia charge, led soon after it opened, by myself (the major), General Early, Colonel Terry and Lieutenanopening of that memorable campaign, not only stunned the enemy—who never attacked again on the Peninsula!— but furnished the whole army with an inspiring example, which could not but have an admirable effect.
General Hill found them, as did General Lee afterwards, too ready to get ahead, for he says that the Twenty-fourth pressed before all the other regiments, and without waiting for them to come up and the line to be formed, dashed at the enemy as soon as they saw him, and before he was re
ops were withdrawn and sent to reinforce General Grant about Cold Harbor, and all of General Beauregard's forces, except Bushrod Johnson's Brigade, of which my regiment, the Sixty-third Tennessee Infantry, formed a part, were sent to reinforce General Lee.
Johnson's Brigade suffered heavily in the battle of Drewry's Bluff, my regiment losing fifty per cent. in killed and wounded; the brigade at this time numbered only five hundred effective men.
About the middle of June General Grant seems to have stolen a march on General Lee, and suddenly throwing his entire army to the south side of the James, moved upon Petersburg, which, notwithstanding it was regarded as the key to Richmond, was wholly unprotected except by home guards and some reserve artillery which had been stationed there.
On the afternoon of June 15th, General Johnson was notified of the threatened attack upon Petersburg, and he immediately ordered the evacuation of the line in front of Bermuda Hundreds, and marched
. L. Long, the chief of artillery of the expedition, the gallant officer, who, notwithstanding the loss of his eyesight, spent his declining years in writing a history of this operation, in which he took a worthy part, says in his memoir of General R. E. Lee: This campaign of General Early's is remarkable for accomplishing more in proportion to the force employed and for having given less public satisfaction than any other campaign of the war. This is entirely due to the erroneous opinion that e reason to rejoice.
If none but those who did as well threw the first stone, it would remain long unflung.
Lee's faith in Early.
General Early had the satisfaction of retaining the confidence and good opinion of his great commander, R. E. Lee.
After all reverses in the Valley, Lee, on the 20th of February, 1865, extended his command to embrace the Department of West Virginia and East Tennessee, previously commanded by General John C. Breckinridge, who had now become the Secretary o
s noble Vindi-Cation of the Southern cause.
A demonstration but little less imposing than the parade on the occasion of the Dedication of the Monument to Gen. R. E. Lee in 1890.
The Confederate Soldiers' and Sailors' monument stands unveiled in all its towering and majestic proportions—the suggestion of a grand eternal beaof peace had but once before been seen in Richmond.
There were possibly more soldiers here on the day that the equestrian statue to the memory of the immortal Robert E. Lee was unveiled, but upon no other occasion has there been such a parade.
There were in the parade more than two thousand veterans, who, fast passing beyond the rigade was under the charge of First Lieutenant Grand Commander C. W. Murdaugh, with Colonel John Murphy as aide.
It comprised the following camps and bands:
R. E. Lee Camp, No. 1, E. Leslie Spence commanding; 250 men. The Social Home Band, of Richmond.
Maury Camp, of Fredericksburg, T. F. Proctor commanding; thirty men. | {
"pile_set_name": "Pile-CC"
} |
// Copyright 2019 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package com.google.firebase.crashlytics.internal.unity;
public interface UnityVersionProvider {
/**
* Get the Crashlytics Unity package version.
*
* @return {@link String} Crashlytics Unity package version if available, or <code>null</code> if
* the Crashlytics Unity package is not installed.
*/
String getUnityVersion();
}
| {
"pile_set_name": "Github"
} |
Eeny Meeny Murder Mo
"Eeny Meeny Murder Mo" is a Nero Wolfe mystery novella by Rex Stout, first published in the March 1962 issue of Ellery Queen's Mystery Magazine (#220). It first appeared in book form in the short-story collection Homicide Trinity, published by the Viking Press in 1962.
Plot summary
Bertha Aaron, a secretary at a law firm, comes to the brownstone to hire Wolfe to investigate a possibly serious ethical lapse by a member of the firm. She has no appointment and arrives during Wolfe's afternoon orchid session, so Archie gets the particulars from her.
The firm she works for is representing Morton Sorell in a messy, highly publicized divorce. A few evenings ago, Aaron noticed a junior member of the law firm – she won't say which one – in a cheap eatery, tête-à-tête with Mrs. Rita Sorell, the firm's opponent in the divorce action. That sort of ex parte communication is highly improper. Later, she asked the lawyer about it, and he wouldn't discuss the matter. She won't take the problem to the firm's senior member, Lamont Otis, because she fears that the news, coupled with Otis's advanced age and heart condition, will kill him. But it has to be investigated.
It's a novel problem, and Archie takes the unusual step of consulting Wolfe in the plant rooms. Because the case concerns a divorce, it's one that Wolfe normally would not touch. But because legal ethics, not the divorce itself, is the central issue, Archie thinks there's a chance Wolfe will take it. Even so, Wolfe tells Archie he won't do it, and Archie returns to the office to give Aaron the bad news.
Back in the office, Archie finds he can't give the news to her because she's dead, hit on the head with a heavy paperweight and then strangled with a necktie. It's Wolfe's paperweight. Even worse, it's Wolfe's necktie. He had spilled some sauce on it at lunch, removed it, and left it on his desk where someone could find it and use it to strangle Bertha Aaron.
Late that night, after Inspector Cramer and other police investigators have left, Otis arrives, along with one of the law firm's associates, Ann Paige. The death of his valued secretary has upset Otis, and he wants to know what happened.
Wolfe allows Otis to read a copy of the statement Archie gave the police, and Otis is clearly shaken by the report of the ex parte communication. Otis asks Paige to leave Wolfe's office – he wants to discuss things privately – and Archie escorts her to the front room. Wolfe and Otis discuss the situation at length, and Wolfe gets Otis's take on the three junior members of the firm, one of whom Aaron saw talking with Mrs. Sorell. During their discussion, Archie checks on Paige, and finds that she has opened the window in the front room and, apparently, jumped down to the sidewalk. She is nowhere to be found.
The next morning, Archie calls on Rita Sorell, using as entrée a note he's written, informing her that she and the unidentified junior member were seen together in the restaurant. He wants to bring her to talk with Wolfe, but she plays dumb, and the best Archie can get from her is a promise to phone later in the day.
On returning to the brownstone, Archie finds the office occupied only by a man he doesn't recognize. He finds Wolfe at the peephole, and learns that the man's name is Gregory Jett, one of the law firm's junior members. Jett is there to complain that Wolfe's behavior caused Otis undue stress. Brushing aside Jett's complaint, Wolfe learns that Jett is engaged to marry Ann Paige, and also that he had a brief fling with Rita Sorell a year earlier.
Then the two other junior members, Frank Edey and Miles Heydecker, arrive looking for information and acting like lawyers. Mrs. Sorell's promised phone call comes, and she tells Archie that Bertha Aaron must have seen her talking with Gregory Jett. Wolfe and Archie regard this information with skepticism: she seems to them devious.
Now Wolfe tells them what Aaron had to say before she was murdered – as yet, that's been disclosed only to the police and to Lamont Otis. Wolfe also states his assumption that the guilty lawyer followed Aaron to Wolfe's office, convinced her to admit him while Archie was in the plant rooms with Wolfe, and then took the opportunity to kill her.
The problem is that the three lawyers share a mutual alibi for the date and time that Aaron was murdered: they were in conference together at their office, fully a mile from the brownstone. The lawyers leave, suspicious of one another, and not happy.
When Wolfe then learns from Inspector Cramer that the timing apodictically exonerates Edey, Heydecker and Jett, he arranges for all involved to be brought to the brownstone for the traditional climax. This time, though, all but one are in the front room, listening via hidden microphone to Wolfe talk things over with the murderer.
Cast of characters
Nero Wolfe — The private investigator
Archie Goodwin — Wolfe's assistant (and the narrator of all Wolfe stories)
Bertha Aaron — Private secretary to the senior partner in a law firm, and murder victim
Rita Sorell — Retired stage actress, suing her husband for divorce
Lamont Otis — Senior member of the law firm representing Mrs. Sorell
Frank Edey, Miles Heydecker and Gregory Jett — Other members of the firm
Ann Paige — Associate in the firm
Inspector Cramer and Sgt. Purley Stebbins — Representing Manhattan Homicide
The unfamiliar word
"Readers of the Wolfe saga often have to turn to the dictionary because of the erudite vocabulary of Wolfe and sometimes of Archie," wrote Rev. Frederick G. Gotwald.
Schlampick. A variant of schlampig. Chapter 1, spoken by Archie.
Publication history
"Eeny Meeny Murder Mo"
1962, Ellery Queen's Mystery Magazine, March 1962
1962, Ellery Queen's Mystery Magazine, British edition, July 1962
Homicide Trinity
1962, New York: The Viking Press, April 26, 1962, hardcover
Contents include "Eeny Meeny Murder Mo", "Death of a Demon" and "Counterfeit for Murder".
In his limited-edition pamphlet, Collecting Mystery Fiction #10, Rex Stout's Nero Wolfe Part II, Otto Penzler describes the first edition of Homicide Trinity: "Blue cloth, front cover stamped in blind; spine printed with deep pink; rear cover blank. Issued in a mainly blue dust wrapper."
In April 2006, Firsts: The Book Collector's Magazine estimated that the first edition of Homicide Trinity had a value of between $150 and $350. The estimate is for a copy in very good to fine condition in a like dustjacket.
1962, Toronto: Macmillan, 1962, hardcover
1962, New York: Viking (Mystery Guild), August 1962, hardcover
The far less valuable Viking book club edition may be distinguished from the first edition in three ways:
The dust jacket has "Book Club Edition" printed on the inside front flap, and the price is absent (first editions may be price clipped if they were given as gifts).
Book club editions are sometimes thinner and always taller (usually a quarter of an inch) than first editions.
Book club editions are bound in cardboard, and first editions are bound in cloth (or have at least a cloth spine).
1963, London: Collins Crime Club, February 18, 1963, hardcover
1966, New York: Bantam #F-3118, February 1966, paperback
1993, New York: Bantam Crime Line August 1993, paperback, Rex Stout Library edition with introduction by Stephen Greenleaf
1997, Newport Beach, California: Books on Tape, Inc. October 31, 1997, audio cassette (unabridged, read by Michael Prichard)
2010, New York: Bantam Crimeline July 7, 2010, e-book
Adaptations
A Nero Wolfe Mystery (A&E Network)
"Eeny Meeny Murder Mo" was adapted for the first season of the A&E TV series A Nero Wolfe Mystery (2001–2002). Directed by John L'Ecuyer from a teleplay by Sharon Elizabeth Doyle, the episode made its debut June 17, 2001, on A&E.
Timothy Hutton is Archie Goodwin; Maury Chaykin is Nero Wolfe. Other members of the cast (in credits order) are Bill Smitrovich (Inspector Cramer), Saul Rubinek (Lon Cohen), Colin Fox (Fritz Brenner), George Plimpton (Lamont Otis), Kari Matchett (Rita Sorell), Trent McMullen (Orrie Cather), Conrad Dunn (Saul Panzer), Robert Bockstael (Gregory Jett), R.D. Reid (Sergeant Purley Stebbins), Christine Brubaker (Bertha Aaron), Janine Theriault (Angela Paige), David Schurmann (Miles Heydecker) and Wayne Best (Frank Edey).
In addition to original music by Nero Wolfe composer Michael Small, the soundtrack includes music by Ib Glindemann (titles), David Cabrera and Phil McArthur (opening sequence), Luigi Boccherini, Felix Mendelssohn and Jeff Taylor.
In international broadcasts, the episodes "Eeny Meeny Murder Mo" and "Disguise for Murder" are linked and expanded into a 90-minute widescreen telefilm titled "Wolfe Stays In." The two episodes are connected by scenes of Archie playing poker with Saul, Orrie and Lon — extensions of the Stout originals written by head writer and consulting producer Sharon Doyle.
"These poker scenes were put in for marketing reasons," executive producer Michael Jaffe told Scarlet Street magazine. "Nero Wolfe airs as a two-hour show overseas and the two episodes had to be tied together. So we looked for ways to do that. We've heard Archie talk about poker a million times. So there was nothing abnormal about seeing them play poker, except that we don't see them do it in the book."
A Nero Wolfe Mystery began to be released on Region 2 DVD in December 2009, marketed in the Netherlands by Just Entertainment. The third collection, released in April 2010, made the 90-minute features "Wolfe Goes Out" and "Wolfe Stays In" available on home video for the first time; until then, the linked episodes "Door to Death"/"Christmas Party" and "Eeny Meeny Murder Mo"/"Disguise for Murder" were available only in the abbreviated form sold in North America by A&E Home Video. The A&E and Just Entertainment DVD releases present the episodes in 4:3 pan and scan rather than their 16:9 aspect ratio for widescreen viewing, and neither is offered in high-definition video.
Nero Wolfe (CBC Radio)
"Eeny Meeny Murder Mo" was adapted as the ninth episode of the Canadian Broadcasting Corporation's 13-part radio series Nero Wolfe (1982), starring Mavor Moore as Nero Wolfe, Don Francks as Archie Goodwin, and Cec Linder as Inspector Cramer. Written and directed by Toronto actor and producer Ron Hartmann, the hour-long adaptation aired on CBC Stereo March 13, 1982.
"Before the 2001 A&E television series, the best non-book Wolfe came via this 1982 CBC radio series, which hewed closely in style and content to Stout’s actual stories and was far superior, for instance, to the long-running American radio series from the 1940s and ’50s.," wrote Tom Nolan in Mystery Scene magazine. Of "Eeny Meeny Murder Mo", Nolan wrote, "It’s a typical Stout story: at once breezy and thoughtful, serious and semi-comic—with a full cast of plausible suspects, drawn just sharply enough to hold one’s interest for the length of the tale. Moore’s Wolfe exudes the perfect mix of ire and insight; and Franck’s Archie is both street-smart and suave. The rest of the Canadian players are equally good, and Don Gillis’s original score also strikes just the right note."
References
External links
A Nero Wolfe Mystery — "Eeny Meeny Murder Mo" at The Wolfe Pack, official site of the Nero Wolfe Society
Category:1962 short stories
Category:Nero Wolfe short stories
Category:Works originally published in Ellery Queen's Mystery Magazine | {
"pile_set_name": "Wikipedia (en)"
} |
About Arnold
A Filipino tech guy who reluctantly pursued entrepreneurship and now leads TeamSparrow, a team of web developers, designers and marketers based in Mandaluyong City, Philippines. A husband, father and a follower of Jesus. | {
"pile_set_name": "Pile-CC"
} |
/*
* txn_size_test.c
*
* This source file is part of the FoundationDB open source project
*
* Copyright 2013-2019 Apple Inc. and the FoundationDB project authors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#include "test.h"
#include <assert.h>
#include <stdio.h>
#include <pthread.h>
#include <foundationdb/fdb_c.h>
#include <foundationdb/fdb_c_options.g.h>
pthread_t netThread;
const int numKeys = 100;
uint8_t** keys = NULL;
#define KEY_SIZE 16
#define VALUE_SIZE 100
uint8_t valueStr[VALUE_SIZE];
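/*
 * Blocks on the future returned by fdb_transaction_get_approximate_size()
 * and stores the result in *out_size. Returns 0 on success or the first
 * error encountered; the future is destroyed on every path.
 */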
fdb_error_t getSize(struct ResultSet* rs, FDBTransaction* tr, int64_t* out_size) {
    fdb_error_t e;
    FDBFuture* future = fdb_transaction_get_approximate_size(tr);

    e = maybeLogError(fdb_future_block_until_ready(future), "waiting for get future", rs);
    if (e) {
        fdb_future_destroy(future);
        return e;
    }

    e = maybeLogError(fdb_future_get_int64(future, out_size), "getting future value", rs);
    if (e) {
        fdb_future_destroy(future);
        return e;
    }

    fdb_future_destroy(future);
    return 0;
}
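/*
 * Applies a set, a second set, a clear and a clear-range to one transaction,
 * querying the approximate transaction size after each mutation, then asserts
 * that the reported size grows monotonically.
 */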
void runTests(struct ResultSet *rs) {
    int64_t sizes[numKeys];
    int i = 0, j = 0;
    FDBDatabase *db = openDatabase(rs, &netThread);
    FDBTransaction *tr = NULL;
    fdb_error_t e = fdb_database_create_transaction(db, &tr);
    checkError(e, "create transaction", rs);
    memset(sizes, 0, numKeys * sizeof(int64_t)); /* zero the whole int64_t array */

    /* Two writes: the approximate size should grow with each mutation. */
    fdb_transaction_set(tr, keys[i], KEY_SIZE, valueStr, VALUE_SIZE);
    e = getSize(rs, tr, sizes + i);
    checkError(e, "transaction get size", rs);
    printf("size %d: %" PRId64 "\n", i, sizes[i]);
    i++;

    fdb_transaction_set(tr, keys[i], KEY_SIZE, valueStr, VALUE_SIZE);
    e = getSize(rs, tr, sizes + i);
    checkError(e, "transaction get size", rs);
    printf("size %d: %" PRId64 "\n", i, sizes[i]);
    i++;

    /* A clear and a clear-range also add to the transaction's mutation footprint. */
    fdb_transaction_clear(tr, keys[i], KEY_SIZE);
    e = getSize(rs, tr, sizes + i);
    checkError(e, "transaction get size", rs);
    printf("size %d: %" PRId64 "\n", i, sizes[i]);
    i++;

    fdb_transaction_clear_range(tr, keys[i], KEY_SIZE, keys[i+1], KEY_SIZE);
    e = getSize(rs, tr, sizes + i);
    checkError(e, "transaction get size", rs);
    printf("size %d: %" PRId64 "\n", i, sizes[i]);
    i++;

    for (j = 0; j + 1 < i; j++) {
        assert(sizes[j] < sizes[j + 1]);
    }
    printf("Test passed!\n");
}
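/* Entry point: selects the client API version, generates random test keys and runs the checks above. */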
int main(int argc, char **argv) {
    srand(time(NULL));
    struct ResultSet *rs = newResultSet();
    checkError(fdb_select_api_version(700), "select API version", rs);
    printf("Running performance test at client version: %s\n", fdb_get_client_version());
    keys = generateKeys(numKeys, KEY_SIZE);
    runTests(rs);
    freeResultSet(rs);
    freeKeys(keys, numKeys);
    return 0;
}
| {
"pile_set_name": "Github"
} |
Branch-and-Bound Applications in Combinatorial Data Analysis
ISBN: 9780387250373 / 0387250379
Pub Date: 2005
Publisher: Springer
Summary: Michael J. Brusco is a Professor of Marketing Operations Research at Florida State University.
Stahl, Stephanie is the author of Branch-and-Bound Applications in Combinatorial Data Analysis, published 2005 under ISBN 9780387250373 and 0387250379. Two hundred four Branch-and-Bound Applications in Combinatorial Data Analysis textbooks are available for sale on ValoreBooks.com, fifty two used from the cheapest price of $19.94, or buy new starting at $35.86. | {
"pile_set_name": "Pile-CC"
} |
UNPUBLISHED
UNITED STATES COURT OF APPEALS
FOR THE FOURTH CIRCUIT
No. 07-4864
UNITED STATES OF AMERICA,
Plaintiff - Appellee,
v.
CARLOS IBARRA-ZELAYA, a/k/a Carlos Zelay,
Defendant - Appellant.
Appeal from the United States District Court for the Eastern
District of North Carolina, at Raleigh. James C. Dever III,
District Judge. (5:07-cr-00096-D)
Submitted: April 29, 2008 Decided: May 20, 2008
Before NIEMEYER and GREGORY, Circuit Judges, and HAMILTON, Senior
Circuit Judge.
Affirmed by unpublished per curiam opinion.
Thomas P. McNamara, Federal Public Defender, G. Alan DuBois,
Assistant Federal Public Defender, Raleigh, North Carolina, for
Appellant. George E. B. Holding, United States Attorney, Anne M.
Hayes, Banumathi Rangarajan, Assistant United States Attorneys,
Raleigh, North Carolina, for Appellee.
Unpublished opinions are not binding precedent in this circuit.
PER CURIAM:
Carlos Ibarra-Zelaya pled guilty to illegal reentry of an
aggravated felon, in violation of 8 U.S.C. § 1326(a)(2), (b)(2)
(2000). He appeals his fifty-seven-month sentence, arguing that it
is unreasonable. Finding no reversible error, we affirm.
Following United States v. Booker, 543 U.S. 220 (2005),
a district court must engage in a multi-step process at sentencing.
After calculating the appropriate advisory guidelines range, a
district court should consider the resulting range in conjunction
with the factors set out in 18 U.S.C.A. § 3553(a) (West 2000 &
Supp. 2007) and determine an appropriate sentence. United
States v. Davenport, 445 F.3d 366, 370 (4th Cir. 2006).
This court reviews a sentence to determine whether it is
reasonable, applying an abuse of discretion standard. Gall v.
United States, 128 S. Ct. 586, 594 (2007). This court presumes
that a sentence imposed within the properly calculated guidelines
range is reasonable. United States v. Go, 517 F.3d 216, 218
(4th Cir. 2008); see Rita v. United States, 127 S. Ct. 2456,
2462-68 (2007). A district court must explain the sentence it
imposes sufficiently for this court to effectively review its
reasonableness, but need not mechanically discuss all the factors
listed in § 3553(a). United States v. Montes-Pineda, 445 F.3d 375,
380 (4th Cir. 2006), cert. denied, 127 S. Ct. 3044 (2007). The
district court’s explanation should indicate that it considered the
§ 3553(a) factors and the arguments raised by the parties. Id.
This court does not evaluate the adequacy of the district court’s
explanation “in a vacuum,” but also considers “[t]he context
surrounding a district court’s explanation.” Id. at 381.
On appeal, Ibarra-Zelaya does not contest the calculation
of his guidelines range. Rather, he argues that the district court
failed to consider the impact of the sixteen-level enhancement
under U.S. Sentencing Guidelines Manual § 2L1.2(b)(1)(A)(ii)
(2006), which he claims was unduly severe and resulted in a
sentence longer than necessary to achieve the purposes of 18
U.S.C.A. § 3553(a). Specifically, he argues that this immigration
guideline was enacted by the Sentencing Commission “with little
deliberation and no empirical justification,” and is therefore not
entitled to the same deference as other guidelines. He further
posits that the enhancement, by double-counting his criminal
history and distorting both the severity of the offense and the
potential for recidivism, undermines the purposes of § 3553(a).
We find that Ibarra-Zelaya has not overcome the
presumptive reasonableness of his sentence within the guidelines
range. The heart of Ibarra-Zelaya’s appeal amounts to a policy
attack on the applicable guidelines enhancement provision. A
sentence may be substantively unreasonable if the court misapplies
the guidelines or “rejects policies articulated by Congress or the
Sentencing Commission.” United States v. Moreland, 437 F.3d 424,
433 (4th Cir. 2006). Here, Ibarra-Zelaya argues that his sentence
is unreasonable because the district court failed to reject a
policy adopted by the Sentencing Commission. At sentencing, the
district court, aware of its discretion to impose a sentence below
the advisory guidelines range, specifically considered and rejected
Ibarra-Zelaya’s position as to USSG § 2L1.2 based on the facts of
this case. The district court considered the § 3553(a) factors at
length and concluded that neither Ibarra-Zelaya’s criminal history
category nor total offense level was overstated in any way and that
the advisory guideline range was properly calculated and
appropriate. We therefore find that Ibarra-Zelaya has not
demonstrated that his sentence is unreasonable.
Accordingly, we affirm Ibarra-Zelaya’s sentence. We
dispense with oral argument because the facts and legal contentions
are adequately presented in the materials before the court and
argument would not aid the decisional process.
AFFIRMED
| {
"pile_set_name": "FreeLaw"
} |
OFFICE OF THE ATTORNEY GENERAL OF TEXAS
AUSTIN
Your recent request for an opinion of this department upon the questions listed has been received.
8' Co&lktloglly
in ~4dltlon
00.00 to 6QTW
expelme, do-
~8 ier tho oourthowe,
00.00, and when It Is oon-
e foas of offloe till Bob
rotorthe year?
"2. Ii the above plan 1s not authorisod
in this county where the fees of office are fixed by law, and the fees of the office will not exceed $___.00 for the year, may this court pay, in addition to an ex officio salary for the sheriff, any amount for deputy, jailer hire and expenses of car used in connection with duties of the Sheriff's department, or is the authority of the Commissioners' Court with reference to salaries and allowance for car expense limited to the approval or setting of amounts which may be deducted by the sheriff at the end of the year only in the event that he has excess fees to account for?
"The situation confronting the Commissioners' Court in this county is to properly finance the Sheriff's office when the fees of that office have never exceeded $___.00. By reason of the increase in the population of this county, the Sheriff's office was separated from the office of Tax Collector-Assessor. The contemplated fees derived from the Sheriff's office alone will not be adequate for the proper conduct of the office, as viewed by the Commissioners' Court.
"Under Article 6869, R. C. S., as amended by General Laws, Forty-first Legislature, First Called Session, 1929, it reads ---- 'Provided further, that if, in the opinion of the Commissioners' Court fees of the Sheriff's office are not sufficient to justify the payment of salaries of such deputies, the Commissioners' Court shall have the power to pay the same out of the General Fund of said county.'
"Does this Article as amended lift the limitation upon the power of the Commissioners' Court so as to allow other compensation than granted under Articles 3883 and 3891 R. C. S. when in their opinion there is a necessity, for the adequate and proper conduct of the Sheriff's office, to pay said deputies out of the general fund, while also allowing the Sheriff an ex officio salary within the limitations of Article 3895 R. C. S.?"
We are informed that Jack County has a population of 10,196 inhabitants according to the last Federal census, and that the county officials of said county are compensated on a fee basis.
Article 3934, Vernon's Annotated Civil Statutes, provides:
"Sheriffs shall also receive the following compensation:
"1. For all process issued from the Supreme Court or Courts of Civil Appeals, and served by them, the same fees as are allowed them for similar service upon process issued from the district courts.
"2. For summoning jurors in district and county courts, serving all election notices, notices to overseers of roads and doing all other public business not otherwise provided for, not exceeding one thousand dollars per annum to be fixed by the commissioners' court at the same time other ex officio salaries are fixed, and to be paid out of the general funds of the county; provided, that no such ex officio salary shall be allowed any sheriff who has received the maximum salary allowed by law."
Art1018 3995, Parnon~a Anaotsted Civil statuka,
mods as follower
'*rheCommissioneratCourt la horebg da-
barred fro@ allowing oompensationfor ex
0rrioi0 aervloee to 00tmty 0rri0i4ia uhea th0
compensationand exocss fees which t&my are
ellowd to retain shall reqch t&a mnxlmum
prcvldrd fur in thla ohapter. In oasoa where
the ocaiponaationand oxcena aa v&ioh tho
ofrloora sro allmod to rots, It ahell not
reach Ohs m~xixt,mprcyidod for in thin
ohapter, the Commissionerm~Court ahall
allm crcmiponraticmr0r ox orri0f0 eervicea
when, lritheir judgment, such ccwqqneetlon
is neoemery, prmlded, such ocmponsation
C
Eonorabla d0hn.W. Ibore, Pago 4
for ex offialo servl6es U1a.d shall not
lnaraase the eampensatloaaf tha offlolal
beyoad the Mxlmum or ampeasatlon and ar-
aaas Seer allowed to be rrtalned by him
uader this ahapter. Pravided, however,
the es orrlolo hsnla lathorlsed shall ba
allowad only a fter la oppartanlty rOr a
pubilo hearing and anly apon the drir8+
tit* rots of at Least thm maabwa 0r
the Commlsalonera* Courti.*
Under Article 3883 and Article 3891, Vernon's Annotated Civil Statutes, the maximum compensation of the sheriff of Jack County cannot exceed $_,000.00 per annum.
Under the above mentioned statutes, the Commissioners' Court can legally allow the sheriff ex officio compensation not to exceed $1,000.00 per year, provided that the said ex officio compensation does not increase the compensation of the sheriff beyond the maximum as provided by law.
Artisle lOU, 004% 46 CririnalProaedure, pro-
easus
vldas la part,
r hsr u~.~ o r ::i*a
v%,allo?aaae nrkrde
shall eta l
,-:‘:~
lle?n,no$~ tide
&::r
i ih ~ss
For the,baard
r o rjailer
o rturnkey, lxoept in aaantla8 0r r0rtp thou-
sand population or more." With reference to the compensation of janitor for the courthouse, said compensation is not to be regarded as part of the compensation of the sheriff, but such compensation may be allowed and paid by the Commissioners' Court under the general power and authority of said court by virtue of Article 2351, Vernon's Annotated Civil Statutes. Therefore, your first question is respectfully answered in the negative.
Article 6869, Vernon's Annotated Civil Statutes, reads as follows:
"Sheriffs shall have the power, by writing, to appoint one or more deputies for their respective counties, to continue in office during the pleasure of the sheriff, who shall have power and authority to perform all the acts and duties of their principals; and every person so appointed shall, before he enters upon the duties of his office, take and subscribe to the official oath, which shall be endorsed on his appointment, together with the certificate of the officer administering the same; and such appointment and oath shall be recorded in the office of the County Clerk and deposited in said office. The number of deputies appointed by the sheriff of any one county shall be limited to not exceeding three in the justice precinct in which is located the county site of such county, and one in each justice precinct, and a list of these appointments shall be posted up in a conspicuous place in the Clerk's office. An indictment for a felony of any deputy sheriff appointed shall operate a revocation of his appointment as such deputy sheriff. Provided further, that if in the opinion of the Commissioners' Court fees of the sheriff's office are not sufficient to justify the payment of salaries of such deputies, the Commissioners' Court shall have the power to pay the same out of the General Fund of said county."
Article 6871 provides in part:
"Whenever in any county it becomes necessary to employ guards for the safekeeping of prisoners and the security of jails, the sheriff may, with the approval of the Commissioners' Court, or in case of emergency, with the approval of the County Judge, employ such number of guards as may be necessary; and his account therefor, duly itemized and sworn to, shall be allowed by said Court, and paid out of the County Treasury. . . ."
In view of Articles 6869 and 6871, supra, in answer to your second question you are advised that it is our opinion that the Commissioners' Court can legally pay the salaries of deputy sheriffs out of the general fund of the county, if in the opinion of the Commissioners' Court, fees of the sheriff's office are not sufficient to justify the payment of salaries of such deputies. The salaries of such deputies are to be determined as provided by Article 3902, Vernon's Annotated Civil Statutes. Also, the salaries of guards for the safekeeping of prisoners and the security of jails may be paid by the Commissioners' Court as provided by Article 6871.
In view of Article 3899, Vernon's Annotated Civil Statutes, you are further advised that it is our opinion that the Commissioners' Court cannot legally pay automobile expenses of the sheriff's department, but, as provided by Section A of Article 3899, such expenses must be paid out of the fees earned by said officer.
Trusting that the foregoing fully answers your inquiry, we are
Very truly yours
ATTORNEY GENERAL OF TEXAS
By (signature)
Assistant

APPROVED FEB 7, 1941
(s) Gerald C. Mann
ATTORNEY GENERAL OF TEXAS

APPROVED OPINION COMMITTEE
BY ____, CHAIRMAN
| {
"pile_set_name": "FreeLaw"
} |
1. Field of the Invention
The present invention relates generally to Global Positioning System (GPS) receivers, and in particular, to a two-bit offset canceling Analog-to-Digital (A/D) converter with improved common mode rejection and threshold sensitivity, typically used in GPS receivers.
2. Description of the Related Art
The use of GPS in consumer products has become commonplace. Hand-held devices used for mountaineering, automobile navigation systems, and GPS for use with cellular telephones are just a few examples of consumer products using GPS technology.
As GPS technology is being combined with these devices, the GPS chips are being placed in widely ranging applications. Some of these applications require that the GPS chip be made smaller, or more efficient, presenting challenges to GPS receiver chip designers. Many of the functions of GPS chips are now being pushed to the edges of performance capabilities.
One of these functions is the ability to separate a GPS signal from background noise. Noise is often interpreted as a component of the GPS signal, and, as such, creates problems with position determination and accuracy of the GPS functionality. As GPS chips are placed in lower signal strength environments, and GPS chips are designed to be placed in smaller and smaller devices, the ability of a GPS receiver to separate signal from noise becomes more important.
It can be seen, then, that there is a need in the art to provide GPS chips with increased ability to separate noise from desired GPS signals. | {
"pile_set_name": "USPTO Backgrounds"
} |