Advances in drug metabolism screening. Developments in automation, analytical technologies and molecular biology are being exploited by drug metabolism scientists in order to provide enhanced in vitro systems for the study of the metabolic disposition of potential drug candidates. Routine investigation of factors such as metabolic stability and induction and inhibition of drug metabolizing enzymes is now preferred in the early stages of drug discovery. This, in turn, should provide a greater understanding of the underlying principles governing these processes and allow a greater role for drug metabolism in the design of new drug molecules.
{ "pile_set_name": "PubMed Abstracts" }
1. Field of the Invention

The invention relates to a shift control apparatus and shift control method for a vehicular automatic transmission, which enable suppression of a drop in the rotational speed of an input shaft that occurs during a clutch-to-clutch downshift executed while a vehicle is decelerating.

2. Description of the Related Art

A shift control apparatus for a vehicular automatic transmission is known which, when executing a clutch-to-clutch downshift, executes shift hydraulic pressure control so as to reduce the apply pressure of the hydraulic friction device to be released, which was applied in order to achieve a predetermined speed before the downshift, while increasing the apply pressure of the hydraulic friction device to be applied in order to achieve a predetermined speed after the downshift. According to JP(A) 11-287318, for example, during the clutch-to-clutch downshift, feedback control is performed on the apply pressure of the hydraulic friction device to be applied so that the transmitted torque capacity of that device becomes constant, i.e., so that the rotational speed of the input shaft of the automatic transmission increases at a constant rate. In the aforementioned shift control apparatus, the engine speed drops during a clutch-to-clutch downshift when the vehicle is decelerating, and then increases again once the hydraulic friction device to be applied is engaged. This drop followed by an increase in engine speed results in shift shock or a delay in the shift time. Fuel efficiency may also suffer if the drop in engine speed is large enough to require that the fuel supply be restarted. To address this, it is conceivable to suppress the drop in engine speed during a clutch-to-clutch downshift while the vehicle is decelerating, thereby reducing or eliminating the shift shock or shift-time delay caused by that drop, as well as the fuel-efficiency penalty incurred when fuel must be supplied to the engine again after a further drop in engine speed. However, doing so may result in shift shock occurring when there is little or no drop in engine speed.
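The prior-art scheme above adjusts the apply pressure of the oncoming friction device by feedback so that the input-shaft speed rises at a constant rate. The sketch below is only a generic illustration of that kind of ramp-tracking feedback, not the patented control law: the first-order plant model, the gains, and the ramp rate are all assumptions chosen purely for readability.

```python
# Minimal illustrative sketch of ramp-tracking feedback on an apply pressure.
# This is NOT the patented control law; the toy plant model and the gains
# (KP, KI) are assumptions chosen only to show the feedback structure.

DT = 0.01               # control period [s]
RAMP_RATE = 150.0       # desired input-shaft acceleration [rpm/s] (assumed)
KP, KI = 0.02, 0.05     # feedback gains (assumed)

def simulate(steps=300):
    speed = 1200.0      # input-shaft speed [rpm]
    target = speed
    pressure = 0.0      # normalized apply pressure of the oncoming clutch
    integral = 0.0
    for _ in range(steps):
        target += RAMP_RATE * DT              # constant-rate target ramp
        error = target - speed
        integral += error * DT
        pressure = max(0.0, KP * error + KI * integral)
        # toy plant: shaft acceleration grows with transmitted torque capacity,
        # which is assumed proportional to the apply pressure
        speed += (400.0 * pressure - 20.0) * DT
    return speed, pressure

if __name__ == "__main__":
    print(simulate())
```

In a real transmission controller the plant would be the hydraulic circuit and a clutch torque model, and the target ramp would be scheduled from the shift type and the vehicle's deceleration; the structure of the feedback loop, however, is the same as sketched here.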
{ "pile_set_name": "USPTO Backgrounds" }
AIRCRAFT ENGINEER AND AIRSHIPS
FIRST AERONAUTICAL WEEKLY IN THE WORLD. FOUNDED 1909.
Editor: M. POULSEN. Managing Editor: G. GEOFFREY SMITH. Chief Photographer: JOHN YOXALL.
Editorial, Advertising and Publishing Offices: DORSET HOUSE, STAMFORD STREET, LONDON, S.E.1. Telegrams: Truditur, Sefflit, London. Telephone: Waterloo 3333 (50 lines).
HERTFORD ST., COVENTRY. Telegrams: Autocar, Coventry. Telephone: Coventry 5210.
GUILDHALL BUILDINGS, NAVIGATION ST., BIRMINGHAM, 2. Telegrams: Autopress, Birmingham. Telephone: Midland 2971.
860, DEANSGATE, MANCHESTER, 3. Telegrams: Iliffe, Manchester. Telephone: Blackfriars 4412.
26B, BENFIELD ST., GLASGOW, C.2. Telegrams: Iliffe, Glasgow. Telephone: Central 4857.
SUBSCRIPTION RATES: Home and Canada: year, £1 13 0; 6 months, 16s. 6d.; 3 months, 8s. 6d. Other Countries: year, £1 16 0; 6 months, 18s. 0d.; 3 months, 9s. 0d.
No. 1515. Vol. XXXIII. JANUARY 6, 1938. Thursdays, Price 6d.

The Outlook

Resolutions

THIS is the time of year for good resolutions, but before they can be formulated it is usually necessary to take stock in order to see where one has failed in the past, no less than to derive from previous successes encouragement for further effort. At the present moment the expansion and re-equipment of the R.A.F. is the most vital preoccupation of the British aircraft industry. One may, perhaps, say that it is going on as well as could be expected, although the fact that Germany is believed to have produced 7,000 aero engines in 1937 and to be producing aircraft of all types at the rate of 400 per month gives no cause for complacency. At any rate, there is obviously nothing that can be done about it. If one turns from military to civil aviation, there is good cause to be alarmed. We publish in this issue an article in which an American correspondent lifts the veil and reveals an activity on the other side of the Atlantic which makes our own efforts look lethargic. And last week Capt. Wilcockson, Imperial Airways' well-known pilot, expressed the view that in the matter of training of personnel, operating experience and instrument development and use, this country is three years behind the United States. Taken together, the two give one something to think about.

Losing a Lead

FROM the point of view of technical progress, military and civil aviation cannot be altogether segregated, developments in one field having their immediate repercussions in the other, although possibly with modifications and adaptations. It is now a good many years ago that Great Britain established a lead in flying-boat design and construction. That lead appears to be in very great danger of being lost, not because of inability among our designing staffs to produce the world's best, but through lack of support. Until Imperial Airways placed the order for 28 Empire flying boats with Short Brothers, no British firm had had any great encouragement to develop civil flying boats. The Air Ministry ordered a few boats at a time, but could not seem to make up its mind about what it wanted, and changes in policy occurred which did not help towards continuity of effort. Without regarding as other than ambitious projects which may or may not be built some day the two American designs for a 550-ft.-span flying boat and a 500-passenger one, there is sufficient actual planning, building and testing going on in America to show how much in earnest are our cousins on the other side. Glenn Martin is developing a 100-passenger boat of 188 ft. span, intended for trans-oceanic commercial work; his firm has already built and flown the 157-ft.-span boat for Russia described and illustrated in Flight recently. Martin, Boeing and Sikorsky are building 60-tonners for the U.S. Navy, and Sikorsky is reported, in addition, to be developing a 50-tonner for the Atlantic service which will carry 36 passengers when operating Atlantic ranges. Some warning that America intended to apply herself seriously to the flying-boat problem has been evident for some years from the fact that her research establishments have carried out very extensive tests on hull forms. Thus, when the Government purse-strings were loosened, American constructors had ready to hand a wealth of information upon which to base their designs. Although it is known that Shorts have on the drawing board an improved version of the Empire boat, and although there are rumours of a very large machine, one cannot quite feel that encouragement sufficient to enable one firm to cope with the competition from three or four American companies is being given British firms.

Experience

IN the meantime, as Capt. Wilcockson pointed out, the Americans have been steadily accumulating experience in long-range transoceanic flying-boat operation. Although weather conditions over the Pacific are very different from those over the Atlantic, the operation of the Pacific route has enabled Pan-American Airways to train their crews in long-distance navigation, and when the Americans come to operate an Atlantic service, they will start with a very great advantage compared with us. Navigational equipment also has been greatly developed and thoroughly tested in actual operational conditions, compared with which the very limited, although very promising, experience which Imperial Airways had an opportunity to accumulate during ten crossings of the Atlantic is insignificant. Operation of the Empire routes will help to fill the gap, but is not strictly applicable to Atlantic conditions.
{ "pile_set_name": "Pile-CC" }
Q: What approach should I use to do client side filtering? I am making the front end of an ASP.NET MVC3 web application. A controller action sends a database-driven list to a view model which then populates a series of divs. I have a filtering section above the div list. I am not sure which approach to take to implement the filter. I have considered rolling my own (I always keep this option on the table), using jQuery's .filter(), or finding some JavaScript functionality to use. What is the standard way to filter client side with JavaScript (or a js-derived library)? EDIT For gdoron's lack of context:
var gdoronArray = [];
for (var i = 0; i < 10000; i++) {
    gdoronArray.push("text" + i + " " + (i * 10));
}
Is there a standard library to pull only the items in gdoronArray which contain "ext5" or is this just a roll-your-own situation?
A: Use the native Array.prototype.filter:
gdoronArray.filter(function (v) {
    // ~v.indexOf("ext5") is 0 only when indexOf returns -1 (substring absent),
    // so !!~... is true exactly when "ext5" is present in the string.
    return !!~v.indexOf("ext5");
});
https://developer.mozilla.org/en/JavaScript/Reference/Global_Objects/Array/filter
{ "pile_set_name": "StackExchange" }
Italian ship Cassiopea
Cassiopea has been borne by at least three ships of the Italian Navy and may refer to:
, a launched in 1906 and discarded in 1927.
, a launched in 1936 and stricken in 1959.
, a launched in 1988.
Category:Italian Navy ship names
{ "pile_set_name": "Wikipedia (en)" }
/* (No Commment) */ "CFBundleName" = "餐厅";
{ "pile_set_name": "Github" }
Introduction {#sec1} ============ In contrast to bulk silver, nanometric silver materials exhibit many extraordinary properties such as a high surface-to-volume ratio,^[@ref1],[@ref2]^ quantum tunneling effects,^[@ref3],[@ref4]^ an abundance of free electrons,^[@ref5]^ surface plasmon resonance,^[@ref2],[@ref6]−[@ref8]^ and antibacterial behaviors.^[@ref9],[@ref10]^ Because of these unique properties, noble silver nanomaterials are widely applied in diverse areas, including thermotherapy,^[@ref5],[@ref11]^ medicine,^[@ref2],[@ref5],[@ref12]^ sensors,^[@ref13]−[@ref16]^ surface-enhanced spectroscopy,^[@ref6],[@ref17]−[@ref20]^ biology,^[@ref2]^ catalysis,^[@ref21]−[@ref28]^ and electronics.^[@ref29]−[@ref31]^ Among these many applications, the catalysis of the reduction of nitroarenes to aromatic amines is increasingly attracting attention because of pharmaceutical needs and the importance of this industry.^[@ref25],[@ref26],[@ref32]−[@ref38]^ Various strategies have been proposed to reduce nitroarenes more efficiently and more rapidly using silver nanomaterials. These strategies include depositing silver nanoparticles (AgNPs) on supports used as heterogeneous catalysts,^[@ref24],[@ref26],[@ref39]−[@ref42]^ combining AgNPs with reduced graphene oxide or graphene oxide as catalysts,^[@ref43]−[@ref47]^ and using silver nanocolloids as a quasi-homogeneous nanocatalyst.^[@ref25],[@ref33],[@ref48]−[@ref54]^ All of the aforementioned catalytic approaches are efficient and selective. However, the reaction rate for heterogeneous catalysis is rather low and quasi-homogeneous catalysis suffers from possible aggregation of the nanocatalyst. Furthermore, the procedures for preparing such nanocatalysts are somewhat complex and time-consuming. Applications of magnetic nanoparticles have also been extensively investigated in recent years because they feature many notable characteristics, such as a high surface-to-volume ratio, easy attraction and redispersion, and paramagnetism. Because of these crucial properties and advantages, the combination of AgNPs and magnetic nanoparticles has become one of the most favorable approaches for the catalytic reduction of nitroarenes.^[@ref55]−[@ref60]^ In this study, a simple but facile method was applied in a single step to prepare silver-doped magnetic nanoparticles (AgMNPs) for the catalytic reduction of nitroarenes through spontaneous oxidation--reduction and coprecipitation. When mixing Fe^2+^ with Ag^+^, a spontaneous reaction is caused by the difference in standard reduction potential between the ionic species. When Ag^+^ is reduced to Ag^0^, an equivalent number of moles of Fe^2+^ ions are simultaneously oxidized to Fe^3+^. After the addition of precipitation agents, AgNPs were coprecipitated with iron oxide magnetic nanoparticles, which led to the formation of AgMNPs. The proposed preparation can be achieved in a single step, and the prepared AgMNPs can subsequently be utilized as nanocatalysts for the reduction of *o*-nitroaniline (*o*-NA). The parameters (pH, temperature, and amount of nanocatalyst) that affect the morphology and composition of the prepared AgMNPs and efficiencies of the catalytic reduction were systematically studied to gain a greater understanding of the characteristics of the AgMNPs prepared using the method proposed in this study. Additionally, the catalytic activity of the AgMNPs prepared for the reduction of other nitroarenes and their recyclability were investigated to fully evaluate their potential for practical applications. 
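To make the stoichiometry of the spontaneous Ag^+^/Fe^2+^ reaction concrete, the short sketch below tallies how much Fe^2+^ remains and how much Fe^3+^ is produced for a chosen \[Fe^2+^\]~0~ to \[Ag^+^\]~0~ ratio, and reports the resulting Fe^2+^:Fe^3+^ ratio discussed later in the Results and Discussion. The 100 mL volume and 12 mM Fe^2+^ concentration are taken from the Experimental Section; treating the redox step as complete is an assumption made only for illustration.

```python
# Sketch of the Ag+/Fe2+ redox bookkeeping described above, assuming the
# oxidation-reduction step (Fe2+ + Ag+ -> Fe3+ + Ag0) goes to completion.
# Volume and concentration are taken from the Experimental Section; the
# chosen 3:1 ratio is one of the values studied in the paper.

FE2_CONC_MM = 12.0      # initial Fe2+ concentration [mM]
FE2_VOL_ML = 100.0      # volume of Fe2+ solution [mL]
RATIO_FE2_TO_AG = 3.0   # chosen [Fe2+]0 : [Ag+]0 ratio

fe2_mmol = FE2_CONC_MM * FE2_VOL_ML / 1000.0   # mmol of Fe2+ initially
ag_mmol = fe2_mmol / RATIO_FE2_TO_AG           # mmol of Ag+ added

fe3_mmol = ag_mmol                  # each Ag+ reduced oxidizes one Fe2+
fe2_left_mmol = fe2_mmol - ag_mmol  # Fe2+ remaining after the redox step

print(f"Ag0 formed:     {fe3_mmol:.2f} mmol")
print(f"Fe2+ remaining: {fe2_left_mmol:.2f} mmol")
print(f"Fe3+ produced:  {fe3_mmol:.2f} mmol")
print(f"Fe2+ : Fe3+  =  {fe2_left_mmol / fe3_mmol:.2f} : 1")
```

With the 3:1 ratio shown, the bookkeeping gives 0.8 mmol Fe^2+^ remaining against 0.4 mmol Fe^3+^ produced, i.e., a 2:1 ratio, which matches the value the authors target before the coprecipitation agent is added.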
Results and Discussion {#sec2} ====================== Effect of Oxidation--Reduction Time on AgMNP Preparation {#sec2.1} -------------------------------------------------------- [Figure S1](http://pubs.acs.org/doi/suppl/10.1021/acsomega.7b01987/suppl_file/ao7b01987_si_001.pdf) shows the typical measured hysteresis loops of the prepared AgMNPs, which confirmed that the prepared AgMNPs were paramagnetic and usable for further applications. [Figure [1](#fig1){ref-type="fig"}](#fig1){ref-type="fig"} depicts the transmission electron microscopy (TEM) images of AgMNPs obtained using various reaction times. In the images, dark-sphere-like Ag nanoparticles are mixed with light-colored Fe~3~O~4~ NPs because Ag has a higher electron density, which allows fewer electrons to be transmitted.^[@ref61],[@ref62]^ The AgNPs formed after a 10 min reaction time were larger than those formed after a 2 min reaction time. Notably, the size of the Fe~3~O~4~ NPs was mostly unaffected by the reaction time. ![TEM images of prepared AgMNPs with oxidation--reduction times of (a) 2 min, (b) 8 min, and (c)--(f) 10 min, where the \[Fe^2+^\]~0~ to \[Ag^+^\]~0~ ratios are (a)--(c) 3:1, (d) 2:1, (e) 4:1, and (f) 6:1. \[Fe^2+^\]~0~ are all 12 mM, and the magnifications of the images are all 100 000×. The yellow arrows indicate the examples of AgNPs for each sample.](ao-2017-019876_0008){#fig1} [Figure [2](#fig2){ref-type="fig"}](#fig2){ref-type="fig"}a shows the evolution of the UV--vis spectra for the reduction of *o*-NA catalyzed by AgMNPs over time. The absorbance peak at 412 nm, which corresponds to the characteristic *o*-NA peak,^[@ref63]^ decreased as the reaction proceeded. The variations of the spectra indicated that *o*-NA was reduced to 1,2-phenylenediamine (1,2-PPD).^[@ref41],[@ref64]^ The relative concentration (*C*~t~/*C*~0~) of *o*-NA was obtained by dividing the absorbance recorded at 412 nm at the specified time (*C*~t~) by the absorbance at 412 nm before the addition of AgMNPs (*C*~0~). The results in [Figure [2](#fig2){ref-type="fig"}](#fig2){ref-type="fig"}b were plotted for AgMNPs prepared with various oxidation--reduction durations. In [Figure [2](#fig2){ref-type="fig"}](#fig2){ref-type="fig"}b, the catalytic efficiency of the AgMNPs shows no significant difference when the oxidation--reduction time was 4--10 min, and in these cases more than 95% of *o*-NA was reduced within 240 s. This finding suggests remarkable catalytic activity. By contrast, achieving the same conversion percentage required more than 500 s when the reaction time to prepare the AgMNPs was 2 or 12 min. Because the catalytic efficiency of nanocatalysts depends on their size and the amount of catalyst loaded,^[@ref26]^ we concluded that the reaction time to achieve the optimal morphology and catalyst loading was 10 min. For comparison, the results of an experiment conducted in parallel, where AgMNPs were replaced with Fe~3~O~4~ NPs, are also plotted in [Figure [2](#fig2){ref-type="fig"}](#fig2){ref-type="fig"}b, showing that the reduction of *o*-NA can proceed only when AgNPs are doped. In the absence of AgNPs, the reduction of *o*-NA does not proceed. ![(a) UV--vis spectra of 1 mM *o*-NA reduced by 30 mM NaBH~4~ at room temperature (RT) in the presence of 20 mg of AgMNPs with increasing time, where the pH was 9.8. The AgMNPs were prepared under the condition described in [Figure [1](#fig1){ref-type="fig"}](#fig1){ref-type="fig"}c. 
(b) *C*~t~/*C*~0~ of 1 mM *o*-NA (412 nm) versus the catalytic reduction time in the presence of 30 mM NaBH~4~ and AgMNPs prepared with (■) 2 min, (●) 4 min, (▲) 6 min, (▼) 8 min, (◆) 10 min, and (★) 12 min of oxidation--reduction time during the preparation, where the other conditions are the same as described in (a). Parallel experiment (□) uses 20 mg of Fe~3~O~4~ NPs as nanocatalysts, where the other conditions are the same as described in (a).](ao-2017-019876_0001){#fig2} Effect of Fe^2+^/Ag^+^ on AgMNP Preparation {#sec2.2} ------------------------------------------- As described in the previous section, catalytic efficiency is related to the size and amount of doped AgNPs. In addition, we expected the morphology and amount of doped AgNPs to be affected by the ratio of initial concentration of Fe^2+^ to Ag^+^ because AgNO~3~ acts as the oxidation agent in the formation of AgMNPs. [Figure [1](#fig1){ref-type="fig"}](#fig1){ref-type="fig"} shows the TEM images of the AgMNPs with different ratios of \[Fe^2+^\]~0~ to \[Ag^+^\]~0~. The images reveal that the morphologies of the AgNPs are similar. [Figure S2](http://pubs.acs.org/doi/suppl/10.1021/acsomega.7b01987/suppl_file/ao7b01987_si_001.pdf) shows the typical X-ray diffraction (XRD) spectra of the prepared AgMNPs and illustrates that the locations and intensity of the diffraction peaks were consistent with the standard patterns for JCPDS card no. (79-0417) magnetite and JCPDS card no. (4-0783) standard Ag crystal. The size of AgNPs can be estimated using the Scherrer equation, and all AgNPs are approximately 20 nm.^[@ref10]^ To further explore the effect of the initial Fe^2+^ to Ag^+^ concentration ratio, energy-dispersive X-ray spectroscopy (EDS) analysis was performed and the typical spectra for the AgMNPs are shown in [Figure S3A](http://pubs.acs.org/doi/suppl/10.1021/acsomega.7b01987/suppl_file/ao7b01987_si_001.pdf). The atomic percentages of Ag, Fe, and O for various initial concentration ratios of Fe^2+^ to Ag^+^ obtained through EDS are plotted in [Figure S3B](http://pubs.acs.org/doi/suppl/10.1021/acsomega.7b01987/suppl_file/ao7b01987_si_001.pdf), which shows that the atomic percentage of Ag increased as the initial concentration ratio of Fe^2+^ to Ag^+^ decreased. When the initial concentration ratio of Fe^2+^ to Ag^+^ was 2.0, the atomic percentage of Ag in the AgMNPs was 8.23%. The atomic percentage decreased to 0.76% when the initial concentration ratio of Fe^2+^ to Ag^+^ was 6.0. [Figure [3](#fig3){ref-type="fig"}](#fig3){ref-type="fig"} plots the *C*~t~/*C*~0~ of *o*-NA as a function of the reduction time in the presence of AgMNPs prepared with various ratios of \[Fe^2+^\]~0~ to \[Ag^+^\]~0~. The figure shows that the AgMNPs prepared with a smaller ratio of \[Fe^2+^\]~0~ to \[Ag^+^\]~0~ had a higher catalytic efficiency. Through correlation with the EDS results, we concluded that the highest catalytic efficiency was obtained for the highest AgNP loading. Notably, although the catalytic efficiency of the AgMNPs was even higher when the initial concentration ratio of Fe^2+^ to Ag^+^ was 1.5, the yield of AgMNPs was very low because of weak magnetization and most particles formed after the coprecipitation stage could not be collected by the magnet. Similar results were observed when the initial concentration ratio of Fe^2+^ to Ag^+^ was 6.0. 
Therefore, we can conclude that the initial concentration ratio of Fe^2+^ to Ag^+^ should be between 2 and 3 to ensure that a sufficient amount of Fe^2+^ is oxidized so that the ratio of Fe^2+^ to Fe^3+^ is close to 2 before the coprecipitation agent is added to form magnetic Fe~3~O~4~ NPs.^[@ref65]^ ![*C*~t~/*C*~0~ of 1 mM *o*-NA versus catalytic reduction time in the presence of 30 mM NaBH~4~ and AgMNPs prepared with \[Fe^2+^\]~0~ to \[Ag^+^\]~0~ ratios of (■) 6:1, (●) 4:1, (▲) 3:1, (▼) 2:1, and (◆) 1.5:1, where the other conditions are the same as described in [Figure [2](#fig2){ref-type="fig"}](#fig2){ref-type="fig"}a.](ao-2017-019876_0002){#fig3} Effects of pH and Temperature on the Catalytic Reaction {#sec2.3} ------------------------------------------------------- The acceleration of the reduction reaction by AgMNPs originates in a relay between the nucleophile and electrophile.^[@ref39]^ Therefore, the catalyzed reduction rate is affected by the abundance of electrons in the reaction system. To observe the relationship between the electron abundance and reaction rate, the catalysis reaction was performed at various pH values. The relative *o*-NA concentration (*C*~t~/*C*~0~) was recorded as a function of reaction time at various pH values, and the results are plotted in [Figure [4](#fig4){ref-type="fig"}](#fig4){ref-type="fig"}a. The pH played a key role in the catalysis reaction; when the pH was 9.8, the reduction of *o*-NA was completed within 4 min. The reduction rate significantly decreased when the pH was set to 8.8 and less than 85% of *o*-NA was reduced after approximately 5 min of reaction. When the pH was lower than 8, almost no conversion of *o*-NA could be observed. Because the acceleration of the reduction reaction by AgMNPs originates in a relay between the nucleophile and electrophile,^[@ref39]^ alkaline conditions enriched the electron densities on the AgNP surfaces by adsorbing more OH^--^, which promoted the reduction of *o*-NA. When the pH was set to 10.8, the reduction of *o*-NA decelerated, possibly because of the formation of yellow-colored 2,3-diaminophenazine under extremely alkaline conditions.^[@ref66]^ ![(a) *C*~t~/*C*~0~ of 1 mM *o*-NA versus catalytic reduction time in the presence of 30 mM NaBH~4~ under different pHs of (■) 6.8, (●) 7.8, (▲) 8.8, (▼) 9.8, and (◆) 10.8, where the other conditions are the same as described in [Figure [2](#fig2){ref-type="fig"}](#fig2){ref-type="fig"}a. (b) *C*~t~/*C*~0~ of 1 mM *o*-NA versus catalytic reduction time in the presence of 30 mM NaBH~4~ under different temperatures of (■) 0 °C, (●) 25 °C, and (▲) 40 °C, where the other conditions are the same as described in [Figure [2](#fig2){ref-type="fig"}](#fig2){ref-type="fig"}a.](ao-2017-019876_0003){#fig4} The reaction rate was affected by the temperature of the reaction system because the reactants had more kinetic energy at higher temperatures and were able to cross the activation state more easily. [Figure [4](#fig4){ref-type="fig"}](#fig4){ref-type="fig"}b depicts the relationship between *C*~t~/*C*~0~ and reaction times at various reaction temperatures. As shown in [Figure [4](#fig4){ref-type="fig"}](#fig4){ref-type="fig"}b, the higher the temperature of the reaction system, the faster was the observed reduction of *o*-NA. 
Furthermore, when the temperature was 0 °C, the reduction rate of *o*-NA was similar to that at 25 °C in the first minute and *C*~t~/*C*~0~ was almost unchanged after the first minute, suggesting that the reduction was almost interrupted after 1 min at 0 °C. Effect of the Catalyst Amount on the Catalytic Reaction {#sec2.4} ------------------------------------------------------- As described in the previous sections, the conversion efficiency is related to the amount of AgNP loading. Accordingly, the conversion efficiency can also be related to the amount of AgMNPs used per experiment. [Figure [5](#fig5){ref-type="fig"}](#fig5){ref-type="fig"} plots the relationship between the *C*~t~/*C*~0~ of *o*-NA and reduction time when various amounts of AgMNPs were used. As shown in [Figure [5](#fig5){ref-type="fig"}](#fig5){ref-type="fig"}, when 1 mg of AgMNPs was used per experiment, approximately 20% of *o*-NA was reduced after 5 min of conversion and the conversion efficiency increased with the amount of AgMNPs used. When 20 mg of AgMNPs was used per experiment, the conversion of *o*-NA was almost 100% within 250 s at room temperature, which is promising for further applications. ![*C*~t~/*C*~0~ of 1 mM *o*-NA versus catalytic reduction time in the presence of 30 mM NaBH~4~ and different amounts of AgMNPs of (■) 1 mg, (●) 5 mg, (▲) 10 mg, (▼) 15 mg, and (◆) 20 mg, where the other conditions are the same as described in [Figure [2](#fig2){ref-type="fig"}](#fig2){ref-type="fig"}a.](ao-2017-019876_0004){#fig5} Activity of AgMNPs for the Catalytic Reduction of Other Nitroarenes {#sec2.5} ------------------------------------------------------------------- After studying the characteristics and catalytic properties of the prepared AgMNPs, we investigated the catalytic reduction of other nitroarenes, including *m*-NA, *p*-NA, and *p*-NP, through the same procedures to investigate the ability of the AgMNPs to accelerate the reduction of other nitroarenes. [Figure S4](http://pubs.acs.org/doi/suppl/10.1021/acsomega.7b01987/suppl_file/ao7b01987_si_001.pdf) plots the evolution of the UV--vis spectra for reducing *m*-NA, *p*-NA, and *p*-NP in the presence of AgMNPs over time. The absorption maxima, which were located at 360 nm for *m*-NA, 380 nm for *p*-NA, and 400 nm for *p*-NP, decreased as the catalytic reduction proceeded. The relationship between *C*~t~/*C*~0~, where *C*~0~ is the absorbance at the initial time and *C*~t~ is the absorbance after the specific reaction time, for the nitroarenes tested and the reaction time under optimal conditions is plotted in [Figure [6](#fig6){ref-type="fig"}](#fig6){ref-type="fig"}. As indicated in the figure, the reductions of the four nitroarenes examined in this study were all completed within 4 min, which suggested that the AgMNPs prepared in this study were capable of catalytically reducing various nitroarenes. 
![*C*~t~/*C*~0~ of different 1 mM nitroarenes versus catalytic reduction time in the presence of 30 mM NaBH~4~ and 20 mg of AgMNPs (■) *o*-NA, (●) *m*-NA (358 nm), (▲) *p*-NA (380 nm), and (▼) *p*-NP (400 nm), where the other conditions are the same as described in [Figure [2](#fig2){ref-type="fig"}](#fig2){ref-type="fig"}a.](ao-2017-019876_0005){#fig6} Furthermore, as reported in other works, the catalytic reduction of nitroarenes follows pseudo-first-order reaction kinetics.^[@ref26],[@ref43],[@ref51]^ Linear relations between ln(*C*~0~/*C*~t~) and reaction time were obtained for the nitroarenes examined in this study, and the rate constant (*k*) can be estimated from the slopes of the obtained lines. The calculated *k* values at room temperature are 0.0192, 0.0145, 0.0185, and 0.0196 s^--1^ for the catalytic reduction of *o*-NA, *m*-NA, *p*-NA, and *p*-NP, respectively. To compare the results obtained in this study to those reported recently, [Table [1](#tbl1){ref-type="other"}](#tbl1){ref-type="other"} tabulates the catalytic activities of various AgNP-based catalytic systems. As can be observed in the table, the rate constants of the AgMNPs prepared in this study for the reduction of nitroarenes are as good as those of other reported nanocatalysts. Moreover, because the effect of temperature has also been studied in the previous sections, the thermodynamic parameters for the catalytic reduction of *o*-NA by AgMNPs prepared in this study can be calculated by following the Arrhenius and Eyring equations.^[@ref67],[@ref68]^ The calculated activation energy (*E*~a~) is 39.88 kJ/mol, activation enthalpy (Δ*H*) is 37.34 kJ/mol, and activation entropy (Δ*S*) is −123.29 J/(mol K). These results suggest that the AgMNPs prepared in this study are excellent nanocatalysts for the reduction of nitroarenes. The catalytic reduction of nitroarenes by metal nanoparticles is generally explained by the Langmuir--Hinshelwood mechanism, where both reactants are adsorbed on the nanocatalyst surface and the reaction occurs after adsorption.^[@ref67],[@ref68]^ As a result of conversion, products are formed and then desorb from nanocatalyst surfaces. According to the Langmuir--Hinshelwood mechanism, the rate of catalytic reduction depends on the surface coverage of the reducing agent and nitroarene molecules.^[@ref69]−[@ref71]^ This mechanism rationalizes the large rate constant and small activation energy obtained in this study because the high surface-to-volume ratio and the quasi-homogeneous reaction conditions increase the surface coverage of reactants significantly. 
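The pseudo-first-order and Arrhenius analyses described above can be reproduced from raw absorbance data with a few lines of code. The sketch below is generic: the absorbance values, times, temperatures, and rate constants in it are invented placeholders, not data from this paper, and an ordinary least-squares fit stands in for whatever fitting procedure the authors used.

```python
# Hypothetical illustration of the pseudo-first-order / Arrhenius analysis
# described above. The numerical values below are invented placeholders,
# NOT data from the paper.
import numpy as np

R = 8.314  # gas constant [J/(mol K)]

def rate_constant(times_s, absorbances):
    """Slope of ln(C0/Ct) vs t, with Ct/C0 = A(t)/A(0) (pseudo-first order)."""
    a = np.asarray(absorbances, dtype=float)
    y = np.log(a[0] / a)                      # ln(C0/Ct)
    k, _ = np.polyfit(np.asarray(times_s, dtype=float), y, 1)
    return k                                  # [1/s]

def activation_energy(temps_k, ks):
    """Arrhenius: ln k = ln A - Ea/(R T); the slope of ln k vs 1/T is -Ea/R."""
    slope, _ = np.polyfit(1.0 / np.asarray(temps_k, dtype=float), np.log(ks), 1)
    return -slope * R                         # [J/mol]

# placeholder data: absorbance at the monitored wavelength vs time,
# and rate constants at three temperatures
t = [0, 60, 120, 180, 240]
A = [1.00, 0.32, 0.10, 0.033, 0.011]
print("k  ~", rate_constant(t, A), "1/s")
print("Ea ~", activation_energy([273, 298, 313], [0.004, 0.019, 0.055]) / 1000, "kJ/mol")
```

The Eyring treatment for the activation enthalpy and entropy follows the same pattern, fitting ln(*k*/T) against 1/T instead of ln *k* against 1/T.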
###### Comparison of Catalytic Activities of Several AgNP-Based Systems

| catalysts | nitroarene (final concentration; mM) | concentration of NaBH~4~ (mM) | temperature | apparent rate constant (s^--1^) | ref |
| --- | --- | --- | --- | --- | --- |
| AgNPs/polydopamine/anodic aluminum oxide | *o*-NA (1.13) | 400 | RT[a](#t1fn1){ref-type="table-fn"} | 0.0013 | ([@ref40]) |
| biogenic AgNPs | *p*-NP (0.20) | 10 | [b](#t1fn2){ref-type="table-fn"} | 0.00406 | ([@ref41]) |
| AgNPs/partially reduced graphene oxide | *p*-NP (0.10) | 13 | RT | 0.0374 | ([@ref43]) |
| AgNPs on porous glass filters | *o*-NA (1.00) | 30 | 50 °C | 0.0094 | ([@ref26]) |
|   | *p*-NA (1.00) | 30 | 50 °C | 0.0071 | ([@ref26]) |
| AgNPs in microgels | *p*-NP (0.08) | 24 | 22 °C | 0.0153 | ([@ref50]) |
| AgNPs in microgels | *o*-NA (0.09) | 18 | 22 °C | 0.0067 | ([@ref51]) |
|   | *p*-NA (0.09) | 18 | 22 °C | 0.0101 | ([@ref51]) |
|   | *p*-NP (0.09) | 18 | 22 °C | 0.0052 | ([@ref51]) |
| AgNPs on fibrous nanosilica | *o*-NA (0.17) | 22 | RT | 0.0043 | ([@ref52]) |
|   | *p*-NP (0.099) | 83 | RT | 0.01 | ([@ref52]) |
| Fe~3~O~4~\@SiO~2~/Ag nanocomposite | *p*-NP (0.06) | 6 | 25 °C | 0.00767 | ([@ref59]) |
| AgNPs/HLaNb~2~O~7~ | *p*-NP (0.091) | 18 | [b](#t1fn2){ref-type="table-fn"} | 0.00301 | ([@ref53]) |
| this study | *o*-NA (1) | 30 | RT | 0.0192 |   |
|   | *p*-NA (1.00) | 30 | RT | 0.0185 |   |
|   | *p*-NP (1.00) | 30 | RT | 0.0196 |   |

[a] Room temperature. [b] Not mentioned.

Recyclability of the Ag Nanocatalysts {#sec2.6} ------------------------------------- The recyclability of the AgMNPs prepared in this study was evaluated by consecutively reusing the nanocatalysts for the catalytic reduction of *o*-NA. As shown in [Figure [7](#fig7){ref-type="fig"}](#fig7){ref-type="fig"}, only approximately 40% of *o*-NA was reduced to 1,2-PPD after 10 min of reaction in the second consecutive test. This is possibly related to the adsorption of *o*-NA or 1,2-PPD on the surface of the silver nanocatalysts, which consequently reduced the electron transferability.^[@ref26]^ To reactivate the silver nanocatalysts, we soaked the used AgMNPs in an aqueous solution at pH 3 for 20 min and rinsed them with neutral water before the next use. As shown in [Figure [7](#fig7){ref-type="fig"}](#fig7){ref-type="fig"}, more than 90% of the *o*-NA was reduced within 5 min and more than 95% was reduced within 8 min. Furthermore, after treating the reused AgMNPs with an aqueous acidic solution, the AgMNPs could be recycled and their performance was similar to that in the previous run. Therefore, the AgMNPs prepared in this study can be recycled after treatment with an acidic solution, which significantly extends the practical applicability of these silver nanocatalysts. ![*C*~t~/*C*~0~ time profile of 1 mM *o*-NA (■: first run; ●: second run without regeneration; ▲: third run after regeneration; and ▼: fourth run after regeneration) in the presence of 30 mM NaBH~4~ and 20 mg of AgMNPs, where the other conditions are the same as described in [Figure [2](#fig2){ref-type="fig"}](#fig2){ref-type="fig"}a.](ao-2017-019876_0006){#fig7} Conclusions {#sec3} =========== In this study, a simple but facile approach to prepare silver-doped magnetic hybrid nanoparticles was proposed based on a chemical reduction and coprecipitation method. The nanoparticles prepared through this method were used as nanocatalysts for the reduction of *o*-NA. Using the AgMNPs prepared in this study as nanocatalysts exploits the advantages of quasi-homogeneous reaction conditions and enables the easy removal of nanocatalysts from the solution with a magnet. 
The results indicated that the composition of the AgMNPs prepared can be tuned by adjusting the ratio of \[Fe^2+^\]~0~ to \[Ag^+^\]~0~ and the chemical reduction time during the production of AgMNPs. During the catalytic reduction of *o*-NA, the pH and temperature of the system affect the reduction rate, which is also affected by the amount of nanocatalyst used in the reaction. Furthermore, the prepared AgMNPs were applicable to the catalytic reduction of other nitroarenes. Finally, the silver-doped magnetic nanocatalysts proposed in this study have several advantages, namely, easy preparation, significant catalytic activity at room temperature, high conversion ability, and recyclability, all of which enhance their usefulness for real applications. Experimental Section {#sec4} ==================== Materials {#sec4.1} --------- Ferrous sulfate and ferric chloride were obtained from Showa Chemical (Tokyo, Japan). Silver nitrate, *o*-NA, *m*-nitroaniline (*m*-NA), *p*-nitroaniline (*p*-NA), and *p*-nitrophenol (*p*-NP) were purchased from Alfa Aesar (Ward Hill, MA). Sodium borohydride was obtained from Acros Organics (Geel, Belgium). Ammonium hydroxide (28--30%, v/v) and nitric acid were purchased from Fisher Scientific (Hampton, NH). All chemicals were of reagent grade and used as received without further purification. Deionized Milli-Q water (Simplicity, Millipore, Burlington, MA) was used throughout this study. Preparation of AgMNPs {#sec4.2} --------------------- The preparation of AgMNPs was based on a chemical reduction and coprecipitation method. Briefly, 100 mL of 12 mM ferrous aqueous solution was mixed with various volumes of 200 mM silver nitrate aqueous solution under vigorous stirring for a specified amount of time. During stirring, a spontaneous oxidation--reduction reaction occurred between Ag^+^ and Fe^2+^. Ag^+^ was reduced to Ag^0^ and an equivalent number of moles of Fe^2+^ ions were oxidized to Fe^3+^. After the specified reaction time, 50 mL of 1.44 M ammonia solution, which acted as the precipitating agent, was rapidly added to the solution under vigorous stirring for 10 min to complete the coprecipitation process. After 3 h in storage, the formed nanoparticles were collected with a magnet and washed three times with distilled water and ethanol. Finally, the washed AgMNPs were dried in an oven at 140 °C for 8 h before further use. As an alternative for comparison, magnetite nanoparticles (Fe~3~O~4~ NPs) without silver doping were prepared following previous reports.^[@ref72]^ We conducted transmission electron microscopy (TEM) with a Hitachi HT-7700 microscope operated at 100 kV, energy-dispersive X-ray spectroscopy (EDS) analysis with a Hitachi SU-8010 microscope at an accelerating voltage of 15.0 kV, and powder X-ray diffraction (PXRD) with a Siemens D5000 XRD system to characterize the morphologies and compositions of the prepared AgMNPs. Hysteresis loops of the prepared AgMNPs were recorded at room temperature with a Quantum Design MPMS 3 SQUID vibrating sample magnetometer system. Reduction of Nitroaniline Catalyzed by AgMNPs {#sec4.3} --------------------------------------------- The catalytic efficiency of the AgMNPs was evaluated using the nanoparticles as prepared for the catalytic reduction of *o*-NA. A specific amount of AgMNPs was mixed with 15 mL of an aqueous solution consisting of 1 mM *o*-NA and 30 mM NaBH~4~ at room temperature. Ultraviolet--visible (UV--vis) spectra of the solution were recorded at chosen intervals. 
All UV--vis spectra in this study were measured using a Thermo Fisher Scientific Genesys 10S Bio UV--Vis spectrometer with a 1 nm resolution. The spectra were recorded within a wavelength range of 250--550 nm. The optical path of the UV--vis cell was 3 mm. The Supporting Information is available free of charge on the [ACS Publications website](http://pubs.acs.org) at DOI: [10.1021/acsomega.7b01987](http://pubs.acs.org/doi/abs/10.1021/acsomega.7b01987).Hysteresis loops, XRD spectrum, and EDS analysis results of the AgMNPs, and the UV--vis spectra of the reduction of *m*-NA, *p*-NA, and *p*-NP in the presence of AgMNPs with increasing times ([PDF](http://pubs.acs.org/doi/suppl/10.1021/acsomega.7b01987/suppl_file/ao7b01987_si_001.pdf)) Supplementary Material ====================== ###### ao7b01987_si_001.pdf The authors declare no competing financial interest. The authors acknowledge the financial support from the Taiwan Ministry of Science and Technology under grant MOST106-2113-M-037-016. This work was also supported by the Kaohsiung Medical University Research Foundation under grant KMU-M106016.
{ "pile_set_name": "PubMed Central" }
Q: How can I generate all possible IPs from a list of ip ranges in Python? Let's say I have a text file that contains a bunch of ip ranges like this:
x.x.x.x-y.y.y.y
x.x.x.x-y.y.y.y
x.x.x.x-y.y.y.y
x.x.x.x-y.y.y.y
x.x.x.x-y.y.y.y
x.x.x.x is the start value and y.y.y.y is the end value of a range. How can I convert these ip ranges to all possible IPs in a new text file in python? PS: This question is not the same as any of my previous questions. I asked "how to generate all possible ips from cidr notations" in my previous question. But here I ask "how to generate from an ip range list". These are different things.
A: This function returns all IP addresses from start up to, but not including, end:
def ips(start, end):
    import socket, struct
    start = struct.unpack('>I', socket.inet_aton(start))[0]
    end = struct.unpack('>I', socket.inet_aton(end))[0]
    return [socket.inet_ntoa(struct.pack('>I', i)) for i in range(start, end)]
These are the building blocks to build it on your own:
>>> import socket, struct
>>> ip = '0.0.0.5'
>>> i = struct.unpack('>I', socket.inet_aton(ip))[0]
>>> i
5
>>> i += 1
>>> socket.inet_ntoa(struct.pack('>I', i))
'0.0.0.6'
Example:
ips('1.2.3.4', '1.2.4.5')
['1.2.3.4', '1.2.3.5', '1.2.3.6', '1.2.3.7', ..., '1.2.3.253', '1.2.3.254', '1.2.3.255', '1.2.4.0', '1.2.4.1', '1.2.4.2', '1.2.4.3', '1.2.4.4']
Read from file
In your case you can read from a file like this:
with open('file') as f:
    for line in f:
        start, end = line.strip().split('-')
        # ....
A: Python 3 only, for IPv4; same idea as @User but using the new Python 3 standard library: ipaddress. IPv4 is represented by 4 bytes, so the next IP is actually the next number, and a range of IPs can be represented as a range of integer numbers.
0.0.0.1 is 1
0.0.0.2 is 2
...
0.0.0.255 is 255
0.0.1.0 is 256
0.0.1.1 is 257
By code (ignore the In []: and Out []:):
In [68]: from ipaddress import ip_address
In [69]: ip_address('0.0.0.1')
Out[69]: IPv4Address('0.0.0.1')
In [70]: ip_address('0.0.0.1').packed
Out[70]: b'\x00\x00\x00\x01'
In [71]: int(ip_address('0.0.0.1').packed.hex(), 16)
Out[71]: 1
In [72]: int(ip_address('0.0.1.0').packed.hex(), 16)
Out[72]: 256
In [73]: int(ip_address('0.0.1.1').packed.hex(), 16)
Out[73]: 257
ip.packed.hex() returns the hexadecimal form of the 4 bytes; hexadecimal is shorter (e.g. 0xff hex == 255 decimal == 0b11111111 binary) and thus often used for representing bytes. int(hex, 16) returns the integer value corresponding to that hex string, which is easier to work with and can be used as input for ip_address.
from ipaddress import ip_address

def ips(start, end):
    '''Return IPs in IPv4 range; the end address is excluded.'''
    start_int = int(ip_address(start).packed.hex(), 16)
    end_int = int(ip_address(end).packed.hex(), 16)
    return [ip_address(ip).exploded for ip in range(start_int, end_int)]

ips('192.168.1.240', '192.168.2.5')
Returns:
['192.168.1.240', '192.168.1.241', '192.168.1.242', '192.168.1.243', '192.168.1.244', '192.168.1.245', '192.168.1.246', '192.168.1.247', '192.168.1.248', '192.168.1.249', '192.168.1.250', '192.168.1.251', '192.168.1.252', '192.168.1.253', '192.168.1.254', '192.168.1.255', '192.168.2.0', '192.168.2.1', '192.168.2.2', '192.168.2.3', '192.168.2.4']
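A note on putting the pieces together: both answers above stop one address short of the end of the range, and neither shows writing the result to a new file, which the question asks for. Below is a minimal end-to-end sketch under the assumption that the input file follows the one-range-per-line format shown in the question and that both endpoints should be included; the file names `ranges.txt` and `all_ips.txt` are placeholders.

```python
# Expand every "start-end" range in an input file into one IP per line in an
# output file. Assumes the format shown in the question; both endpoints are
# included. File names are placeholders.
from ipaddress import IPv4Address

def expand_range(start, end):
    first, last = int(IPv4Address(start)), int(IPv4Address(end))
    return (str(IPv4Address(i)) for i in range(first, last + 1))

with open('ranges.txt') as src, open('all_ips.txt', 'w') as dst:
    for line in src:
        line = line.strip()
        if not line:
            continue                      # skip blank lines
        start, end = line.split('-')
        for ip in expand_range(start, end):
            dst.write(ip + '\n')
```

Using a generator keeps memory use flat even for large ranges, since addresses are written out as they are produced rather than collected into a list first.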
{ "pile_set_name": "StackExchange" }
---------------------- Forwarded by Mark Taylor/HOU/ECT on 01/17/2000 11:18 AM --------------------------- Nony Flores 01/14/2000 06:34 PM To: Mark Taylor/HOU/ECT@ECT cc: Dale Neuner/HOU/ECT@ECT, John L Nowlan/HOU/ECT@ECT, Steven M Elliott/HOU/ECT@ECT, Alan Aronowitz/HOU/ECT@ECT, Harry M Collins/HOU/ECT@ECT, Janice R Moore/HOU/ECT@ECT, Michael A Robison/HOU/ECT@ECT, Lynn E Shivers/HOU/ECT@ECT Subject: Re: On-line GTC's for Enron Clean Fuels Company (ECFC) Mark - Mike Robison and I have finalized the form of GTC's for ECFC and are forwarding them to you for further distribution. Thanks. John - The only modifications to ECFC's online GTC's are as follows: Contract Formation In order to maintain consistency with the other product lines online GTCs, we've modified this section to reflect the same introduction as reflected in ELFI, EGLI, ERAC and EPC. Damages and Limitation We've added a 2 line bilateral provision regarding payment. Financial Information We've added the two paragraphs re credit provisions as reflected in the confirmations. Compliance with U.S. Laws We've added the buyer/seller trade sanctions as reflected in the confirmations. Law and Jurisdiction We replaced this section in its entirety with the new policy in effect as of June 1998. Appendix II We've added the Marine Provisions Addendum. Appendix III We've added the Transportation Addendum. Thanks. Nony Flores Ext. 37541 Dale Neuner on 01/12/2000 05:24:45 PM To: Nony Flores/HOU/ECT@ECT, Mark Taylor/HOU/ECT@ECT cc: [email protected], Bob Shults/HOU/ECT, Steven M Elliott/HOU/ECT@ECT Subject: Re: On-line GTC's for Enron Petrochemicals Company (EPC) Mark - Although these products were not on the release list for the Jan 5 release, we have been actively pursuing their completion and posting to EOL. Since our reviews of the products are complete, the Petrochems guys are extremely eager to have their products posted. Please offer your approval of the attached ASAP so I can get this and the product onto EOL. Nony - We were expecting delivery of the Methanol GTC today. Please be aware that the uploading of GTC's can sometimes be laborious, and since the Petrochem guys are eager to get this out, time is of the essence. Please let me know when you can expect delivery of the Methanol GTC to Mark Taylor. Dale Nony Flores 01/11/2000 06:09 PM To: Mark Taylor/HOU/ECT@ECT cc: Alan Aronowitz/HOU/ECT@ECT, Harry M Collins/HOU/ECT@ECT, Janice R Moore/HOU/ECT@ECT, Michael A Robison/HOU/ECT@ECT, Dale Neuner/HOU/ECT@ECT, Steven M Elliott/HOU/ECT@ECT Subject: On-line GTC's for Enron Petrochemicals Company (EPC) Mark - Last week Dale Neuner was requested to add four new commodities to on-line trading and Mike Robison and Steve Elliott have reviewed and approved the long descriptions for benzene, mixed xylene, toluene and methanol. We have also finalized the form of GTC's for EPC (who trades benzene, xylene and toluene) and are forwarding them to you for further distribution. Note: GTC's for Enron Clean Fuels Company (who trades methanol) will follow on Wednesday. Thanks. Nony Flores Ext. 3-7541
{ "pile_set_name": "Enron Emails" }
The root of words like electricity, electronic, and electrostatic is the ancient Greek word elektron, meaning amber. The Greeks used pieces of amber in jewelry. They noticed that when amber was rubbed with cloth, it attracted light objects such as hair. The amber could pick these things up off the ground, despite the gravitational forces pulling them down. It seemed natural to attribute this behavior to the amber. Subsequently, anything that displayed similar behavior was likened to amber. Amber's electrostatic charging comes from the triboelectric effect. To understand electrostatic charge, you need to think about the atoms that make up everything we can see. All matter is made up of atoms, which are themselves made up of charged particles. Atoms have a nucleus consisting of neutrons and protons. They also have a surrounding "shell" that is made up of electrons. Typically, matter is neutrally charged, meaning that the number of electrons and protons is the same. If an atom has more electrons than protons, it is negatively charged. If it has more protons than electrons, it is positively charged. Static charge is most often generated by contact and separation, or "triboelectrification." When any two materials touch and then break contact, there is an exchange of electrons; one material will be left with an excess, and the other a loss, of electrons. Some atoms hold on to their electrons more tightly than others do. How strongly matter holds on to its electrons determines its place in the triboelectric series. If a material is more apt to give up electrons when in contact with another material, it is more positive in the triboelectric series. If a material is more apt to "capture" electrons when in contact with another material, it is more negative in the triboelectric series. The triboelectric series is a list that ranks various materials according to their tendency to gain or lose electrons. It usually lists materials in order of decreasing tendency to charge positively (lose electrons), and increasing tendency to charge negatively (gain electrons). Somewhere in the middle of the list are materials that do not show a strong tendency to behave either way. Note that the tendency of a material to become positive or negative after triboelectric charging has nothing to do with the level of conductivity (or ability to discharge) of the material. The following list shows you the triboelectric series for many materials you find around the house. Positive items in the series are at the top, and negative items are at the bottom (compilation of a few Internet sources):
Air (very positive)
Human hands (usually too moist, though) (very positive)
Leather
Rabbit fur
Glass
Human hair
Nylon
Wool
Fur
Lead
Silk
Aluminum
Paper
Cotton (neutral)
Steel (neutral)
Wood
Amber
Hard rubber
Nickel, Copper
Brass, Silver
Gold, Platinum
Polyester
Styrene (Styrofoam)
Saran Wrap
Polyurethane
Polyethylene (like Scotch Tape)
Polypropylene
Vinyl (PVC)
Silicon
Teflon (very negative)
The relative position of two substances in the triboelectric series tells you how they will act when brought into contact. Glass rubbed by silk causes a charge separation because they are several positions apart in the table. The same applies for amber and wool. The farther the separation in the table, the greater the effect.
Practical conclusions
A common complaint people have about static electricity is that they shoot sparks when touching objects. 
This typically happens because they have dry skin, which can become highly positive (+) in charge, especially when the clothes they wear are made of polyester material, which can become negative (-) in charge. People who build up static charges due to dry skin are advised to wear all-cotton clothes, which are neutral. Also, moist skin reduces the collection of charges. Neutral materials, or those close to neutral, are probably the best for people's living environment. The goal is to avoid creation of, or exposure to, a large electrostatic charge.
Amber jewelry: generally close to neutral
Despite the fact that it was amber that gave its Greek name to electricity, amber is quite close to the middle of the table relative to the synthetic negative materials, and not far from the neutral ones. Amber jewelry, compared to other materials used in the living environment, can be considered slightly negative or close to neutral. Also, the active surface of amber jewelry is small or very small and cannot be considered significant.
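To make the "relative position" rule above concrete, here is a small illustrative sketch. The list is just an ordered subset of the series given earlier (positions are ordinal, not measured quantities), and the function simply reports which of two materials should tend to charge positively after contact and separation.

```python
# Toy illustration of the triboelectric-series rule described above: the
# material listed closer to the positive end tends to lose electrons (charge +),
# the other tends to gain them (charge -). Positions are ordinal only.
SERIES = [  # from more positive to more negative (subset of the list above)
    "human hands", "glass", "human hair", "nylon", "wool", "silk",
    "paper", "cotton", "amber", "polyester", "polyethylene", "teflon",
]

def contact_charges(a, b):
    ia, ib = SERIES.index(a), SERIES.index(b)
    if ia == ib:
        return {a: "little or no net charge", b: "little or no net charge"}
    pos, neg = (a, b) if ia < ib else (b, a)
    return {pos: "+ (tends to lose electrons)", neg: "- (tends to gain electrons)"}

print(contact_charges("glass", "silk"))   # glass charges +, silk -
print(contact_charges("amber", "wool"))   # wool charges +, amber -
```

The glass/silk pair comes out positive for glass, matching the glass-rubbed-by-silk example in the text; the farther apart two entries sit, the stronger the effect is expected to be.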
{ "pile_set_name": "Pile-CC" }
"What's Your Constituency?" Community Grade (13 Users) Your Grade Welcome back to the TV Club coverage of Top Chef, which marks the fourth straight season covered by yours truly. I have to confess upfront that I’m feeling some trepidation at the start here: Partly because I felt the quality and chemistry of the competitors on Season Six set the bar awfully high for future seasons, partly because I found myself checking out early on the second run of Top Chef Masters: Season Two (which Emily Withrow covered), and partly because the beginning of every season offers a daunting 17 contestants, the vast majority of whom don’t stand a chance of winning the thing. That means a good 10 weeks of waiting for wheat to be separated from chaff, and trying to summon up outrage over, say, the 9th best chef getting bounced before the 13th best chef. But while the grind of the early episodes is inevitable—aided by a reality-show formula that certainly isn’t getting any fresher—Top Chef is usually fun in its particulars and Season Seven brings us one major reason for optimism: Eric Ripert, genial culinary superstar and frequent guest in past seasons, has replaced Toby Young as the permanent fourth judge. I’m sure there must be someone who will miss Young’s canned quips and facile observations—and I’ll definitely miss the simmering contempt on Tom’s face whenever Toby opened his mouth—but I think most will agree this is an upgrade. My one concern about Ripert may be the “genial” part: Toby was not afraid to be candid, sometimes to his detriment, and it would be a shame if Ripert became the Paula Abdul of the panel, given more to gentle advise than slash-and-burn critiques. I’ll reserve judgment, but based on tonight’s debut, Ripert at least revealed some sharp, specific insights into how various dish succeeded or failed and that may be enough to compensate for his lack of tartness. After the standard introduction atop a balcony with Congress as a backdrop, the D.C. Top Chef puts the popular mise en place Quickfire challenge front and center. Had everyone been asked to cook, we’d have gotten the usual mélange of dishes, with some overachievers underachieving and vice versa, and no real sense of who the contenders are this season. Granted, no challenge will tell us everything about how the season is going to shape up, but it stands to reason that those with refined knife skills are very likely to be refined in other areas. And lo, a pair of tough-talking, “alpha-male” types emerge as the persons to beat: Kenny, who zips through three of the four stages in first place (including breaking down four whole chickens at a rate of 10 seconds per), and Angelo, who ultimately wins the Quickfire (and $20,000) with his roasted wing & thigh, curry onion jam, and potato noodles. Playing the role of hubristic culinary hotshot to the hilt, Angelo also emerges as the early villain of Season Seven, announcing his absurd ambition “to be the first contestant to win every single challenge” and likening his cooking to “an orchestra with flavors.” Still, I prefer brash, overconfident chefs who can cook to the brash, overconfident chefs who don’t know what they’re doing, and there’s the expected abundance of the latter early on. The Elimination challenge asks the chefs to divide into four groups and create individual dishes that most strongly represent where they’re from—all for the 300 or so partygoers (including The Bachelor from The Bachelor!) celebrating cherry blossom season. 
With the best and worst from each group up for victory and elimination, respectively, the two top Quickfire competitors, Angelo and Kenny, are given the advantage of choosing someone weak to square off against. Tactically, it's only a minor edge, since nobody knows for sure how good or bad anyone else is yet; on a more personal level, it's likely to cause friction between the Quickfire champs and the chefs they assume suck the most. Surprise, surprise, the chefs who make head-slappingly stupid decisions are punished for it later. You had to know Jacqueline was in trouble when she talked about her desire to prove that self-taught cooks who cater can win Top Chef. Then she opted to serve a non-fat (or very low-fat) chicken liver mousse, which evinced mystified "Why bother?" looks from the judges even before they discovered she hadn't strained the dish to cut back on the unpleasantly grainy texture. She was fortunate that bandana-donning wild man John was around to do something even dumber: Cook dessert, which has been such an Achilles' heel for contestants in the past that Top Chef has created a spin-off show strictly for pastry maestros. I don't care what deliciousness is oozing out of trees in Michigan: Maple mousse on puff pastry sounds sickly sweet in the best of circumstances. Add to that a pre-formed pastry—which, as Gail notes, represents a full third of his three-component dish—and John was doomed for an early exit. As for the winners, Angelo so far makes good on his goal to win every challenge, orchestrating a bite of arctic char (with, sigh, a bacon foam) that wows the judges, and there's Kenny again playing bridesmaid with a nicely layered trout with a cinnamon coffee rub. Kevin, who had made the final four in the Quickfire, too, acquitted himself well enough to be in contention with his lamb sous vide. Of the four, I'm thinking Alex might emerge as a wild card possibility: Tom is usually quick to hate deconstructed anything, but Alex's dubious-sounding short rib/borscht combination allayed even his doubts. Too early to tell how the gelatinous blob that is the beginning of all Top Chef seasons will shape up, but at least there's some heat at the top. Stray observations: • Because Bravo sends out the first episode of every season to critics, this review was posted right after airing. Won't happen again. Future recaps will most likely be posted 90 minutes or so later. • "Everything I do will be outstanding" —Angelo, asking for it. • Proof that Kevin must be an excellent chef: His Jersey restaurant is called Rat's, and it's still in business. • As noted in one of last season's posts, I had the pleasure of dining at Seablue, where Stephen Hopcraft works as executive chef. (It's one of two Michael Mina restaurants at MGM Grand.) Stephen wound up on the bottom four for his deep-fried ribeye, but I suspect he won't reside there all season. Seablue offers a "fried" quartet of appetizers that includes a tasty "lobster corn dog," but what works for lobster apparently doesn't for beef. • Great tweet from Washington Post political columnist and all-around handsome bastard Ezra Klein on the irony of the Elimination challenge: "I feel like a challenge based on "representation" is a needlessly cruel way to start Top Chef dc. #novote"
{ "pile_set_name": "Pile-CC" }
Digital Learning
ACG School Jakarta students are well prepared to face the ever-changing world that we live in today. Information Technology is an integral part of day-to-day living; therefore, students need to be able to understand and utilise these tools effectively. Students will acquire the skills necessary to select and manage digital tools that will empower them in all phases of the learning process, including research, critical thinking, creative thinking, communication, self-management and collaboration. ACG School Jakarta is a centre of leadership and educational excellence with a strong vision for learning with technology.
iPads in Primary School
BYOD (Bring Your Own Device) in Secondary School
Digital Citizenship
ACG School Jakarta believes in a Digital Citizenship model for supporting safe and responsible use of the Internet in teaching and learning. An important part of this is that we are able to show others what responsible use looks like while we are using technology in our learning. We think a good digital citizen is someone who:
Is a confident and capable user of ICT.
Will use ICT for learning as well as other activities.
Will think carefully about whether the information they see online is true.
Will be able to speak the language of digital technologies.
Understands that they may experience problems when using technology but can deal with them.
Will always use ICT to communicate with others in positive ways.
Will be honest and fair in all their actions using ICT.
Will always respect people's privacy and freedom of speech online.
Will help others to become a better digital citizen.
Because we know this is important for us all, we ask everyone, including the staff, students, and volunteers working at the school, to agree to use the Internet and other technologies in a safe and responsible way by following the rules laid out in the Responsible Use Agreement.
Managebac
ManageBac is an online planning, assessment, and reporting platform. At ACG School Jakarta, we use ManageBac to take attendance, record assessment data, plan our units, and report on student progress. Parents can log in to ManageBac, where they can access student reports and attendance for their child(ren). Parent guidance for ManageBac is provided by the school. Speak to the Front Desk, DP Coordinator, or PYP Coordinator if you need support. Students who enroll at ACG School Jakarta are eligible for Office 365 Education for free, including Word, Excel, PowerPoint, OneNote, and Microsoft Teams, plus additional classroom tools. Students use Office 365 in the classroom and learn a suite of skills and applications that employers value most. Whether it's Outlook, Word, PowerPoint, Access or OneNote, students will be prepared for their futures today. A record of student learning is kept on the online Seesaw application. The student portfolio includes the evidence of learning in the form of photographs, videos, audio recordings and text documents. The emphasis is on the process of learning and will therefore include ongoing records, not just the final product. Teachers post a weekly blog to Seesaw about the learning happening in the classroom. We encourage parents to view this weekly to see what has been taught as well as take note of any upcoming events. Access for parents is available via the Seesaw Family application. For information about how you can access your child's portfolio, please contact their class teacher. Teachers and parents may also communicate via the Seesaw chat feature built into the application.
{ "pile_set_name": "Pile-CC" }
## run-all-benchmarks.pkg # Compiled by: # src/app/benchmarks/benchmarks.lib stipulate package bj = benchmark_junk; # benchmark_junk is from src/app/benchmarks/benchmark-junk.pkg herein package run_all_benchmarks { # fun run_all_benchmarks () = { r = ([]: List( bj::Benchmark_Result )); # Linux times seem accurate to about 4ms # so I tweak these to run about 400ms each # to give us times accurate to roughly +-1%: r = tagged_int_loop::run_benchmark 200000000 ! r; # tagged_int_loop is from src/app/benchmarks/tagged-int-loop.pkg r = one_word_int_loop::run_benchmark 200000000 ! r; # one_word_int_loop is from src/app/benchmarks/one-word-int-loop.pkg r = tagged_int_loops::run_benchmark 200 ! r; # tagged_int_loops is from src/app/benchmarks/tagged-int-loops.pkg r = one_word_int_loops::run_benchmark 200 ! r; # one_word_int_loops is from src/app/benchmarks/one-word-int-loops.pkg r = tagged_int_loop_with_overflow_trapping::run_benchmark 200000000 ! r; # tagged_int_loop is from src/app/benchmarks/tagged-int-loop.pkg r = one_word_int_loop_with_overflow_trapping::run_benchmark 200000000 ! r; # one_word_int_loop is from src/app/benchmarks/one-word-int-loop.pkg r = tagged_int_loops_with_overflow_trapping::run_benchmark 200 ! r; # tagged_int_loops is from src/app/benchmarks/tagged-int-loops.pkg r = one_word_int_loops_with_overflow_trapping::run_benchmark 200 ! r; # one_word_int_loops is from src/app/benchmarks/one-word-int-loops.pkg r = tagged_int_shellsort::run_benchmark 15000000 ! r; # tagged_int_shellsort is from src/app/benchmarks/tagged-int-shellsort.pkg r = tagged_int_shellsort_no_bounds_checking::run_benchmark 15000000 ! r; # tagged_int_shellsort_no_bounds_checking is from src/app/benchmarks/tagged-int-shellsort.pkg bj::summarize_all_benchmarks (reverse r); }; my _ = run_all_benchmarks (); }; end;
{ "pile_set_name": "Github" }
Glenea anticepunctata Glenea anticepunctata is a species of beetle in the family Cerambycidae. It was described by James Thomson in 1857. It is known from Borneo, Sumatra, India and Malaysia. Varietas Glenea anticepunctata var. janthoides Breuning, 1956 Glenea anticepunctata var. mediovitticollis Breuning, 1956 Glenea anticepunctata var. obsoletepunctata (Thomson, 1857) References Category:Glenea Category:Beetles described in 1857
{ "pile_set_name": "Wikipedia (en)" }
Simplify (((z**9/z)/z**(-17))**(4/5))**21 assuming z is positive. z**420 Simplify (a*(a*a**8)/a*a**(1/9))/(((a*(a**(-11)*a)/a*a)/a)/a**5) assuming a is positive. a**(217/9) Simplify (u/(u/(u/((u/(u/(u*u/(u*u*u**(-2)*u))))/u))*u))**(-4)/((u**11*u)/u**(2/23)) assuming u is positive. u**(-274/23) Simplify ((z*z**(-18)/z)/z*z)/(z*z**(5/9)*z)*z**(-2/11)/z**(7/3) assuming z is positive. z**(-2284/99) Simplify ((p*p*(p*p**8)/p)/p*p*p/p**3*(p**(1/3))**(6/25))**14 assuming p is positive. p**(2828/25) Simplify (a*a**(-24/5))**(-14)/(a**(-2/3))**(2/111) assuming a is positive. a**(88598/1665) Simplify (((m/(m/m**(1/13)))/(m/(m**(2/33)*m)))**(-40))**27 assuming m is positive. m**(-21240/143) Simplify (z/z**(-6))**(-1/14)/(z**(1/2)/z**(-13)) assuming z is positive. z**(-14) Simplify (n*n/n**13*n*n**(8/9))/(n/(((n*n**(-1))/n)/n*n*n*n))**(-8) assuming n is positive. n**(-82/9) Simplify (d**(8/9)/(d*d**10*d))/(d**(-10)/(d/(d*d*d/(d*d**(3/11))))) assuming d is positive. d**(-182/99) Simplify (v**4)**(4/11)/((v/v**(2/17))/(v/v**16)) assuming v is positive. v**(-2698/187) Simplify ((i**(-5)*i*i)/i)**29*((i/i**5*i)/i)/i**17 assuming i is positive. i**(-137) Simplify (g/((g*g**7)/g))**(-10/9)*g**(-4)*g/((g*g**15)/g) assuming g is positive. g**(-34/3) Simplify (((l*l/(l/l**3))/l)/(l**(-1/4)*l)*((l/(l**2/l))/l)/(((l*l*l**(-4/9)*l)/l)/l))**(1/2) assuming l is positive. l**(25/72) Simplify ((d**(-3))**(-24)/(d**(-12)/(d/d**(1/4))))**(-1/5) assuming d is positive. d**(-339/20) Simplify (d**14/d*d)/(d**11*d*d)*d*(d/d**(-1/4))/d*d*d*d**8 assuming d is positive. d**(49/4) Simplify ((j*j**(-13/3)/j)/j**20)/(j/(j**(-7)*j)*(j*j*j**(-1/18)*j*j)/j) assuming j is positive. j**(-617/18) Simplify ((y*y**1)**(2/119)*y*y**(-6)*y*(y*y**(-3/5)/y)/y)**(1/3) assuming y is positive. y**(-1104/595) Simplify (t**(1/4))**(-1/15)*(t**(-3/8))**(-5) assuming t is positive. t**(223/120) Simplify (p*p*p/p**(7/2))**(1/31)/((p/(p*p**3/p))/p)**(35/2) assuming p is positive. p**(1627/31) Simplify ((((w**(1/3)*w*w)/w)**(-6/29))**(-16/7))**(3/8) assuming w is positive. w**(48/203) Simplify ((p**(-1/19)*p*p)/(p/((p*(p/(p/((p**(-10/3)/p)/p)))/p)/p)*p))/(p*p**(1/4)*p*p**(-23)) assuming p is positive. p**(3275/228) Simplify ((q**11)**(2/17))**(1/44) assuming q is positive. q**(1/34) Simplify ((o**(1/4)/o*o)/o)**(-38)*(o*o**(-13))/(o/(o/(o**(-2/63)*o))) assuming o is positive. o**(1957/126) Simplify ((w*((w**(-3/7)/w)/w*w)/w)**44)**30 assuming w is positive. w**(-13200/7) Simplify ((l*l**(-1/2))**(2/25)/(l**(-1)/(l*l/l**(1/2)*l)))**(-14/5) assuming l is positive. l**(-1239/125) Simplify (d**1*d**(-1/3)/d*d**5/d**(-6))**33 assuming d is positive. d**352 Simplify (t**11/t*t**(1/4)*t)/(((t*t/t**(-2/31))/t)/t*t*((t*t*t*t**5*t*t)/t*t)/t) assuming t is positive. t**(147/124) Simplify (d**(-2/7))**(-20/9)*(d**(-1/2))**28 assuming d is positive. d**(-842/63) Simplify (d*d/(d**(9/5)*d))/d**6*d/d**(-3/13)*d/d**9 assuming d is positive. d**(-882/65) Simplify (((d*((d**(-9/4)*d)/d)/d)/(d**(-2)/d))**(-4/5))**(-10) assuming d is positive. d**6 Simplify (r*r/(r**10*r))/r**(-1)*r**9/(r*r**6*r*r) assuming r is positive. r**(-8) Simplify u**(-2/15)*u/u**20*u**8*(u/u**(-1/2))/u assuming u is positive. u**(-319/30) Simplify s*s*s/s**14*s*s/(s/(s*(s**13*s*s)/s))*s*s*(s**4)**(-43) assuming s is positive. s**(-165) Simplify ((g**(-2)*g*g*g)**(-2/17)*((g*g**2)/g*g)**(-9))**(-1/7) assuming g is positive. g**(461/119) Simplify (g/(g*g*g**5*g))/g*g*(g**12*g)/g*(g/(g**(2/3)*g))/(g**(-6)/g) assuming g is positive. 
g**(34/3) Simplify (r*r**(-3)*r)**16*r*((r/(r*r*r**8*r*r)*r)/r)/r*r**(-1) assuming r is positive. r**(-28) Simplify (x*x**(-11))/(x/x**(-17/2)*x*x*x*x)*x**(-23)/x*x*x**24/x assuming x is positive. x**(-47/2) Simplify (x**(-1/6))**(9/7)*(x/x**0)/x**(3/2) assuming x is positive. x**(-5/7) Simplify (b**(-4)*b/b**(1/10)*(b**(2/3)*b)**(-35))**(29/3) assuming b is positive. b**(-53447/90) Simplify ((n/((n*n*n**(-1/2))/n)*n)/n**6)/((n*n*n**8)/(n**(-1/4)/n)) assuming n is positive. n**(-63/4) Simplify (m/m**(-2/47)*m)/((m/m**31)/m)*(m**(-5/12)/m)**(-8) assuming m is positive. m**(6257/141) Simplify (q*q**(-7)/q)**(-1)/(q*q**(-1/15)*q*q**(-1/21)*q) assuming q is positive. q**(144/35) Simplify n**(-4)*n*n**17*n*n*(n*n*(n*n**(-11)/n)/n)/n**(-2/11) assuming n is positive. n**(68/11) Simplify i**13*i*i*i**(-2/9)*i*i*(i/(i/i**(-3/14)))/(i*i**11) assuming i is positive. i**(575/126) Simplify ((g**(-1/2))**(-9/7)*g/(g/(g*g**(-7)/g))*g*(g/(g*(g/((g/g**(1/7))/g))/g))/g*g)**(13/2) assuming g is positive. g**(-143/4) Simplify (y**(10/3)/y*y**(-29))**(-5) assuming y is positive. y**(400/3) Simplify t**(-3/11)*t**(-6/5)*(t*t**(-1/10))/(t/(t**(-2/7)*t)) assuming t is positive. t**(-661/770) Simplify (((l**(-1/32)/l)/l)/l**9)/((l*l*l/l**(-14))/(l/(l*(l/(l/l**(1/11))*l)/l)*l)) assuming l is positive. l**(-9547/352) Simplify (c*c/(c*c**(1/5))*c)**35*c**(5/4)/((c**(11/5)*c)/c) assuming c is positive. c**(1241/20) Simplify x**5/(x**(7/3)*x)*(x/(x**(-2/5)/x)*x)**(5/2) assuming x is positive. x**(61/6) Simplify j**1*j*j**(-2/9)*j/((j/(j*j**(-1/3)))/j*j)*j/j**(2/5)*j assuming j is positive. j**(182/45) Simplify h*h/(h/(h**14/h)*h*h)*h/(h/(h/h**12))*h**(-11)/(((h**(-4)*h)/h)/h) assuming h is positive. h**(-5) Simplify (k/(k*k**1)*k/(k/(k/(k/k**21))))**(5/16) assuming k is positive. k**(25/4) Simplify ((n**(-3/5))**(-1/4))**(-1/19) assuming n is positive. n**(-3/380) Simplify (j/j**(-2/41))/j**(-20)*(j**(2/19))**(-36) assuming j is positive. j**(13445/779) Simplify (k**(-33)*k**(-2/15))/((k**(-1/27)/k)/k**(-32)) assuming k is positive. k**(-8653/135) Simplify (((a*a/(a**(-2)*a))/a)/(((a/((a**(-3/4)/a)/a))/a)/a))/(a*a*a*a**(-6)*a)**(-2/47) assuming a is positive. a**(31/188) Simplify (q**(-3/2)/(q/q**(1/11))*q*q/(q/(q/(q/(q*q*q**(-2/3)/q))))*q/(q/((q*q**(-2/7))/q))*q)**29 assuming q is positive. q**(-4843/462) Simplify (o**(2/5)/o*o*o)**(-13)/((o**(-4/5)*o*o)/o**11) assuming o is positive. o**(-42/5) Simplify ((c*c/(c**(-31)/c)*c)/c**(2/7))**(1/4) assuming c is positive. c**(243/28) Simplify h/(h/(h*h*h**6/h))*h*h**(1/3)*h**(2/37)*h*h*h**(-4)*h*h*h*h assuming h is positive. h**(1153/111) Simplify ((v/v**(2/5))**(1/56)/(v*v*v/(v**(-3)*v)*v*v**(-5/4)*v*v))**41 assuming v is positive. v**(-77367/280) Simplify ((t**(-2/5)*t*t/(t*t**(2/5)*t))/(t**(-1))**(12/7))**(-10) assuming t is positive. t**(-64/7) Simplify ((f**(3/5))**(3/5))**(1/10) assuming f is positive. f**(9/250) Simplify (r**(1/4)/r)**(-2/13)*(r**(2/9))**(-2/107) assuming r is positive. r**(2785/25038) Simplify (b**(-2/5)*b*b**(-2/9))/(b*b**(-1/33))**(-2/11) assuming b is positive. b**(3017/5445) Simplify ((y**(2/5)/y)/y*y)**(3/7)/((y**(2/27)/y)/(y**(1/3)*y)) assuming y is positive. y**(1892/945) Simplify (d**(-1/13))**2/(d**(2/21))**(7/5) assuming d is positive. d**(-56/195) Simplify o*o/o**(-16)*o**(-2)*(o/(((o*o**(1/2))/o)/o))**(-23/2) assuming o is positive. o**(-5/4) Simplify (t**(-2/11)*t/(t/(t*(t**(-24)*t)/t)))/(t*t**(-1/2)*t*t**(-8)*t) assuming t is positive. 
t**(-389/22) Simplify (g*g**(-2/21)/g)/(g*g**8*g*g)*(((g*g/(g*g*g**(-2/35)*g))/g)/g*g)/(g/g**(2/25)) assuming g is positive. g**(-7328/525) Simplify ((i**(-11/6)*i*i)/(i/i**(-11)))/(i*i**1*i*i/(i/i**(2/57))) assuming i is positive. i**(-565/38) Simplify d**(2/9)*(d/d**(11/3))/d*d*((d*d*d/(d/d**(-11)))/d)/(d**(4/5)/d*d) assuming d is positive. d**(-596/45) Simplify (t**(-5/8)*t**8)/(((t*t**(9/5))/t)/t*t*t**(-23)) assuming t is positive. t**(1143/40) Simplify ((w**(-11)*w)/w*w*w**3/w)**(4/11) assuming w is positive. w**(-32/11) Simplify ((h/((h/(h**(-1/2)/h))/h))**20*(h*h**(-3))/((h/(h*h**(-2)*h))/h))**(-26) assuming h is positive. h**312 Simplify ((r*r*r/(r/(r/(r/r**(-2/91))*r)))/r*r**(-2/21))**(-2/97) assuming r is positive. r**(-1028/26481) Simplify y**(-3)/y**(2/9)*(y*(y/y**(3/7))/y)**(-5/9) assuming y is positive. y**(-223/63) Simplify (w/(w*(w**7/w)/w*w))**20/(w**(-2/9))**(-3/10) assuming w is positive. w**(-1801/15) Simplify ((b*b**21*b)/b*b**(-9/4))/(b/b**(7/3))**(2/21) assuming b is positive. b**(5009/252) Simplify ((t/(t*t**(-2/13)))**(3/10)/(t**(2/13))**(3/4))**12 assuming t is positive. t**(-54/65) Simplify ((j/(j/(j/j**17)))**(-5/2))**(-5) assuming j is positive. j**(-200) Simplify (i**(8/3)*i**(3/23))/(i**16)**34 assumin
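The identities above can be checked mechanically. As an illustration (our own check, not part of the original exercise set), the opening exercise (((z**9/z)/z**(-17))**(4/5))**21 = z**420 can be verified with SymPy under the stated assumption that the symbol is positive:

    from sympy import symbols, Rational, powsimp, simplify

    # Verify the first exercise: (((z**9/z)/z**(-17))**(4/5))**21 == z**420 for z > 0.
    z = symbols('z', positive=True)
    expr = (((z**9 / z) / z**(-17))**Rational(4, 5))**21
    print(powsimp(expr))             # z**420
    print(simplify(expr - z**420))   # 0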
{ "pile_set_name": "DM Mathematics" }
--- abstract: 'We give a detailed analysis of the proportion of elements in the symmetric group on $n$ points whose order divides $m$, for $n$ sufficiently large and $m\geq n$ with $m=O(n)$.' address: | School of Mathematics and Statistics,\ University of Western Australia,\ Nedlands, WA 6907\ Australia. author: - 'Alice C. Niemeyer' - 'Cheryl E. Praeger' date: '31 March 2006.' title: On Permutations of Order Dividing a Given Integer --- Introduction ============ The study of orders of elements in finite symmetric groups goes back at least to the work of Landau [@Landau09 p. 222] who proved that the maximum order of an element of the symmetric group $S_n$ on $n$ points is $e^{(1+o(1))(n\log n)^{1/2}}$. Erdős and Turán took a probabilistic approach in their seminal work in the area, proving in [@ErdosTuran65; @ErdosTuran67] that, for a uniformly distributed random element $g\in S_n$, the random variable $\log|g|$ is normally distributed with mean $(1/2) \log^2n$ and standard deviation $\frac{1}{\sqrt{3}} \log^{3/2}(n)$. Thus most permutations in $S_n$ have order considerably larger than $O(n)$. Nevertheless, permutations of order $O(n)$, that is, of order at most $cn$ for some constant $c$, have received some attention in the literature. Let $P(n,m)$ denote the proportion of permutations $g\in S_n$ which satisfy $g^m = 1$, that is to say, $|g|$ divides $m$. In 1952 Chowla, Herstein and Scott [@Chowlaetal52] found a generating function and some recurrence relations for $P(n,m)$ for $m$ fixed, and asked for its asymptotic behaviour for large $n$. Several years later, Moser and Wyman [@MoserWyman55; @MoserWyman56] derived an asymptotic for $P(n,m)$, for a fixed prime number $m$, expressing it as a contour integral. Then in 1986, Wilf [@Wilf86] obtained explicitly the limiting value of $P(n,m)$ for an arbitrary fixed value of $m$ as $n\rightarrow\infty$, see also the paper [@Volynets] of Volynets. Other authors have considered equations $g^m=h$, for a fixed integer $m$ and $h\in S_n$, see [@BouwerChernoff85; @GaoZha; @MineevPavlov76a; @MineevPavlov76b]. However in many applications, for example in [@Bealsetal03], the parameters $n$ and $m$ are linearly related, so that $m$ is unbounded as $n$ increases. For the special case where $m=n$, Warlimont [@Warlimont78] showed in 1978 that most elements $g\in S_n$ satisfying $g^n=1$ are $n$-cycles, namely he proved that $P(n,n)$, for $n$ sufficiently large, satisfies $$\frac{1}{n} + \frac{2c}{n^2} \le P(n,n) \le \frac{1}{n} + \frac{2c}{n^2} + O\left(\frac{1}{n^{3-o(1)}}\right)$$ where $c =1$ if $n$ is even and $c=0$ if $n$ is odd. Note that the proportion of $n$-cycles in $S_n$ is $1/n$ and, if $n$ is even, the proportion of elements that are a product of two cycles of length $n/2$ is $2/n^2$. Warlimont’s result proves in particular that most permutations satisfying $g^n=1$ are $n$-cycles. More precisely it implies that the conditional probability that a random element $g\in S_n$ is an $n$-cycle, given that $g^n =1$, lies between $1-2c n^{-1} - O(n^{-2+o(1)})$ and $1-2c n^{-1} + O(n^{-2})$. The main results of this paper, Theorems \[leadingterms\] and \[bounds\], generalise Warlimont’s result, giving a detailed analysis of $P(n,m)$ for large $n$, where $m=O(n)$ and $m\geq n$. For this range of values of $n$ and $m$, we have $rn\leq m<(r+1)n$ for some positive integer $r$, and we analyse $P(n,m)$ for $m$ in this range, for a fixed value of $r$ and $n\rightarrow\infty$. 
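As an aside that is not part of the paper's argument, $P(n,m)$ can be computed exactly for small $n$ by conditioning on the cycle containing a fixed point, much in the spirit of the recurrences of Chowla, Herstein and Scott mentioned above. The following minimal Python sketch (the function names are ours, not the paper's) does this and compares $P(n,n)$ with the $1/n+2c/n^2$ approximation quoted from Warlimont.

    from math import factorial

    def count_order_dividing(n, m):
        # a[k] = number of permutations of k points all of whose cycle
        # lengths divide m; condition on the cycle containing a fixed
        # point: choose its d-1 remaining points and its cyclic order,
        # then arrange the other k-d points recursively.
        divisors = [d for d in range(1, n + 1) if m % d == 0]
        a = [1] + [0] * n
        for k in range(1, n + 1):
            a[k] = sum(factorial(k - 1) // factorial(k - d) * a[k - d]
                       for d in divisors if d <= k)
        return a[n]

    def P(n, m):
        return count_order_dividing(n, m) / factorial(n)

    for n in (10, 11, 20, 21):
        c = 2 if n % 2 == 0 else 0   # coefficient of 1/n**2 in Warlimont's estimate
        print(n, P(n, n), 1 / n + c / n**2)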
It turns out that the kinds of elements that make the largest contribution to $P(n,m)$ depend heavily on the arithmetic nature of $m$, for example, on whether $m$ is divisible by $n$ or by $r+1$. We separate out several cases in the statement of our results. Theorem \[leadingterms\] deals with two cases for which we give asymptotic expressions for $P(n,m)$. The first of these reduces in the case $m=n$ to Warlimont’s theorem [@Warlimont78] (modulo a small discrepancy in the error term). For other values of $m$ lying strictly between $rn$ and $(r+1)n$ we obtain in Theorem \[bounds\] only an upper bound for $P(n,m)$, since the exact value depends on both the arithmetic nature and the size of $m$ (see also Remark \[remark:leadinterms\]). \[leadingterms\] Let $n$ and $r$ be positive integers. Then for a fixed value of $r$ and sufficiently large $n$, the following hold. 1. $\displaystyle{ P(n,rn)=\frac{1}{n}+\frac{c(r)}{n^2} +O\left(\frac{1}{n^{2.5-o(1)}}\right) }$ where $c(r)=\sum (1+\frac{i+j}{2r})$ and the sum is over all pairs $(i,j)$ such that $1\leq i,j\leq r^2, ij =r^2,$ and both $r+i, r+j$ divide $rn$. In particular $c(1)=0$ if $n$ is odd, and $2$ if $n$ is even. 2. If $r=t!-1$ and $m=t!(n-t)=(r+1)n-t\cdot t!$, then $$P(n,m)=\frac{1}{n}+\frac{t+c'(r)}{n^2}+O\left(\frac{1}{n^{2.5-o(1)}} \right)$$ where $c'(r)=\sum(1+\frac{i+j-2}{2(r+1)})$ and the sum is over all pairs $(i,j)$ such that $1< i,j\leq (r+1)^2, (i-1)(j-1) =(r+1)^2,$ and both $r+i, r+j$ divide $m$. \[bounds\] Let $n,m,r$ be positive integers such that $rn< m<(r+1)n$, and ${{\delta}}$ a real number such that $0<{{\delta}}\leq 1/4$. Then for a fixed value of $r$ and sufficiently large $n$, $$P(n,m)\leq \frac{\alpha.(r+1)}{m}+\frac{k(r)} {n^2}+ O\left(\frac{1}{n^{2.5-2{{\delta}}}}\right)$$where $k(r) = \frac{4(r+3)^4}{r^2}$ and $$\alpha=\left\{\begin{array}{ll} 1&\mbox{if $r+1$ divides $m$ and $n-\frac{m}{r+1} < \frac{m}{2(r+1)(r+2)-1}$}\\ 0&\mbox{otherwise.} \end{array}\right.$$ \[remark:leadinterms\] \(a) In Theorem \[leadingterms\](a), the leading term $1/n$ is the proportion of $n$-cycles, while the proportion of permutations containing an $(n-t)$-cycle is $\frac{1}{n-t} = \frac{1}{n} + \frac{t}{n^2} + O(\frac{1}{n^3})$, which contributes to the first two terms in Theorem \[leadingterms\](b). The terms $\frac{c(r)}{n^2}$ and $\frac{c'(r)}{n^2}$ correspond to permutations in $S_n$ that have two long cycles, and these have lengths $\frac{m} {r+i}$ and $\frac{m}{r+j}$, for some $(i,j)$ satisfying the conditions in Theorem \[leadingterms\] (a) or (b) respectively, (where $m=rn$ in part (a)). \(b) In Theorem \[bounds\], if $r+1$ divides $m$ and $n-m/(r+1)<\frac{m}{2(r+1)(r+2)-1}$, then the term $(r+1)/m$ comes from elements containing a cycle of length $m/(r+1)$. The term $\frac{k(r)}{n^2}$ corresponds to permutations with exactly two ‘large’ cycles. More details are given in Remark \[rem:general\]. Our interest in $P(n,m)$ arose from algorithmic applications concerning finite symmetric groups. For example, $n$-cycles in $S_n$ satisfy the equation $g^n=1$, while elements whose cycle structure consists of a 2-cycle and a single additional cycle of odd length $n-t$, where $t = 2$ or $3$, satisfy the equation $g^{2(n-t)} =1$. For an element $g$ of the latter type we can construct a transposition by forming the power $g^{n-t}$. 
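As an aside, the powering trick just described is easy to check computationally; the sketch below uses SymPy's permutation arithmetic on a small example of our own choosing ($n=9$, $t=2$), not one taken from the paper.

    from sympy.combinatorics import Permutation

    # n = 9, t = 2: g is a 2-cycle together with a 7-cycle (n - t = 7 is odd),
    # so g satisfies g**(2*(n - t)) = 1, and g**(n - t) kills the long cycle
    # while leaving the 2-cycle intact.
    g = Permutation([[0, 1], [2, 3, 4, 5, 6, 7, 8]])
    h = g**7
    print(h.cyclic_form)   # [[0, 1]], a transposition
    print(h.order())       # 2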
In many cases the group $S_n$ is not given as a permutation group in its natural representation, and, while it is possible to test whether an element $g$ satisfies one of these equations, it is often impossible to determine its cycle structure with certainty. It is therefore important to have lower bounds on the conditional probability that a random element $g$ has a desired cycle structure, given that it satisfies an appropriate equation. Using Theorem \[leadingterms\], we obtained the following estimates of various conditional probabilities. \[cdnlprobs1\] Let $r, n$ be positive integers and let $g$ be a uniformly distributed random element of $S_n$. Then for a fixed value of $r$ and sufficiently large $n$, the following hold, where $c(r)$ and $c'(r)$ are as in Theorem $\ref{leadingterms}$. 1. The conditional probability $P$ that $g$ is an $n$-cycle, given that $|g|$ divides $rn$, satisfies $$\begin{aligned} 1-\frac{c(r)}{n}-O\left(\frac{1} {n^{1.5-o(1)}}\right)&\leq& P \leq 1-\frac{c(r)}{n}+O\left(\frac{1} {n^{2}}\right).\\\end{aligned}$$ 2. If $r=t!-1$, then the conditional probability $P$ that $g$ contains an $(n-t)$-cycle, given that $|g|$ divides $t!(n-t)$, satisfies $$\begin{aligned} 1-\frac{c'(r)}{n}-O\left(\frac{1} {n^{1.5-o(1)}}\right)&\leq& P \leq 1-\frac{c'(r)}{n}+O\left(\frac{1} {n^{2}}\right).\\\end{aligned}$$ We note that Theorem \[leadingterms\] improves the upper bound of $(1+o(1))/n$ obtained in [@Bealsetal03 Theorem 3.7], while Corollary \[cdnlprobs1\] improves the corresponding lower bound of $1-o(1)$ of [@Bealsetal03 Theorem 1.3(a)]. These results have been developed and refined further in [@NiemeyerPraeger05b] to derive explicit ‘non-asymptotic’ bounds that hold for all $n$ and can be applied directly to improve the recognition algorithms for $S_n$ and $A_n$ in [@Bealsetal03]. [**Commentary on our approach**]{} Warlimont’s proof in [@Warlimont78] of an upper bound for $P(n,n)$ and the proof of [@Bealsetal03 Theorem 3.7] by Beals and Seress of an upper bound for $P(n,m)$ for certain values of $m$, rely on dividing the elements of $S_n$ into disjoint unions of smaller sets. Warlimont divides the elements according to how many ‘large’ cycles a permutation contains. Fix a real number $s$ such that $1/2 < s < 1$. We say that a cycle of a permutation in $S_n$ is *$s$-small* if its length is strictly less than $n^s$, and is *$s$-large* otherwise. Beals and Seress divide the elements according to the number of cycles in which three specified points lie. Both strategies are sufficient to prove Warlimont’s result or the slightly more general results of [@Bealsetal03 Theorem 3.7]. However, neither is sufficient to prove the general results in this paper. In particular, Warlimont’s approach breaks down when trying to estimate the proportion of elements with no or only one large cycle, which is perhaps why no progress has been made since his paper [@Warlimont78] towards answering Chowla, Herstein and Scott’s original question about the asymptotic behaviour of $P(n,m)$ for large $n$. One of the key ideas that allowed us to generalise Warlimont’s work is the insight that the number of permutations which contain no $s$-large cycles can be estimated by considering their behaviour on three specified points. Another important strategy is our careful analysis of elements containing only one large cycle by separating out divisors of $m$ which are very close to $n$. We regard Theorem \[lem:props\] below as the main outcome of the first stage of our analysis. 
It is used in the proof of Theorem \[leadingterms\]. The statement of Theorem \[lem:props\] involves the number $d(m)$ of positive divisors of $m$, and the fact that $d(m)=m^{o(1)}$, see Notation \[notation\] (c). It estimates the proportion $P_0(n,m)$ of elements of $S_n$ of order dividing $m$ and having no $s$-large cycles. \[lem:props\] Let $n,m$ be positive integers such that $m\geq n$, and let $s$ be a positive real number such that $1/2<s<1$. Then, with $P_0(n,m)$ as defined above, there is a constant $c$ such that $$P_0(n,m)<\frac{c d(m)m^{2s}}{n^3}=O\left(\frac{m^{2s+o(1)}}{n^3}\right).$$ Theorem \[lem:props\] is proved in Section \[sec:proportions\] and the other results are proved in Section \[sec:stheo\]. Proof of Theorem \[lem:props\] {#sec:proportions} ============================== In this section we introduce some notation that will be used throughout the paper, and we prove Theorem \[lem:props\]. Note that the order $|g|$ of a permutation $g \in S_n$ divides $m$ if and only if the length of each cycle of $g$ divides $m$. Thus $P(n,m)$ is the proportion of elements in $S_n$ all of whose cycle lengths divide $m$. As indicated in the introduction, we estimate $P(n,m)$ by partitioning this proportion in various ways. Sometimes the partition is according to the number of large cycle lengths, and at other times it is defined in terms of the cycles containing certain points. We specify these partitions, and give some other notation, below. \[notation\] The numbers $n,m$ are positive integers, and the symmetric group $S_n$ acts naturally on the set $\Omega=\{1,2,\dots,n\}$. 1. $s$ is a real number such that $1/2 < s < 1$. A divisor $d$ of $m$ is said to be $s$-*large* or $s$-*small* if $d \geq m^{s}$ or $d < m^s$, respectively; $D_\ell$ and $D_s$ denote the sets of all $s$-large and $s$-small divisors $d$ of $m$, respectively, such that $d \le n$. 2. For $g\in S_n$ with order dividing $m$, a $g$-cycle of length $d$ is called $s$-*large* or $s$-*small* according as $d$ is an $s$-large or $s$-small divisor of $m$. 3. $d(m)$ denotes the number of positive divisors of $m$ and $\delta$ and $c_\delta$ are positive real numbers such that $\delta < s$ and $d(m) \le c_\delta m^{\delta}$ for all $m \in {\bf{N}}$. 4. The following functions of $n$ and $m$ denote the proportions of elements $g\in S_n$ of order dividing $m$ and satisfying the additional properties given in the last column of the table below. --------------------- --------------------------------------------- $P_0(n,m)$ all $g$-cycles are $s$-small ${P_0^{(1)}}(n,m)$ all $g$-cycles are $s$-small and $1,2,3$ lie in the same $g$-cycle, ${P_0^{(2)}}(n,m)$ all $g$-cycles are $s$-small and $1,2,3$ lie in exactly two $g$-cycles ${P_0^{(3)}}(n,m)$ all $g$-cycles are $s$-small and $1,2,3$ lie in three different $g$-cycles $P_1(n,m)$ $g$ contains exactly one $s$-large cycle $P_2(n,m)$ $g$ contains exactly two $s$-large cycles $P_3(n,m)$ $g$ contains exactly three $s$-large cycles ${P_{\geq 4}}(n,m)$ $g$ contains at least four $s$-large cycles --------------------- --------------------------------------------- With respect to part (c) we note, see [@NivenZuckermanetal91 pp. 
395-396], that for each $\delta > 0$ there exists a constant $c_\delta > 0$ such that $d(m) \le c_\delta m^\delta$ for all $m \in {\bf{N}}.$ This means that the parameter $\delta$ can be any positive real number and in particular that $d(m) = m^{o(1)}.$ Note that $$\label{eq-pi} P_0(n,m) = {P_0^{(1)}}(n,m) + {P_0^{(2)}}(n,m) + {P_0^{(3)}}(n,m)$$ and $$\label{eq-qi} P(n,m) = P_0(n,m) + P_1(n,m) + P_2(n,m) + P_3(n,m)+{P_{\geq 4}}(n,m).$$ We begin by deriving recursive expressions for the $P_0^{(i)}(n,m)$. \[lem:theps\] Using Notation $\ref{notation}$, the following hold, where we take $P_0(0,m) = 1.$ 1. $\displaystyle{{P_0^{(1)}}(n,m) = \frac{(n-3)!}{n!} \sum_{d \in D_s,\ d\ge 3}{(d-1)(d-2)}P_0(n-d,m),}$ 2. $\displaystyle{ {P_0^{(2)}}(n,m) = \frac{3(n-3)!}{n!}\sum_{\stackrel{d_1, d_2 \in D_s }{2\le d_2,\ d_1+d_2\le n}} (d_2-1)P_0(n-d_1-d_2,m)}$, 3. $\displaystyle{ {P_0^{(3)}}(n,m) = \frac{(n-3)!}{n!} \sum_{\stackrel{d_1,d_2,d_3\in D_s }{d_1+d_2+d_3 \le n}} P_0(n-d_1-d_2 -d_3,m)}$. We first compute ${P_0^{(1)}}(n,m)$, the proportion of those permutations $g\in S_n$ of order dividing $m$ with all cycles $s$-small, for which the points $1, 2, 3$ are contained in one $g$-cycle, $C$ say, of length $d$ with $d \in D_s$ and $d\geq 3.$ We can choose the remainder of the support set of $C$ in $\binom{n-3}{d-3}$ ways and then the cycle $C$ in $(d-1)!$ ways. The rest of the permutation $g$ can be chosen in $P_0(n-d,m)(n-d)!$ ways. Thus, for a given $d$, the number of such elements is $(n-3)!(d-1)(d-2)P_0(n-d,m)$. We obtain the proportion ${P_0^{(1)}}(n,m)$ by summing over all $d\in D_s$ with $d\geq3$, and then dividing by $n!$, so part (a) is proved. Next we determine the proportion ${P_0^{(2)}}(n,m)$ of those permutations $g\in S_n$ of order dividing $m$ with all cycles $s$-small, for which one of the points $1, 2, 3$ is contained in a $g$-cycle $C_1$, and the other two of these points are contained in a different $g$-cycle $C_2$. Let $d_1$ and $d_2$ denote the lengths of the cycles $C_1$ and $C_2$, respectively, so $d_1, d_2\in D_s$ and $d_2 \ge 2.$ Firstly we choose the support set of $C_1$ in $\binom{n-3}{d_1-1}$ ways and the cycle $C_1$ in $(d_1-1)!$ ways. Secondly we choose the support set of $C_2$ in $\binom{n-d_1 -2}{d_2-2}$ ways and the cycle $C_2$ in $(d_2-1)!$ ways. Finally, the rest of the permutation $g$ is chosen in $P_0(n-d_1 -d_2,m)(n-d_1-d_2)!$ ways. Thus, for a given pair $d_1, d_2$, the number of these elements is $(n-3)!(d_2-1)P_0(n-d_1-d_2,m)$. Since there are three choices for $C_1\cap\{ 1, 2, 3\}$, we have $$\begin{aligned} {P_0^{(2)}}(n,m) & = & \frac{3(n-3)!}{n!}\sum_{\stackrel{d_1, d_2 \in D_s}{2\le d_2,\ d_1+d_2 \le n}} (d_2-1) P_0(n-d_1-d_2,m). \\ \end{aligned}$$ Finally we consider the proportion ${P_0^{(3)}}(n,m)$ of those permutations $g\in S_n$ of order dividing $m$ with all cycles $s$-small, for which each one of the points $1, 2, 3$ is contained in a separate $g$-cycle, say $C_i$ contains $i$ and $C_i$ has length $d_i \in D_s$. We can choose, in order, the support set of $C_1$ in $\binom{n-3}{d_1-1}$ ways and the cycle $C_1$ in $(d_1-1)!$ ways, the support set of $C_2$ in $\binom{n-d_1 -2}{d_2-1}$ ways and the cycle $C_2$ in $(d_2-1)!$ ways, the support set of $C_3$ in $\binom{n-d_1 -d_2 -1}{d_3-1}$ ways and the cycle $C_3$ in $(d_3-1)!$ ways, and the rest of the permutation in $P_0(n-d_1-d_2-d_3,m)(n-d_1-d_2-d_3)!$ ways. The expression for ${P_0^{(3)}}(n,m)$ in part (c) now follows. Next we derive expressions for the $P_i(n,m)$ and ${P_{\geq 4}}(n,m)$. 
\[lem:qi\] Using Notation $\ref{notation}$, and writing $P_0(0,m)=1$, 1. ${\displaystyle P_0(n,m) = \frac{1}{n}\sum_{d\in D_s} P_0(n-d, m),}$ 2. ${\displaystyle P_1(n,m) = \sum_{d\in D_\ell } \frac{1}{d} P_0(n-d, m)},$ 3. ${\displaystyle P_{2}(n,m) = \frac{1}{2} \sum_{d_1, d_2\in D_\ell } \frac{1}{d_1d_2} P_0(n-d_1-d_2, m)},$ where the sum is over all ordered pairs $(d_1, d_2)$ with $d_1 + d_2 \le n$. 4. ${\displaystyle P_3(n,m) = \frac{1}{6}\sum_{d_1, d_2, d_3 \in D_\ell} \frac{1}{d_1d_2d_3} P_0(n-d_1-d_2 - d_3, m)}$, where the sum is over all ordered triples $(d_1,d_2,d_3)$ with $d_1 + d_2 + d_3 \le n$. 5. ${\displaystyle {P_{\geq 4}}(n,m) \leq \frac{1}{24}\sum_{d_1, d_2, d_3,d_4 \in D_\ell} \frac{1}{d_1d_2d_3d_4} P(n-d_1-d_2 - d_3-d_4, m)}$, where the sum is over all ordered $4$-tuples $(d_1,d_2,d_3,d_4)$ with $d_1 + d_2 + d_3+d_4 \le n$. For each permutation in $S_n$ of order dividing $m$ and all cycles $s$-small, the point 1 lies in a cycle of length $d$, for some $d\in D_s$. For this value of $d$ there are $\binom{n-1} {d-1}(d-1)!$ choices of $d$-cycles containing 1, and $P_0(n-d,m)(n-d)!$ choices for the rest of the permutation. Summing over all $d\in D_s$ yields part (a). The proportion of permutations in $S_n$ of order dividing $m$ and having exactly one $s$-large cycle of length $d$ is $\binom{n}{d}(d-1)! P_0(n-d,m) (n-d)!/n!$. Summing over all $d\in D_\ell$ yields part (b). In order to find the proportion of elements in $S_n$ of order dividing $m$ and having exactly two $s$-large cycles we count triples $(C_1, C_2, g)$, where $C_1$ and $C_2$ are cycles of lengths $d_1$ and $d_2$ respectively, $d_1, d_2\in D_\ell$, $g\in S_n$ has order dividing $m$, $g$ contains $C_1$ and $C_2$ in its disjoint cycle representation, and all other $g$-cycles are $s$-small. For a given $d_1, d_2$, we have $\binom{n}{d_1}(d_1-1)!$ choices for $C_1$, then $\binom{n-d_1}{d_2}(d_2-1)!$ choices for $C_2$, and then the rest of the element $g$ containing $C_1$ and $C_2$ can be chosen in $P_0(n-d_1-d_2,m)(n-d_1-d_2)!$ ways. Thus the ordered pair $(d_1,d_2)$ contributes $\frac{n!}{d_1d_2}P_0(n-d_1-d_2,m)(n-d_1-d_2)!$ triples, and each element $g$ with the properties required for part (c) contributes exactly two of these triples. Hence, summing over ordered pairs $d_1, d_2\in D_\ell$ yields (c). Similar counts are used for parts (d) and (e). For $P_3(n,m), {P_{\geq 4}}(n,m)$ we count 4-tuples $(C_1, C_2,C_3, g)$ and $5$-tuples $(C_1,C_2,C_3,C_4,g)$ respectively, such that, for each $i$, $C_i$ is a cycle of length $d_i$ for some $d_i\in D_\ell$, $g\in S_n$ has order dividing $m$, and $g$ contains all the cycles $C_i$ in its disjoint cycle representation. The reason we have an inequality for ${P_{\geq 4}}(n,m)$ is that in this case each $g$ occurring has at least four $s$-large cycles and hence occurs in at least 24 of the 5-tuples, but possibly more. We complete this section by giving a proof of Theorem \[lem:props\]. The ideas for its proof were developed from arguments in Warlimont’s paper [@Warlimont78]. \[newPs\] Let $m\geq n\geq3$, and let $s, {{\delta}}$ be as in Notation [\[notation\]]{}. Then $$P_0(n,m) < \frac{(1 + 3c_\delta + c_\delta^2)d(m)m^{2s}}{n(n-1)(n-2)}< \frac{c'd(m)m^{2s}}{n^3}= O\left(\frac{m^{2s+\delta}}{n^3}\right)$$ where, if $n\geq6$, we may take $$c'=\left\{\begin{array}{ll} 2(1 + 3c_\delta + c_\delta^2)&\mbox{for any $m\geq n$}\\ 10&\mbox{if $m\geq c_\delta^{1/(s-\delta)}$.} \end{array}\right.$$ In particular Theorem [\[lem:props\]]{} is true. 
Moreover, if in addition $n\geq m^s+cn^a$ for some positive constants $a,c$ with $a\leq 1$, then $P_0(n,m)=O\left(\frac{m^{2s+2{{\delta}}}}{n^{1+3a}}\right)$. First assume only that $m\geq n\geq3$. Let $D_s$, and $P_0^{(i)}(n,m)$, for $i = 1, 2, 3$, be as in Notation \[notation\]. By (\[eq-pi\]), $P_0(n,m)$ is the sum of the $P_0^{(i)}(n,m)$. We first estimate ${P_0^{(1)}}(n,m).$ By Lemma \[lem:theps\] (a), and using the fact that $d<m^s$ for all $d\in D_s$, $${P_0^{(1)}}(n,m) \le\frac{(n-3)!}{n!} \sum_{\stackrel{d \in D_s}{d\ge 3}}{(d-1)(d-2)}< \frac{d(m) m^{2s}}{n(n-1)(n-2)}.$$ Similarly, by Lemma \[lem:theps\] (b), $$\begin{aligned} {P_0^{(2)}}(n,m) & < & \frac{3(n-3)!}{n!}\sum_{d_1, d_2 \in D_s} (d_2-1) \le \frac{3d(m)^2m^{s}}{n(n-1)(n-2)}\end{aligned}$$ and by Lemma \[lem:theps\] (c), $$\begin{aligned} {P_0^{(3)}}(n,m) &<& \frac{(n-3)!}{n!} \sum_{d_1,d_2,d_3\in D_s} 1 \le \frac{d(m)^3}{n(n-1)(n-2)}.\\\end{aligned}$$ Thus, using the fact noted in Notation \[notation\] that $d(m) \le c_\delta m^\delta$, $$\begin{aligned} P_0(n,m) & \le & \frac{d(m) \left( m^{2s} +3d(m)m^{s} + d(m)^2\right) }{n(n-1)(n-2)} \\ &\le&\frac{d(m)m^{2s}\left( 1 +3c_\delta m^{\delta-s} + (c_\delta m^{\delta-s})^2\right)}{ n(n-1)(n-2)}< \frac{c'd(m) m^{2s}}{n^3}.\end{aligned}$$ To estimate $c'$ note first that, for $n\geq6$, $n(n-1)(n-2)> n^3/2$. Thus if $n\geq6$ then, for any $m\geq n$ we may take $c'= 2(1 + 3c_\delta + c_\delta^2).$ If $m\geq c_\delta^{1/(s-\delta)}$, then $c_\delta m^{\delta-s}\leq 1$ and so we may take $c'=10$. Theorem \[lem:props\] now follows since $d(m)=m^{o(1)}$. Now assume that $n\geq m^s+cn^a$ for some positive constants $c$ and $a$. By Lemma \[lem:qi\], $$P_0(n,m)= \frac{1}{n}\sum_{d\in D_s}P_0(n-d, m).$$ For each $d\in D_s$ we have $m>n-d\geq n-m^s\geq cn^a$, and hence applying Theorem \[lem:props\] (which we have just proved), $$P_0(n-d,m) < \frac{c'd(m)m^{2s}}{(n-d)^3} \leq \frac{c'd(m) m^{2s}}{c^3 n^{3a}}.$$ Thus, $P_0(n,m) \leq \frac{d(m)}{n} \left(\frac{c'd(m)m^{2s}}{c^3n^{3a}} \right)\le \frac{c'c_\delta^2m^{2s + 2\delta}}{c^3n^{1+3a}}$. Proof of Theorem \[leadingterms\] {#sec:stheo} ================================= First we determine the ‘very large’ divisors of $m$ that are at most $n$. \[lem:divat\] Let $r, m$ and $n$ be positive integers such that $rn\le m < (r+1)n$. 1. If $d$ is a divisor of $m$ such that $d \le n$, then one of the following holds: 1. $d=n = \frac{m}{r}$, 2. $d = \frac{m}{r+1}$ so that $\frac{r}{r+1}n \le d < n$, 3. $d \le \frac{m}{r+2}<\frac{r+1}{r+2}n$. 2. Moreover, if $d_1, d_2$ are divisors of $m$ for which $$d_1\le d_2 \le \frac{m}{r+1}\quad \mbox{and}\quad n \ge d_1 + d_2 > \frac{m(2r+3)}{2(r+1)(r+2)},$$ then $d_1=\frac{m}{c_1}, d_2= \frac{m}{c_2}$, where $c_1, c_2$ divide $m$, and satisfy $c_2 \le 2r+3$, and either $r+2\leq c_2 \le c_1 < 2(r+1)(r+2)$, or $c_2=r+1$, $c_1\geq r(r+1)$. As $d$ is a divisor of $m$ there is a positive integer $t$ such that $d = \frac{m}{t}$. Now $\frac{m}{t} \le n \le \frac{m}{r}$ and therefore $r \le t.$ If $r = t$ then $r$ divides $m$ and $d = \frac{m}{r} \le n$, and since also $rn \le m$ it follows that $d = \frac{m}{r}=n$ and (i) holds. If $t \ge r+2$ then (iii) holds. Finally, if $t=r+1$, then $d = \frac{m}{r+1}$ and $\frac{r}{r+1}n \le \frac{m}{r+1} < n$ and hence (ii) holds. Now we prove the last assertion. Suppose that $d_1, d_2$ are divisors of $m$ which are at most $ \frac{m}{r+1}$, and such that $d_1\leq d_2$ and $n\geq d_1 + d_2 > \frac{m(2r+3)}{2(r+1)(r+2)}$. 
Then, as $d_1, d_2$ divide $m$, there are integers $c_1, c_2$ such that $d_1 = m/c_1$ and $d_2 = m/c_2.$ Since $d_i \le m/(r+1)$ we have $c_i \ge r+1$ for $i = 1,2$, and since $d_1\le d_2$ we have $c_1\ge c_2$. Now $m/r \ge n \ge d_1 + d_2 > \frac{m(2r+3)}{2(r+1)(r+2)}$, and hence $1/r \ge 1/c_1 + 1/c_2 > \frac{2r+3}{2(r+1)(r+2)}$. If $c_2 \ge 2(r+2)$ then, as $c_1\ge c_2$, we would have $1/c_1 + 1/c_2 \le 1/(r+2)$, which is not the case. Thus $r+1 \le c_2 \le 2r+3.$ If $c_2\geq r+2$, then $$\frac{1}{c_1}> \frac{2r+3}{2(r+1)(r+2)} - \frac{1}{c_2} \ge \frac{2r+3}{2(r+1)(r+2)} - \frac{1}{r+2} = \frac{1}{2(r+1)(r+2)}$$ and hence $c_1 < 2(r+1)(r+2)$ as in the statement. On the other hand, if $c_2=r+1$, then $$\frac{1}{c_1}\leq \frac{n}{m}-\frac{1}{c_2}\leq \frac{1}{r}-\frac{1}{r+1}=\frac{1}{r(r+1)}$$ so $c_1\geq r(r+1)$. The next result gives our first estimate of an upper bound for the proportion $P(n,m)$ of elements in $S_n$ of order dividing $m$. Recall our observation that the parameter $\delta$ in Notation \[notation\](c) can be any positive real number; in Proposition \[prop:general\] we will restrict to $\delta \le s-\frac{1}{2}.$ Note that the requirement $rn\leq m<(r+1)n$ implies that $\frac{n}{r+1}\leq n-\frac{m}{r+1}\leq \frac{m}{r(r+1)}$; the first case of Definition \[def:kr\] (b) below requires an upper bound of approximately half this quantity. \[def:kr\] Let $r,\, m,\, n$ be positive integers such that $rn\le m < (r+1)n$. Let $1/2<s\leq 3/4$ and $0<{{\delta}}\leq s-\frac{1}{2}$. - Let $\alpha = \begin{cases} 1 & \mbox{if\ } m=rn,\\ 0 & \mbox{otherwise.} \end{cases}$ - Let $\alpha' = \begin{cases} 1 & \mbox{if\ } (r+1) \mbox{\ divides\ } m \ \mbox{and\ }n-\frac{m}{r+1}<\frac{m}{2(r+1)(r+2)-1}, \\ 0 & \mbox{otherwise.} \end{cases}$ - Let $t(r,m,n)$ denote the number of divisors $d$ of $m$ with $\frac{m}{2r+3} \leq d\leq\frac{m}{r+1}$ such that there exists a divisor $d_0$ of $m$ satisfying - $d+d_0\leq n$ and - $\frac{m}{2(r+1)(r+2)}< d_0\leq d$. - Let $k(r,m,n)=t(r,m,n)\frac{2(r+1)(r+2)(2r+3)}{r^2}.$ \[prop:general\] Let $r,\, m,\, n, s$ and $\delta$ be as in Definition [\[def:kr\]]{}. Then, for a fixed value of $r$ and sufficiently large $n$, $$P(n,m) \le \frac{\alpha}{n}+\frac{\alpha'.(r+1)}{m}+\frac{k(r,m,n)}{n^2}+ O\left(\frac{1}{n^{1+2s-2{{\delta}}}} \right),$$ where $\alpha, \alpha', t(r, m, n)$ and $k(r, m, n)$ are as in Definition $\ref{def:kr}.$ Moreover, $t(r,m,n) \le r+3$ and $k(r,m,n) \le \frac{4(r+3)^4}{r^2} $. \[rem:general\] \(a) The term $\frac{1}{n}$, which occurs if and only if $m=rn$, corresponds to the $n$-cycles in $S_n$, and is the exact proportion of these elements. We refine the estimate for $P(n,rn)$ in Theorem \[rn\] below. \(b) The term $\frac{r+1}{m}$, which occurs only if $r+1$ divides $m$ and $n-\frac{m}{r+1}<\frac{m}{2(r+1)(r+2)}$, corresponds to permutations with order dividing $m$ and having either one or two $s$-large cycles, with one (the larger in the case of two cycles) of length $\frac{m}{r+1}$. The proportion of elements of $S_n$ containing a cycle of length $\frac{m}{r+1}$ is $\frac{r+1}{m}$, and if there exists a positive integer $d\leq n-\frac{m}{r+1}$ such that $d$ does not divide $m$, then some of these elements have a $d$-cycle and hence do not have order dividing $m$. 
Thus $\frac{r+1}{m}$ may be an over-estimate for the proportion of elements in $S_n$ (where $n-\frac{m}{r+1}<\frac{m}{2(r+1)(r+2)}$) having order dividing $m$, having exactly one $s$-large cycle of length $\frac{m}{r+1}$, and possibly one additional $s$-large cycle of length dividing $m$. However it is difficult to make a more precise estimate for this term that holds for all sufficiently large $m,n$. In Theorem \[rn\] we treat some special cases where this term either does not arise, or can be determined precisely. \(c) The term $\frac{k(r,m,n)}{n^2}$ arises as follows from permutations that have exactly two $s$-large cycles of lengths dividing $m$. For each of the $t(r,m,n)$ divisors $d$ of $m$ as in Definition \[def:kr\](c), let $d_0(d)$ be the largest of the divisors $d_0$ satisfying Definition \[def:kr\](c)(i),(ii). Note that $d_0(d)$ depends on $d$. Then $k(r,m,n)/n^2$ is an upper bound for the proportion of permutations of order dividing $m$ and having two $s$-large cycles of lengths $d$ and $d_0(d)$, for some $d$ satisfying $\frac{m}{2r+3} \leq d\leq\frac{m}{r+1}$. As in (b) this term may be an over-estimate, not only for the reason given there, but also because lower bounds for the cycle lengths $d, d_0(d)$ were used to define $k(r,m,n)$. Indeed in the case $m=rn$ we are able to obtain the exact value of the coefficient of the $\frac{1}{n^2}$ summand. We divide the estimation of $P(n,m)$ into five subcases. Recall that, by (\[eq-qi\]), $P(n,m)$ is the sum of ${P_{\geq 4}}(n,m)$ and the $P_i(n,m)$, for $i=0,1,2,3$, where these are as defined in Notation \[notation\]. We will use the recursive formulae for ${P_{\geq 4}}(n,m)$ and the $P_i(n,m)$ in Lemma \[lem:qi\], together with the expressions for $P_0(n,m)$ in Theorem \[lem:props\] and Lemma \[newPs\], to estimate these five quantities. Summing these estimates will give, by (\[eq-qi\]), our estimate for $P(n,m)$. We also use the information about divisors of $m$ in Lemma \[lem:divat\]. First we deal with $P_0(n,m)$. Since $r$ is fixed, it follows that, for sufficiently large $n$ (and hence sufficiently large $m$), we have $m^s \leq \frac{m}{r+2}$, which is less than $\frac{(r+1)n}{r+2}=n-\frac{n}{r+2}$. Thus $n>m^s+\frac{n}{r+2}$, and applying Lemma \[newPs\] with $a=1, c=\frac{1}{r+2}$, it follows that $$P_0(n,m)=O\left(\frac{m^{2s+2{{\delta}}}}{n^4}\right)=O\left(\frac{1}{n^{4-2s- 2{{\delta}}}}\right)\leq O\left(\frac{1}{n^{1+2s-2{{\delta}}}}\right)$$ since $4-2s-2{{\delta}}\geq 1+2s-2{{\delta}}$ when $s\leq 3/4$. Next we estimate $P_3(n,m)$ and ${P_{\geq 4}}(n,m)$. By Lemma \[lem:qi\], the latter satisfies ${P_{\geq 4}}(n,m)\leq \frac{1}{24}\sum\frac{1}{d_1d_2d_3d_4}$, where the summation is over all ordered 4-tuples of $s$-large divisors of $m$ whose sum is at most $n$. Thus ${P_{\geq 4}}(n,m)\leq \frac{1}{24}\,\frac{d(m)^4}{m^{4s}}= O\left(\frac{1}{n^{4s-4{{\delta}}}}\right)$. Also $$P_3(n,m)= \frac{1}{6}\sum \frac{1}{d_1d_2d_3}P_0(n-d_1-d_2-d_3,m),$$ where the summation is over all ordered triples of $s$-large divisors of $m$ whose sum is at most $n$. For such a triple $(d_1,d_2,d_3)$, if each $d_i\leq\frac{m} {4(r+1)}$, then $n-\sum d_i\geq n-\frac{3m}{4(r+1)}>\frac{n}{4}$, and so by Lemma \[newPs\], $P_0(n-\sum d_i,m)=O\left(\frac{m^{2s+{{\delta}}}}{n^{3}} \right)$. Thus the contribution of triples of this type to $P_3(n,m)$ is at most $O\left(\frac{d(m)^3m^{2s+{{\delta}}}}{m^{3s}n^3} \right)=O\left(\frac{1}{n^{3+s-4{{\delta}}}}\right)$. 
For each of the remaining triples, the maximum $d_i$ is greater than $\frac{m}{4(r+1)}$ and in particular there is a bounded number of choices for the maximum $d_i$. Thus the contribution of the remaining triples to $P_3(n,m)$ is at most $O\left(\frac{d(m)^2}{m^{1+2s}} \right)=O\left(\frac{1}{n^{1+2s-2{{\delta}}}}\right)$. It follows that $$P_3(n,m)+{P_{\geq 4}}(n,m)=O\left(\frac{1}{n^{x_3}}\right),$$ where $x_3=\min\{4s-4{{\delta}},3+s-4{{\delta}},1+2s-2{{\delta}}\}=1+2s-2{{\delta}}$ (using the fact that ${{\delta}}\leq s-\frac{1}{2}\leq \frac{1}{4}$). Now we estimate $P_2(n,m)$. By Lemma \[lem:qi\], $$P_{2}(n,m)= \frac{1}{2}\sum \frac{1}{d_1d_2}P_0(n-d_1-d_2,m),$$ where the summation is over all ordered pairs of $s$-large divisors of $m$ whose sum is at most $n$. We divide these pairs $(d_1,d_2)$ into two subsets. The first subset consists of those for which $n- d_1-d_2\geq n^\nu$, where $\nu=(1+2s+{{\delta}})/3$. Note that $\nu<1$ since $\nu\leq s -\frac{1}{6}<1$ (because ${{\delta}}\leq s-\frac{1}{2}$ and $s\leq \frac{3}{4}$). For a pair $(d_1,d_2)$ such that $n- d_1-d_2\geq n^\nu$, by Lemma \[newPs\], $P_0(n-d_1-d_2,m)=O\left(\frac{m^{2s+{{\delta}}}}{n^{3\nu}} \right)$. Thus the total contribution to $P_{2}(n,m)$ from pairs of this type is at most $O\left(\frac{d(m)^2m^{2s+{{\delta}}}}{m^{2s}n^{3\nu}} \right)=O\left(\frac{1}{n^{3\nu-3{{\delta}}}}\right)=O\left(\frac{1}{n^{1+2s-2{{\delta}}}} \right)$. Now consider pairs $(d_1,d_2)$ such that $n- d_1-d_2< n^\nu$. Since each $d_i<n\leq m/r$, it follows that each $d_i\leq m/(r+1)$. Since $\nu<1$, for sufficiently large $n$ (and hence sufficiently large $m$) we have $n^\nu\leq \left(\frac{m}{r} \right)^\nu<\frac{m}{2(r+1)(r+2)}$. Thus, for each of the pairs $(d_1,d_2)$ such that $n- d_1-d_2< n^\nu$, we have $d_1+d_2>n-n^\nu>\frac{m}{r+1}- \frac{m}{2(r+1)(r+2)}=\frac{m(2r+3)}{2(r+1)(r+2)}$, and hence one of $(d_1,d_2)$, $(d_2,d_1)$ (or both if $d_1=d_2$) satisfies the conditions of Lemma \[lem:divat\] (b). Thus, by Lemma \[lem:divat\] (b), it follows that if $d_1 \le d_2$, then either $(d_0,d):=(d_1, d_2)$ satisfies the conditions of Definition \[def:kr\](c), or $d_2=\frac{m}{r+1}$ and $d_1\leq \frac{m}{2(r+1)(r+2)}$. Let $P_2'(n,m)$ denote the contribution to $P_2(n,m)$ from all the pairs $(d_1,d_2)$ where $\{d_1,d_2\}=\{ \frac{m}{r+1},d_0\}$ and $d_0 \leq \frac{m}{2(r+1)(r+2)}$. For the other pairs, we note that there are $t(r,m,n) \le r+3$ choices for the larger divisor $d$. Consider a fixed $d\leq \frac{m}{r+1}$, say $d = \frac{m}{c}.$ Then each divisor $d_0$ of $m$, such that $\frac{m}{2(r+1)(r+2)} < d_0 \le d$ and $d + d_0 \le n$, is equal to $\frac{m}{c_0}$ for some $c_0$ such that $c \le c_0 < 2(r+1)(r+2)$. Let $d_0(d) = \frac{m}{c_0}$ be the largest of these divisors $d_0.$ By Lemma \[lem:divat\](b), the combined contribution to $P_2(n,m)$ from the ordered pairs $(d,d_0(d))$ and $(d_0(d),d)$ is (since $d$ and $d_0(d)$ may be equal) at most $$\frac{1}{dd_0(d)} < \frac{2r+3}{m} \cdot \frac{2(r+1)(r+2)}{m} = \frac{2(r+1)(r+2)(2r+3)}{m^2}.$$ (Note that $\frac{1}{dd_0(d)} \ge \frac{(r+1)^2}{m^2} > \frac{1}{n^2}$.) 
If $d_0=\frac{m}{c'}$ is any other divisor of this type and $d_0 < d_0(d)$, then $c_0+1 \le c' < 2(r+1)(r+2)$, and so $n-d-d_0=(n-d-d_0(d))+d_0(d)-d_0$ is at least $$d_0(d)-d_0=\frac{m}{c_0} - \frac{m}{c'} \ge\frac{m}{c_0} - \frac{m}{c_0+1}= \frac{m}{c_0(c_0+1)} > \frac{m}{4(r+1)^2(r+2)^2}.$$ By Lemma \[newPs\], the contribution to $P_2(n,m)$ from the pairs $(d,d_0)$ and $(d_0,d)$ is $O( \frac{1}{m^2}\cdot \frac{m^{2s+\delta}}{m^3}) = O(\frac{1}{n^{5-2s-\delta}})$. Since there are $t(r,m,n) \le r+3$ choices for $d$, and a bounded number of divisors $d_0$ for a given $d$, the contribution to $P_2(n,m)$ from all the pairs $(d_1,d_2)$ such that $n- d_1-d_2< n^\nu$ is at most $$P_2'(n,m) + t(r,m,n) \frac{2(r+1)(r+2)(2r+3)}{n^2r^2}+ O\left(\frac{1}{n^{5-2s-{{\delta}}}} \right).$$ Thus $$\begin{aligned} P_2(n,m)&\le& P_2'(n,m) + \frac{2t(r,m,n)(r+1)(r+2)(2r+3)}{n^2r^2}+ O\left(\frac{1}{n^{x_2}}\right) \\ &=& P_2'(n,m) +\frac{k(r,m,n)}{n^2} + O\left(\frac{1}{n^{x_2}}\right)\end{aligned}$$ with $x_2=\min\{1+2s-2{{\delta}},5-2s-{{\delta}}\}=1+2s-2{{\delta}}$. Note that $$k(r,m,n)\leq (r+3) \frac{2(r+1)(r+2)(2r+3)}{r^2}=4r^2+30r+80+\frac{90}{r}+\frac{36}{r^2}$$ which is less than $\frac{4(r+3)^4}{r^2}$. Finally we estimate $P_1(n,m)+P'_2(n,m)$. By Lemma \[lem:qi\], $P_1(n,m)= \sum \frac{1}{d}P_0(n-d,m)$, where the summation is over all $s$-large divisors $d$ of $m$ such that $d\leq n$, and we take $P_0(0,m)=1$. Note that $d\leq n\leq \frac{m}{r}$, so each divisor $d=\frac{m}{c}$ for some $c\geq r$. In the case where $m=rn$, that is, the case where $n$ divides $m$ (and only in this case), we have a contribution to $P_1(n,m)$ of $\frac{1}{n}$ due to $n$-cycles. If $d<n$ then $d=\frac{m}{c}$ with $c\geq r+1$. Next we consider all divisors $d$ of $m$ such that $d\leq \frac{m}{r+2}$. For each of these divisors, $n-d\geq n - \frac{m}{r+2}\ge n-\frac{(r+1)n}{r+2} =\frac{n}{r+2}$. Thus by Lemma \[newPs\], $P_0(n-d,m) = O\left(\frac{m^{2s + \delta}}{n^{3}}\right) = O\left(\frac{1}{n^{3-2s-\delta}}\right)$. The number of $d$ satisfying $d\geq \frac{m}{2(r+1)}$ is bounded in terms of $r$ (which is fixed), and hence the contribution to $P_1(n,m)$ from all the divisors $d$ satisfying $\frac{m}{2(r+1)}\leq d\leq \frac{m}{r+2}$ is at most $O\left(\frac{1}{m}\,\frac{1}{n^{3-2s-\delta}}\right)=O\left( \frac{1}{n^{4-2s-\delta}}\right)$. On the other hand, if $m^s\leq d <\frac{m}{2(r+1)}$, then $n-d>n - \frac{(r+1)n}{2(r+1)} =\frac{n}{2}$. Now since $r$ is fixed and $s<1$, for sufficiently large $n$, we have $m^s<\frac{n} {4}$, and so $n-d> m^s +\frac{n}{4}$. Then, by Lemma \[newPs\] (applied with $a=1$ and $c=\frac{1}{4}$), $P_0(n-d,m)= O\left(\frac{m^{2s + 2\delta}}{(n-d)^{4}}\right) = O\left(\frac{1}{n^{4-2s-2\delta}}\right)$, and the contribution to $P_1(n,m)$ from all $s$-large divisors $d< \frac{m}{2(r+1)}$ is at most $\frac{d(m)}{m^s}O\left(\frac{1}{n^{4-2s-2\delta}}\right)= O\left(\frac{1}{n^{4-s-3\delta}}\right)$. Thus, noting that $\min\{4-2s-{{\delta}}, 4-s-3{{\delta}}\}\geq 1+2s-2{{\delta}}$, the contribution to $P_1(n,m)$ from all $s$-large divisors $d$ of $m$ such that $d\leq\frac{m}{r+2}$ is $O\left(\frac{1}{n^{1+2s-2\delta}}\right)$. By Lemma \[lem:divat\], the only divisor not yet considered is $d=\frac{m} {r+1}$ and this case of course arises only when $r+1$ divides $m$. Suppose then that $r+1$ divides $m$. We must estimate the contribution to $P_1(n,m)+P'_2(n,m)$ from elements containing a cycle of length $d=\frac{m}{r+1}$. 
The contribution to $P_1(n,m)+P'_2(n,m)$ due to the divisor $d=\frac{m}{r+1}$ is $\frac{r+1}{m}P_0(n-\frac{m}{r+1},m)+\frac{r+1}{m}\sum_{d_0}\frac{1}{d_0} P_0(n-\frac{m}{r+1}-d_0,m)$, where the summation is over all $s$-large $d_0\leq \frac{m}{2(r+1)(r+2)}$. Suppose first that $n-\frac{m}{r+1}\geq \frac{m}{2(r+1)(r+2)-1}$, so that for each $d_0$, $n-\frac{m}{r+1}-d_0>\frac{m}{2(r+1)^2(r+2)^2}$. Then, by Lemma \[newPs\], the contribution to $P_1(n,m)+P'_2(n,m)$ is at most $$O\left(\frac{1}{m}.\frac{m^{2s+{{\delta}}}}{m^{3}}\right) +d(m) O\left(\frac{1}{m^{1+s}}.\frac{m^{2s+{{\delta}}}}{m^{3}}\right) =O\left(\frac{1}{n^{4-2s-{{\delta}}}}\right)$$ and this is $ O\left(\frac{1}{n^{1+2s-2{{\delta}}}}\right)$ since $4-2s-{{\delta}}\geq 1+2s-2{{\delta}}$. Finally suppose that $n-\frac{m}{r+1} < \frac{m}{2(r+1)(r+2)}$. In this case we estimate the contribution to $P_1(n,m)+P'_2(n,m)$ from $d=\frac{m}{r+1}$ by the proportion $\frac{1}{d}=\frac{r+1}{m}$ of elements of $S_n$ containing a $d$-cycle (recognising that this is usually an over-estimate). Putting these estimates together we have $$P_1(n,m)+P'_2(n,m)\leq\frac{\alpha}{n}+\frac{\alpha'.(r+1)}{m}+ O\left(\frac{1}{n^{1+2s-2{{\delta}}}}\right),$$ where $\alpha=1$ if $m=rn$ and is $0$ otherwise, and $\alpha'=1$ if $r+1$ divides $m$ and $n-\frac{m}{r+1}<\frac{m}{2(r+1)(r+2)-1}$, and is 0 otherwise. The result now follows using (\[eq-qi\]) and the estimates we have obtained for each of the summands. It is sometimes useful to separate out the results of Proposition \[prop:general\] according to the values of $m,n$. We do this in the theorem below, and also obtain in parts (a) and (b) exact asymptotic expressions for $P(n,rn)$ and $P(n,t!(n-t))$ where $r, t$ are bounded and $n$ is sufficiently large. For this it is convenient to define two sets of integer pairs. \[T\][For positive integers $r$ and $m$, define the following sets of integer pairs: $$\mathcal{T}(r)=\{(i,j)\,|\, 1\leq i,j\leq r^2, ij =r^2,\ \mbox{and both}\ r+i, r+j\ \mbox{divide}\ m\}$$ and $\mathcal{T}'(r)=\{(i,j)\,|\, 1< i,j\leq (r+1)^2, (i-1)(j-1) =(r+1)^2,$ and both $r+i, r+j\ \mbox{divide}\ m\}. $ ]{} \[rn\] Let $n,m,r$ be positive integers such that $rn\leq m<(r+1)n$. Let $1/2<s\leq 3/4$ and $0<{{\delta}}\leq s-1/2$. Then, the following hold for $r$ fixed and sufficiently large $n$ (where the sets $\mathcal{T}(r)$ and $\mathcal{T}'(r)$ are as in Definition [\[T\]]{}). 1. If $m=rn$, then ${\displaystyle P(n,m)=\frac{1}{n}+\frac{c(r)}{n^2} +O\left(\frac{1}{n^{1+2s-2{{\delta}}}}\right)}$, where\ ${\displaystyle c(r)=\sum_{(i,j)\in\mathcal{T}(r)}(1+\frac{i+j}{2r}).} $ In particular $c(1)=0$ if $n$ is odd, and $2$ if $n$ is even. 2. If $r=t!-1$ and $m=t!(n-t)=(r+1)n-t\cdot t!$, then\ ${\displaystyle P(n,m)=\frac{1}{n-t}+\frac{c'(r)}{(n-t)^2}+O\left(\frac{1}{n^{1+2s-2{{\delta}}}} \right)},$ where\ ${\displaystyle c'(r)=\sum_{(i,j)\in\mathcal{T}'(r)}(1+\frac{i+j-2}{2(r+1)})}$. 3. If $rn<m$, then ${\displaystyle P(n,m)\leq \frac{\alpha'.(r+1)}{m}+\frac{k(r,m,n)} {n^2}+ O\left(\frac{1}{n^{1+2s-2{{\delta}}}}\right)}$, where $\alpha'$ and $k(r,m,n)$ are as in Definition [\[def:kr\]]{}. Part (c) follows immediately from Proposition \[prop:general\]. Next we prove part (a). Suppose that $m=rn$. If $r+1$ divides $m$ then we have $n-\frac{m}{r+1}= \frac{m}{r(r+1)}>\frac{m}{2(r+1)(r+2)-1}$. It follows from Proposition \[prop:general\] that $P(n,m)\leq\frac{1}{n}+\frac{k(r,m,n)} {n^2}+O\left(\frac{1}{n^{1+2s-2{{\delta}}}}\right)$.
To complete the proof we refine the argument given in the proof of Proposition \[prop:general\] for $P_2(n,m)$ which gave rise to the term $\frac{k(r,m,n)}{n^2}$. The elements contributing to this term were those with exactly two $s$-large cycles, where one of these cycles had length $d=\frac{m}{r+i}$ for some $i$ such that $1\leq i\leq r+3$, and the other had length $d_0(d)=\frac{m}{r+j}$ for some $j$ such that $r+i\leq r+j < 2(r+1)(r+2)$ and $d + d_0(d) \le n.$ Moreover, for a given value of $d$, the value of $d_0(d)$ was the largest integer with these properties. Since we now assume that $m=rn$ we have $$d+d_0(d)=\frac{m(2r+i+j)}{(r+i)(r+j)}\leq n=\frac{m}{r}$$ that is, $r(2r+i+j)\leq(r+i)(r+j)$, which is equivalent to $r^2\leq ij$. If $d+d_0(d)$ is strictly less than $n$, that is to say, if $r^2<ij$, and thus $ij-r^2\geq1$, then $$n-d-d_0(d)=n-\frac{rn(2r+i+j)}{(r+i)(r+j)}=\frac{n(ij-r^2)}{(r+i)(r+j)}\geq \frac{n}{(r+i)(r+j)},$$ and since $i\leq r+3$ and $r+j<2(r+1)(r+2)$ we have $\frac{n}{(r+i)(r+j)} \geq \frac{n}{2(r+1)(r+2)(2r+3)}$. It now follows from Lemma \[newPs\] that the contribution to $P_2(n,m)$ from all ordered pairs $(d,d_0(d))$ and $(d_0(d),d)$ with $d,d_0(d)$ as above and $n>d+d_0(d)$ is $O\left( \frac{1}{n^2}\,\frac{m^{2s+{{\delta}}}}{n^3}\right)=O\left(\frac{1}{n^{5-2s-{{\delta}}}} \right)\leq O\left(\frac{1}{n^{1+2s-2{{\delta}}}}\right)$. Thus when $m=rn$, the only contributions to the $O\left(\frac{1}{n^2}\right)$ term come from pairs $(\frac{m}{r+i},\frac{m}{r+j})$ such that $r^2=ij$ and $1\leq i,j\leq r^2$. (Note that we no longer assume $i\leq j$.) These are precisely the pairs $(i,j)\in\mathcal{T}(r)$. For such a pair $(\frac{m}{r+i},\frac{m}{r+j})$, the contribution to $P_2(n,m)$ is $$\frac{1}{2}\cdot\frac{r+i}{m}\cdot\frac{r+j}{m}= \frac{r^2+r(i+j)+ij}{2n^2r^2}=\frac{1}{n^2}(1+\frac{i+j}{2r})$$ (since $ij=r^2$). Thus $P(n,m)\leq\frac{1}{n}+\frac{c(r)}{n^2} +O\left(\frac{1}{n^{1+2s-2{{\delta}}}}\right)$. Moreover, for each $(i,j)\in\mathcal{T}(r)$, each permutation in $S_n$ having exactly two cycles of lengths $\frac{m}{r+i}$ and $\frac{m}{r+j}$ is a permutation of order dividing $m$. Thus $P(n,rn)\geq \frac{1}{n}+\frac{c(r)}{n^2}$, and the main assertion of part (a) is proved. Finally we note that, if $r=1$ then the only possible pair in $\mathcal{T}(1)$ is $(1,1)$, and for this pair to lie in the set we require that $r+1=2$ divides $m=n$. Thus $c(1)$ is 0 if $n$ is odd, and is 2 if $n$ is even. Finally we prove part (b) where we have $r=t!-1$ and $m=t!(n-t)$. Then $rn=(t!-1)n=m+t\cdot t!-n$ which is less than $m$ if $n>t\cdot t!$. Also $(r+1)n=t!\,n>m$. Thus, for sufficiently large $n$, we have $rn<m<(r+1)n$. Moreover, $r+1$ divides $m$ and $n-\frac{m}{r+1}=n-(n-t)=t$, which for sufficiently large $n$ is less than $\frac{n-t}{3t!}<\frac{m}{2(r+1)(r+2)-1}$. It now follows from part (c) that $P(n,t!(n-t))\leq \frac{1}{n-t}+\frac{k(r,m,n)}{n^2}+O\left(\frac{1} {n^{1+2s-2{{\delta}}}}\right)$. Our next task is to improve the coefficient of the $O(\frac{1}{n^2})$ term using a similar argument to the proof of part (a). The elements contributing to this term have exactly two $s$-large cycles of lengths $d=\frac{m}{r+i}$ and $d_0(d)=\frac{m}{r+j}$, with $r+i,r+j\leq (r+1)(r+2)$ and $$d+d_0(d)=\frac{m(2r+i+j)}{(r+i)(r+j)}\leq n=\frac{m}{r+1}+t.$$ This is equivalent to $(r+1)(2r+i+j)\leq(r+i)(r+j)+\frac{t(r+1)(r+i)(r+j)}{m}$, and hence, for sufficiently large $n$ (and hence sufficiently large $m$), $(r+1)(2r+i+j)\leq (r+i)(r+j)$. 
This is equivalent to $(i-1)(j-1)\geq (r+1)^2$. If $(i-1)(j-1)> (r+1)^2$, then $$\begin{aligned} n-d-d_0(d)&=&(t+\frac{m}{r+1}) - \frac{m(2r+i+j)}{(r+i)(r+j)}\\ &=&t+\frac{m((i-1)(j-1)-(r+1)^2)}{(r+1)(r+i)(r+j)}\\ &>&\frac{rn}{(r+1)^3(r+2)^2}.\end{aligned}$$ As for part (a), the contribution to $P_2(n,m)$ from all pairs $(\frac{m}{r+i},\frac{m}{r+j})$ with $(i-1)(j-1)> (r+1)^2$ is $O\left(\frac{1}{n^{1+2s-2{{\delta}}}}\right)$. Thus the only contributions to the $O\left(\frac{1}{n^2}\right)$ term come from pairs $(d,d_0(d))=(\frac{m}{r+i},\frac{m}{r+j})$ such that $(r+1)^2=(i-1)(j-1)$ and $1\leq i,j\leq (r+1)^2$. These are precisely the pairs $(i,j)\in\mathcal{T}'(r)$. For each of these pairs we have $r^2+2r=ij-i-j$ and the contribution to $P_2(n,m)$ is $$\begin{aligned} \frac{1}{2dd_0(d)}&=&\frac{(r+i)(r+j)}{2m^2}= \frac{r^2+r(i+j)+ij}{2(r+1)^2(n-t)^2}\\ &=&\frac{(r+1)(2r+i+j)}{2(r+1)^2(n-t)^2}= \frac{1}{(n-t)^2}\left(1+\frac{i+j-2}{2(r+1)}\right).\end{aligned}$$ Thus $P(n,m)\leq\frac{1}{n-t}+\frac{c'(r)}{n^2} +O\left(\frac{1}{n^{1+2s-2{{\delta}}}}\right)$. On the other hand, each permutation in $S_n$ that contains an $(n-t)$-cycle has order dividing $t!(n-t)=m$, and the proportion of these elements is $\frac{1}{n-t}$. Also, for each $(i,j)\in\mathcal{T}'(r)$, each permutation in $S_n$ having exactly two cycles of lengths $\frac{m}{r+i}$ and $\frac{m}{r+j}$, and inducing any permutation on the remaining $n-\frac{m}{r+i}-\frac{m}{r+j}=t$ points, is a permutation of order dividing $m=t!(n-t)$, and the proportion of all such elements is $\frac{c'(r)}{(n-t)^2}$. Thus $P(n,m)\geq \frac{1}{n-t}+\frac{c'(r)}{(n-t)^2}$, and the assertion of part (b) is proved. It is a simple matter now to prove Theorems \[leadingterms\] and \[bounds\]. The first theorem follows from Theorem \[rn\] (a) and (b) on setting $s=3/4$ and allowing $\delta \rightarrow 0$. Note that $\frac{1}{n-t} = \frac{1}{n} + \frac{t}{n^2} + O(\frac{1}{n^3})$ and $\frac{1}{(n-t)^2} = \frac{1}{n^2} + O(\frac{1}{n^3})$. For the second theorem, again we set $s=3/4$ in Theorem \[rn\](c). By Proposition \[prop:general\] we have $k(r,m,n) \le \frac{4(r+3)^4}{r^2}$. If we define $k(r) = \frac{4(r+3)^4}{r^2}$ the result follows. Finally we derive the conditional probabilities in Corollary \[cdnlprobs1\]. Let $r,\, n$ be positive integers with $r$ fixed and $n$ ‘sufficiently large’, and let $g$ be a uniformly distributed random element of $S_n$. First set $m = rn.$ Let $A$ denote the event that $g$ is an $n$-cycle, and let $B$ denote the event that $g$ has order dividing $m$, so that the probability ${{\rm{Prob}}}(B)$ is $P(n,m)$. Then, by elementary probability theory, we have $$\begin{aligned} {{\rm{Prob}}}( A \mid B) &= &\frac{{{\rm{Prob}}}( A \cap B)} {{{\rm{Prob}}}(B)} = \frac{{{\rm{Prob}}}( A )} {{{\rm{Prob}}}(B)} = \frac{\frac{1}{n}}{P(n,m)}. \\\end{aligned}$$ By Theorem \[leadingterms\], $\frac{1}{n}+\frac{c(r)}{n^2}<P(n,m)=\frac{1}{n}+\frac{c(r)}{n^2}+O\left(\frac{1} {n^{2.5-o(1)}}\right)$, and hence $$\begin{aligned} 1-\frac{c(r)}{n}-O\left(\frac{1} {n^{1.5-o(1)}}\right)&\leq& {{\rm{Prob}}}(A \mid B) \leq 1-\frac{c(r)}{n}+O\left(\frac{1} {n^{2}}\right).\\\end{aligned}$$ Now suppose that $r=t!-1$ for some integer $t\geq2$, and let $A$ denote the event that $g$ contains an $(n-t)$-cycle, so that ${{\rm{Prob}}}(A)=\frac{1}{n-t}$. 
Then, with $B$ as above for the integer $m:=t!(n-t)$, we have $$\begin{aligned} {{\rm{Prob}}}( A \mid B) &= &\frac{{{\rm{Prob}}}( A \cap B)} {{{\rm{Prob}}}(B)} = \frac{{{\rm{Prob}}}( A )} {{{\rm{Prob}}}(B)} = \frac{\frac{1}{n-t}}{P(n,m)}. \\\end{aligned}$$ By Theorem \[rn\](b), $\frac{1}{n-t}+\frac{c'(r)}{(n-t)^2}<P(n,m)=\frac{1}{n-t}+ \frac{c'(r)}{(n-t)^2}+O\left(\frac{1} {n^{2.5-o(1)}}\right)$, and hence $$\begin{aligned} 1-\frac{c'(r)}{n}-O\left(\frac{1} {n^{1.5-o(1)}}\right)&\leq& {{\rm{Prob}}}(A \mid B) \leq 1-\frac{c'(r)}{n}+O\left(\frac{1} {n^{2}}\right).\end{aligned}$$

This research was supported by ARC Discovery Grants DP0209706 and DP0557587. The authors thank the referee for carefully reading the submitted version and for their advice on the paper.
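As an aside (not part of the paper's argument), the leading terms established above can be checked numerically. The following Python sketch samples cycle types of uniform random permutations and estimates both $P(n,rn)$ and the conditional probability of Corollary \[cdnlprobs1\]; the parameter values are arbitrary and the estimate is only illustrative.

```python
import random

def random_cycle_type(n, rng):
    """Cycle lengths of a uniform random permutation of S_n: the cycle
    containing the smallest unused point has length uniform on
    {1, ..., #remaining points}."""
    lengths, remaining = [], n
    while remaining:
        k = rng.randint(1, remaining)
        lengths.append(k)
        remaining -= k
    return lengths

def check_leading_terms(n=60, r=2, trials=200000, seed=0):
    """Estimate P(n, rn) and Prob(g is an n-cycle | |g| divides rn)."""
    rng = random.Random(seed)
    m = r * n
    divides, n_cycles = 0, 0
    for _ in range(trials):
        lengths = random_cycle_type(n, rng)
        # The order (lcm of the cycle lengths) divides m iff every length divides m.
        if all(m % k == 0 for k in lengths):
            divides += 1
            if lengths == [n]:
                n_cycles += 1
    print("P(n, rn) ~ %.5f   vs  1/n = %.5f" % (divides / trials, 1.0 / n))
    print("Prob(n-cycle | order divides rn) ~ %.3f" % (n_cycles / max(divides, 1)))

check_leading_terms()
```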
{ "pile_set_name": "ArXiv" }
Analysis of the mechanism of the vasodepressor effect of urocortin in anesthetized rats. The aim was to examine whether the depressor effect of urocortin involves activation of the nitric oxide (NO)/L-arginine pathway, production of prostanoids or opening of K(+)-channels. I.v. bolus urocortin (0.1-3 nmol/kg) dose-dependently decreased mean arterial pressure in thiobutabarbital-anesthetized rats. The depressor effect of urocortin was unaffected by pretreatment with N(G)-nitro-L-arginine methyl ester (L-NAME, inhibitor of NO synthase, i.v. bolus) or noradrenaline (i.v. infusion), which increased arterial pressure to a similar level as that produced by L-NAME. In addition, methylene blue (inhibitor of soluble guanylyl cyclase, i.v. infusion), indomethacin (cyclooxygenase inhibitor, i.v. bolus), glibenclamide (blocker of ATP-sensitive K(+)-channels, i.v. bolus) or tetraethylammonium (a non-specific K(+)-channel blocker, i.v. bolus) did not affect the depressor effect of urocortin. In conclusion, the depressor effect of urocortin in anesthetized rats is not mediated via the NO/L-arginine pathway, activation of soluble guanylyl cyclase, production of prostanoids, opening of TEA-sensitive K(+)-channels or opening of ATP-sensitive K(+)-channels.
{ "pile_set_name": "PubMed Abstracts" }
Checklist for Fiction Readers

It can be hard to get useful feedback from friends and family. The key is to ask specific questions — questions that will get them to notice their own reactions as an ordinary reader. Honest answers to these questions will prove invaluable to any fiction writer.

1) Is there anything you didn’t understand?
• Was anything confusing or hard to follow?
• Was it ever hard to figure out what was going on?
• Did anything not make sense to you?
• Were the events in the story believable?
• Were you caught up in the story? Or were there things that interrupted the flow and reminded you that you were reading a made-up story?
• Did you ever get characters confused, or forget who a character was?

2) Were you ever bored?
• Were there points where your interest flagged?
• Were there parts that felt extraneous or unnecessary?

3) Did you want to know more?
• Were there characters you wanted to know more about?
• Were you left wondering ‘what happened then?’ at any point?
• Was there anything else — setting, theme, messages — that you wish had been explored more?

4) Did the characters seem real?
• Were the characters believable? (Were they ever unbelievable?)
• Did you care about the characters? (Did you ever not care?)
• Which characters did you like? Which characters did you hate? Which characters left you indifferent?

5) Did anything seem like a cliché?
• Did you ever roll your eyes at a plot twist?
• Did any of the descriptions or metaphors feel tired and unoriginal?

With inspiration from Orson Scott Card’s concept of the “Wise Reader,” from pp.121-124 of his book How to Write Science Fiction and Fantasy (Writer’s Digest Books, 1990).
{ "pile_set_name": "Pile-CC" }
--- abstract: 'This paper introduces a novel feature detector based only on information embedded inside a CNN trained on standard tasks (e.g. classification). While previous works already show that the features of a trained CNN are suitable descriptors, we show here how to extract the feature locations from the network to build a detector. This information is computed from the gradient of the feature map with respect to the input image. This provides a saliency map with local maxima on relevant keypoint locations. Contrary to recent CNN-based detectors, this method requires neither supervised training nor finetuning. We evaluate how repeatable and how ‘matchable’ the detected keypoints are with the repeatability and matching scores. Matchability is measured with a simple descriptor introduced for the sake of the evaluation. This novel detector reaches similar performances on the standard evaluation HPatches dataset, as well as comparable robustness against illumination and viewpoint changes on Webcam and photo-tourism images. These results show that a CNN trained on a standard task embeds feature location information that is as relevant as when the CNN is specifically trained for feature detection.' author: - Assia Benbihi - Matthieu Geist - 'C[é]{}dric Pradalier' bibliography: - '../egbib.bib' title: 'ELF: Embedded Localisation of Features in pre-trained CNN' --- Introduction ============ ![(1-6) Embedded Detector: Given a CNN trained on a standard vision task (classification), we backpropagate the feature map back to the image space to compute a saliency map. It is thresholded to keep only the most informative signal and keypoints are the local maxima. (7-8): simple-descriptor.[]{data-label="fig:pipeline"}](img.png){width="\linewidth"} Feature extraction, description and matching is a recurrent problem in vision tasks such as Structure from Motion (SfM), visual SLAM, scene recognition and image retrieval. The extraction consists of detecting image keypoints; the matching then pairs the nearest keypoints based on their descriptor distance. Even though hand-crafted solutions, such as SIFT [@lowe2004distinctive], prove to be successful, recent breakthroughs on local feature detection and description rely on supervised deep-learning methods [@detone18superpoint; @ono2018lf; @yi2016lift]. They detect keypoints on saliency maps learned by a Convolutional Neural Network (CNN), then compute descriptors using another CNN or a separate branch of it. They all require strong supervision and complex training procedures: [@yi2016lift] requires ground-truth matching keypoints to initiate the training, [@ono2018lf] needs the ground-truth camera pose and depth maps of the images, and [@detone18superpoint] circumvents the need for ground-truth data by using synthetic data but requires a heavy domain adaptation to transfer the training to realistic images. All these methods require a significant learning effort. In this paper, we show that a trained network already embeds enough information to build a State-of-the-Art (SoA) detector and descriptor. The proposed method for local feature detection needs only a CNN trained on a standard task, such as ImageNet [@deng2009imagenet] classification, and no further training. The detector, dubbed ELF, relies on the features learned by such a CNN and extracts their locations from the feature map gradients.
Previous work already highlights that trained CNN features are relevant descriptors [@fischer2014descriptor] and recent works [@balntas2016learning; @han2015matchnet; @simo2015discriminative] specifically train CNNs to produce features suitable for keypoint description. However, no existing approach uses a pre-trained CNN for feature detection. ELF computes the gradient of a trained CNN feature map *w.r.t* the image: this outputs a saliency map with local maxima on keypoint positions. Trained detectors learn this saliency map with a CNN whereas we extract it with gradient computations. This approach is inspired by [@simonyan2013deep] which observes that the gradient of classification scores *w.r.t* the image is similar to the image saliency map. ELF differs in that it takes the gradient of feature maps and not of the classification score, contrary to existing work exploiting CNN gradients [@selvaraju2017grad; @smilkov2017smoothgrad; @springenberg2015striving; @sundararajan2017axiomatic]. These previous works aim at visualising the learning signal for classification specifically whereas ELF extracts the feature locations. The extracted saliency map is then thresholded to keep only the most relevant locations and standard Non-Maxima Suppression (NMS) extracts the final keypoints (Figure \[fig:heatmap\_coco\]). ![Thresholding of the saliency maps keeps only the most informative locations. Top: original image. (Left-Right: Webcam [@verdie2015tilde], HPatches [@balntas2017hpatches], COCO [@lin2014microsoft]) Middle: blurred saliency maps. Bottom: saliency map after threshold. (Better seen on a computer.) []{data-label="fig:heatmap_coco"}](fig3_heatmap.png){width="\linewidth"} ELF relies only on six parameters: two pairs of Gaussian blur parameters, for the automatic threshold level estimation and for the saliency map denoising; and two parameters for the NMS window and the border to ignore. Detection only requires one forward and one backward pass and takes $\sim$0.2s per image on a simple Quadro M2200, which makes it suitable for real-time applications. ELF is compared to individual detectors with the standard *repeatability* [@mikolajczyk2005comparison] but results show that this metric is not discriminative enough. Most of the existing detectors can extract keypoints repeated across images with similar repeatability scores. Also, this metric does not express how ‘useful’ the detected keypoints are: if we sample all pixels as keypoints, we reach 100% of *rep.* but the matching may not be perfect if many areas look alike. Therefore, the detected keypoints are also evaluated on how ‘matchable’ they are with the *matching score* [@mikolajczyk2005comparison]. This metric requires describing the keypoints, so we define a simple descriptor: it is based on the interpolation of a CNN feature map on the detected keypoints, as in [@detone18superpoint]. This avoids biasing the performance by choosing an existing competitive descriptor. Experiments show that even this simple descriptor reaches competitive results, which supports the observation of [@fischer2014descriptor] on the relevance of CNN features as descriptors. More details are provided in Section 4.1. ELF is tested on five architectures: three classification networks trained on ImageNet classification (AlexNet, VGG and Xception [@krizhevsky2012imagenet; @simonyan2014very; @chollet17xception]), as well as the SuperPoint [@detone18superpoint] and LF-Net [@ono2018lf] descriptor networks.
Although outside the scope of this paper, this comparison provides preliminary results on the influence of the network architecture, task and training data on ELF’s performance. Metrics are computed on HPatches [@balntas2017hpatches] for generic performances. We derive two auxiliary datasets from HPatches to study scale and rotation robustness. Light and 3D viewpoint robustness analyses are run on the Strecha and Webcam datasets [@strecha2008benchmarking; @verdie2015tilde]. These extensive experiments show that ELF is on par with other sparse detectors, which suggests that the feature representation and location information learnt by a CNN to complete a vision task is as relevant as when the CNN is specifically trained for feature detection. We additionally test ELF’s robustness on 3D reconstruction from images in the context of the CVPR 2019 Image Matching challenge [@cvpr19challenge]. Once again, ELF is on par with other sparse methods even though denser methods, e.g. [@detone18superpoint], are more appropriate for such a task. Our contributions are the following: - We show that a CNN trained on a standard vision task embeds feature location in the feature gradients. This information is as relevant for feature detection as when a CNN is specifically trained for it. - We define a systematic method for local feature detection. Extensive experiments show that ELF is on par with other SoA deep trained detectors. They also update the previous result from [@fischer2014descriptor]: self-taught CNN features provide SoA descriptors in spite of recent improvements in CNN descriptors [@choy2016universal]. - We release the Python-based evaluation code, together with the ELF code[^1], to ease future comparisons. The introduced robustness datasets are also made public[^2]. Related work ============ Early methods rely on hand-crafted detection and description: SIFT [@lowe2004distinctive] detects 3D spatial-scale keypoints on differences of Gaussians and describes them with a 3D Histogram Of Gradients (HOG). SURF [@bay2006surf] uses integral images to speed up the previous detection and uses a sum of Haar wavelet responses for description. KAZE [@alcantarilla2012kaze] extends the previous multi-scale approach by detecting features in non-linear scale spaces instead of the classic Gaussian ones. ORB [@rublee2011orb] combines the FAST [@rosten2006machine] detection and the BRIEF [@calonder2010brief] description, and improves them to make the pipeline scale and rotation invariant. The MSER-based detector hand-crafts desired invariance properties for keypoints, and designs a fast algorithm to detect them [@matas2004robust]. Even though these hand-crafted methods have proven to be successful and to reach state-of-the-art performance for some applications, recent research focuses on learning-based methods. One of the first learned detectors is TILDE [@verdie2015tilde], trained under drastic changes of light and weather on the Webcam dataset. It uses supervision to learn saliency maps whose maxima are keypoint locations. Ground-truth saliency maps are generated with ‘good keypoints’: the authors use SIFT and filter out keypoints that are not repeated in more than 100 images. One drawback of this method is the need for supervision that relies on another detector. However, there is no universal explicit definition of what a good keypoint is.
This lack of specification inspires Quad-Networks [@savinov2017quad] to adopt an unsupervised approach: they train a neural network to rank keypoints according to their robustness to random hand-crafted transformations. They keep the top/bottom quantile of the ranking as keypoints. ELF is similar in that it does not require supervision but differs in that it does not need to further train the CNN. Other learned detectors are trained within full detection/description pipelines such as LIFT [@yi2016lift], SuperPoint [@detone18superpoint] and LF-Net [@ono2018lf]. LIFT’s contribution lies in its original training method for three CNNs. The detector CNN learns a saliency map where the most salient points are keypoints. They then crop patches around these keypoints, and compute their orientations and descriptors with two other CNNs. They first train the descriptor with patches around ground-truth matching points with a contrastive loss, then the orientation CNN together with the descriptor, and finally the detector. One drawback of this method is the need for ground-truth matching keypoints to initiate the training. In [@detone18superpoint], the problem is avoided by pre-training the detector on a synthetic geometric dataset made of polygons on which they detect mostly corners. The detector is then finetuned during the descriptor training on image pairs from COCO [@lin2014microsoft] with synthetic homographies and the correspondence contrastive loss introduced in [@choy2016universal]. LF-Net relies on another type of supervision: it uses ground-truth camera poses and image depth maps that are easier to compute with laser or standard SfM than ground-truth matching keypoints. Its training pipeline builds on LIFT and employs the projective camera model to project detected keypoints from one image to the other. These keypoint pairs form the ground-truth matching points to train the network. ELF differs in that the CNN model is already trained on a standard task. It then extracts the relevant information embedded inside the network for local feature detection, which requires neither training nor supervision. The detection method of this paper is mainly inspired by the initial observation in [@simonyan2013deep]: given a CNN trained for classification, the gradient of a class score *w.r.t* the image is the saliency map of the class object in the input image. A line of work aims at visualizing the CNN representation by inverting it into the image space through optimization [@mahendran2015understanding; @gatys2016image]. Our work differs in that we backpropagate the feature map itself and not a feature loss. Follow-up works use these saliency maps to better understand the CNN training process and justify the CNN outputs. Efforts mostly focus on the gradient definitions [@smilkov2017smoothgrad; @springenberg2015striving; @sundararajan2017axiomatic; @zeiler2014visualizing]. They differ in the way they handle the backpropagation of non-linear units such as ReLU. Grad-CAM [@selvaraju2017grad] introduces a variant where they fuse several gradients of the classification score *w.r.t* feature maps and not the image space. Instead, ELF computes the gradient of the feature map, and not of a classification score, *w.r.t* the image. Also, we run simple backpropagation, which differs in the non-linearity handling: all the signal is backpropagated no matter whether the feature maps or the gradients are positive or not.
Finally, as far as we know, this is the first work to exploit the localisation information present in these gradients for feature detection. The simple descriptor introduced for the sake of the matchability evaluation is taken from UCN [@choy2016universal]. Given a feature map and the keypoints to describe, it interpolates the feature map on the keypoints location. Using a trained CNN for feature description is one of the early applications of CNN [@fischer2014descriptor]. Later, research has taken on specifically training the CNN to generate features suitable for keypoint matching either with patch-based approaches, among which [@simo2015discriminative; @melekhov2016siamese; @han2015matchnet; @zagoruyko2015learning], or image-based approaches [@taira2018inloc; @choy2016universal]. We choose the description method from UCN [@choy2016universal], also used by SuperPoint, for its complexity is only $O(1)$ compared to patch-based approaches that are $O(N)$ with $N$ the number of keypoints. We favor UCN to InLoc [@taira2018inloc] as it is simpler to compute. The motivation here is only to get a simple descriptor easy to integrate with all detectors for fair comparison of the *detector* matching performances. So we overlook the description performance. Method ====== This section defines ELF, a detection method valid for any trained CNN. Keypoints are local maxima of a saliency map computed as the feature gradient *w.r.t* the image. We use the data adaptive Kapur method [@kapur1985new] to automatically threshold the saliency map and keep only the most salient locations, then run NMS for local maxima detection. ![(Bigger version Figure \[fig:big\_saliency\_coco\].) Saliency maps computed from the feature map gradient $\left| ^TF^l(x) \cdot \frac{\partial F^l}{\partial \mathbf{I}} \right|$. Enhanced image contrast for better visualisation. Top row: gradients of VGG $pool_2$ and $pool_3$ show a loss of resolution from $pool_2$ to $pool_3$. Bottom: $(pool_i)_{i \in [1,2,5]}$ of VGG on Webcam, HPatches and Coco images. Low level saliency maps activate accurately whereas higher saliency maps are blurred.[]{data-label="fig:saliency_coco"}](fig2_saliency_bis.png){width="\linewidth"} Feature Specific Saliency ------------------------- We generate a saliency map that activates on the most informative image region for a specific CNN feature level $l$. Let $\mathbf{I}$ be a vector image of dimension $D_I = H_I \cdot W_I \cdot C_I$. Let $F^l$ be a vectorized feature map of dimension $D_F= H_l \cdot W_l \cdot C_l$. The saliency map $S^l$, of dimension $D_I$, is $S^l(\mathbf{I})=\left| ^tF^l(\mathbf{I}) \cdot \nabla_I F^l \right|$, with $\nabla_I F^l$ a $D_F \times D_I$ matrix. The saliency activates on the image regions that contribute the most to the feature representation $F^l(\mathbf{I})$. The term $\nabla_I F^l$ explicits the correlation between the feature space of $F^l$ and the image space in general. The multiplication by $F^l(\mathbf{I})$ applies the correlation to the features $F^l(\mathbf{I})$ specifically and generate a visualisation in image space $S^l(\mathbf{I})$. From a geometrical point of view, this operation can be seen as the projection $\nabla_I F^l$ of a feature signal $F^l(\mathbf{I})$ into the image space. From a signal processing approach, $F^l(\mathbf{I})$ is an input signal filtered through $\nabla_I F^l$ into the image space. If $C_I>1$, $S^l$ is converted into a grayscale image by averaging it across channels. 
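For concreteness, this computation can be sketched with automatic differentiation: contracting $F^l(\mathbf{I})$ with its own Jacobian equals the gradient of $\frac{1}{2}\|F^l(\mathbf{I})\|^2$ *w.r.t* the image, so the $D_F \times D_I$ matrix $\nabla_I F^l$ is never formed explicitly. The snippet below is an illustrative sketch rather than the released implementation; the Keras VGG16 layer name `block2_pool` (corresponding to the paper’s $pool_2$) and the use of TensorFlow eager execution are assumptions.

```python
import tensorflow as tf

def elf_saliency(model, image, layer_name="block2_pool"):
    """Sketch of S^l(I) = | F^l(I)^T . dF^l/dI | for a trained CNN.

    `model` is assumed to be a Keras network (e.g. VGG16 trained on
    ImageNet) and `layer_name` the feature level l used for detection.
    """
    feat_model = tf.keras.Model(model.input, model.get_layer(layer_name).output)
    img = tf.convert_to_tensor(image[None, ...], dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(img)
        feats = feat_model(img)                      # F^l(I)
        # The gradient of 0.5 * ||F^l||^2 w.r.t. I equals F^l(I)^T . dF^l/dI.
        score = 0.5 * tf.reduce_sum(tf.square(feats))
    grad = tape.gradient(score, img)[0]              # same spatial size as the image
    saliency = tf.abs(grad)
    # Average over colour channels to obtain a single grayscale map.
    return tf.reduce_mean(saliency, axis=-1).numpy()
```

The choice of `layer_name` follows the feature map selection guidelines of the next subsection.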
Feature Map Selection --------------------- We provide visual guidelines to choose the feature level $l$ so that $F^l$ still holds high-resolution localisation information while providing a useful high-level representation. CNN operations such as convolution and pooling increase the receptive field of feature maps while reducing their spatial dimensions. This means that $F^{l}$ has less spatial resolution than $F^{l-1}$ and the backpropagated signal $S^l$ ends up more spread than $S^{l-1}$. This is similar to what happens when an image is enlarged too much, and it can be observed in Figure \[fig:saliency\_coco\], which shows the gradients of the VGG feature maps. On the top row, $pool_2$’s gradient (left) better captures the location details of the dome whereas $pool_3$’s gradient (right) is more spread. On the bottom rows, the images lose their resolution as we go higher in the network. Another consequence of this resolution loss is that small features are not embedded in $F^l$ if $l$ is too high. This would reduce the space of potential keypoints to only large features, which would hinder the method. This observation motivates us to favor low-level feature maps for feature detection. We choose the final $F^l$ by taking the highest $l$ which still provides accurate localisation. This is visually observable as a sparse, high-intensity signal, in contrast to the blurry aspect of higher layers. Automatic Data-Adaptive Thresholding ------------------------------------ The threshold is automatic and adapts to the saliency map distribution to keep only the most informative regions. Figure \[fig:heatmap\_coco\] shows saliency maps before and after thresholding using Kapur’s method [@kapur1985new], which we briefly recall below. It chooses the threshold to maximize the information between the image background and foreground, *i.e.* the pixel distributions below and above the threshold. This method is especially relevant in this case as it aims at maintaining as much information as possible on the distribution above the threshold. This distribution describes the set of local maxima among which we choose our keypoints. More formally, for an image $\mathbf{I}$ of $N$ pixels with $n$ sorted gray levels and $(f_i)_{i \in n}$ the corresponding histogram, $p_i=\frac{f_i}{N}$ is the empirical probability of a pixel to hold the value $f_i$. Let $s \in n$ be a threshold level and $A,B$ the empirical background and foreground distributions. The level $s$ is chosen to maximize the information between $A$ and $B$ and the threshold value is set to $f_s$: $A = \left( \frac{p_i}{\sum_{i<s}p_i}\right)_{i<s}$ and $B = \left(\frac{p_i}{\sum_{i\geq s}p_i}\right)_{i\geq s}$. For better results, we blur the image with a Gaussian of parameters $(\mu_{thr}, \sigma_{thr})$ before computing the threshold level. Once the threshold is set, we denoise the image with a second Gaussian blur of parameters $(\mu_{noise}, \sigma_{noise})$ and run standard NMS (the same as for SuperPoint) where we iteratively select decreasing global maxima while ensuring that their nearest neighbor distance is higher than the window $w_{\textrm{NMS}} \in \mathbb{N}$. We also ignore the $b_{\textrm{NMS}} \in \mathbb{N}$ pixels around the image border. Simple descriptor ----------------- As mentioned in the introduction, the repeatability score does not discriminate among detectors anymore. So detectors are also evaluated on how ‘matchable’ their detected keypoints are, with the matching score. To do so, the ELF detector is completed with a simple descriptor inspired by SuperPoint’s descriptor.
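Before moving on to the descriptor, the thresholding and NMS steps just described can be sketched as follows. This is a simplified reconstruction rather than the released code: the two Gaussian blurs $(\mu_{thr}, \sigma_{thr})$ and $(\mu_{noise}, \sigma_{noise})$ are collapsed into a single one, and the NMS suppresses a square window of half-size $w_{\textrm{NMS}}$ around each selected maximum.

```python
import cv2
import numpy as np

def kapur_threshold(saliency, n_bins=256):
    """Kapur's method: pick the level maximizing the sum of the entropies
    of the below- and above-threshold distributions A and B."""
    hist, edges = np.histogram(saliency.ravel(), bins=n_bins)
    p = hist.astype(np.float64) / max(hist.sum(), 1)
    best_s, best_h = 1, -np.inf
    for s in range(1, n_bins):
        pa, pb = p[:s].sum(), p[s:].sum()
        if pa <= 0.0 or pb <= 0.0:
            continue
        a, b = p[:s] / pa, p[s:] / pb
        h = -np.sum(a[a > 0] * np.log(a[a > 0])) - np.sum(b[b > 0] * np.log(b[b > 0]))
        if h > best_h:
            best_h, best_s = h, s
    return edges[best_s]

def detect_keypoints(saliency, n_kp=500, w_nms=10, b_nms=10, blur=(5, 5)):
    """Threshold the saliency map, then greedily select decreasing maxima
    while suppressing a square neighbourhood around each detection."""
    s = cv2.GaussianBlur(saliency.astype(np.float32), blur, 0)
    s[s < kapur_threshold(s)] = 0.0
    s[:b_nms, :] = 0.0          # ignore a border of b_nms pixels
    s[-b_nms:, :] = 0.0
    s[:, :b_nms] = 0.0
    s[:, -b_nms:] = 0.0
    keypoints = []
    while len(keypoints) < n_kp and s.max() > 0.0:
        y, x = np.unravel_index(np.argmax(s), s.shape)
        keypoints.append((x, y))
        s[max(0, y - w_nms):y + w_nms + 1, max(0, x - w_nms):x + w_nms + 1] = 0.0
    return keypoints
```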
The use of this simple descriptor over existing competitive ones avoids unfairly boosting ELF’s performance. Inspired by SuperPoint, we interpolate a CNN feature map on the detected keypoints. Despite its simplicity, experiments show that this descriptor completes ELF into a competitive feature detection/description method. The feature map used for description may be different from the one used for detection. High-level feature maps have a wider receptive field and hence take more context into account for the description of a pixel location. This leads to more informative descriptors, which motivates us to favor higher-level maps. However, we are also constrained by the loss of resolution previously described: if the feature map level is too high, the interpolation of the descriptors generates vectors that are too similar to each other. For example, the VGG $pool_4$ layer produces more discriminative descriptors than $pool_5$ even though $pool_5$ embeds information more relevant for classification. Empirically, we observe that there exists a layer level $l'$ above which the description performance stops increasing and eventually decreases. This is measured through the matching score metric introduced in [@mikolajczyk2005comparison]. The final choice of the feature map is done by testing some layers $l'>l$ and selecting the lowest feature map before the descriptor performance stagnates. The compared detectors are evaluated with both their original descriptor and this simple one. We detail the motivation behind this choice: detectors may be biased to sample keypoints that their respective descriptor can describe ‘well’ [@yi2016lift]. So it is fair to compute the matching score with the original detector/descriptor pairs. However, a detector can sample ‘useless points’ (e.g. sky pixels for 3D reconstructions) that its descriptor can characterise ‘well’. In this case, the descriptor ‘hides’ the detector’s flaws. This motivates the integration of a common independent descriptor with all detectors to evaluate them. Both approaches are run since each is as fair as the other. Experiments =========== This section describes the evaluation metrics and datasets as well as the method’s tuning. Our method is compared to detectors with available public code: the fully hand-crafted SIFT [@lowe2004distinctive], SURF [@bay2006surf], ORB [@rublee2011orb], KAZE [@alcantarilla2012kaze], the learning-based LIFT [@yi2016lift], SuperPoint [@detone18superpoint], LF-Net [@ono2018lf], and the individual detectors TILDE [@verdie2015tilde] and MSER [@matas2004robust]. Metrics ------- We follow the standard validation guidelines [@mikolajczyk2005comparison] that evaluate the detection performance with *repeatability (rep)*. It measures the percentage of keypoints common to both images. We also compute the *matching score (ms)* as an additional *detector* metric. It captures the percentage of keypoint pairs that are nearest neighbours in both image space and descriptor space, i.e. the ratio of keypoints correctly matched. For completeness, the mathematical definitions of the metrics are provided in the Appendix and their implementation in the soon-to-be released code. A way to reach perfect *rep* is to sample all the pixels or to sample them with a spatial frequency higher than the distance threshold $\epsilon_{kp}$ of the metric. One way to prevent the first flaw is to limit the number of keypoints but it does not counter the second.
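For reference, both metrics can be computed with the greedy bipartite matching formalised in the Appendix. The sketch below assumes keypoints already warped into the second image, the 5-pixel threshold $\epsilon_{kp}$ and no descriptor-distance threshold ($\epsilon_d=\infty$); it is only an illustration of the protocol, not the released evaluation code.

```python
import numpy as np

def greedy_match(dists):
    """Greedy bipartite matching: repeatedly take the globally smallest
    distance among still-unmatched rows and columns."""
    d = dists.astype(np.float64)
    matches = []
    while np.isfinite(d).any():
        i, j = np.unravel_index(np.argmin(d), d.shape)
        matches.append((i, j, dists[i, j]))
        d[i, :] = np.inf
        d[:, j] = np.inf
    return matches

def rep_and_ms(kp1_warped, kp2, desc1, desc2, eps_kp=5.0):
    """Repeatability and matching score for one image pair."""
    geo = np.linalg.norm(kp1_warped[:, None, :] - kp2[None, :, :], axis=-1)
    m_geo = {(i, j) for i, j, dist in greedy_match(geo) if dist < eps_kp}
    des = np.linalg.norm(desc1[:, None, :] - desc2[None, :, :], axis=-1)
    m_des = {(i, j) for i, j, _ in greedy_match(des)}   # eps_d = infinity
    denom = min(len(kp1_warped), len(kp2))
    return len(m_geo) / denom, len(m_geo & m_des) / denom
```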
Since detectors are always used together with descriptors, another way to think the detector evaluation is: *’a good keypoint is one that can be discriminatively described and matched’*. One could think that such a metric can be corrupted by the descriptor. But we ensure that a detector flaw cannot be hidden by a very performing descriptor with two guidelines. One experiment must evaluate all detector with one fixed descriptor (the simple one defined in 3.4). Second, *ms* can never be higher than *rep* so a detector with a poor *rep* leads to a poor *ms*. Here the number of detected keypoints is limited to 500 for all methods. As done in [@detone18superpoint; @ono2018lf], we replace the overlap score in [@mikolajczyk2005comparison] to compute correspondences with the 5-pixel distance threshold. Following [@yi2016lift], we also modify the matching score definition of [@mikolajczyk2005comparison] to run a greedy bipartite-graph matching on all descriptors and not just the descriptor pairs for which the distance is below an arbitrary threshold. We do so to be able to compare all state-of-the-art methods even when their descriptor dimension and range vary significantly. (More details in Appendix.) Datasets -------- All images are resized to the 480$\times$640 pixels and the image pair transformations are rectified accordingly. ![Left-Right: HPatches: planar viewpoint. Webcam: light. HPatches: rotation. HPatches: scale. Strecha: 3D viewpoint.[]{data-label="fig:datasets"}](fig13.png){width="\linewidth"} **General performances.** The HPatches dataset [@balntas2017hpatches] gathers a subset of standard evaluation images such as DTU and OxfordAffine [@aanaes2012interesting; @mikolajczyk2005performance]: it provides a total of 696 images, 6 images for 116 scenes and the corresponding homographies between the images of a same scene. For 57 of these scenes, the main changes are photogrammetric and the remaining 59 show significant geometric deformations due to viewpoint changes on planar scenes. **Illumination Robustness.** The Webcam dataset [@verdie2015tilde] gathers static outdoor scenes with drastic natural light changes contrary to HPatches which mostly holds artificial light changes in indoor scenes. **Rotation and Scale Robustness.** We derive two datasets from HPatches. For each of the 116 scenes, we keep the first image and rotate it with angles from $0^{\circ}$ to $210^{\circ}$ with an interval of $40^{\circ}$. Four zoomed-in version of the image are generated with scales $[1.25, 1.5, 1.75, 2]$. We release these two datasets together with their ground truth homographies for future comparisons. **3D Viewpoint Robustness.** We use three Strecha scenes [@strecha2008benchmarking] with increasing viewpoint changes: *Fountain, Castle entry, Herzjesu-P8*. The viewpoint changes proposed by HPatches are limited to planar scenes which does not reflect the complexity of 3D structures. Since the ground-truth depths are not available anymore, we use COLMAP [@schonberger2016structure] 3D reconstruction to obtain ground-truth scaleless depth. We release the obtained depth maps and camera poses together with the evaluation code. ELF robustness is additionally tested in the CVPR19 Image Matching Challenge [@cvpr19challenge] (see results sections). Baselines --------- We describe the rationale behind the evaluation. The tests run on a QuadroM2200 with Tensorflow 1.4, Cuda8, Cudnn6 and Opencv3.4. 
We use the OpenCV implementation of SIFT, SURF, ORB, KAZE, MSER with the default parameters and the author’s code for TILDE, LIFT, SuperPoint, LF-Net with the provided models and parameters. When comparing detectors in the feature matching pipeline, we measure their matching score with both their original descriptor and ELF simple descriptor. For MSER and TILDE, we use the VGG simple descriptor. **Architecture influence.** ELF is tested on five networks: three classification ones trained on ImageNet (AlexNet, VGG, Xception [@krizhevsky2012imagenet; @simonyan2014very; @chollet17xception]) as well as the trained SuperPoint’s and LF-Net’s descriptor ones. We call each variant with the network’s names prefixed with ELF as in saliency. The paper compares the influence of i) architecture for a fixed task (ELF-AlexNet [@krizhevsky2012imagenet] *vs.* ELF-VGG [@simonyan2014very] *v.s.* ELF-Xception [@chollet17xception]), ii) the task (ELF-VGG *vs.* ELF-SuperPoint (SP) descriptor), iii) the training dataset (ELF-LFNet on phototourism *vs.* ELF-SP on MS-COCO). This study is being refined with more independent comparisons of tasks, datasets and architectures soon available in a journal extension. We use the author’s code and pre-trained models which we convert to Tensorflow [@abadi2016tensorflow] except for LF-Net. We search the blurring parameters $(\mu_{thr}, \sigma_{thr})$, $(\mu_{noise}, \sigma_{noise})$ in the range $ [\![3,21]\!]^2$ and the NMS parameters $(w_{NMS}, b_{NMS})$ in $[\![4,13]\!]^2$. **Individual components comparison.** Individual detectors are compared with the matchability of their detection and the description of the simple VGG-pool3 descriptor. This way, the *m.s.* only depends on the detection performance since the description is fixed for all detectors. The comparison between ELF and recent deep methods raises the question of whether triplet-like losses are relevant to train CNN descriptors. Indeed, these losses constrain the CNN features directly so that matching keypoints are near each other in descriptor space. Simpler loss, such as cross-entropy for classification, only the constrain the CNN output on the task while leaving the representation up to the CNN. ELF-VGG detector is also integrated with existing descriptors. This evaluates how useful the CNN self-learned feature localisation compares with the hand-crafted and the learned ones. **Gradient Baseline.** Visually, the feature gradient map is reminiscent of the image gradients computed with the Sobel or Laplacian operators. We run two variants of our pipeline where we replace the feature gradient with them. This aims at showing whether CNN feature gradients embed more information than image intensity gradients. Results ======= Experiments show that ELF compares with the state-of-the-art on HPatches and demonstrates similar robustness properties with recent learned methods. It generates saliency maps visually akin to a Laplacian on very structured images (HPatches) but proves to be more robust on outdoor scenes with natural conditions (Webcam). When integrated with existing feature descriptors, ELF boosts the matching score. Even integrating ELF simple descriptor improves it with the exception of SuperPoint for which results are equivalent. This sheds new light on the representations learnt by CNNs and suggests that deep description methods may underexploit the information embedded in their trained networks. Another suggestion may be that the current metrics are not relevant anymore for deep learning methods. 
Indeed, all can detect repeatable keypoints with more or less the same performances. Even though the matchability of the points (*m.s*) is a bit more discriminative, neither express how ‘useful’ the *kp* are for the end-goal task. One way to do so is to evaluate an end-goal task (*e.g.* Structure-from-Motion). However, for the evaluation to be rigorous all the other steps should be fixed for all papers. Recently, the Image Matching CVPR19 workshop proposed such an evaluation but is not fully automatic yet. These results also challenge whether current descriptor-training loss are a strong enough signal to constrain CNN features better than a simple cross-entropy. The tabular version of the following results is provided in Appendix. The graph results are better seen with color on a computer screen. Unless mentioned otherwise, we compute repeatability for each detector, and the matching score of detectors with their respective descriptors, when they have one. We use ELF-VGG-$pool_4$ descriptor for TILDE, MSER, ELF-VGG, ELF-SuperPoint, and ELF-LFNet. We use AlexNet and Xception feature maps to build their respective simple descriptors. The meta-parameters for each variants are provided in Appendix. **General performances.** Figure \[fig:hpatch\_gle\_perf\] (top) shows that the *rep* variance is low across detectors whereas *ms* is more discriminative, hence the validation method (Section 4.1). On HPatches, SuperPoint (SP) reaches the best *rep*-*ms* \[68.6, 57.1\] closely followed by ELF (e.g. ELF-VGG: \[63.8, 51.8\]) and TILDE \[66.0, 46.7\]. In general, we observe that learning-based methods all outperform hand-crafted ones. Still, LF-Net and LIFT curiously underperform on HPatches: one reason may be that the data they are trained on differs too much from this one. LIFT is trained on outdoor images only and LF-Net on either indoor or outdoor datasets, whereas HPatches is made of a mix of them. We compute metrics for both LF-Net models and report the highest one (indoor). Even though LF-Net and LIFT fall behind the top learned methods, they still outperform hand-crafted ones which suggests that their framework learn feature specific information that hand-crafted methods can not capture. This supports the recent direction towards trained detectors and descriptors. **Light Robustness** Again, *ms* is a better discriminant on Webcam than *rep* (Figure \[fig:hpatch\_gle\_perf\] bottom). ELF-VGG reaches top *rep*-*ms* \[53.2, 43.7\] closely followed by TILDE \[52.5, 34.7\] which was the state-of-the-art detector. Overall, there is a performance degradation ($\sim$20%) from HPatches to Webcam. HPatches holds images with standard features such as corners that state-of-the-art methods are made to recognise either by definition or by supervision. There are less such features in the Webcam dataset because of the natural lighting that blurs them. Also there are strong intensity variations that these models do not handle well. One reason may be that the learning-based methods never saw such lighting variations in their training set. But this assumption is rejected as we observe that even SuperPoint, which is trained on Coco images, outperforms LIFT and LF-Net, which are trained on outdoor images. Another justification can be that what matters the most is the pixel distribution the network is trained on, rather than the image content. The top methods are classifier-based ELF and SuperPoint: the first ones are trained on the huge Imagenet dataset and benefit from heavy data augmentation. 
SuperPoint also employs a considerable data strategy to train their network. Thus these networks may cover a much wider pixel distribution which would explain their robustness to pixel distribution changes such as light modifications. **Architecture influence** ELF is tested on three classification networks as well as the descriptor networks of SuperPoint and LF-Net (Figure \[fig:hpatch\_gle\_perf\], bars under ‘ELF’). For a fixed training task (classification) on a fixed dataset (ImageNet), VGG, AlexNet and Xception are compared. As could be expected, the network architecture has a critical impact on the detection and ELF-VGG outperforms the other variants. The *rep* gap can be explained by the fact that AlexNet is made of wider convolutions than VGG, which induces a higher loss of resolution when computing the gradient. As for *ms*, the higher representation space of VGG may help building more informative features which are a stronger signal to backpropagate. This could also justify why ELF-VGG outperforms ELF-Xception that has less parameters. Another explanation is that ELF-Xception’s gradient maps seem smoother. Salient locations are then less emphasized which makes the keypoint detection harder. One could hint at the depth-wise convolution to explain this visual aspect but we could not find an experimental way to verify it. Surprisingly, ELF-LFNet outperforms the original LF-Net on both HPatches and Webcam and ELF-SuperPoint variant reaches similar results as the original. ![HPatches scale. Left-Right: rep, ms.[]{data-label="fig:robust_scale"}](fig7_scale.png){width="\linewidth"} **Scale Robustness.** ELF-VGG is compared with state-of-the art detectors and their respective descriptors (Figure \[fig:robust\_scale\]). Repeatability is mostly stable for all methods: SIFT and SuperPoint are the most invariant whereas ELF follows the same variations as LIFT and LF-Net. Once again, *ms* better assesses the detectors performance: SuperPoint is the most robust to scale changes, followed by LIFT and SIFT. ELF and LF-Net lose 50% of their matching score with the increasing scale. It is surprising to observe that LIFT is more scale-robust than LF-Net when the latter’s global performance is higher. A reasonable explanation is that LIFT detects keypoints at 21 scales of the same image whereas LF-Net only runs its detector CNN on 5 scales. Nonetheless, ELF outperforms LF-Net without manual multi-scale processing. ![HPatches rotation. Left-Right: rep, ms.[]{data-label="fig:robust_rotation"}](fig7_angle.png){width="\linewidth"} **Rotation Robustness.** Even though *rep* shows little variations (Figure \[fig:robust\_rotation\]), all learned methods’ *ms* crash while only SIFT survives the rotation changes. This can be explained by the explicit rotation estimation step of SIFT. However LIFT and LF-Net also run such a computation. This suggests that either SIFT’s hand-crafted orientation estimation is more accurate or that HOG are more rotation invariant than learned features. LF-Net still performs better than LIFT: this may be because it learns the keypoint orientation on the keypoint features representation rather than the keypoint pixels as done in LIFT. Not surprisingly, ELF simple descriptor is not rotation invariant as the convolutions that make the CNN are not. This also explains why SuperPoint also crashes in a similar manner. These results suggest that the orientation learning step in LIFT and LF-Net is needed but its robustness could be improved. 
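As an aside, the rotated pairs used in this experiment can be generated along the following lines. This is an assumed reconstruction of the generation step described in the Datasets subsection, not the released script: the ground-truth homography is taken to be the rotation about the image centre.

```python
import cv2
import numpy as np

def make_rotation_pair(img, angle_deg):
    """Rotate `img` about its centre and return the rotated image together
    with the 3x3 homography mapping original to rotated coordinates."""
    h, w = img.shape[:2]
    A = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle_deg, 1.0)   # 2x3 affine
    H = np.vstack([A, [0.0, 0.0, 1.0]])
    rotated = cv2.warpAffine(img, A, (w, h))
    return rotated, H
```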
![Robustness analysis: 3D viewpoint.[]{data-label="fig:robust_strecha"}](fig7_strecha.png){width="\linewidth"} **3D Viewpoint Robustness.** While SIFT shows a clear advantage of pure-rotation robustness, it displays similar degradation as other methods on realistic rotation-and-translation on 3D structures. Figure \[fig:robust\_strecha\] shows that all methods degrade uniformly. One could assume that this small data sample is not representative enough to run such robustness analysis. However, we think that these results rather suggest that all methods have the same robustness to 3D viewpoint changes. Even though previous analyses allows to rank the different feature matching pipelines, each has advantages over others on certain situations: ELF or SuperPoint on general homography matches, or SIFT on rotation robustness. This is why this paper only aims at showing ELF reaches the same performances and shares similar properties to existing methods as there is no generic ranking criteria. The recent evaluation run by the CVPR19 Image Matching Challenge [@cvpr19challenge] supports the previous conclusions. ![Left-Middle-Right bars: original method, integration of ELF detection, integration of ELF description.[]{data-label="fig:ind_component"}](fig11.png){width="\linewidth"} **Individual components performance.** First, all methods’ descriptor are replaced with the simple ELF-VGG-$pool_3$ one. We then compute their new *ms* and compare it to ELF-VGG on HPatches and Webcam (Figure \[fig:ind\_component\], stripes). The description is based on $pool_3$ instead of $pool_4$ here for it produces better results for the other methods while preserving ours. ELF reaches higher *ms* \[51.3\] for all methods except for SuperPoint \[53.7\] for which it is comparable. This shows that ELF is as relevant, if not more, than previous hand-crafted or learned detectors. This naturally leads to the question: *’What kind of keypoints does ELF detect ?’* There is currently no answer to this question as it is complex to explicitly characterize properties of the pixel areas around keypoints. Hence the open question *’What makes a good keypoint ?’* mentioned at the beginning of the paper. Still, we observe that ELF activates mostly on high intensity gradient areas although not all of them. One explanation is that as the CNN is trained on the vision task, it learns to ignore image regions useless for the task. This results in killing the gradient signals in areas that may be unsuited for matching. Another surprising observation regards CNN descriptors: SuperPoint (SP) keypoints are described with the SP descriptor in one hand and the simple ELF-VGG one in the other hand. Comparing the two resulting matching scores is one way to compare the SP and ELF descriptors. Results show that both approaches lead to similar *ms*. This result is surprising because SP specifically trains a description CNN so that its feature map is suitable for keypoint description [@choy2016universal]. In VGG training, there is no explicit constraints on the features from the cross-entropy loss. Still, both feature maps reach similar numerical description performance. This raises the question of whether contrastive-like losses, which input are CNN features, can better constrain the CNN representation than simpler losses, such as cross-entropy, which inputs are classification logits. This also shows that there is more to CNNs than only the task they are trained on: they embed information that can prove useful for unrelated tasks. 
Although the simple descriptor was defined for evaluation purposes, these results demonstrate that it can be used as a description baseline for feature extraction. The integration of ELF detection with other methods’ descriptor (Figure \[fig:ind\_component\], circle) boosts the *ms*. [@yi2016lift] previously suggested that there may be a correlation between the detector and the descriptor within a same method, i.e. the LIFT descriptor is trained to describe only the keypoints output by its detector. However, these results show that ELF can easily be integrated into existing pipelines and even boost their performances. **Gradient Baseline** The saliency map used in ELF is replaced with simple Sobel or Laplacian gradient maps. The rest of the detection pipeline stays the same and we compute their performance (Figure \[fig:gradient\_perf\] Left). They are completed with simple ELF descriptors from the VGG, AlexNet and Xception networks. These new hybrids are then compared to their respective ELF variant (Right). Results show that these simpler gradients can detect systematic keypoints with comparable *rep* on very structured images such as HPatches. However, the ELF detector better overcomes light changes (Webcam). On HPatches, the Laplacian-variant reaches similar *ms* as ELF-VGG (55 *vs* 56) and outperforms ELF-AlexNet and ELF-Xception. These scores can be explained with the images structure: for heavy textured images, high intensity gradient locations are relevant enough keypoints. However, on Webcam, all ELF detectors outperform Laplacian and Sobel with a factor of 100%. This shows that ELF is more robust than Laplacian and Sobel operators. Also, feature gradient is a sparse signal which is better suited for local maxima detection than the much smoother Laplacian operator (Figure \[fig:sobel\_visu\]). ![Feature gradient (right) provides a sparser signal than Laplacian (middle) which is more selective of salient areas.[]{data-label="fig:sobel_visu"}](fig5_sobel_similar_ter.png){height="3cm"} **Qualitative results** Green lines show putative matches based only on nearest neighbour matching of descriptors. More qualitative results are available in the video [^3]. ![Green lines show putative matches of the simple descriptor before RANSAC-based homography estimation.[]{data-label="fig:matching_pic"}](fig6_matching_ter.png){width="\linewidth"} **CVPR19 Image Matching Challenge [@cvpr19challenge]** This challenge evaluates detection/description methods on two standard tasks: 1) wide stereo matching and 2) structure from motion from small image sets. The *matching score* evaluates the first task, and the camera pose estimation is used for both tasks. Both applications are evaluated on the photo-tourism image collections of popular landmarks [@thomee59yfcc100m; @heinly2015reconstructing]. More details on the metrics definition are available on the challenge website [@cvpr19challenge]. *Wide stereo matching:* Task 1 matches image pairs across wide baselines. It is evaluated with the keypoints *ms* and the relative camera pose estimation between two images. The evaluators run COLMAP to reconstruct dense ‘ground-truth’ depth which they use to translate keypoints from one image to another and compute the matching score. They use the RANSAC inliers to estimate the camera pose and measure performance with the “angular difference between the estimated and ground-truth vectors for both rotation and translation. 
To reduce this to one value, they use a variable threshold to determine each pose as correct or not, then compute the area under the curve up to the angular threshold. This value is thus the mean average precision up to x, or mAPx. They consider 5, 10, 15, 20, and 25 degrees" [@cvpr19challenge]. Submissions can contain up to 8000 keypoints and we submitted entries to the sparse category i.e. methods with up to 512 keypoints. ![*Wide stereo matching.* Left: matching score (%) of sparse methods (up to 512 keypoints) on photo-tourism. Right: Evolution of mAP of camera pose for increasing tolerance threshold (degrees).[]{data-label="fig:cvpr19_task1"}](fig14.png){width="\linewidth"} Figure \[fig:cvpr19\_task1\] (left) shows the *ms* (%) of the submitted sparse methods. It compares ELF-VGG detection with DELF [@noh2017largescale] and SuperPoint, where ELF is completed with either the simple descriptor from pool3 or pool4, and SIFT. The variant are dubbed respectively ELF-256, ELF-512 and ELF-SIFT. This allows us to sketch a simple comparison of descriptor performances between the simple descriptor and standard SIFT. As previously observed on HPatches and Webcam, ELF and SuperPoint reach similar scores on Photo-Tourism. ELF-performance slightly increases from 25% to 26.4% when switching descriptors from VGG-pool3 to VGG-pool4. One explanation is that the feature space size is doubled from the first to the second. This would allow the pool4 descriptors to be more discriminative. However, the 1.4% gain may not be worth the additional memory use. Overall, the results show that ELF can compare with the SoA on this additional dataset that exhibits more illumination and viewpoint changes than HPatches and Webcam. This observation is reinforced by the camera pose evaluation (Figure \[fig:cvpr19\_task1\] right). SuperPoint shows as slight advantage over others that increases from 1% to 5% across the error tolerance threshold whereas ELF-256 exhibits a minor under-performance. Still, these results show ELF compares with SoA performance even though it is not trained explicitly for detection/description. ![*SfM from small subsets*. Evolution of mAP of camera pose for increasing tolerance threshold.[]{data-label="fig:cvpr19_task2"}](fig15.png){width="0.7\linewidth"} *Structure-from-Motion from small subsets.* Task 2 “proposes to to build SfM reconstructions from small (3, 5, 10, 25) subsets of images and use the poses obtained from the entire (much larger) set as ground truth" [@cvpr19challenge]. Figure \[fig:cvpr19\_task2\] shows that SuperPoint reaches performance twice as big as the next best method ELF-SIFT. This suggests that when few images are available, SuperPoint performs better than other approaches. One explanation is that even in ’sparse-mode’, *i.e.* when the number of keypoints is restricted up to 512, SuperPoint samples points more densely than the others ($\sim$383 *v.s.* $\sim$210 for the others). Thus, SuperPoint provides more keypoints to triangulate i.e. more 2D-3D correspondences to use when estimating the camera pose. This suggests that high keypoint density is a crucial characteristic of the detection method for Structure-from-Motion. In this regard, ELF still has room for improvement compared to SuperPoint. Conclusion ========== We have introduced ELF, a novel method to extract feature locations from pre-trained CNNs, with no further training. Extensive experiments show that it performs as well as state-of-the art detectors. 
It can easily be integrated into existing matching pipelines and proves to boost their matching performances. Even when completed with a simple feature-map-based descriptor, it turns into a competitive feature matching pipeline. These results shed new light on the information embedded inside trained CNNs. This work also raises questions on the descriptor training of deep-learning approaches: whether their losses actually constrain the CNN to learn better features than the ones it would learn on its own to complete a vision task. Preliminary results show that the CNN architecture, the training task and the dataset have substantial impact on the detector performances. A further analysis of these correlations is the object of a future work. ![image](fig2_saliency_bis.png){width="\linewidth"} Metrics definition ================== We explicit the repeatability and matching score definitions introduced in [@mikolajczyk2005comparison] and our adaptations using the following notations: let $(\mathbf{I}^1, \mathbf{I}^2)$, be a pair of images and $\mathcal{KP}^i = (kp_j^i)_{j<N_i}$ the set of $N_i$ keypoints in image $\mathbf{I_i}$. Both metrics are in the range $[0,1]$ but we express them as percentages for better expressibility. #### Repeatability Repeatability measures the percentage of keypoints common to both images. We first warp $\mathcal{KP}^1$ to $\mathbf{I}^2$ and note $\mathcal{KP}^{1,w}$ the result. A naive definition of repeatability is to count the number of pairs $(kp^{1,w}, kp^2) \in \mathcal{KP}^{1,w} \times \mathcal{KP}^2$ such that $\|kp^{1,w}-kp^2\|_2 < \epsilon$, with $\epsilon$ a distance threshold. As pointed by [@verdie2015tilde], this definition overestimates the detection performance for two reasons: a keypoint close to several projections can be counted several times. Moreover, with a large enough number of keypoints, even simple random sampling can achieve high repeatability as the density of the keypoints becomes high. We instead use the definition implemented in VLBench [@lenc12vlbenchmarks]: we define a weighted graph $(V,E)$ where the edges are all the possible keypoint pairs between $\mathcal{KP}^{1,w}$ and $\mathcal{KP}^2$ and the weights are the euclidean distance between keypoints. $$\label{eq: graph_dfn} \begin{split} V &= (kp^{1,w} \in \mathcal{KP}^{1,w}) \cup (kp^2 \in \mathcal{KP}^2) \\ E &= (kp^{1,w}, kp^2, \|kp^{1,w} - kp^2\|_2) \in \mathcal{KP}^{1,w} \times \mathcal{KP}^2 \end{split}$$ We run a greedy bipartite matching on the graph and count the matches with a distance less than $\epsilon_{kp}$. With $\mathcal{M}$ be the resulting set of matches: $$\label{rep_dfn} repeatability = \frac{\mathcal{M}}{\textrm{min}(|\mathcal{KP}^1|, |\mathcal{KP}^2|)}$$ We set the distance threshold $\epsilon=5$ as is done in LIFT [@yi2016lift] and LF-Net [@ono2018lf]. #### Matching score The matching score definition introduced in [@mikolajczyk2005comparison] captures the percentage of keypoint pairs that are nearest neighbours both in image space and in descriptor space, and for which these two distances are below their respective threshold $\epsilon_{kp}$ and $\epsilon_{d}$. Let $\mathcal{M}$ be defined as in the previous paragraph and $\mathcal{M}_d$ be the analog of $\mathcal{M}$ when the graph weights are descriptor distances instead of keypoint euclidean distances. We delete all the pairs with a distance above the thresholds $\epsilon$ and $\epsilon_d$ in $\mathcal{M}$ and $\mathcal{M}_d$ respectively. 
We then count the number of pairs which are both nearest neigbours in image space and descriptor space i.e. the intersection of $\mathcal{M}$ and $\mathcal{M}_d$: $$\label{MS} matching \; score = \frac{\mathcal{M} \cap \mathcal{M}_d}{\textrm{min}(|\mathcal{KP}^1|, |\mathcal{KP}^2|)}$$ One drawback of this definition is that there is no unique descriptor distance threshold $\epsilon_d$ valid for all methods. For example, the SIFT descriptor as computed by OpenCV is a $[0,255]^{128}$ vector for better computational precision, the SuperPoint descriptor is a $[0,1]^{256}$ vector and the ORB descriptor is a 32 bytes binary vector. Not only the vectors are not defined over the same normed space but their range vary significantly. To avoid introducing human bias by setting a descriptor distance threshold $\epsilon_d$ for each method, we choose to set $\epsilon_d = \infty$ and compute the matching score as in [@mikolajczyk2005comparison]. This means that we consider any descriptor match valid as long as they match corresponding keypoints even when the descriptor distance is high. Tabular results =============== ---------------- ------------------------ -------------------- ------------------------ -------------------- -- -- [@balntas2017hpatches] [@verdie2015tilde] [@balntas2017hpatches] [@verdie2015tilde] ELF-VGG 63.81 ELF-AlexNet 51.30 38.54 35.21 31.92 ELF-Xception 48.06 29.81 ELF-SuperPoint 59.7 46.29 44.32 18.11 ELF-LFNet 60.1 41.90 44.56 33.43 LF-Net 61.16 48.27 34.19 18.10 SuperPoint 46.35 32.44 LIFT 54.66 42.21 34.02 17.83 SURF 54.51 33.93 26.10 10.13 SIFT 51.19 28.25 24.58 8.30 ORB 53.44 31.56 14.76 1.28 KAZE 56.88 41.04 29.81 13.88 TILDE 52.53 46.71 34.67 MSER 47.82 52.23 21.08 6.14 ---------------- ------------------------ -------------------- ------------------------ -------------------- -- -- : Generic performances on HPatches [@balntas2017hpatches]. Robustness to light (Webcam [@verdie2015tilde]). (Fig. 5).[]{data-label="tab:whole_pipeline"} -- ----------- ----------- ----------- ----------- ----------- ----------- 34.19 **57.11** 34.02 24.58 26.10 14.76 **44.19** 53.71 **39.48** **27.03** **34.97** **20.04** 18.10 32.44 17.83 10.13 8.30 1.28 **30.71** **34.60** **26.84** **13.21** **21.43** **13.91** -- ----------- ----------- ----------- ----------- ----------- ----------- : Individual component performance (Fig. \[fig:ind\_component\]-stripes). Matching score for the integration of the VGG $pool_3$ simple-descriptor with other’s detection. Top: Original description. Bottom: Integration of simple-descriptor. HPatches: [@balntas2017hpatches]. Webcam: [@verdie2015tilde][]{data-label="tab:cross_res_des"} -- ----------- ----------- ----------- ----------- ----------- ----------- 34.19 **57.11** 34.02 24.58 26.10 14.76 **39.16** 54.44 **42.48** **50.63** **30.91** **36.96** 18.10 32.44 17.83 10.13 8.30 1.28 **26.70** **39.55** **30.82** **36.83** **19.14** **6.60** -- ----------- ----------- ----------- ----------- ----------- ----------- : Individual component performance (Fig. \[fig:ind\_component\]-circle). Matching score for the integration of ELF-VGG (on $pool_2$) with other’s descriptor. Top: Original detection. Bottom: Integration of ELF. HPatches: [@balntas2017hpatches]. 
Webcam: [@verdie2015tilde][]{data-label="tab:cross_res_det"} ---------------- ------------------------ -------------------- ------------------------ -------------------- -- -- [@balntas2017hpatches] [@verdie2015tilde] [@balntas2017hpatches] [@verdie2015tilde] Sobel-VGG 56.99 33.74 42.11 20.99 Lapl.-VGG **65.45** 33.74 **55.25** 22.79 VGG 63.81 **53.23** 51.84 **43.73** Sobel-AlexNet 56.44 33.74 30.57 15.42 Lapl.-AlexNet **65.93** 33.74 **40.92** 15.42 AlexNet 51.30 **38.54** 35.21 **31.92** Sobel-Xception 56.44 33.74 34.14 16.86 Lapl.-Xception **65.93** 33.74 **42.52** 16.86 Xception 48.06 **49.84** 29.81 **35.48** ---------------- ------------------------ -------------------- ------------------------ -------------------- -- -- : Gradient baseline on HPatches [@balntas2017hpatches] and Webcam [@verdie2015tilde] (Fig. \[fig:gradient\_perf\] ).[]{data-label="tab:cmp_sobel"} ELF Meta Parameters =================== This section specifies the meta parameters values for the ELF variants. For all methods, $(w_{NMS}, b_{NMS})=(10,10)$. - Denoise: $(\mu_{noise}, \sigma_{noise})$. - Threshold: $(\mu_{thr}, \sigma_{thr})$. - $F^l$: the feature map which gradient is used for detection. - simple-des: the feature map used for simple-description. Unless mentioned otherwise, the feature map is taken from the same network as the detection feature map $F^l$. Nets Denoise Threshold $F^l$ simple-desc ------------ --------- ----------- -------------- ------------- -- VGG (5,5) (5,4) pool2 pool4 Alexnet (5,5) (5,4) pool1 pool2 Xception (9,3) (5,4) block2-conv1 block4-pool SuperPoint (7,2) (17,6) conv1a VGG-pool3 LF-Net (5,5) (5,4) block2-BN VGG-pool3 : Generic performances on HPatches (Fig. \[fig:hpatch\_gle\_perf\]). (BN: Batch Norm)[]{data-label="tab:meta_params"} Nets Denoise Threshold $F^l$ simple-desc ------------ --------- ----------- -------------- ------------- -- VGG (5,5) (5,4) pool2 pool4 Alexnet (5,5) (5,4) pool1 pool2 Xception (9,9) (5,4) block2-conv1 block4-pool SuperPoint (7,2) (17,6) conv1a VGG-pool3 LF-Net (5,5) (5,4) block2-conv VGG-pool3 : Robustness to light on Webcam (Fig. \[fig:hpatch\_gle\_perf\]).[]{data-label="tab:meta_params"} Nets Denoise Threshold $F^l$ simple-desc ------ --------- ----------- ------- ------------- -- VGG (5,2) (17,6) pool2 pool4 : Robustness to scale on HPatches (Fig. \[fig:robust\_scale\]).[]{data-label="tab:meta_params"} Nets Denoise Threshold $F^l$ simple-desc ------ --------- ----------- ------- ------------- -- VGG (5,2) (17,6) pool2 pool4 : Robustness to rotation on HPatches (Fig. \[fig:robust\_rotation\]).[]{data-label="tab:meta_params"} Nets Denoise Threshold $F^l$ simple-desc ------ --------- ----------- ------- ------------- -- VGG (5,2) (17,6) pool2 pool4 : Robustness to 3D viewpoint on Strecha (Fig. \[fig:robust\_strecha\]).[]{data-label="tab:meta_params"} Nets Denoise Threshold $F^l$ simple-desc ------ --------- ----------- ------- ------------- -- VGG (5,5) (5,5) pool2 pool3 : Individual component analysis (Fig. \[fig:ind\_component\])[]{data-label="tab:meta_params"} Nets Denoise Threshold $F^l$ simple-desc ----------- --------- ----------- ------- ------------- -- VGG (5,5) (5,4) pool2 pool4 Sobel (9,9) (5,4) - pool4 Laplacian (9,9) (5,4) - pool4 : Gradient baseline on HPatches and Webcam (Fig. \[fig:gradient\_perf\]).[]{data-label="tab:meta_params"} [^1]: ELF code:<https://github.com/ELF-det/elf> [^2]: Rotation and scale dataset: <https://bit.ly/31RAh1S> [^3]: <https://youtu.be/oxbG5162yDs>
{ "pile_set_name": "ArXiv" }
[The Etest--an alternative to the NCCLS standard for susceptibility testing of yeasts?]. The Etest and the NCCLS method do not differ much with respect to their reproducibility. Only isolated observations exist on the clinical correlation of the two tests. The correlation between the two tests is known to depend on the yeast species and antifungals used. Due to its easy and simple handling, the Etest is attractive for routine laboratories. However, the Etest has to be evaluated further before it can be generally recommended. The NCCLS method has also not been validated so far. Different test methods should be compared thoroughly with the above-mentioned standard.
{ "pile_set_name": "PubMed Abstracts" }
// Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

// +build amd64,linux
// +build !gccgo

package unix

import "syscall"

//go:noescape
func gettimeofday(tv *Timeval) (err syscall.Errno)
{ "pile_set_name": "Github" }
Federation Space Federation Space is a Star Trek-based space naval combat board game published by Task Force Games in 1981. Design and gameplay Federation Space was designed by Stephen Wilcox as a strategic companion to the more tactical Star Fleet Battles system. The game components are: 20" x 24" hex grid map 432 back-printed die-cut cardboard counters fleet organization charts combat results table 16-page rulebook The map covers the large extent of space and many of the space-faring races described in the Star Trek TV series. This included the Federation, Klingons, Romulans, Gorns, Tholians and Hydrans. The Kzinti, a race mentioned in the Ringworld novels of Larry Niven, is also featured. This is a game of naval fleet action involving dozens (or more) ships. Nearly all ships move at the same speed (Warp Factor 6). With so many ships involved, combat relies on a simple two-step reduction system to determine damage. Each ship only has two statuses: normal (counter face up); and damaged (counter face-down). Taking any damage results in a ship receiving the "Damaged" status. Taking any subsequent damage destroys the ship. Battles continue until one side is either destroyed or withdraws. Since this is a strategic-level game, some scenarios describe the capture of base stations, starbases or even planets. There is also a campaign game for three or more players (each playing a different race) in which the players use diplomatic alliances and multiple navies to achieve their strategic goals. Reception In the March 1982 edition of The Space Gamer (Issue No. 49), William A. Barton recommended the game, saying, "Federation Space succeeds in its purpose to present a relatively simple, playable Star Trek game which can serve as a strategic module for Star Fleet Battles. Recommended to Trek gamers everywhere." In the August 1983 edition of Dragon (Issue 76), Tony Watson liked a number of things, including its simplicity of rules and combat, the fleet organization charts, the simple step-reduction damage system, and the colourful components. Watson criticized the size of the map, which although large compared to other combat games, was too small and restrictive for entire fleets. Watson also thought the game did not reward clever fleet maneuvers, relying instead on massed fleets simply engaging head on. He concluded, "Federation Space has much to recommend itself. Both those who play Starfleet Battles and those interested in a fast-moving, action-oriented strategic space game should find this title to their liking." References Category:Board games based on Star Trek Category:Board games introduced in 1981 Category:Star Fleet Battles
{ "pile_set_name": "Wikipedia (en)" }
Putative risk factors for postoperative pneumonia which affects poor prognosis in patients with gastric cancer. Several recent studies identified that postoperative infectious complications contribute to recurrence and poor outcome in patients with gastric cancer. This study was designed to investigate the prognostic impact of postoperative pneumonia, and to identify the putative risk factors for its occurrence. We retrospectively analyzed 1,415 consecutive patients who underwent curative gastrectomy for gastric cancer between 1997 and 2013. A total of 31 (2.2 %) patients developed postoperative pneumonia (Clavien-Dindo classification ≥II). Patients with postoperative pneumonia showed a significantly poorer prognosis than patients without (P < 0.001). Concerning the occurrence of postoperative pneumonia, univariate and multivariate analyses identified older age (≥65 years; P = 0.010; odds ratio [OR] 3.59), lower nutritional status (albumin <3.0; P = 0.029; OR 4.51), advanced stage (pStage ≥II; P = 0.045; OR 2.35), concurrent hypertension (P = 0.042; OR 2.21) and total gastrectomy (P = 0.026; OR 2.42) as independent risk factors. Postoperative pneumonia was shown to be associated with long-term poor outcome in patients with gastric cancer. Care should be taken for patients with clinical factors such as older age, lower nutritional status, advanced stage, concurrent hypertension, and total gastrectomy.
{ "pile_set_name": "PubMed Abstracts" }
Q: jsf dynamic component that restores state

I am trying to display HtmlInputText dynamically in a JSF page. However, I am getting javax.faces.FacesException: Cannot add the same component twice: j_idt10:hitDyn

During the first request to the page the input text renders well. The exception happens during postback of the page, when I enter some text in the input component and press Enter. In the .xhtml page, I have the following code:

<h:form>
    <h:outputLabel value="Welcome!"></h:outputLabel>
    <f:metadata>
        <f:event type="preRenderView" listener="#{dynamicBacking.addDynComp}" />
    </f:metadata>
    <h:panelGroup id="dynOuter"></h:panelGroup>
</h:form>

In the backing bean, I have the following code:

@ManagedBean(name="dynamicBacking")
public class DynamicBacking {

    public void addDynComp() {
        Application app = FacesContext.getCurrentInstance().getApplication();
        HtmlInputText hit = (HtmlInputText)app.createComponent(HtmlInputText.COMPONENT_TYPE);
        hit.setId("hitDyn");
        UIComponent parent = findComponent("dynOuter");
        if( parent != null ) {
            parent.getChildren().add(hit);
        }
    }

    public UIComponent findComponent(final String id) {
        FacesContext context = FacesContext.getCurrentInstance();
        UIViewRoot root = context.getViewRoot();
        final UIComponent[] found = new UIComponent[1];
        root.visitTree(new FullVisitContext(context), new VisitCallback() {
            @Override
            public VisitResult visit(VisitContext context, UIComponent component) {
                if(component.getId().equals(id)){
                    found[0] = component;
                    return VisitResult.COMPLETE;
                }
                return VisitResult.ACCEPT;
            }
        });
        return found[0];
    }
}

I guess that there is some problem with restoring the state of the dynamic component in a postback. Am I adding the dynamic component too late in the lifecycle of the JSF page? I know that in ASP.NET I could add a dynamic control during the Page.Load phase. But I can't so far figure out how to achieve the same in JSF. Please, help!

A: The exception appears because the component is added to the tree on the initial page load. When performing a postback, your listener gets called again and it tries to add another component with the same id, and this causes the exception.

A solution is to check that the request is NOT a postback before adding the component, for example in addDynComp():

if (!FacesContext.getCurrentInstance().isPostback()) {
    // only add the dynamic component on the initial request
    parent.getChildren().add(hit);
}
{ "pile_set_name": "StackExchange" }
Q: Pandas Data Frame not Appending

I am trying to append dataframes via a for loop.

CODE

def redshift_to_pandas(sql_query, **kwargs):
    # pass a sql query and return a pandas dataframe
    cur.execute(sql_query)
    columns_list = [desc[0] for desc in cur.description]
    data = pd.DataFrame(cur.fetchall(), columns=columns_list)
    return data

Input -

all_schema = [('backup')]

Loop -

try:
    if len(all_schema) == 0:
        raise inputError("The Input has no schema selected. EXITING")
    else:
        modified_schemadf = pd.DataFrame(columns=['columns_name','status'])
        for i in range(len(all_schema)):
            #print (redshift_to_pandas("select '"+all_schema[i]+"' as columns_name,(select exists ( select distinct table_schema from information_schema.tables where table_schema like '%"+all_schema[i]+"')) as status",mechanism='append'))
            modified_schemadf.append(redshift_to_pandas("select '"+all_schema[i]+"' as columns_name,(select exists ( select distinct table_schema from information_schema.tables where table_schema like '%"+all_schema[i]+"')) as status",mechanism='append'))
        print (modified_schemadf)
except inputError as e:
    print(e.message)
    logger.error("UNEXPECTED INPUT FOUND, Please check the I/P List . EXITING")

print (modified_schemadf)

I feel the issue is very obvious but I don't seem to find it. Here is the o/p -

So the first print (commented out) does return the correct result. The next step, i.e. appending the result to the declared dataframe (name - modified_schemadf), is the problem area. When I print its value, it is still an empty dataframe. For some reason the appending isn't happening.

When the code enters else, i.e. when the input is legit, an empty dataframe called modified_schemadf is created. To this empty dataframe, there should be as many appends as there are inputs.

Thanks in advance. Please don't mind the indentation, copying might have affected it.

A: Isn't the issue just that you don't assign the appended dataframe? Try changing this line

modified_schemadf.append(redshift_to_pandas("select '"+all_schema[i]+"' as columns_name,(select exists ( select distinct table_schema from information_schema.tables where table_schema like '%"+all_schema[i]+"')) as status",mechanism='append'))

to this line

modified_schemadf = modified_schemadf.append(redshift_to_pandas("select '"+all_schema[i]+"' as columns_name,(select exists ( select distinct table_schema from information_schema.tables where table_schema like '%"+all_schema[i]+"')) as status",mechanism='append'))
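For what it's worth, the underlying gotcha is that DataFrame.append is not in-place: it returns a new DataFrame and leaves the original untouched, which is why the reassignment above fixes the loop. Below is a minimal, self-contained sketch of the same pattern; the fake_query stub is purely hypothetical and stands in for redshift_to_pandas, since the Redshift connection isn't available here.

import pandas as pd

def fake_query(name):
    # Hypothetical stand-in for redshift_to_pandas: one row per schema name.
    return pd.DataFrame({'columns_name': [name], 'status': [True]})

all_schema = ['backup', 'public']

# DataFrame.append returns a NEW frame, so the result must be reassigned:
#     modified_schemadf = modified_schemadf.append(piece)   # pandas < 2.0
# In pandas 2.0+ DataFrame.append was removed entirely, so the portable
# pattern is to collect the pieces and concatenate once at the end:
pieces = [fake_query(schema) for schema in all_schema]
modified_schemadf = pd.concat(pieces, ignore_index=True)

print(modified_schemadf)

Collecting the pieces in a list and calling pd.concat once is also generally faster than appending inside the loop, and it is the only option on newer pandas versions where append has been removed.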
{ "pile_set_name": "StackExchange" }
Interview with a quality leader: Dr. John Combes on boards and governance. Dr. Combes is senior vice president at the American Hospital Association (AHA) and president and COO of the Center for Healthcare Governance. The Interview with Dr. John Combes on Boards and Governance provides a perspective on key changes, issues, competencies, and metrics that hospital boards must address. The role of quality professionals to be effective with boards is also described.
{ "pile_set_name": "PubMed Abstracts" }
{% load static %}
<script src="{% static 'fiber/js/markitup-1.1.10/jquery.markitup.js' %}" type="text/javascript"></script>
<script src="{% static 'fiber/js/markitup-1.1.10/sets/textile_fiber/set.js' %}" type="text/javascript"></script>
<script src="{% static 'fiber/js/fiber.markitup.js' %}" type="text/javascript"></script>
{ "pile_set_name": "Github" }
2010 Vuelta España – Stage 18 – Mean Green Mark Cavendish has his third stage win of the Vuelta España, holding off a surging Haedo who nearly came over the top of Cav on the slightly uphill finish. The nervous finish caused a dramatic set-up to the final sprint with Farrar getting lost, Quickstep dropping their sprinter and Matthew Goss going deep to launch Cavendish. No change in the GC.
{ "pile_set_name": "Pile-CC" }
[Assessment of the new TNM classification for resected lung cancer]. To evaluate the revised TNM classification, we investigated the prognoses of 552 consecutive patients who had resection of non-small-cell lung cancer between April 1982 and March 1996. According to the new classification, the 5-year survival rate was 76.9% for stage IA, 57.2% for stage IB (IA versus IB, p < 0.0005), 47.7% for stage IIA, 49.8% for stage IIB, 18.6% for stage IIIA (IIB versus IIIA, p = 0.005), 16.7% for stage IIIB, and 7.9% for stage IV (IIIB versus IV, p = 0.02). Especially for patients in stage IA, there was a significant difference in survival between patients with tumors of 1.5 cm or smaller and those with tumors larger than 1.5 cm. The survival rate for T3N0M0 patients was significantly better than that for T3N1-2M0, but there was no significant difference between patients with T3N0M0 disease and those with T2N1M0 disease. Concerning the pm1 patients, the survival rate was significantly better than that of other stage IIIB patients. Our results supported the revision in dividing stage I and placing T3N0M0 into stage IIB. However, the classification remains controversial regarding the division of stage II and the designation of pm1 as T4 disease. Furthermore, subgrouping of T1N0M0 disease by tumor size, and of T3 by the invaded organ, will be necessary in the next revisions.
{ "pile_set_name": "PubMed Abstracts" }
Q: pandas - binning data and getting 2 columns

I have a very simple dataframe. There are 2 columns, day_created (int, could change to datetime) and suspended (int, could change to boolean). I can change the data if it makes it easier to work with.

   Day created  Suspended
0           12          0
1            6          1
2           24          0
3            8          0
4          100          1
5           30          0
6            1          1
7            6          0

The day_created column is the integer of the day the account was created (from a start date), starting at 1 and increasing. The suspended column is a 1 for suspension and a 0 for no suspension.

What I would like to do is bin these accounts into groups of 30 days or months, but from each bin get a total number of accounts for that month and the number of accounts suspended that were created in that month. I then plan on creating a bar graph with 2 bars for each month.

How should I go about this? I don't use pandas often. I assume I need to do some tricks with resample and count.

A: Use df.index = start_date + pd.to_timedelta(df['Day created'], unit='D') to give the DataFrame an index of Timestamps representing when the accounts were created.

Then you can use result = df.groupby(pd.TimeGrouper(freq='M')).agg(['count', 'sum']) to group the rows of the DataFrame (by months) according to the Timestamps in the index. .agg(['count', 'sum']) computes the number of accounts (the count) and the number of suspended accounts for each group.

Then result.plot(kind='bar', ax=ax) plots the bar graph:

import pandas as pd
import matplotlib.pyplot as plt

df = pd.DataFrame(
    {'Day created': [12, 6, 24, 8, 100, 30, 1, 6],
     'Suspended': [0, 1, 0, 0, 1, 0, 1, 0]})
start_date = pd.Timestamp('2016-01-01')
df.index = start_date + pd.to_timedelta(df['Day created'], unit='D')

result = df.groupby(pd.TimeGrouper(freq='M'))['Suspended'].agg(['count', 'sum'])
result = result.rename(columns={'sum':'suspended'})

fig, ax = plt.subplots()
result.plot(kind='bar', ax=ax)
locs, labels = plt.xticks()
plt.xticks(locs, result.index.strftime('%Y-%m-%d'))
fig.autofmt_xdate()
plt.show()

which yields a bar chart with a pair of bars ('count' and 'suspended') for each month.
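A small caveat for anyone running this on a newer pandas: pd.TimeGrouper was later deprecated and removed, and pd.Grouper is the drop-in replacement. The grouping line above becomes, with everything else unchanged:

# same monthly grouping, written against the current pandas API
result = df.groupby(pd.Grouper(freq='M'))['Suspended'].agg(['count', 'sum'])
result = result.rename(columns={'sum': 'suspended'})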
{ "pile_set_name": "StackExchange" }
Q: Display difference between volatile and usual variable in Java

I am trying to create an example to display the difference between volatile and usual variables, like:

package main;

public class TestVolatile extends Thread {
    public int l = 5;
    public volatile int m = -1;

    public TestVolatile(String str) {
        super(str);
    }

    public void run() {
        int i = 0;
        while ((l > 1) && (l < 10)) {
            if (m >= 0) {
                m++;
            }
            i++;
            l = 5;
            System.out.println("5=" + i + " m=" + m);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        TestVolatile tva = new TestVolatile("ThreadA");
        tva.start();
        sleep(5);
        synchronized (tva) {
            tva.m = 5;
            tva.l = 10;
        }
    }
}

So m is volatile, l is not. I suppose that exiting from the while loop depends on the value of l. Because the value of l is not volatile - m will be incremented at least 1 time after l has been assigned 5. But I have run the code 10 times and always m==5. So I suppose that I am wrong. How to fix this problem? Thank you.

Thanks for the answers, but not all runs well. I set it like this:

volatile int x = 0;
volatile int y = 0;

So now the variables have to be the same! But that is not the case.

x: 346946234 y: 346946250
x: 346946418 y: 346946422
x: 346946579 y: 346946582
x: 346946742 y: 346946745
x: 346946911 y: 346946912

A: You are synchronizing the main thread and your test thread. Therefore Java guarantees that any changes performed by the other thread are made visible.

Btw, it is impossible to construct an example which deterministically shows a difference between volatile and non-volatile. The best you can hope for is a program which shows the difference with a quite high probability. If the threads run interleaved on the same core, you won't be able to show any difference at all.

The following program shows, on my computer, the difference between volatile and non-volatile variables.

public class ShowVolatile {
    final static int NUM_THREADS = 1;
    int x = 0;
    volatile int y = 0;

    public static void main(String... args) {
        final ShowVolatile sv = new ShowVolatile();
        for (int i=0; i< NUM_THREADS; i++) {
            new Thread(new Runnable() {
                public void run() {
                    while (true) {
                        sv.x += 1;
                        sv.y += 1;
                    }
                }
            }).start();
        }
        while (true) {
            System.out.println("x: " + sv.x + " y: " + sv.y);
        }
    }
}

If you increase the number of threads you will see additional synchronization misses. But a thread count of 1 is enough, at least on my hardware, a quad-core i7.
{ "pile_set_name": "StackExchange" }
Q: systemd service script for libreoffice/openoffice

I'm trying to correctly set up a headless libreoffice/openoffice server on Debian Jessie. I created a script named /etc/systemd/system/openoffice.service with the following content:

[Unit]
Description=OpenOffice service
After=syslog.target

[Service]
ExecStart=/usr/bin/soffice '--accept=socket,host=localhost,port=8101;urp;StarOffice.ServiceManager' --headless --nofirststartwizard --nologo
Restart=always
KillSignal=SIGQUIT
Type=notify
StandardError=syslog
NotifyAccess=all
User=www-data

[Install]
WantedBy=multi-user.target

And I enabled it via:

systemctl enable openoffice.service

I'm in a situation that is only partially working:

- it correctly starts on boot
- if I query its status with systemctl status openoffice.service, it claims it is still activating
- if I try to start it, it just hangs

I haven't been able to find a working example. I'd also like to understand how to create the debian /etc/init.d script that uses systems...

A: You set Type=notify in your service. This is meant to be used only for specific services which are designed to notify systemd when they have finished starting up. At the moment, these are rather uncommon, and I don't think LibreOffice is among them.

You should most likely be using Type=simple instead.
{ "pile_set_name": "StackExchange" }
The backwoods-gothic terrain may be familiar, but the jolts are doled out with an expert hand in “Blue Ruin,” a lean and suspenseful genre piece that follows a bloody trail of vengeance to its cruel, absurd and logical conclusion. Writer-director Jeremy Saulnier shows impressive progress from his funny-scary 2007 debut, “Murder Party,” with this tense, stripped-down tale of a Virginia drifter who finds himself in way over his head when he tries to exact payback for his parents’ deaths. Saulnier cleverly establishes a man-on-the-run theme in his opening shot, before the action proper has even started. Thereafter the camera practically stays glued to Dwight Evans (Macon Blair), a quiet vagrant who gets by sifting through Dumpsters and sleeping in his beat-up blue Pontiac. Yet his seemingly pointless existence is marked by curious flashes of daring and resourcefulness, if not exactly great intelligence, and he suddenly snaps into action and returns to his rural Virginia hometown upon learning that one Will Cleland has been released from prison. Making it clear there’s a score to settle without immediately disclosing the gruesome details, the script lures the viewer into an unnerving sense of complicity as Dwight follows Will and his folks to a bar and, armed with a small knife, initiates the first of several brutal setpieces. The filmmaking is clean and efficient but the killing isn’t, and in the course of his clumsy, foolhardy getaway, Dwight ends up putting Will’s entire gun-toting redneck family on his tail. In a twist that streamlines the narrative considerably, the Clelands opt not to inform the police of the attack, choosing instead to keep things “in-house.” While Dwight’s not-so-bright actions generate some darkly humorous beats (none grislier than when he tries, and fails, to clean a nasty arrow wound), Saulnier resists turning his protagonist into an object of outright ridicule, never compromising the audience’s intense identification with this reluctant renegade. In name and appearance, Dwight is the sort of pudgy, clean-shaven Everyman more suited to an office cubicle than a shootout, and even as the arguable aggressor in this scenario, he seems to act more out of fear and protectiveness than out of a real desire for retribution. Blair’s engaging, soulful-eyed performance succeeds by locating the sweet spot between idiot and amateur, predator and prey. Repeatedly, Dwight plans ahead, takes calculated risks and still messes up, and much of the film’s tension derives from his very fallibility, as well as his increasing awareness that none of this can possibly end well. If the climax goes inevitably over-the-top, it’s nonetheless the sort of gruesome finish the story’s steady, merciless buildup demands. Carefully exploiting the audience’s fear of what it can’t (or can only partially) see, Saulnier’s shallow-focus widescreen compositions amp up the suspense at key intervals, as do Julia Bloch’s crisp editing, Matt Snedecor and Dan Flosdorf’s meticulously layered sound design, and Brooke and Will Blair’s ominous synth score. While Dwight is often the camera’s sole focus, warm character notes are provided by Amy Hargreaves as Dwight’s sister, who is at once grateful for and angered by his reckless actions, and Devin Ratray as an old high-school friend whom Dwight enlists to help, in one of his smarter decisions.
{ "pile_set_name": "Pile-CC" }
Litia Cakobau Adi Litia Qalirea Cakobau (c. 1941 – 8 October 2019) was a Bau high Fijian Chief and political leader. Cakobau, the daughter of Ratu Sir George Cakobau, who was Fiji's Governor-General from 1973 to 1983, was appointed to the Senate in 2001 as one of nine nominees of the Fijian government. She held this post till 2006, when her elder sister, Adi Samanunu Cakobau-Talakuli was appointed to the Senate. Prior to her appointment to the Senate, she had previously held Cabinet office as Minister for Women, a post to which she was appointed in 1987. Her brother, Ratu George Cakobau, was also a Senator from 2001 to 2006, but was nominated by the Great Council of Chiefs rather than the government, as she was. She died at her home in Lautoaka in October 2019 at the age of 78. References Category:1940s births Category:2019 deaths Category:Fijian chiefs Category:I-Taukei Fijian members of the Senate (Fiji) Category:Tui Kaba Category:Soqosoqo Duavata ni Lewenivanua politicians Category:Soqosoqo ni Vakavulewa ni Taukei politicians Category:Politicians from Bau (island)
{ "pile_set_name": "Wikipedia (en)" }
Q: How do you equally space out elements in a Row?

Row {
    width: parent.width
    spacing: ????
    Checkbox {}
    Checkbox {}
    Checkbox {}
    Checkbox {}
}

So just to be clear, the checkboxes should be spaced in such a manner that however wide the row is, it will expand or compress the spacing accordingly.

A: The simplest solution would be to set width: parent.width/4 for each of the checkboxes. If you want to keep the checkbox width set at some known value, you could instead set spacing: (parent.width - 4 * checkboxwidth)/3 on the Row. Note that this will cause the elements to overlap when the parent is narrow.

If you're targeting Qt 5.1 or higher, you may want a RowLayout. I'm still on 5.0, though, so I can't help you there.

Yet another way to do this would be to put each CheckBox in an Item. Each Item would have width: parent.width/4, and each CheckBox would have anchors.centerIn: parent. This would give a half-width margin on the far left and far right, which may or may not be desired.
{ "pile_set_name": "StackExchange" }
Event Description:You'll meet our team of obstetricians, certified nurse midwives, and family medicine physicians. You can also tour our recently renovated Labor & Delivery unit and in-hospital Birth Center. You can also attend an optional tour before or after the event (5 p.m. or 7 p.m.). Please plan on arriving 15 minutes before the event to allow time to sign in. If you are planning on attending the 5 p.m. tour, please arrive at 4:45 p.m. If you are unable to register online, please call 619-543-3168.Parking information
{ "pile_set_name": "Pile-CC" }
jsToolBar.strings = {};
jsToolBar.strings['Strong'] = 'Gras';
jsToolBar.strings['Italic'] = 'Italique';
jsToolBar.strings['Underline'] = 'Souligné';
jsToolBar.strings['Deleted'] = 'Rayé';
jsToolBar.strings['Code'] = 'Code en ligne';
jsToolBar.strings['Heading 1'] = 'Titre niveau 1';
jsToolBar.strings['Heading 2'] = 'Titre niveau 2';
jsToolBar.strings['Heading 3'] = 'Titre niveau 3';
jsToolBar.strings['Unordered list'] = 'Liste à puces';
jsToolBar.strings['Ordered list'] = 'Liste numérotée';
jsToolBar.strings['Quote'] = 'Citer';
jsToolBar.strings['Unquote'] = 'Supprimer citation';
jsToolBar.strings['Preformatted text'] = 'Texte préformaté';
jsToolBar.strings['Wiki link'] = 'Lien vers une page Wiki';
jsToolBar.strings['Image'] = 'Image';
{ "pile_set_name": "Github" }
// Copyright (c) 2006 Foundation for Research and Technology-Hellas (Greece). // All rights reserved. // // This file is part of CGAL (www.cgal.org). // You can redistribute it and/or modify it under the terms of the GNU // General Public License as published by the Free Software Foundation, // either version 3 of the License, or (at your option) any later version. // // Licensees holding a valid commercial license may use this file in // accordance with the commercial license agreement provided with the software. // // This file is provided AS IS with NO WARRANTY OF ANY KIND, INCLUDING THE // WARRANTY OF DESIGN, MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. // // $URL$ // $Id$ // // // Author(s) : Menelaos Karavelas <[email protected]> #ifndef CGAL_VORONOI_DIAGRAM_2_VALIDITY_TESTERS_H #define CGAL_VORONOI_DIAGRAM_2_VALIDITY_TESTERS_H 1 #include <CGAL/license/Voronoi_diagram_2.h> #include <CGAL/Voronoi_diagram_2/basic.h> #include <algorithm> #include <CGAL/Triangulation_utils_2.h> #include <CGAL/Voronoi_diagram_2/Finder_classes.h> namespace CGAL { namespace VoronoiDiagram_2 { namespace Internal { //========================================================================= //========================================================================= template<class VDA, class Base_it> class Edge_validity_tester { // tests whether a halfedge has as face a face with zero area. private: const VDA* vda_; private: typedef Triangulation_cw_ccw_2 CW_CCW_2; // Base_it is essentially VDA::Edges_iterator_base typedef Base_it Edges_iterator_base; typedef typename VDA::Halfedge_handle Halfedge_handle; typedef typename VDA::Delaunay_graph::Vertex_handle Delaunay_vertex_handle; public: Edge_validity_tester(const VDA* vda = NULL) : vda_(vda) {} bool operator()(const Edges_iterator_base& eit) const { CGAL_assertion( !vda_->edge_rejector()(vda_->dual(), eit->dual()) ); int cw_i = CW_CCW_2::cw( eit->dual().second ); CGAL_assertion_code( int ccw_i = CW_CCW_2::ccw( eit->dual().second ); ) CGAL_assertion_code(Delaunay_vertex_handle v_ccw_i = eit->dual().first->vertex(ccw_i);) CGAL_assertion( !vda_->face_rejector()(vda_->dual(), v_ccw_i) ); Delaunay_vertex_handle v_cw_i = eit->dual().first->vertex(cw_i); if ( !vda_->face_rejector()(vda_->dual(), v_cw_i) ) { return false; } Halfedge_handle he(eit); Halfedge_handle he_opp = eit->opposite(); CGAL_assertion( he_opp->opposite() == he ); return he->face()->dual() < he_opp->face()->dual(); } }; //========================================================================= //========================================================================= template<class VDA> class Vertex_validity_tester { private: const VDA* vda_; private: typedef typename VDA::Delaunay_graph::Face_handle Delaunay_face_handle; typedef typename VDA::Delaunay_graph::Finite_faces_iterator Delaunay_faces_iterator; public: Vertex_validity_tester(const VDA* vda = NULL) : vda_(vda) {} bool operator()(const Delaunay_faces_iterator& fit) const { Delaunay_face_handle f(fit); Delaunay_face_handle fvalid = Find_valid_vertex<VDA>()(vda_,f); return f != fvalid; } }; //========================================================================= //========================================================================= } } //namespace VoronoiDiagram_2::Internal } //namespace CGAL #endif // CGAL_VORONOI_DIAGRAM_2_VALIDITY_TESTERS_H
{ "pile_set_name": "Github" }
Q: Why is a JavaScript reserved keyword allowed as a variable name?

We know that let is a reserved keyword that defines a variable in JavaScript.

var let = 2;
console.log(let); // returns 2

So why is this not an error?

A: let is only a reserved word in strict mode:

'use strict';
var let = 5;

Uncaught SyntaxError: Unexpected strict mode reserved word

This is because browsers generally prioritize backwards compatibility above all else. Although let was introduced in ES2015 (and its use was foreseen sometime before then), prior scripts which used let as a variable name would continue to work as desired. For example, if your script was written in 2008:

var let = 2;
console.log(let);

Then it would continue to work in 2020 as well. For very similar reasons, async and await are also permitted as variable names.

As for why the use of let errors in strict mode - strict mode was introduced in ES5, in 2009. Back then, the language designers saw that the use of new keyword(s) to declare variables was a possibility in the future, but it wasn't set in stone yet, and ES6 was still a long ways off. Once ES5 came out, script writers could opt in to strict mode to make code less confusing, and change silent errors to explicit errors. Although let wasn't usable for variable declaration yet, prohibiting it as a variable name in strict mode improved the readability of future scripts which opted into strict mode, while also not breaking any existing scripts.

A: let and some of the other words act as reserved words only in strict mode. The spec says:

Disallowed in strict mode: Those that are contextually disallowed as identifiers, in strict mode code: let, static, implements, interface, package, private, protected, and public;

You can see let inside the list of words which are only disallowed in strict mode. If you want an error to be thrown for using let as a variable name, you can use strict mode:

"use strict";
var let = 3
{ "pile_set_name": "StackExchange" }
165 Pa. Commonwealth Ct. 573 (1994) 645 A.2d 474 BOROUGH OF KENNETT SQUARE v. Amrit LAL, Appellant. Commonwealth Court of Pennsylvania. Submitted on Briefs June 6, 1994. Decided July 8, 1994. Reargument Denied August 17, 1994. *577 Thomas R. Kellogg, for appellant. John L. Hall, for appellee. Before COLINS and PELLEGRINI, JJ., and NARICK, Senior Judge. NARICK, Senior Judge. Appellant, Amrit Lal, appeals from an order of the Court of Common Pleas of Chester County, sitting in equity, ordering injunctive relief and appointing an agent for Appellant to manage Appellant's apartment complex known as "Scarlett Manor Apartments," in order to bring it into compliance with the Borough of Kennett Square's (Borough) housing and building codes. This matter commenced in February, 1993 when the Borough filed this action in an effort to bring an end to almost five years of continuous litigation with Appellant. From the time Appellant purchased Scarlett Manor Apartments, in March 1988, when they were apparently in a good state of repair and free of any Housing Code (Code) violations, until December 1993, Appellant was cited for more than 160 Code violations, and the Borough had spent more than $40,000.00 in attorney's fees in this effort to compel Appellant's compliance. As noted by the trial court judge, The Honorable Thomas J. Gavin, who had personally heard more than one hundred (100) cases involving Appellant's rental real estate, "[Appellant] is the *578 single most litigious person in the history of Chester County."[1] (T.C. Opinion at 5, December 8, 1993.) According to the trial court, Appellant's obstructive conduct usually conformed to the following pattern: Following inspections, the borough would communicate deficiencies to the defendant and suggest that he contact the borough regarding the resolution of same. Several months would pass, often with another intervening inspection, but no corrective action taken. More letters would be generated and ultimately defendant would agree to remedy the deficiencies by a date certain. It is important to note that the defendant was always given leeway to select a date by which the repairs, or deficiencies, would be corrected. The corrective date would come and go with no action by defendant, whereupon the borough would issue citations. Hearings would then be scheduled before the district court (District Court 15-3-04) where the defendant would or would not appear to defend. Whether found guilty by the District Justice or in absentia, an automatic appeal would be taken to the Court of Common Pleas. By the time the cases found their way to my courtroom the deficiencies, which would now be months if not years old, remained uncorrected. In each case numerous pre-trial motions would be filed, often on the day scheduled for trial, asserting that the court lacked jurisdiction, was biased against the defendant, that the borough was discriminating against defendant because of his third world origin, etc. etc. Ultimately the cases would be heard, appropriate verdicts rendered and plaintiff advised that if the repairs were corrected pre-imposition of sentence, nominal fines would be imposed. Invariably, post-verdict motions would be filed, no corrections made and the defendant sentenced accordingly. Thereafter, motions to vacate sentence and/or appeals would be filed with the deficiencies still uncorrected. 
The deficiencies *579 cited by the borough, by way of example and not limitation, run the gamut from countless vectors (a polite euphemism for cockroaches) scurrying about the apartments to defective and/or leaking and/or missing plumbing fixtures, lighting fixtures that do not work, windows with broken or missing panes, or screens, loose or missing balcony railings, leaking roofs and trash strewn about the properties. (T.C. Opinion at 2-3, December 8, 1993). As a result of this delay and vexatious conduct, the Borough filed a complaint asking for the extraordinary remedy of appointment of an agent to manage the apartments and correct the Code violations. In response, Appellant filed preliminary objections which were denied, and Appellant was given leave to file an answer to the Borough's complaint within twenty (20) days. Appellant failed to file an answer within the time allowed, and instead appealed the trial court's denial of his preliminary objections, via a petition for review, to this court. Appellant's petition to vacate Judge Gavin's order denying his preliminary objections was denied by Judge MacElree of the Chester County Court of Common Pleas. Appellant was notified that a default judgment would be taken if he did not file an answer within ten (10) days, and when such answer was not filed, a default judgment was entered and a final hearing to frame an appropriate final decree was scheduled. The final hearing concluded on November 19, 1993, which resulted in the appointment of an agent to manage the apartment buildings in order to correct the problems and bring them into compliance with the Borough's ordinances. Meanwhile, Appellant's legal maneuverings continued with, inter alia, a petition to quash the Borough's request for a final hearing to fashion an appropriate final decree, a petition for recusal of Judge Gavin or transfer to another county, continuing requests for production of documents after a protective order had been granted, and a motion to disqualify the court's appointed agent. On appeal to this Court, Appellant raises eleven issues for our review, three of which have been waived by failure to *580 raise them in post-trial motions.[2] Pa.R.C.P. No. 227.1(b)(2); Estate of Hall, 517 Pa. 115, 535 A.2d 47 (1987); Borough Council for Borough of Millbourne v. Bargaining Committee of Millbourne Borough Police, 109 Pa.Commonwealth Ct. 474, 531 A.2d 565 (1987). We will therefore consider the remaining issues on their merits. First, Appellant argues that Judge Gavin should have recused because of animosity to Appellant. Judge Gavin denied the motion stating that his actions do not evince any bias towards Appellant. (T.C. Opinion at 2, December 17, 1993.) Like his post-trial motions, Appellant's brief on appeal contains repetitive, generalized, boilerplate allegations of bias and prejudice, but he only indicates one instance which he believes shows the court's animosity, Judge Gavin's threat to hold Appellant in contempt for continuing to cross-examine a witness, Mr. Marguriet, on irrelevant matters. (R. at 50a.) The record indicates that the first questions Appellant asked on cross-examination of Mr. Marguriet, the Manager and Code Enforcement Officer of the Borough, concerned the deeds of properties owned by other landowners, and a case pending against a property owner in the Borough. These questions were clearly irrelevant, and such was Judge Gavin's ruling. 
Yet, Appellant continued to ask irrelevant questions, until he repeated some he had attempted to ask earlier, and at that point, Judge Gavin warned Appellant to cross-examine only on relevant issues, or risk a contempt citation. (R. 40a-50a.) Judge Gavin remained remarkably patient while Appellant asked one irrelevant question after another, but his repeated rulings were ignored. His warning was therefore warranted, and his threatened use of his contempt powers was entirely proper. *581 In this jurisdiction, it is presumed that a trial judge is capable of recognizing in himself/herself the symptoms of bias and prejudice. If the judge believes that he or she can hear and dispose of the case without partiality, then that decision will not be overturned, absent an abuse of discretion. Reilly by Reilly v. Southeastern Pennsylvania Transportation Authority, 507 Pa. 204, 489 A.2d 1291 (1985); Commonwealth v. Knight, 421 Pa.Superior Ct. 485, 618 A.2d 442 (1992). Here, there was no abuse of discretion in warning Appellant that he would be in contempt of court if he continued to ask totally irrelevant questions. Therefore, there is no merit to Appellant's claim that Judge Gavin should have recused. Next, Appellant claims that the enforcement of the Borough's ordinances was discriminatory against the low income groups which reside in Scarlett Manor and against Appellant, who claims to be a "member of a minority group, being an Asiatic Indian." Appellant not only failed to prove that there was discriminatory enforcement of the Building Code, but has failed to allege any facts which, if true, would support this claim. Township of Ridley v. Pronesti, 431 Pa. 34, 244 A.2d 719 (1968); Harasty v. Borough of West Brownsville, 50 Pa.Commonwealth Ct. 186, 412 A.2d 688 (1980). We will not recapitulate the facts of this case, but suffice to say that the Borough's ordinances were enforced against Appellant because Appellant had violated these ordinances numerous times, there had been numerous complaints by the tenants and others, and because Appellant engaged in every delaying tactic he knew, including abusing his legal rights, to resist abating the conditions for which he was cited and avoid complying with minimal standards of habitability. (Plaintiff's Exhibit 1 and 3.) Therefore, we find no merit to this argument. Next, Appellant claims that the court erred in entering a final order in this case before the receipt and consideration of post-trial motions. Pa.R.C.P. No. 227.1. Appellant relies on Reading Anthracite Co. v. Rich, 525 Pa. 118, 577 A.2d 881 (1990), where an adjudication and decree nisi were entered which ordered the convening of a meeting within ten (10) days *582 of entry of the final order, and also invited the parties to submit post-verdict motions within ten (10) days. The petitioners filed their motions on the tenth day, but prior to their receipt the chancellor disposed of the ultimate issue in the case. The Supreme Court held that the petitioners were denied due process when they were denied the right to file exceptions or post-trial motions. This case is readily distinguishable from Reading Anthracite. First, although the December 8, 1993 filing was labeled an opinion and order, Appellant had the opportunity and did file post-trial motions which were thoroughly addressed and considered by the trial court. Moreover, the court was not required to enter a decree nisi because a judgment by default had already been entered. Panther Valley Television Co. v. Borough of Summit Hill, 372 Pa. 
524, 94 A.2d 735 (1953). Thus, Reading Anthracite is inapplicable here, and there is no merit to this argument. Next, Appellant claims that the trial court erred in failing to insure that the party intending to purchase the Scarlett Manor was represented at the hearing, and that the tenants were joined as parties. The issue of the necessity of joining the tenants has been waived because it was not raised at the hearing or in post-trial motions. Moreover, notwithstanding Appellant's allegations regarding a potential buyer for his property, the buyer remains unnamed and unproven in the record. When Appellant's attorney, Mr. Kalmbach, was asked about this buyer, he responded that the Borough was more involved in the negotiations for sale than he was. (R. 84.) However, the attorney for the Borough stated that the Borough knew very little about the potential sale, apart from what was told them by Appellant months before. The Borough never saw a copy of the alleged agreement of sale and never knew the name of the alleged, potential buyers. (R. 83-85.) Appellant did not offer any more specific information about the buyers during the hearing, although he was given every opportunity to do so. Instead, he continued to rely on his and his attorney's assertions that the sale was imminent. *583 (R. 85.) The court cannot join a person or persons in a proceeding when it has not been given information as to the identity of such persons, and when it has received no evidence confirming an interest in the property which would be affected by the court's proceedings. Therefore, there is no merit to Appellant's allegation of trial court error on this issue. Next, Appellant claims that the relief ordered was not a proper exercise of the equitable powers of the court because he made substantial efforts to provide decent housing for the tenants and to comply with the general intent of the ordinance. Appellant cites his own testimony and that of his manager, Mr. Ayra, both of whom the court specifically found not credible. As an example of Mr. Ayra's testimony, he made the incredible statement that perhaps he failed to notice certain Code violations because he made inspections only in the evening. (R. 167a). Although he claimed to have called repair persons to correct the problems, he did not produce a single receipt or cancelled check to prove that they had undertaken the repairs they claimed to have accomplished. As factfinder in the evidentiary hearing, the trial court was free to disregard Appellant's testimony and make findings as to credibility. Commonwealth v. Nunez, 312 Pa.Superior. Ct. 584, 459 A.2d 376 (1983). In no uncertain terms, the trial court found Appellant incredible and disregarded the testimony he and Mr. Ayra offered. Next, Appellant claims that the trial court had an adequate remedy at law, and therefore equitable relief should not have been granted. Citing School District of West Homestead v. Allegheny County Board of School Directors, 440 Pa. 113, 269 A.2d 904 (1970), Appellant argues that the trial court had no jurisdiction to consider this action in equity because: (1) there is a constitutionally valid statute, the Borough's citation procedures, which provide an explicit and exclusive administrative remedial process, with review by the Court of Common Pleas; and (2) the statutory remedy is adequate and compliance with the statutory remedy will not cause irremedial harm. 
*584 The Borough Code provides that boroughs are specifically vested with the power to enforce housing ordinances by instituting appropriate actions or proceedings in law or in equity. The Borough Code, Act of February 1, 1966, P.L. (1965) 1656, as amended, 53 P.S. § 46202(24). Although there are explicit legal and administrative procedures for serving citations for violations of the Borough's housing ordinances, under the Borough Code, they are not the exclusive remedies available to the authorities, and the courts may proceed in equity. The trial court held that the inadequacy of the available legal remedies was proven by evidence that Code violations continue unabated, notwithstanding the filing of multiple actions by the Borough against Appellant. (T.C. Opinion at 10, December 8, 1993). In explaining this holding the trial court stated, "The borough has tried amicably and legally for five years to compel defendant to meet those minimum standards its other citizens are required to adhere to . . . If equitable relief is not granted, the borough will continue to be frustrated in its legitimate efforts to enforce its housing codes." (T.C. Opinion at 8-9, December 8, 1993). Equity has jurisdiction notwithstanding a failure to pursue an available statutory remedy if that remedy is inadequate. While this Court is reluctant to favor equity over administrative remedies, it is appropriate to take equity jurisdiction to avoid a multiplicity of actions. Temple University v. Department of Public Welfare, 30 Pa.Commonwealth Ct. 595, 374 A.2d 991 (1977). We hold that the remedies at law, the hundreds of citations for violations of the housing code received by Appellant, have been inadequate to insure their enforcement. Thus, the safety and the habitability of the premises can not be guaranteed, and the health and welfare of the tenants residing in Appellant's apartment complex is endangered. Therefore, it was perfectly appropriate for the court to provide equitable relief in the form of appointment of an agent to manage the Scarlett Manor apartments. *585 Appellant next claims that the court erred in failing to open the default judgment. The decision to open a default judgment is left to the sound discretion of the trial court, which must determine that: (1) the petition to open was promptly filed; (2) there was a reasonable excuse for failure to respond; and (3) a meritorious defense must be shown. Southeastern Pennsylvania Transportation Authority v. DiAntonio, 152 Pa.Commonwealth Ct. 237, 618 A.2d 1182 (1992). Appellant cannot meet any part of this test. First, judgment by default for failure to answer the Borough's complaint was entered on September 10, 1993. Appellant did not file his petition to open until December 13, 1993. Although he states he relied on his petition for review of the trial court's dismissal of his preliminary objections, which were filed in this court, to stay the proceedings on the default judgment so that he did not need to file an answer, such reliance was misplaced. Pa.R.A.P. 1701(b)(6) provides that the trial court may proceed further in any matter in which a nonappealable interlocutory order has been entered, notwithstanding the filing of a notice of appeal or a petition for review. Here, Appellant attempted to appeal a nonappealable interlocutory order, which this court dismissed on two different occasions, September 15, 1993 and October 29, 1993, and therefore the trial court properly continued to proceed in this matter while the appeals were pending. 
Even if we were to accept that Appellant were relying on his petitions for review to stay the proceeding, he still waited for over a month to file his petition to open after his petitions to this Court were dismissed. Thus, the petition to open was not promptly filed, and there is no reasonable excuse for Appellant's failure to respond to the trial court's order to file an answer to the Borough's complaint. Moreover, as we have discussed, no meritorious defense, which has been defined as a defense sufficient to justify relief if proven, Id., was offered. As discussed above, all Appellant's defenses are without merit and are therefore insufficient to justify relief. Therefore, the trial court did not err in refusing to open the default judgement. *586 Finally, Appellant claims that the decree should be vacated because the court did not require that the agent appointed by the court to manage Appellant's property post bond pursuant to Pa.R.C.P. No. 1533(d). Rule 1533(d) provides that a "receiver" must give security for the faithful performance of his duty as the court shall direct, and shall not act until the security is paid. Here, however, an "agent" was appointed, similar to the agent required by the Borough Code, Section 8-107, which requires an owner of any apartment building to register a person to serve as a responsible local agent. Traditionally a person seeking a receiver does so to protect property in which he or she has an interest. Levin v. Barish, 505 Pa. 514, 481 A.2d 1183 (1984); Northampton National Bank of Easton v. Piscanio, 475 Pa. 57, 379 A.2d 870 (1977).[3] Here, the agent was not appointed to protect the assets of a party which has a property interest in Appellant's property; he was only appointed to manage the property in compliance with local ordinances, as would a responsible local agent. Moreover, the appointment does not divest Appellant of his interest in the property; he retains the power to repair and maintain his property if he so chooses. Therefore, we find no merit to Appellant's claim that the trial court erred in failing to require the agent to post security. We believe that the repetitious and frivolous nature of this appeal entitles the Borough to the award of reasonable counsel fees pursuant to Section 2503(7) of the Judicial Code, 42 Pa.C.S. § 2503(7) and Pa.R.A.P. 2744(1). Gossman v. Lower Chanceford Township Board of Supervisors, 503 Pa. 392, 469 A.2d 996 (1983). Moreover, in In the Matter of Appeal of Richard Michael George, 101 Pa.Commonwealth Ct. 241, 515 A.2d 1047 (1986), and Patel v. Workmen's Compensation *587 Appeal Board (Sauquoit Fibers Co.), 103 Pa.Commonwealth Ct. 290, 520 A.2d 525, appeal denied, 515 Pa. 616, 530 A.2d 869 (1987), we held that we are clearly authorized, under Pa.R.A.P. 2744, to sua sponte impose on the appellant the sanction of paying the reasonable counsel fees of the appellee, and while we did not at that time award fees, we held that such abuse of this Court's appeals process may in the future result in the imposition of such sanctions. Here, Appellant has so clearly abused the legal process that we now impose on Appellant, sua sponte, the sanction of paying reasonable counsel fees. This appeal was a result of Appellant's refusal to respond to the Borough's complaint in equity, and instead twice appealing the trial court's preliminary rulings. 
Then, when Appellant suffered a judgement by default, brought on by his own intentional conduct, he filed this appeal, raising numerous, frivolous issues designed to obstruct and delay the equitable relief ordered by the trial court. Accordingly, we affirm the trial court's order in its entirety, and remand to the trial court for the calculation of reasonable fees incurred by the Borough in this appeal, to be paid by Appellant. ORDER AND NOW, this 8th day of July, 1994, the order of the Court of Common Pleas of Chester County in the above-captioned matter is affirmed. Further, the case is remanded to the trial court for calculation of reasonable attorney's fees incurred by the Borough in this appeal, to be paid by Appellant. Jurisdiction relinquished. NOTES [1] The trial court noted that Appellant has a Ph.D. and a law degree, and the level of sophistication of his pleadings and his ability to manipulate the rules shows he was not a typical pro se litigator. Rather, he was more an unlicensed lawyer of considerable skill. (T.C. Opinion at 6, December 8, 1993.) [2] Although Appellant raised more than fifty (50) issues in the trial court he still has waived the following issues: (1) The Building Code of the Borough of Kennett Square was unconstitutional as it bears no reasonable relationship to the health, safety, morals or general welfare of the community; (2) There is no basis for equity jurisdiction because by its inaction the Borough brought about the conditions of which it complains; (3) The Court was in error to exclude certain evidence. [3] It was held in DeAngelis v. Commonwealth Land Title Insurance Co., 467 Pa. 410, 358 A.2d 53 (1976), that it was improper to appoint a receiver when the party petitioning for such an appointment does not have a lien on the property in question, and only has contract rights to the property which have not been reduced to judgment. Thus, the petitioning party did not have sufficient property rights in the disputed property to force the appointment of a receiver.
{ "pile_set_name": "FreeLaw" }
And check out the kohlrabi slaw and Glada’s garbanzo zucchini salad in the recipe sheet! Also, FFS will have its own filming taking place tomorrow at pick up: Kate Perkins of Perkins Films will be shooting the whole day, from the fruit delivery at 8am to the arrival of the truck from the Farm at Miller’s Crossing in the middle of the day to our 5th distribution. So, just a heads up. And thank you Kate!
{ "pile_set_name": "Pile-CC" }
Q: How to have simple google apps script send mails from Sheets from owner account regardless of who's accessing file

I've clicked around for the past few days trying to find an answer but can't seem to find one that makes sense to me (forgive me, I'm fairly new to GAS). I am trying to set up a Fantasy Golf Draft sheet to be used by about 12 users, over half of whom don't have, or aren't willing to use, a Gmail address. Getting access to the file is no problem; where I am running into an issue is trying to run a script that, when a Button/Shape is clicked, sends an automated email to the next person whose turn it is to pick. The functionality of the script is working when it comes from myself or someone with a Google account who can authorize the script, etc. I run into trouble when it's someone without a Google account.

My question - how can I set the script to ONLY send from my email, or the Sheet/Script Owner's email - regardless of who is modifying/clicking the button? I see links about creating the script as a webapp to do this, but I get lost quickly.

Here's a link to my sheet: [https://docs.google.com/spreadsheets/d/16AppcmrcuhatnzcEs7eIQyD_p1swbRimRZZ4FdbhBKI/edit?usp=sharing][1]

And here is my send mail code:

function sendAlertEmails() {
  var ss = SpreadsheetApp.getActiveSpreadsheet();
  ss.setActiveSheet(ss.getSheetByName("Send Mails"));
  var sheet = SpreadsheetApp.getActiveSheet();
  var dataRange = sheet.getRange("A2:f2");
  var data = dataRange.getValues();
  for (i in data) {
    var rowData = data[i];
    var emailAddress = rowData[1];
    var recipient = rowData[0];
    var message1 = rowData[2];
    var message2 = rowData[3];
    var message3 = rowData[4];
    var message4 = rowData[5];
    var message = 'Hey ' + recipient + ',\n\n' + message1 + '\n\n' + ' The last player picked was ' + message2 + '\n\n' + message3 + '\n\n' + message4;
    var subject = '*GOLF DRAFT 2018* - YOU ARE ON THE CLOCK';
    MailApp.sendEmail(emailAddress, subject, message);
    var ss = SpreadsheetApp.getActiveSpreadsheet();
    ss.setActiveSheet(ss.getSheetByName("DRAFT"));
  }
}

Any help would be greatly appreciated!

A: I felt interested in this issue and worked a bit more on it. I changed it from being a get request to being a post request. Here is what I have in the Google sheet.

function sendAlertEmails() {
  var ss = SpreadsheetApp.getActiveSpreadsheet();
  ss.setActiveSheet(ss.getSheetByName("Send Mails"));
  var sheet = SpreadsheetApp.getActiveSheet();
  var dataRange = sheet.getRange("A2:f2");
  var data = dataRange.getValues();
  for (i in data) {
    var rowData = data[i];
    var emailAddress = rowData[1];
    var recipient = rowData[0];
    var message1 = rowData[2];
    var message2 = rowData[3];
    var message3 = rowData[4];
    var message4 = rowData[5];
    var message = 'Hey ' + recipient + ',\n\n' + message1 + '\n\n' + ' The last player picked was ' + message2 + '\n\n' + message3 + '\n\n' + message4;
    var subject = '*GOLF DRAFT 2018* - YOU ARE ON THE CLOCK';
    var data = {
      'name': 'Bob Smith',
      'email': '[email protected]',
      'message': message,
      'subject': subject,
    };
    var options = {
      'method' : 'post',
      'contentType': 'application/json',
      'payload' : data
    };
    var secondScriptID = 'STANDALONE_SCRIPT_ID'
    var response = UrlFetchApp.fetch("https://script.google.com/macros/s/" + secondScriptID + "/exec", options);
    Logger.log(response) // Expected to see sent data sent back
    var ss = SpreadsheetApp.getActiveSpreadsheet();
    ss.setActiveSheet(ss.getSheetByName("DRAFT"));
    // Browser.msgbox("Your Pick Has Been Made");
  }
}

Below is what I have in the standalone script. 
There are some provisos on the standalone script working:

It needs to be published under "Deploy as a webapp"
Access should be set to 'Anyone, even anonymous'
Every time you make a change to the standalone script publish again and change the Project version to new. This is so the call from the first sheet calls to the latest code.

Standalone Script

function convertURItoObject(url){
  url = url.replace(/\+/g,' ')
  url = decodeURIComponent(url)
  var parts = url.split("&");
  var paramsObj = {};
  parts.forEach(function(item){
    var keyAndValue = item.split("=");
    paramsObj[keyAndValue[0]] = keyAndValue[1]
  })
  return paramsObj; // here's your object
}

function doPost(e) {
  var data = e.postData.contents;
  data = convertURItoObject(data)
  var recipient = data.email;
  var body = data.message;
  var subject = data.subject;
  try {
    MailApp.sendEmail(recipient, subject, body)
  } catch(e){
    Logger.log(e)
  }
  return ContentService.createTextOutput(JSON.stringify(e));
}
{ "pile_set_name": "StackExchange" }
The invention relates to methods and equipment for establishing data security in an e-mail service between an e-mail server and a mobile terminal. Data security in an e-mail service is achieved by using cryptographic techniques in which traffic in a potentially insecure channel is encrypted using cryptographic information, commonly called encryption keys. A problem underlying the invention relates to distributing such encryption information. Prior art techniques for distributing the encryption information are commonly based on public key encryption techniques, such as Diffie-Hellman. A problem with this approach is that the parties have to trust the underlying mobile network and its operator, which they are surprisingly reluctant to do. Another problem is that mobile terminals tend to have small and restricted user interfaces.
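As a concrete illustration of the kind of key agreement the background mentions, the sketch below runs a plain finite-field Diffie-Hellman exchange between an "e-mail server" and a "mobile terminal". It is only a minimal, generic example of the technique named above, not the method claimed by the invention; the modulus, generator, and variable names are assumptions chosen for illustration, and a real system would use standardized, authenticated parameters.

# Minimal Diffie-Hellman sketch (illustrative only; the parameters and names
# below are assumptions, not taken from the application described above).
import secrets

P = 0xFFFFFFFB  # toy prime modulus -- far too small for real use
G = 5           # illustrative generator

def make_keypair():
    # Each party keeps its exponent private and publishes only G^x mod P.
    private = secrets.randbelow(P - 3) + 2
    public = pow(G, private, P)
    return private, public

server_private, server_public = make_keypair()      # e-mail server side
terminal_private, terminal_public = make_keypair()  # mobile terminal side

# Each side combines its own private exponent with the other's public value;
# modular exponentiation gives both sides the same shared key material.
server_shared = pow(terminal_public, server_private, P)
terminal_shared = pow(server_public, terminal_private, P)

assert server_shared == terminal_shared
print("shared key material:", hex(server_shared))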
{ "pile_set_name": "USPTO Backgrounds" }
Your Collection: Sticker Collection

Here are some pictures of my sticker collection from the late '70s forward. I worked for Cycle Five Kawasaki in Calvert County, Maryland, owned by Dale and Ann Norfolk, the parents of Skip, Shawn, and Scott. Two of these names are household names in the MX/SX world. The other one is smart. We all collected and traded stickers and beer cans. I never bought any of these; companies either gave them out or I traded for them. Someday I am going to restore some old motocross bikes and then I will stick them.

ATTENTION READERS: WE NEED YOUR COLLECTIONS! Do you have something cool you'd like to show off? Submit a piece from your collection as well as your name and mailing address to [email protected] and win Throttle Jockey stickers. You will be notified via e-mail if you are the winner!

*Please note that while international readers may submit their Collections, we are only able to award and ship prizes to winners within the United States.
{ "pile_set_name": "Pile-CC" }
Start Date: 4/12/01; HourAhead hour: 14; No ancillary schedules awarded. Variances detected. Variances detected in Energy Import/Export schedule. LOG MESSAGES: PARSING FILE -->> O:\Portland\WestDesk\California Scheduling\ISO Final Schedules\2001041214.txt ---- Energy Import/Export Schedule ---- $$$ Variance found in table tblINTCHG_IMPEXP. Details: (Hour: 14 / Preferred: 20.00 / Final: 19.97) TRANS_TYPE: FINAL SC_ID: ECTRT MKT_TYPE: 2 TRANS_DATE: 4/12/01 TIE_POINT: MEAD_2_WALC INTERCHG_ID: EPMI_CISO_BERT ENGY_TYPE: FIRM
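The log above flags a variance whenever the final schedule differs from the preferred schedule for a given hour. Purely as a hedged sketch of that comparison step (the row layout, field names, and tolerance are assumptions for illustration; the actual scheduling tooling is not shown in the message), the check could look like this:

# Sketch of the preferred-vs-final comparison the log message describes.
TOLERANCE = 0.005  # assumed threshold for treating the two values as equal

rows = [
    # (table, hour, preferred_mw, final_mw) -- values echo the log entry above
    ("tblINTCHG_IMPEXP", 14, 20.00, 19.97),
]

def find_variances(rows, tolerance=TOLERANCE):
    """Return rows where the final value deviates from the preferred value."""
    return [r for r in rows if abs(r[2] - r[3]) > tolerance]

for table, hour, preferred, final in find_variances(rows):
    print("$$$ Variance found in table %s. Details: (Hour: %d / Preferred: %.2f / Final: %.2f)"
          % (table, hour, preferred, final))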
{ "pile_set_name": "Enron Emails" }
244 F.Supp.2d 1250 (2003) HORIZON HOLDINGS, L.L.C. f/k/a Horizon Marine L.C.; Geoffrey Pepper; Cassandra O'Tool; and John O'Tool; Plaintiffs, v. GENMAR HOLDINGS, INC.; Genmar Industries, Inc.; and Genmar Manufacturing of Kansas, L.L.C, Defendants. No. 01-2193-JWL. United States District Court, D. Kansas. February 11, 2003. *1255 Floyd R. Finch, Jr., Blackwell Sanders Peper Martin LLP, George A. Hanson, Stueve Helder Siegel LLP, Kansas City, MO, Nicole T. Bock, Blackwell Sanders Peper Martin LLP, Omaha, NE, Todd M. McGuire, Stueve Helder Siegal LLP, Kansas City, MO, for Plaintiffs. Harlan D. Burkhead, Lathrop & Gage L.C., Kansas City, MO, Holly S.A. Eng, Judith Williams-Killackey, Thomas Tinkham, Dorsey & Whitney LLP, Minneapolis, MN, Rosalee M. McNamara, Tedrick A. Housh, III, Timothy K. McNamara, Lathrop & Gage L.C., Kansas City, MO, for Defendants. MEMORANDUM & ORDER LUNGSTRUM, District Judge. Plaintiffs filed suit against defendants asserting various claims arising out of defendants' acquisition of plaintiff Horizon Marine LC, an aluminum boat manufacturing company. Specifically, plaintiffs Horizon Holdings, LLC f/k/a Horizon Marine LC (hereinafter "Horizon") and Geoffrey Pepper claimed that defendants breached both the express terms of the purchase agreement entered into between the parties and the duty of good faith and fair dealing implied in the purchase agreement. Plaintiffs Horizon and Mr. Pepper further claimed that defendants made a variety of fraudulent misrepresentations to them for the purpose of inducing plaintiffs to enter into the purchase agreement. In addition, plaintiffs Cassandra O'Tool and John O'Tool alleged that defendants breached the employment agreements signed by them. Ms. O'Tool further alleged that defendants discriminated against her on the basis of her pregnancy when they denied her a raise and when they terminated her employment. Finally, Ms. O'Tool and Mr. Pepper claimed that defendants unlawfully terminated their employment in retaliation for Ms. O'Tool's and Mr. Pepper's complaints of pregnancy discrimination. For a more thorough understanding of the facts of this case, please see the court's order resolving defendants' motions for summary judgment, Horizon Holdings, L.L.C. v. Genmar Holdings, Inc., 241 F.Supp.2d 1123 (D.Kan.2002). In November 2002, plaintiffs' claims were tried to a jury and, at the conclusion of the trial, the jury returned a verdict in favor of plaintiffs Horizon and Mr. Pepper on their breach of contract claim in the amount of $2,500,000. The jury also found in favor of the O'Tools on their claims that defendants breached the O'Tools' employment contracts and awarded Ms. O'Tool the sum of $63,200 and Mr. O'Tool the sum of $20,313. The jury found in favor of defendants on all other claims. This matter is presently before the court on three post-trial motions-plaintiffs' motion to alter or amend the judgment (doc. # 197); plaintiffs' motion for attorneys' fees, costs and expenses (doc. # 198); and defendants' renewed motion for judgment as a matter of law pursuant to Rule 50(b) or, in the alternative, motion for remittitur *1256 and/or new trial pursuant to Rule 59 (doc. # 199). 
As set forth in more detail below, plaintiffs' motion to alter or amend the judgment is granted only to the extent that a typographical error in the judgment will be corrected and is otherwise denied; plaintiffs' motion for attorneys' fees, costs and expenses is granted in part and denied in part; and defendants' renewed motion for judgment as a matter of law, for remittitur and/or for a new trial is denied. I. Defendants' Renewed Motion for Judgment as a Matter of Law, for Remittitur and/or for New Trial Defendants seek post-trial relief on all aspects of the jury's verdict that are favorable to plaintiffs. The primary thrust of defendants' post-trial motion concerns the jury's verdict of $2.5 million in favor of Horizon and Mr. Pepper on the breach of contract claim. According to defendants, this award constitutes a windfall unsupported by the facts or the law. Defendants urge that plaintiffs, as a matter of law, are not entitled to recover any damages in the form of lost earn-out. In the alternative, defendants contend that the award must be remitted or a new trial must be granted on lost earn-out damages. Defendants also seek judgment as a matter of law on the jury's liability finding on the breach of contract claim, asserting that plaintiffs failed to present legally sufficient evidence that defendants breached the express or implied terms of the purchase agreement. Similarly, defendants move for judgment as a matter of law on the O'Tools' claims for breach of their respective employment agreements or for a remittitur of those verdicts. Finally, defendants assert that they are entitled to a new trial because the court erroneously admitted parol evidence and erroneously instructed the jury on the duty of good faith and fair dealing. A. The Jury's Verdict in favor of Plaintiffs Horizon and Geoff Pepper on their Breach of Contract Claim The court first addresses defendants' argument that they are entitled to judgment as a matter of law on the jury's liability finding with respect to Horizon and Mr. Pepper's breach of contract claim. Judgment as a matter of law under Rule 50(b) "should be cautiously and sparingly granted," Black v. M & W Gear Co., 269 F.3d 1220, 1238 (10th Cir.2001), and is appropriate only if the evidence, viewed in the light most favorable to the nonmoving party, "points but one way and is susceptible to no reasonable inferences supporting the party opposing the motion." Sanjuan v. IBP, Inc., 275 F.3d 1290, 1293 (10th Cir.2002). In determining whether judgment as a matter of law is proper, the court may not weigh the evidence, consider the credibility of witnesses, or substitute its judgment for that of the jury. See Turnbull v. Topeka State Hosp., 255 F.3d 1238, 1241 (10th Cir.2001). In essence, the court must affirm the jury verdict if, viewing the record in the light most favorable to the nonmoving party, it contains evidence upon which the jury could properly return a verdict for the nonmoving party. See Roberts v. Progressive Independence, Inc., 183 F.3d 1215, 1219-20 (10th Cir.1999) (citing Harolds Stores, Inc. v. Dillard Dep't Stores, Inc., 82 F.3d 1533, 1546 (10th Cir.1996)). Conversely, the court must enter judgment as a matter of law in favor of the moving party if "there is no legally sufficient evidentiary basis ... with respect to a claim or defense ... under the controlling law." Deters v. Equifax Credit Information Servs., Inc., 202 F.3d 1262, 1268 (10th Cir. 2000) (quoting Harolds, 82 F.3d at 1546-47). 
In their papers, defendants assert that, as a matter of law, they did not breach the express terms of the purchase *1257 agreement or the implied terms of the purchase agreement. The jury was instructed that they could find in favor of plaintiffs on plaintiffs' breach of contract claim if they found that plaintiffs had proved a breach of one or more express terms or a breach of the implied duty of good faith and fair dealing. See Jury Instruction 12. Because the court concludes that there was ample evidence presented at trial to support a finding that defendants breached the implied covenant of good faith and fair dealing, the court declines to address defendants' arguments concerning whether the evidence was sufficient to support a finding that defendants had breached any express terms of the purchase agreement. According to defendants, plaintiffs' claim for breach of the implied covenant of good faith and fair dealing fails as a matter of law because it purports to "add wholly new terms to the contract" and "requires the court to rewrite or supply omitted provisions to the purchase agreement in contravention of Delaware law." [1] This is, of course, an accurate statement of Delaware law. See, e.g., Cincinnati SMS A Limited Partnership v. Cincinnati Bell Cellular Systems Co., 708 A.2d 989, 992 (Del. 1998) ("Delaware observes the wellestablished general principle that ... it is not the proper role of a court to rewrite or supply omitted provisions to a written agreement."). Nonetheless, principles of good faith and fair dealing permit a court to imply certain terms in an agreement so as to honor the parties' reasonable expectations when those obligations were omitted, in the literal sense, from the text of the written agreement but can be understood from the text of the agreement. Id. In determining whether to imply terms in an agreement, the proper focus is on "what the parties likely would have done if they had considered the issue involved." Id. Nothing in this court's instructions to the jury would have permitted the jury to "rewrite" the purchase agreement or to inject into that agreement wholly new terms. In fact, the jury was instructed, entirely consistent with Delaware law, that they should consider "whether it is clear from what was expressly agreed upon by the parties that the parties would have agreed to prohibit the conduct complained of as a breach of the agreement had they thought to negotiate with respect to that matter." See Jury Instruction 12. Defendants argue in their papers that Mr. Pepper did not demonstrate at trial that the parties would have agreed to prohibit the challenged conduct if they had thought to negotiate about such conduct. Of course, defendants also made this argument to the jury. The jury rejected the argument and there was more than sufficient evidence presented at trial to support that conclusion. For example, the jury could have readily concluded that, in light of the express agreement that plaintiffs would have an opportunity to realize up to $5.2 million in earn-out consideration (defined in the agreement itself as part of the "purchase price"), that the parties would have agreed, had they thought about it, that defendants would not be permitted to undermine Mr. 
Pepper's authority as president of Genmar Kansas; to abandon the Horizon brand name entirely; to mandate production of Ranger and Crestliner brands at the Genmar Kansas facility to the detriment of the Horizon brand; or to reimburse Genmar Kansas at only "standard cost"[2] for the manufacture of Ranger *1258 and Crestliner boats thereby impairing realization of the earn-out. If the jury concluded that defendants had engaged in such conduct (and there was sufficient evidence to draw such a conclusion), then the jury was free to conclude that such conduct was inconsistent with the spirit of the agreement concerning the earn-out consideration and that such conduct constituted a breach of the implied covenant of good faith and fair dealing. In short, there is evidence in the record upon which a jury could properly return a verdict for Horizon and Mr. Pepper on their breach of contract claim. Judgment as a matter of law, then, is not appropriate. Defendants also assert that they are entitled to judgment as a matter of law on Horizon and Mr. Pepper's breach of contract claim because plaintiffs failed to present evidence upon which a reasonable jury could have concluded that defendants acted in bad faith. In support of this argument, defendants point to a Delaware Supreme Court decision defining "bad faith" as "the conscious doing of a wrong because of a dishonest purpose or moral obliquity; it is different from the negative idea of negligent in that it contemplates a state of mind affirmatively operating with furtive design or ill will." See Desert Equities, Inc. v. Morgan Stanley Leveraged Equity Fund. II, L.P., 624 A.2d 1199, 1209 n. 16 (Del. 1993). According to defendants, the evidence concerning defendants' course of conduct demonstrates only that defendants were attempting to make a profit and that no evidence was presented that defendants were acting with any furtive design or ill will. As an initial matter, the jury was instructed that a "violation of the implied covenant of good faith and fair dealing implicitly indicates bad faith conduct." See Jury Instruction 12. Thus, the court's instruction certainly requires that defendants' conduct reflect some element of bad faith. While the jury was not required to find specifically that defendants acted with furtive design or ill will in order to find that defendants had breached the covenant of good faith and fair dealing, defendants have not directed the court to any cases suggesting that proof of a breach of the duty of good faith and fair dealing is inadequate in the absence of proof of some furtive design or ill will. Certainly, the Desert Equities case does not suggest such a conclusion. There, the court defined "bad faith" only for purposes of contrasting the nature of that claim with a fraud claim in explaining why it was rejecting the defendants' argument that a plaintiff must plead with particularity under Rule 9(b) a claim of bad faith. See 624 A.2d at 1208. The court, then, rejects defendants' suggestion that evidence of some furtive design or ill will was necessary for a finding of liability on plaintiffs' claim that defendants breached the covenant of good faith and fair dealing. See True North Composites, LLC v. Trinity Indus., Inc., 191 F.Supp.2d 484, 517-18 (D.Del.2002) (rejecting argument that claimant must prove that the other party acted "with furtive design or ill will" in order to prove a breach of the covenant of good faith and fair dealing). 
In any event, even assuming that plaintiffs were required to prove that defendants acted with furtive design or ill will *1259 in order to prove a breach of the covenant of good faith and fair dealing, copious evidence was presented at trial demonstrating that defendants acted with the requisite "dishonest purpose" or "furtive design." There was ample evidence, for example, that defendants had ulterior motives for acquiring Horizon Marine, including the desire to remove a potentially significant competitor from the market and the desire to obtain a facility in the "southern" market dedicated primarily to the production of Ranger boats. There was also substantial evidence demonstrating that defendants' course of conduct was intended to benefit defendants' bottom line to the financial detriment of Mr. Pepper. In that regard, the jury could reasonably have concluded that defendants' efforts to undermine Mr. Pepper's authority as president of Genmar Kansas and their decisions to abandon the Horizon brand name entirely, to mandate the production of Ranger and Crestliner brands at the Genmar Kansas facility and to reimburse Genmar Kansas at only "standard cost" for the manufacture of Ranger and Crestliner boats were all designed to either force Mr. Pepper to quit his employment (thereby extinguishing Mr. Pepper's right to collect any earn-out) or prevent Mr. Pepper from achieving the profit margins necessary to realize his earn-out (because the formula pursuant to which the earn-out was calculated was weighted heavily in favor of the production of Horizon boats). While defendants urge that such a characterization of the evidence simply makes no sense because defendants themselves made no money on the Horizon Marine acquisition (an argument that defendants presented at length to the jury), the evidence was sufficient to support the conclusion that defendants believed (but were ultimately incorrect) that they could still turn a profit through the production of Ranger and Crestliner boats at Genmar Kansas while simultaneously preventing Mr. Pepper from realizing any earn-out by stifling the production of Horizon boats and reimbursing Genmar Kansas only at standard cost for the production of other boats. Simply put, ample evidence was presented from which the jury could reasonably conclude that defendants' conduct, taken as a whole, was in "bad faith," regardless of how that phrase is defined. In sum, the evidence presented at trial was more than adequate for the jury to conclude that defendants breached the implied covenant of good faith and fair dealing. Defendants' motion on this issue is denied. B. The Jury's Award of $2.5 Million for Lost Earn-Out Consideration Defendants contend that they are entitled to judgment as a matter of law on Horizon and Mr. Pepper's claim for damages for two separate but related reasons. First, defendants assert that plaintiffs presented no evidence whatsoever for the jury to ascertain what position plaintiffs would have been in if the purchase agreement had been properly performed. Second, defendants assert that Delaware law precludes any recovery because Genmar Kansas was a new business with no profit history and no evidence was presented from which the jury could conclude that Genmar Kansas was reasonably certain to realize the gross profit margins necessary to achieve any earn-out under the agreement. In the alternative, defendants seek an order remitting the award to nominal damages of one dollar or a new trial on the issue of damages. 1. 
Judgment as a Matter of Law The jury was instructed that if they found that defendants had breached the purchase agreement and that plaintiffs sustained damages as a result of that *1260 breach, then Horizon and Mr. Pepper were entitled to compensation "in an amount that [would] place them in the same position they would have been in if the purchase agreement had been properly performed." See Jury Instruction 13. According to defendants, plaintiffs made no effort to explain to the jury how, assuming defendants had performed their contractual obligations in good faith, Genmar Kansas would have ever met the requisite gross profit margins or generated the gross revenues necessary to entitle them to substantial earn-out payments. Stated another way, defendants urge that there was simply no evidence presented at trial that Genmar Kansas would have been profitable absent defendants' breach of the purchase agreement. The evidence presented at trial, however, was more than sufficient to permit the jury to conclude that Genmar Kansas would have been profitable absent defendants' breach. Mr. Pepper, for example, testified on the second day of his direct examination that, in his mind, the requisite 13 percent gross profit margin was reasonable and obtainable based on his prior experience with other industry boat companies. According to Mr. Pepper, he had worked for other companies where the gross profit margins ranged from 15 percent to 30 percent, so the 13 percent figure seemed "low" to him. Mr. Pepper further testified that during the time that he was responsible for directing Lowe's manufacturing operations,[3] Lowe achieved gross profit percentages in the range of 30 percent. Mr. Pepper cautioned, however, that he needed a certain level of autonomy with respect to the management of Genmar Kansas to ensure that Genmar Kansas would realize the profits and revenues necessary for Mr. Pepper to obtain the earnout. Specifically, Mr. Pepper testified on the first day of his direct examination that he sought (and received) assurances from Mr. Oppegaard and Mr. Cloutier that they would "allow [him] to do what is necessary in managing the company to obtain that earn-out." According to Mr. Pepper, Mr. Oppegaard further assured him that he would be in control of Genmar Kansas' operations and that he would be able to make the "operation decisions necessary" to obtain the earn-out. The evidence presented at trial was also sufficient from which the jury could conclude that Horizon Marine, just prior to defendants' acquisition, was about to "break into the black" and turn a profit. Mr. Pepper, for example, testified on the first day of his direct examination that Horizon Marine was enjoying significant progress in late 1997 and the first six months of 1998. Mr. Pepper fully expected Horizon Marine to start making a profit in 1998. Indeed, the opinions and perspectives of other people associated with the acquisition lent additional credence to Mr. Pepper's beliefs. Mr. Pepper testified on direct examination, for example, that Bill Ek, a consultant for defendants who visited the Horizon Marine facility in November 1997, was "amazed" at "how far [Horizon Marine] had come in such a short period of time." Mr. Oppegaard testified on cross-examination that Mr. Ek had advised him that Mr. Pepper was "the best product development person in the industry." Similarly, the jury heard testimony on the first day of Mr. Pepper's direct examination that Mr. Oppegaard was impressed and excited about what Mr. 
Pepper had been able to accomplish with Horizon Marine in a short period of time. In fact, Mr. Oppegaard, after meeting Mr. Pepper and visiting Horizon Marine for the first time, sent an internal memorandum *1261 to his executive team in which he described Mr. Pepper and the Horizon product as "a major competitor if left alone to grow." Mr. Oppegaard also testified on cross-examination that he anticipated that Horizon Marine would grow very fast. From this evidence, a reasonable jury could infer that if defendants had allowed Mr. Pepper to direct the daily operations of Genmar Kansas, then Mr. Pepper would have been able to achieve the requisite gross profit margins to realize the earnout. See Harrington v. Hollingsworth, 1992 WL 91165, at *4 (Del.Super.Ct. Apr. 15, 1992) (in breach of contract case, lost income damages not speculative where commercial fisherman testified that had the defendant constructed his larger commercial fishing boat on time, he would have been able to catch more sea bass and double his annual income; fisherman's testimony was sufficient to establish damages with reasonable probability where his projections were based on bass fishing industry, an industry with which plaintiff was familiar and in which he had participated for 20 years). Moreover, defendants attempted to demonstrate at trial-through both argument and the examination of witnessesthat plaintiffs' claim for damages based on the earn-out was unreasonable because it was uncertain whether the company would have been able to meet the requisite profit margins and revenues. Defendants' efforts in that regard apparently had some impact-the jury awarded only half of the total earn-out consideration. Presumably, then, the jury concluded that plaintiffs had not proved loss of the total earn-out amount with reasonable certainty. Finally, any doubt concerning the amount of damages sustained by plaintiffs is resolved against defendants. As the breaching party, defendants "should not be permitted to reap advantage from [their] own wrong by insisting on proof which by reason of [their] breach is unobtainable." See E. Allan Farnsworth, Contracts § 12.15 at 922 (2d ed.1990); accord Restatement (Second) of Contracts § 352 cmt. a (Any doubts in the proof of damages are resolved against the party in breach because "[a] party who has, by his breach, forced the injured party to seek compensation in damages should not be allowed to profit from his breach where it is established that a significant loss has occurred."). In a related argument, defendants contend that they are entitled to judgment as a matter of law on plaintiffs' claim for damages because, under Delaware law, "a new business with no profit history cannot obtain lost profit damages." See Defs. Br. at 7. On its face, then, defendants' argument is premised on the idea that plaintiffs' damages for lost earn-out consideration is the equivalent of an award for damages based on lost profits. Given the nature of the earn-out consideration at issue in this case, however, it is simply not appropriate to subject plaintiffs' claim for damages to a traditional lost profits analysis. To be sure, Genmar Kansas' profitability was an important component of the earnout formula. However, unlike those cases in which one party seeks to recover lost profits when the issue of whether that party could reasonably expect such profits is in dispute, the parties here agreed at the outset of their relationship that it was reasonable for Mr. 
Pepper to expect an additional $5.2 million in earn-out consideration pursuant to a formula developed by defendants. Indeed, the parties agreed that the earn-out consideration was part of the total purchase price for the acquisitionan agreement that is reflected in Article 2 of the contract, which states that the "Cash Consideration and the Earn-Out Consideration described in Section 2.2 below are referred to in this Agreement in *1262 the aggregate as the `Purchase Price.'" See Trial Ex. 227a § 2.1. As Mr. Pepper explained on the second day of his direct examination, defendants initially proposed the earn-out consideration as "more of an incentive-type thing" separate and apart from the purchase price. However, after multiple discussions during which Mr. Pepper, Mr. Oppegaard and Mr. Cloutier all agreed that the earn-out was obtainable and that Mr. Pepper would be given the requisite autonomy to obtain the earn-out, defendants ultimately agreed to include the earn-out as part of the purchase price. While both parties agreed at trial that the earn-out was not a "guarantee," ample evidence was presented that all parties believed there to be "reasonable probability" that Mr. Pepper would realize the full amount of the earn-out. Indeed, on his direct examination, Mr. Pepper testified that both Mr. Cloutier and Mr. Oppegaard assured him that the earn-out was obtainable. On his cross-examination, Mr. Pepper testified that he advised his investors in writing that "the management of Horizon believes there is a reasonable probability that ... the earn-out consideration will be achieved." Similarly, Mr. Cloutier testified on direct examination that he he believed at the time of the transaction that Mr. Pepper had a "very realistic" opportunity to achieve the earn-out. Moreover, on cross-examination, Mr. Cloutier testified that he believed that the earn-out portion of the purchase agreement was achievable based in part on defendants' own internal projections. In their papers, defendants now characterize their assurances and beliefs that the earn-out was obtainable as mere "pre-contractual guesswork" and contend that to permit plaintiffs to recover damages based on such guesswork without considering Genmar Kansas' "actual performance" is to provide plaintiffs with an "unwarranted windfall." This argument, however, ignores the significance of the jury's implicit finding-that Genmar Kansas' actual performance would have been different (indeed, it would have been profitable) had defendants performed their obligations under the purchase agreement consistent with plaintiffs' reasonable expectations. In other words, the jury apparently found that defendants' conduct, including undermining Mr. Pepper's managerial authority and requiring increased production of multiple models of Ranger boats, had the effect of rendering Mr. Pepper unable to perform as he had planned, unable to operate Genmar Kansas appropriately and ultimately unable to succeed in achieving any earnout consideration. For these reasons, defendants' reliance on the actual performance of Genmar Kansas as a basis for judgment as a matter of law is misplaced. In sum, the court rejects defendants' attempt to analyze plaintiffs' claim for damages as one for lost profits. The jury's award of $2.5 million is not speculative and is supported by evidence that Genmar Kansas would have been profitable and that the earn-out would have been obtainable if defendants had performed in good faith their obligations under the purchase agreement. 2. 
Remittitur As an alternative to their argument that they are entitled to judgment as a matter of law on plaintiffs' claim for damages in the form of lost earn-out, defendants maintain that this court should enter a remittitur reducing the $2.5 million verdict to nominal damages of one dollar in light of the "utterly speculative nature" of the lost earn-out damages. Of course, the court has already concluded that the jury's award of $2.5 million was not speculative, so the motion for remittitur is denied. In any event, under Delaware law, the court may order a remittitur only if the verdict *1263 "is so grossly out of proportion as to shock the Court's conscience." See Gillenardo v. Connor Broadcasting Delaware Co., 2002 WL 991110 at *10 (Del.Super.Ct. Apr.30, 2002) (citing Mills v. Telenczak, 345 A.2d 424, 426 (Del.1975)); see also Century 21 Real Estate Corp. v. Meraj Int'l Investment Corp., 315 F.3d 1271, 1281 (10th Cir.2003) (in assessing measure of damages awarded pursuant to contract containing choice of law provision, district court must follow chosen state's law-absent any argument that choice of law provision is unenforceable-including that state's law concerning remittitur). Again, the jury had before it sufficient evidence to conclude that plaintiffs would have realized a significant portion of the earn-out consideration had defendants performed in good faith their obligations under the contract. The $2.5 million verdict represents exactly half of the entire earnout portion of the purchase agreement and exactly half of what the plaintiffs sought to recover on their breach of contract claim. The award is not excessive, it is not unreasonable, it does not shock the court's conscience and, thus, it will not be remitted. See id. at 1282-83 (affirming district court's refusal to remit $700,000 verdict on breach of contract claim, despite concerns about reliability of testimony concerning lost profits and "unrealistic" projections; district court reviewed award under "shock the conscience" standard). 3. New Trial Defendants' final arguments with respect to the jury's verdict on plaintiffs' breach of contract claim is that they are entitled to a new trial because the verdict is against the weight of the evidence and the result of passion and prejudice. Delaware law permits a district court to set aside a verdict and order a new trial only if "the evidence preponderates so heavily against the jury verdict that a reasonable jury could not have reached the result." See Gannett Co. v. Re, 496 A.2d 553, 558 (Del. 1985). For the reasons set forth above in connection with defendants' motion for judgment as a matter of law, the court concludes that evidence presented at trial was sufficient for the jury to have reached the result that it did. Similarly, for the reasons explained above, the court cannot conclude that the verdict is so clearly excessive as to indicate that it was the result of passion or prejudice. See Yankanwich v. Wharton, 460 A.2d 1326, 1332 (Del.1983) ("A verdict will not be disturbed as excessive unless it is so clearly so as to indicate that it was the result of passion, prejudice, partiality, or corruption; or that it was manifestly the result of disregard of the evidence or applicable rules of law."). The jury's verdict of $2.5 million on plaintiffs' breach of contract claim will stand. C. 
The Jury's Verdicts in favor of Cassandra O'Tool and John O'Tool The jury also found in favor of Cassandra O'Tool and John O'Tool on their claims that defendants breached the O'Tools' employment contracts. The jury awarded Ms. O'Tool the sum of $63,200 and Mr. O'Tool the sum of $20,313. Defendants assert that they are entitled to judgment as a matter of law on the O'Tools' claims for breach of their employment contracts or, in the alternative, that they are entitled to a remittitur reducing the damages awarded to the O'Tools. For the reasons explained below, defendants' motion is denied. 1. Judgment as a Matter of Law At trial, Cassandra and John O'Tool argued that defendants breached the express terms of their respective employment agreements. Specifically, the O'Tools maintained that, pursuant to the express language of their employment agreements, defendants could not discharge Mr. or Ms. O'Tool prior to the end *1264 of an initial three-year employment period except in four narrow circumstances and that they were not discharged for any of those four reasons. In support of their argument, the O'Tools highlighted for the jury section 3 and section 7 of their employment agreements: 3. Term of Employment. This Agreement shall have a term of three (3) years, subject to earlier termination pursuant to the provisions of Section 7 hereof. * * * * * * 7. Termination and Severance. (a) This Agreement may be terminated prior to the end of the three (3) year term by Genmar Kansas for (i) cause, (ii) lack of adequate job performance as determined by Genmar Kansas' President and the President of Genmar Holdings, (iii) death of Employee, or (iv) disability of Employee. (b) In the event Genmar Kansas terminates Employees employment for any reason other than termination for cause, death or disability Employee shall be entitled to six (6) months of severance pay at the base salary Employee is earning on the date of such termination. Defendants attempted to convince the jury, and now the court, that the O'Tools were terminated for "lack of adequate job performance" consistent with section 7 of their employment contracts. The jury clearly rejected defendants' argument and, in finding that defendants breached the O'Tools' employment contracts, concluded that the O'Tools were not terminated for inadequate job performance or any other reason set forth in section 7. Indeed, ample evidence was presented at trial to support the jury's conclusion. In that regard, the jury could have concluded (and presumably did conclude) that the O'Tools were terminated not because of any performance issues but because of their familial ties with Geoff Pepper, the key individual with whom defendants were attempting to sever their relationship. In other words, the jury could have easily concluded from the evidence presented at trial that defendants terminated Mr. and Mrs. O'Tool because defendants believed it would be awkward to retain the O'Tools after terminating Geoff Pepper. Another possibility, equally supported by the evidence, is that the jury concluded that the O'Tools were terminated for inadequate job performance but that the assessment of their job performance was not, as required by section 7, "determined by Genmar Kansas' President and the President of Genmar Holdings." Specifically, the jury could have concluded that Mr. Pepper was still serving as the president of Genmar Kansas during the relevant time period and that Mr. Pepper had not determined that his daughter and son-inlaw were performing inadequately. 
Moreover, the jury could have concluded from the evidence presented at trial that Mr. Oppegaard, the president of Genmar Holdings, had simply not made an assessment of the O'Tools' job performance. In fact, Mr. Oppegaard testified at trial that he had never discussed with Mr. Pepper the adequacy of the O'Tools' job performance and that he did not make the decision to terminate the O'Tools. Defendants also reiterate their argument (made at the summary judgment stage, to the court at the close of plaintiffs' case and to the jury throughout the trial) that Section 12 of the O'Tools' employment agreements eviscerates any notion that the O'Tools were guaranteed employment for a three-year term.[4] Section 12 of the *1265 agreement, entitled "Miscellaneous," contains the following sentence: "This Agreement shall not give Employee any right to be employed for any specific time or otherwise limit Genmar Kansas' right to terminate Employees employment at any time with or without cause." As the court noted in its summary judgment order, however, any ambiguity created when sections 3 and 7 are read together with section 12 was for the jury to resolve and defendants certainly are not entitled to judgment as a matter of law on the O'Tools' breach of contract claims based on the language of section 12. See Horizon Holdings, L.L.C. v. Genmar Holdings, Inc., 241 F.Supp.2d 1123, 1146 (D.Kan.2002). Moreover, the jury could have concluded that section 12, read literally, gives only Genmar Kansas the right to terminate an employee for any reason whatsoever and that, in contrast, Genmar Holdings and Genmar Industries are bound by the language of sections 3 and 7. In sum, the court certainly cannot conclude as a matter of law that the O'Tools were terminated for lack of adequate job performance consistent with section 7 of their employment agreements or that the O'Tools were not guaranteed any specific term of employment. The record contains more than sufficient evidence upon which the jury could properly return a verdict for the O'Tools on their breach of contract claims. 2. Remittitur In the alternative, defendants urge that the damages awarded by the jury to the O'Tools are excessive and against the weight of the evidence and, as a result, they ask the court to enter an order of remittitur reducing the awards. The court begins with defendants' arguments concerning the jury's award of $63,200 to Ms. O'Tool. According to defendants, Ms. O'Tool's lost wages for the relevant time period were only $52,000 and thus, the jury must have awarded Ms. O'Tool more than $11,000 in lost MIP earnings (a bonus pursuant to defendants' Management Incentive Program). Defendants urge that the $52,000 in lost wages must be reduced because the jury failed to deduct from this amount any wages that Ms. O'Tool could have earned if she had made reasonable efforts to obtain other employment. Of course, the burden was on defendants to prove that Ms. O'Tool failed to mitigate her damages. See Leavenworth Plaza Assocs., L.P. v. L.A.G. Enterprises, 28 Kan.App.2d 269, 272, 16 P.3d 314 (2000) (citing Kelty v. Best Cabs, Inc., 206 Kan. 654, 659, 481 P.2d 980 (1971); Rockey v. Bacon, 205 Kan. 578, 583, 470 P.2d 804 (1970)).[5] Defendants spent very little time on this issue at trial. They presented no evidence regarding any specific jobs that might have been available to Ms. O'Tool and, in contrast, plaintiffs presented evidence reflecting that Ms. O'Tool did, in fact, attempt to find alternative employment but was unsuccessful. 
Ultimately, defendants simply failed to carry their burden on the mitigation issue. Defendants further contend that the jury's calculation of Ms. O'Tool's lost MIP earnings was inaccurate. Consistent with the evidence presented by plaintiffs at trial, the jury apparently awarded Ms. O'Tool approximately $11,000 in lost MIP earnings, *1266 representing 20 percent of Ms. O'Tool's salary. Significantly, defendants do not contest that Ms. O'Tool's employment agreement provided that her MIP compensation would be 20 percent of her salary assuming that both Genmar Holdings and Genmar Kansas met their operating profit goals. Moreover, defendants do not contest that 20 percent of Ms. O'Tool's salary over the relevant 15-month period at issue (the time of her termination through the time when Ms. O'Tool's employment contract would have expired) would be roughly $11,000.[6] Rather, defendants urge that the jury incorrectly assumed that both Genmar Holdings and Genmar Kansas would have met their operating profit goals during the relevant time frame-an assumption that defendants characterize as "clearly erroneous" in light of the fact that Genmar Kansas never reached the operating profits necessary to generate MIP payments. Similarly, defendants contend that the jury improperly calculated Mr. O'Tool's lost MIP earnings when it awarded him $20,313. In that regard, the jury's verdict represents only lost MIP earnings as it was undisputed that Mr. O'Tool earned more money in his subsequent job than he would have earned if he had stayed at Genmar Kansas. Defendants do not dispute that Mr. O'Tool's employment contract provided that his MIP compensation would be 25 percent of his salary (assuming that both Genmar Holdings and Genmar Kansas met their operating profit goals). Defendants also do not dispute that the jury's verdict of $20,313 represents almost to the penny 25 percent of Mr. O'Tool's annual salary of $65,000 over the course of 15 months.[7] Again, defendants maintain only that the jury incorrectly assumed (or wildly speculated) that both Genmar Holdings and Genmar Kansas would have met their operating profit goals during the relevant time frame and that, in fact, Genmar Kansas never met the requisite profit goals. Of course, defendants had the opportunity to make this argument to the jury and did, in fact, make this argument to the jury. The jury, as it was entitled to do, rejected this argument and plainly adopted plaintiffs' theory, thoroughly developed at trial, that Genmar Kansas would have reached its operating profit goals but for defendants' breach of their obligations under the purchase agreement, including their duty of good faith and fair dealing. In short, the jury's award of $63,200 to Ms. O'Tool and $20,313 to Mr. O'Tool does not shock the conscience of this court and, thus, no remittitur will be issued. See Dougan v. Rossville Drainage Dist, 270 Kan. 468, 486, 15 P.3d 338 (2000) (court has the power to issue a remittitur where a verdict is so manifestly excessive that it shocks the conscience of the court); see also Century 21 Real Estate Corp. v. Meraj Int'l Investment Corp., 315 F.3d 1271, 1281 (10th Cir.2003) (in assessing measure of damages awarded pursuant to contract containing choice of law provision, district court must follow chosen state's law-absent any argument that choice of law provision is unenforceable-including that state's law concerning remittitur). *1267 D. 
Remaining Arguments in Support of New Trial Finally, defendants assert that they are entitled to a new trial pursuant to Federal Rule of Civil Procedure 59(a) in light of two "substantial errors of law" committed by the court. Specifically, defendants contend that the court erred in admitting parol evidence of the parties' negotiations prior to the execution of the purchase agreement and that the court erred in its instruction to the jury regarding the appropriate standard for determining whether defendants breached the implied covenant of good faith and fair dealing. The court addresses each of these arguments in turn and, as explained below, rejects both arguments. 1. Admission of Parol Evidence In their motion, defendants initially argue that the court erred when it admitted, over defendants' objection, parol evidence of the parties' negotiations to support plaintiffs' claim that they were fraudulently induced into executing the purchase agreement. Curiously, defendant concedes (in the same paragraph) that the law permits such evidence to prove fraudulent inducement. What defendants are really arguing is that parol evidence is inadmissible to prove bad faith in a breach of contract claim and that the jury should not have been permitted to consider evidence of the parties' negotiations (and, more specifically, oral assurances made to plaintiffs by defendants prior to the execution of the agreement) in connection with plaintiffs' claim that defendants breached the implied duty of good faith and fair dealing.[8] While defendants objected at trial to the admission of parol evidence concerning the parties' negotiations, they did not, once the court ruled that such evidence was clearly admissible with respect to plaintiffs' fraud claim, request a limiting instruction or even raise the issue of whether such evidence was admissible with respect to plaintiffs' breach of contract claim. In fact, defendants concede, as they must, that they failed to request a limiting instruction. Defendants, however, urge that parol evidence is a rule of substantive law that is not waived by the failure to object to its admission. See Carey v. Shellburne, Inc., 224 A.2d 400, 402 (Del.1966). While this is certainly true, there is nonetheless an evidentiary objection-relevance under Federal Rules of Evidence 401 and 402-that defendants should have made (and did not) if they desired to preclude the jury from considering such evidence with respect to plaintiffs' breach of contract claim. Because defendants failed to raise a timely objection to the admission of such evidence on that basis and request a limiting instruction, the court reviews the admission of the evidence under the "plain error" standard. See Fed.R.Evid. 103(d). The court readily concludes that the admission of evidence concerning the parties' negotiations prior to executing the purchase agreement was not plain error. In fact, the point largely is moot because the court, even if defendants had brought the issue to the court's attention at trial, would have permitted the jury to consider such evidence in connection with plaintiffs' claim that defendants breached the implied covenant of good faith and fair dealing. In other words, the court would have overruled any objection that defendants might have made in this regard. *1268 The parol evidence rule requires the court to exclude "extraneous evidence that varies or contradicts the terms of a unified written instrument." True North Composites, LLC v. 
Trinity Indus., Inc., 191 F.Supp.2d 484, 514 (D.Del.2002) (citation omitted). Because defendants have not shown (much less argued) that the evidence presented at trial concerning the parties' negotiations varied or contradicted the terms of the purchase agreement, such evidence simply does not require invocation of the parol evidence rule. Moreover, because the purchase agreement was silent with respect to the majority of the issues discussed by the parties prior to the execution of the agreement (e.g., the number of Ranger boats that Genmar Kansas would be expected to produce or whether Genmar Kansas would be expected to produce any sister-brand boats at all), evidence concerning the parties' pre-acquisition negotiations is entirely appropriate to provide context for plaintiffs' claim that defendants breached their duty of good faith and fair dealing. See id. at 514-15 (denying motion for new trial based on court's alleged error in admitting parol evidence of transaction underlying written agreement because evidence provided context to good-faith-and-fair-dealing claims and testimony did not vary or contradict the terms of the agreement). In other words, evidence concerning what the parties discussed prior to executing the agreement, to the extent such evidence, as here, does not contradict the agreement, is entirely relevant to whether defendants breached the covenant of good faith and fair dealing because the parties' reasonable expectations at the time of the contract formation determine the reasonableness of the challenged conduct. See id. at 516 (evidence concerning course of dealings between the parties prior to execution of agreement was relevant to claim that party breached the covenant of good faith and fair dealing because such evidence illuminated the parties' expectations of each other at the time of contract formation). To conclude, then, defendants have not shown that the parol evidence rule required exclusion, at least for purposes of plaintiffs' breach of contract claim, of evidence concerning the parties' negotiations prior to the execution of the purchase agreement. The court rejects defendants' contention that it erred by allowing the jury to consider such evidence. 2. The Good Faith and Fair Dealing Instruction Defendants' final argument in support of their motion for a new trial is that the court erred in its instruction to the jury concerning the duty of good faith and fair dealing. In its instructions, the court explained the duty, under Delaware law, as follows: [T]he law imposes a duty of good faith and fair dealing in every contract. This duty is a contract term implied by courts to prevent one party from unfairly taking advantage of the other party. This duty includes a requirement that a party avoid hindering or preventing the other party's performance. The implied covenant of good faith and fair dealing emphasizes faithfulness to an agreed common purpose and consistency for the justified expectations of the other party. The parties' reasonable expectations at the time of the contract formation determines the reasonableness of the challenged conduct. A violation of the implied covenant of good faith and fair dealing implicitly indicates bad faith conduct. 
In determining whether defendants breached the implied covenant of good faith and fair dealing, you may consider whether it is clear from what was expressly agreed upon by the parties that *1269 the parties would have agreed to prohibit the conduct complained of as a breach of the agreement had they thought to negotiate with respect to that matter. See Jury Instruction 12. The court's instruction, in large part, was based on an instruction given by another federal court applying Delaware law concerning the duty of good faith and fair dealing, True North Composites, LLC v. Trinity Indus., Inc., 191 F.Supp.2d 484 (D.Del.2002). In True North, the court, faced with a motion for a new trial based on alleged errors in the good faith and fair dealing instruction, reviewed its instruction and found it to be "consonant with Delaware law." Id. at 517-18. Specifically, the court noted that its instruction "tracks the language of § 205(a) of the Restatement (Second) of Contracts (1979), which has been used by Delaware courts to explain the duty of good faith." Id. at 518.[9] In short, the court readily concluded that its instruction on the duty of good faith and fair dealing was not in error. Id. Defendants urge, as they did at the instruction conference, that any proper instruction on the duty of good faith and fair dealing under Delaware law must require a finding that the conduct at issue involved "fraud, deceit or misrepresentation." Defendants' proposed instruction, for example, contained the following sentence that the court expressly rejected: "To prove defendants breached the implied duty of good faith and fair dealing in the Purchase Agreement, plaintiffs must demonstrate that defendants engaged in conduct of fraud, deceit or misrepresentation." See Def. Proposed Instruction 5. This proffered language is derived from Corporate Property Associates 6 v. Hallwood Group Inc., 792 A.2d 993 (Del.Ch.2002), a trial court decision from the Court of Chancery in Delaware. In that case, a commercial dispute, the Vice Chancellor stated that a claimant seeking to prove a breach of the implied covenant of good faith and fair dealing "must also demonstrate that the conduct at issue involved `an aspect of fraud, deceit or misrepresentation.'" Id. at 1003. At the instruction conference, defendants relied solely on the Corporate Property case to support their proffered instruction. Indeed, defendants did not direct the court to any other Delaware case-much less a Delaware Supreme Court case or a federal case interpreting Delaware law-in which a court required a finding of fraud, deceit or misrepresentation to support a breach of the covenant of good faith and fair dealing in the context of a commercial transaction. As the court explained at the conference, the trial court in Corporate Property cites only to Merrill v. Crothall-American, Inc., 606 A.2d 96, 101 (Del.1992) in support of the "fraud, deceit or misrepresentation" language. The Merrill case involved an employment-at-will contract and the court held that when the conduct of an employer in the employment-at-will context rises to the level of fraud, deceit or misrepresentation, then the employer will have violated the implied covenant of good faith and fair dealing. Id. Interestingly, the Merrill court, in turn, relies on two cases from two other state courts in support of its conclusion that an element of fraud, deceit or misrepresentation must be present before an employer violates the covenant of good faith and fair dealing. Id. 
Those cases, Magnan v. Anaconda Indus., Inc., 37 Conn.Supp. 38, 429 A.2d 492 (1980) and A. John Cohen Ins. v. Middlesex Ins. Co., 8 Mass.App.Ct. 178, 392 N.E.2d 862 (1979), *1270 both arise in the employment-at-will context. In the limited and unique context of employment-at-will, requiring an employee to prove that his or her employer's conduct amounted to fraud in order to show a breach of the duty of good faith and fair dealing is entirely consistent with the notion of an at-will employment relationship. For in the absence of a showing of fraud, the covenant of good faith and fair dealing could not operate in the employment-at-will context without wholly defeating the benefit for which the parties bargained-the employer's ability to discharge the employee and the employee's ability to quit his or her employment for good reason, bad reason or no reason at all. Stated another way, parties to an at-will employment relationship are generally not subjected to any good faith standard.[10] On the other hand, in the context of a commercial transaction like the one presented here, the implied covenant of good faith and fair dealing-as it is typically applied (i.e., without a requirement of fraud)-does not conflict with the benefit for which parties to a commercial transaction generally bargain. For these reasons, the court reiterates its belief that the trial court in Corporate Property incorrectly incorporated into the commercial context the "fraud, deceit or misrepresentation" language from the employment-at-will context of Merrill.[11] Defendants, for the first time, now also cite to a Delaware Supreme Court case that they assert rejects the distinction that this court has drawn between the commercial context and employment-at-will context. Specifically, defendants rely on Cincinnati SMSA Limited Partnership v. Cincinnati Bell Cellular Systems Co., 708 A.2d 989 (Del.1998) and contend that in Cincinnati Bell the Delaware Supreme Court "made clear that the same standard applied by the Delaware court in Merrill should also be applied in the commercial contract context." Defendants' characterization of the Cincinnati Bell case is simply inaccurate; in fact, that case supports this court's conclusion that any requirement that a party prove fraudulent conduct to demonstrate a violation of the duty of good faith and fair dealing is limited to the employment-at-will context. In Cincinnati Bell, the Delaware Supreme Court reviewed a decision by the Court of Chancery dismissing, pursuant to Rule 12(b)(6), a good faith and fair dealing claim arising in the context of a limited partnership agreement. Id. at 990. Specifically, the Delaware Supreme Court affirmed the lower court's conclusion that the implied covenant of good faith and fair dealing could not provide a basis for implying additional noncompete obligations in a limited partnership agreement where the agreement's noncompete clause was unambiguous. Id. at 993-94. In so holding, the Cincinnati Bell court emphasized that "implying obligations based on the covenant of good faith and fair dealing is a cautious enterprise." Id. at 992. *1271 Tracing the development of the implied covenant under Delaware law, the court in Cincinnati Bell noted that the Merrill case was the first case in which the court "first recognized the limited application of the covenant to inducement representations in at-will employment contracts." Id. 
The Cincinnati Bell court further noted that in Merrill, the court "was careful to heed the legal right of employers to pursue a certain amount of self-interest in the creation of contractual relationships" and "held that, to plead properly a claim for breach of an implied covenant of good faith and fair dealing in the inducement of employment, a plaintiff must allege `an aspect of fraud, deceit or misrepresentation.'" Id. at 992-93 (quoting Merrill, 606 A.2d at 101-02). The court in the Cincinnati Bell case then stated, "[t]his Court should be no less cautious or exacting when asked to imply contractual obligations from the written text of a limited partnership agreement." Id. at 993. Defendants argue that this single sentence clearly illustrates an intent by the Delaware Supreme Court to incorporate the fraud standard of the employment-at-will context into the commercial transaction context. A full reading of Cincinnati Bell, however, indicates that the court was simply stressing the narrow scope of the implied covenant and that application of the covenant is a "cautious enterprise." Id. at 992-93. There is no indication in Cincinnati Bell that the court utilized the fraud standard of Merrill in resolving the appeal. In short, Cincinnati Bell in no way suggests that the jury in this case should have been instructed that plaintiffs were required to prove that defendants acted fraudulently in order to prove a breach of the implied covenant and, more importantly, the court believes that the Delaware Supreme Court, if faced with the issue, would refuse to adopt such a requirement. Moreover, defendants' construction of Delaware law on good faith and fair dealing is illogical as it would render a good faith and fair dealing claim entirely duplicative of a fraud claim. In fact, defendants essentially contend that plaintiffs' good faith and fair dealing claim should be converted into one of fraud. Under defendants' theory, then, plaintiffs could not prevail on their good faith and fair dealing claim without also prevailing on their fraud claim. Any distinction, then, between the two claims would be lost. Such a result would be untenable, as the Delaware Supreme Court obviously recognizes a distinction between the two claims. See Desert Equities, Inc. v. Morgan Stanley Leveraged Equity Fund, II, L.P., 624 A.2d 1199, 1207-08 (Del.1993) (distinguishing claim of fraud from allegations of bad faith). Finally, defendants contend that the court's instruction on the duty of good faith and fair dealing was erroneous because it failed to inform the jury that plaintiffs were required to show affirmative acts of bad faith on the part of defendants. The court's instruction advised the jury that a violation of the implied covenant of good faith and fair dealing "implicitly indicates bad faith conduct." While defendants may have preferred different language concerning bad faith, they have not identified how the court's instruction departs from or incompletely portrays Delaware law. Moreover, defendants have not demonstrated why plaintiffs' proof of a breach of the duty of good faith and fair dealing is inadequate without further proof of affirmative acts of bad faith conduct. The court, then, rejects defendants' argument that the instruction was erroneous. 
See True North, 191 F.Supp.2d at 517-18 (rejecting argument that instruction was erroneous because it failed to advise that the claimant must prove that the other party acted in bad faith where movant failed to show how the court's instruction was inconsistent with Delaware law). *1272 II. Plaintiffs' Motion to Alter or Amend the Judgment The judgment entered on November 21, 2002 states that plaintiffs Horizon and Mr. Pepper shall recover on their breach of contract claim "the sum of $2,500,000.00, with interest thereon at the rate of 1.46 percent per annum as provided by law." Plaintiffs move to alter or amend the judgment to reflect the parties' contractually agreed interest rate of 2 percent per month.[12] In that regard, the relevant section of the purchase agreement executed by the parties states as follows: In the event that the Non-Defaulting Party is entitled to receive an amount of money by reason of the Defaulting Party's default hereunder, then, in addition to such amount of money, the Defaulting Party shall promptly pay to the Non-Defaulting Party a sum equal to interest on such amount of money accruing at the rate of 2% per month (but if such rate is not permitted under the laws of the State of Delaware, then at the highest rate which is permitted to be paid under the laws of the State of Delaware) during the period between the date such payment should have been made hereunder and the date of the actual payment thereof. See Purchase Agreement, Section 13.2(b) (Trial Exhibit 227a). Defendants oppose plaintiffs' motion for three reasons. According to defendants, the contractual rate of interest specified in the purchase agreement is preempted by the standard rate contained in 28 U.S.C. § 1961; plaintiffs have waived their right to have the judgment accrue interest at the parties' contractually agreed rate; and the contractually agreed rate is not permitted under Delaware law. As set forth below, the court concludes that parties are free to contract for a rate other than that specified in 28 U.S.C. § 1961 and, thus, the federal statute does not supersede the parties' agreement. Nonetheless, because the court concludes that plaintiffs have waived their right to assert the rate set forth in the purchase agreement by not preserving their claim of entitlement to such rate in the pretrial order and by failing to raise the issue until after the entry of judgment, the court denies plaintiffs' motion to alter or amend the judgment to the extent plaintiffs seek to enforce the rate established in the purchase agreement. A. Whether Section 1961 Supersedes the Contractually Agreed Rate Defendants contend that 28 U.S.C. § 1961, the federal statute governing post-judgment interest, must govern the award of post-judgment interest in this case despite the parties' contractual agreement for a different rate. Section 1961 states, in relevant part, that "[i]nterest shall be allowed on any money judgment in a civil case recovered in district court" and that "[s]uch interest shall be calculated from the date of the entry of the judgment, at a rate equal to the coupon issue yield equivalent (as determined by the Secretary of the Treasury) of the average accepted auction price for the last auction of the fifty-two week United States Treasury bills settled immediately prior to the date of the judgment." 28 U.S.C. § 1961(a). In support of their argument, defendants direct the court to Wilmington Trust Co. v. Aerovias de Mexico, S.A. de C.V., 893 F.Supp. 
215, 220 (S.D.N.Y.1995), *1273 where the court calculated post-judgment interest at the section 1961 rate despite a contractual agreement providing for a higher rate. In that case, the district court simply stated that the language of section 1961(a) is mandatory and must govern the interest rate on any judgment debt: The language of [section 1961(a) ] is mandatory: once a claim is reduced to judgment, the original claim is extinguished, and a new claim, called a judgment debt, arises. Section 1961(a) governs the interest rate on this judgment debt. Carte Blanche (Singapore) v. Carte Blanche (Int.), 888 F.2d 260 (2d Cir.1989), citing Kotsopoulos v. Asturia Shipping Co., 467 F.2d 91 (2d Cir.1972). Id. at 220-21. The Wilmington case, however, is not entirely helpful for purposes of this court's analysis of whether parties can contract for a rate of interest different from the rate set forth in section 1961(a). In that regard, the district court in Wilmington did not expressly address whether the parties could contract around the federal statute. Rather, the court seemed to assume that the parties would not be permitted to do so under Second Circuit precedent. However, Carte Blanche and Kotsopoulos, the Second Circuit cases upon which the Wilmington court relies, do not stand for the proposition that parties cannot contract for a different rate of interest. In Kotsopoulos, a maritime case, the issue before the Second Circuit was only whether state law or federal law would determine the appropriate rate of post-judgment interest in admiralty and maritime cases. See 467 F.2d at 94-95. Similarly, the Second Circuit in Carte Blanche did not address whether parties to a contract could provide for a rate different than the standard rate set forth in section 1961(a). There, the Circuit held that an arbitrator could not impose a postjudgment interest rate different than the rate established in section 1961(a). See 888 F.2d at 268-69 (district court judgment affirming an arbitration award is governed by section 1961(a) rather than rate set forth in arbitration award). Plaintiffs, on the other hand, urge that nearly every Circuit Court of Appeals to have addressed this issue has concluded that the parties can agree to an interest rate other than the standard one contained in 28 U.S.C. § 1961. For example, the Seventh Circuit in Central States, Southeast & Southwest Areas Pension Fund v. Bomar National, Inc., 253 F.3d 1011 (7th Cir.2001), affirmed a district court's award of post-judgment interest pursuant to the rate agreed upon in a pension trust agreement rather than the standard rate contained in section 1961(a). In so doing, the Seventh Circuit stated that "[i]t is well established that parties can agree to an interest rate other than the standard one contained in 28 U.S.C. § 1961." Id. at 1020. In support of its statement, the Seventh Circuit cites to the Fifth Circuit's decision in Hymel v. UNC, Inc., 994 F.2d 260, 265 (5th Cir.1993). In Hymel, the Fifth Circuit "noted" that the district court was correct when it awarded post-judgment interest at a rate of 9 percent per annum pursuant to express language contained in a promissory note executed by the parties. Id. at 265-66. The Circuit summarily rejected the argument that section 1961 applies in every case without exception and, in doing so, cited to another Fifth Circuit case, In re Lift & Equipment Service, Inc., 816 F.2d 1013 (5th Cir.1987). See id. 
In In re Lift, a case arising out of the bankruptcy court, the parties disputed whether the creditor was entitled to post-judgment interest under Louisiana law or under section 1961(a). 816 F.2d at 1018. The Fifth Circuit, however, rejected both arguments and, embracing a view that none of the parties had espoused, applied the interest *1274 rate set forth in the written assignment of accounts receivable. Id. In so doing, the Circuit stated, "While 28 U.S.C. § 1961 provides a standard rate of post-judgment interest, the parties are free to stipulate a different rate, consistent with state usury and other applicable laws." Id. While the Fifth Circuit in In re Lift offered no explanation for its conclusion, it cited to a Ninth Circuit decision, Investment Service Co. v. Allied Equities Corp., 519 F.2d 508 (9th Cir.1975). In that case, the district court judge applied the interest rate agreed upon by the parties in a promissory note. Id. at 511. The guarantor of the loan argued that the assignee of the note was only entitled to the legal rate of interest under Oregon state law. See id. The Ninth Circuit rejected the argument: It is true that the contractual duty here is discharged by merger once the judgment is entered on the note. Restatement of Contracts § 444. However, upon entry of the judgment the legal rate of interest applicable should apply unless the parties have agreed in the note that some other rate of interest shall apply. Corbin on Contracts § 1045 (1962). Id. The court's reliance on Corbin, however, is somewhat puzzling in that Corbin does not purport to draw any conclusion about the effect of a judgment on the parties' contractual agreement to a different rate and it does not address a contractual agreement for post-judgment interest; rather, the section cited by the Ninth Circuit deals only with the payment of interest as "agreed compensation" for a breach of the contract. See Arthur Linton Corbin, Corbin on Contracts § 1045 (Interim ed.2002) (expressly stating that section 1045 addresses neither a contract right to interest nor statutory rights thereto, but only interest recoverable as compensatory damages for a breach of contract). In any event, the court ultimately applied Oregon's legal-rate-of-interest statute, which specifically provides that parties to a contract can agree to a higher rate of interest provided that such rate does not exceed the maximum rate allowed by law. See id. The court concedes at the outset that the cases relied upon by plaintiffs, to the extent those cases purport to stand for a well-recognized rule that parties are free to contract for an interest rate other than the rate established in section 1961(a), are problematic in certain respects. In large part, the cases offer very little analysis as to why parties would be able to contract around the seemingly mandatory language of section 1961(a). Moreover, in several of the cases, the precise issue was not one that the court had to decide and, thus, any conclusions about the issue would be mere dicta. Nonetheless, it is clear that the Seventh, Fifth and Ninth Circuits consider it beyond dispute that parties are free to contract for whatever post-judgment interest rate they choose. In addition, the Fourth Circuit, albeit in an unpublished decision, expressly adopted the Fifth Circuit's Hymel decision in affirming a district court's award of post-judgment interest at a rate set forth in a stock redemption agreement as opposed to the rate set forth in section 1961(a). See Carolina Pizza Huts, Inc. v. 
Woodward, 67 F.3d 294, 1995 WL 572902, at *3 (4th Cir.1995). Moreover, at least one district court has declined to award post-judgment interest at the section 1961(a) rate where the parties stipulated to the entry of a judgment which provided for interest at a higher rate. See In re Connaught Properties, Inc., 176 B.R. 678, 684-85 (Bankr.D.Conn.1995). In the end, the court is called upon to resolve a difficult legal issue on which the Tenth Circuit has not been called to opine-an issue that is rendered that much more difficult in light of the dearth of on-point *1275 analysis by other courts. After carefully weighing both sides of the issue, the court ultimately believes that the Tenth Circuit would likely concur with those Circuits that have held that parties should be and are able to contract for a rate other than the rate set forth in section 1961(a). While section 1961 without a doubt uses mandatory language, the court concludes that Congress intended it to be mandatory in the sense that a district court or other third party (e.g., an arbitrator) has no discretion to award a different rate of interest or to decline to award post-judgment interest. See, e.g., Bell, Boyd & Lloyd v. Tapy, 896 F.2d 1101, 1104 (7th Cir.1990) (section 1961(a) allows the judge no discretion to deny the interest authorized by that section); Carte Blanche, 888 F.2d at 269 (the language of section 1961 is mandatory and its terms do not permit the exercise of judicial discretion in its application). The court, however, can discern no sound reason why Congress would have intended that parties themselves could not agree to a different rate. Thus, the court rejects defendants' contention that section 1961(a) supersedes the rate agreed upon by the parties in the purchase agreement. B. Whether Plaintiffs Waived the Right to Assert the Contractually Agreed Rate Defendants also oppose plaintiffs' motion to alter or amend on the grounds that plaintiffs waived the right to assert the 2% per month rate by failing to include that rate in the pretrial order. Plaintiffs concede that they did not articulate in the pretrial order their claim of entitlement to a higher rate of post-judgment interest. Nonetheless, plaintiffs contend that no such claim needed to be asserted in the pretrial order. As explained below, the court disagrees with plaintiffs on this point. In their papers, plaintiffs rely to a large extent on the legal principles that an award of post-judgment interest is mandatory, see Bancamerica Commercial Corp. v. Mosher Steel of Kansas, Inc., 103 F.3d 80, 81 (10th Cir.1996), and, as such, must be made regardless of what was demanded in the complaint or stated in the pretrial order. See Bell, Boyd & Lloyd v. Tapy, 896 F.2d 1101, 1104 (7th Cir.1990); 10 Charles Alan Wright, Arthur R. Miller & Mary Kay Kane, Federal Practice and Procedure § 2664 at 186-87 (1998). However, the issue is not whether plaintiffs were required to request post-judgment interest in the pretrial order to receive an award of post-judgment interest. The law is clear (and defendants do not dispute) that plaintiffs are entitled to post-judgment interest, at least at the rate established in 28 U.S.C. § 1961(a), despite their failure to request such an award in the pretrial order. The issue as this court sees it is whether plaintiffs are entitled to an award of post-judgment interest at the higher rate of interest specified in the purchase agreement when no such request was made in the pretrial order. 
It is axiomatic that a Rule 59(e) motion cannot be used to raise a new issue that could have been raised prior to judgment. See Steele v. Young, 11 F.3d 1518, 1520 n. 1 (10th Cir.1993); 11 Charles Alan Wright, Arthur R. Miller & Mary Kay Kane, Federal Practice and Procedure § 2810.1 (2d ed.1995). In other words, Rule 59(e) is "aimed at reconsideration, not initial consideration" and, thus, a party may not rely on Rule 59(e) to raise an argument which could, and should, have been made before judgment issued. United States ex rel. Noyes v. Kimberly Constr., Inc., 43 Fed.Appx. 283, 286-87 (10th Cir.2002) (emphasis in original). Despite plaintiffs' insistence that they did not need to raise the issue prior to judgment, *1276 it is beyond dispute that plaintiffs could have raised the issue prior to judgment. Unlike an award of postjudgment interest pursuant to 28 U.S.C. § 1961, the award sought by plaintiffs here was not necessarily a "given." In that regard, while defendants assert only legal arguments in opposition to plaintiffs' claim of entitlement to the higher rate of interest, it is possible that defendants could have sought to raise factual arguments in opposition to the claim. For example, defendants could have asserted that section 13.2(b) was altered by plaintiffs after the contract was signed.[13] Had defendants so asserted, then they would have been entitled to have the jury resolve that dispute. Because a court is not permitted to give relief under Rule 59(e) "if this would defeat a party's right to jury trial on an issue," see Wright, Miller & Kane, supra, § 2810.1, then the fact that one in the place of defendants might have had fact-based defenses available renders plaintiffs' request for award of postjudgment interest pursuant to the purchase agreement the type of request that cannot be raised for the first time pursuant to Rule 59(e). According to plaintiffs, defendants were nonetheless on notice that plaintiffs would assert a claim of entitlement to an award of postjudgment interest at the higher rate because defendants executed the purchase agreement and are charged with knowledge of the contents of that agreement. The court finds this argument disingenuous as it is clear that plaintiffs themselves did not remember (or perhaps even recognize) that the purchase agreement provided for a higher rate of interest until very late in the litigation process. Indeed, section 13.2(b) provides not only for postjudgment interest, but prejudgment interest-a remedy that plaintiffs failed to request at any time during the course of the litigation (and a remedy that plaintiffs acknowledge they cannot now seek). Plaintiffs' failure in that regard demonstrates to the court that they were not aware of or did not remember the contents of section 13.2(b). Moreover, while section 13.2(a) provides for a prevailing party to recover reasonable attorneys fees, plaintiffs did not assert a claim for fees in the pretrial order. This also demonstrates to the court the likelihood that plaintiffs had not considered the contents of section 13.2 in connection with this case at any time prior to entry of the pretrial order. Only after defendants asserted in the pretrial order a right to recover fees did plaintiffs scour the purchase agreement looking for the source of defendants' claim. At that point, after the entry of the pretrial order, plaintiffs moved to amend the pretrial order to assert a claim for fees. 
The court granted that motion because defendants, who had asserted a claim for the recovery of fees pursuant to the purchase agreement, were not prejudiced by the addition of that claim in that they clearly had knowledge of that portion of the contract and they had not demonstrated that plaintiffs' right to recover fees would affect the trial of the case in any way. The court concludes that defendants were entitled to notice from plaintiffs-prior to trial and, hopefully, at least by the date of entry of the final pretrial order-that plaintiffs intended to seek postjudgment interest at the contractual rate. Such notice would have enabled defendants to ascertain *1277 whether they had any good faith factual arguments to raise in the face of section 13.2(b)-factual arguments that could have been presented to the jury. Moreover, such notice would have permitted defendants to assess fully the risk of bringing this case to trial. More specifically, defendants would have been able to ascertain the total potential exposure that they might face if the jury, as they did, returned a verdict in favor of plaintiffs. Indeed, the interest rate set forth in the contract-2 percent per month-would expose defendants to an additional $600,000 per year in indebtedness to plaintiffs on a verdict of $2.5 million, assuming the jury's verdict is upheld on appeal. In short, the court believes that defendants were entitled to actual notice that plaintiffs' recovery might encompass this significant amount. In sum, plaintiffs' motion to alter or amend the judgment is denied to the extent plaintiffs seek an award of post-judgment interest pursuant to the interest rate set forth in the parties' purchase agreement. C. Whether Delaware Law Prohibits Application of the Contractually Agreed Rate Because the court denies plaintiffs' motion on the grounds that plaintiffs waived their right to assert the higher interest rate found in the purchase agreement, the court need not address defendants' argument that the higher rate is not permitted under Delaware law. Nonetheless, in the interest of judicial economy in the event the parties' appeal this court's decision to the Tenth Circuit, the court notes, without elaborating in full detail, that it would conclude that the higher rate established in the contract is permissible under Delaware law. The Delaware law governing post-judgment interest is codified at section 2301 of Title 6 of the Delaware Code and states, in relevant part, as follows: Any lender may charge and collect from a borrower interest at any rate agreed upon in writing not in excess of 5% over the Federal Reserve discount rate including any surcharge thereon, and judgments entered after May 13, 1980, shall bear interest at the rate in the contract sued upon. Where there is no expressed contract rate, the legal rate of interest shall be 5% over the Federal Reserve discount rate including any surcharge as of the time from which interest is due; provided, that where the time from which interest is due predates April 18, 1980, the legal rate shall remain as it was at such time. Id. § 2301(a). The court agrees with defendants that section 2301(a) clearly provides that no interest rate can exceed 5% over the federal discount rate and rejects plaintiffs' argument that because the judgment in this case was entered after May 13, 1980, section 2301(a) permits interest to accrue at a contractually agreed rate. 
However, as plaintiffs highlight in their papers, section 2301(c) expressly provides that there is "no limitation on the rate of interest which may be legally charged for the loan or use of money, where the amount of money loaned or used exceeds $100,000, and where repayment thereof is not secured by a mortgage against the principal residence of any borrower." While defendants urge that this provision does not apply because it is limited to the context of an unsecured loan between a lender and a borrower, section 2301(a) on its face would also appear to apply only to lenders and borrowers. Thus, if subsection (a) applies to the purchase agreement (as defendants urge that it does), then subsection (c) would have to apply as well. In any event, defendants are precluded under Delaware law from challenging the *1278 contractual rate as usurious. See Del. Code tit. 6, § 2306 ("No corporation ... or limited liability company ... shall interpose the defense of usury in any action."). For these reasons, the court would conclude that the rate of interest agreed upon by the parties in the purchase agreement is not prohibited by Delaware law. III. Plaintiffs' Motion for Attorneys' Fees, Costs and Expenses The purchase agreement executed by the parties provides that the prevailing party shall be entitled to recover from the defaulting party all costs and expenses, including reasonable attorneys' fees, incurred in connection with enforcing the terms of the purchase agreement. See Purchase Agreement, Section 13.2(a) (Trial Exhibit 227a). Pursuant to this provision of the contract, and having prevailed on their breach of contract claim, plaintiffs Horizon and Mr. Pepper seek attorneys' fees and expenses totaling $846,740.35.[14] As set forth below, with the exception of a few minor adjustments, the court grants plaintiffs' motion.[15] The parties have stipulated to the reasonableness of all billing rates and, thus, the court need not address that issue. To the extent defendants do oppose plaintiffs' fee request, that opposition is both exceedingly narrow and easily resolved. Defendants assert that plaintiffs' request is simply too exorbitant because of the "limited success" achieved by plaintiffs at trial. To be clear, defendants have not articulated any objections to any specific portion of the fee request or plaintiffs' billing records and they do not contest any specific time entries. Instead, defendants assert only a general objection to the fee request as unreasonable. Indeed, in the face of a request for nearly $850,000 in fees and expenses, defendants have submitted a brief that is less than 9 pages in length. Defendants suggest in their papers that they are relieved of the burden of objecting to specific portions of plaintiffs' fee request because, according to defendants, plaintiffs have failed to meet their burden of showing that the request is reasonable. The court disagrees. To meet their burden of proving the number of hours reasonably spent on the litigation, plaintiffs "must submit meticulous, contemporaneous time records that reveal, for each lawyer for whom fees are sought, all hours for which compensation is requested and how those hours were allotted to specific tasks." United Phosphorus, Ltd. v. *1279 Midland Fumigant, Inc., 205 F.3d 1219, 1233 (10th Cir.2000) (citing Case v. Unified Sch. Dist. No. 233, 157 F.3d 1243, 1249-50 (10th Cir.1998)). The district court, then, may reduce the number of hours when the time records provided to the court are inadequate. Id. at 1233-34. 
The court has reviewed the billing records submitted by plaintiffs and those records are more than adequate to meet plaintiffs' burden. Defendants also invite the court to dissect plaintiffs' billing records in an effort to determine or "approximate" those fees that are attributable to the breach of contract claim and those fees that are attributable to the unsuccessful claims. The court, however, is not obligated to comb the record to ferret out deficiencies in plaintiffs' submission. It is defendants' obligation to direct the court to such deficiencies if they believe such deficiencies exist. See Public Serv. Co. of Colorado v. Continental Casualty Co., 26 F.3d 1508, 1521 (10th Cir.1994) ("We do not feel that the trial judge was obligated to comb the evidence before him-consisting of voluminous attorney billing records-to ferret out gaps or inconsistencies in the evidence presented on the fees."); see also United States ex rel. C.J.C., Inc. v. Western States Mechanical Contractors, Inc., 834 F.2d 1533, 1549 (10th Cir.1987) ("[T]he trial court is not responsible for independently calculating a `reasonable' fee."). Nonetheless, the court has reviewed the billing records and, in large part, concludes that plaintiffs' fee request is a reasonable one. The court will, however, deduct from plaintiffs' request fees of $67.50 for work performed by attorney Norman Siegel on April 15, 2002 and fees of $585.00 for work performed by attorney Amy Baumann on August 14, 2002. It is apparent from plaintiffs' papers that they intended to deduct these fees from their request (and to request fees for attorney time only to the extent work was done by the two primary lawyers involved in the case-George Hanson and Todd McGuire) but, presumably by oversight, neglected to do so. Similarly, the court will deduct fees of $3195.00 incurred during July 2002 in connection with plaintiffs' motion to compel discovery. Again, plaintiffs' papers indicate that they intended to deduct these fees from their request, having already recovered this sum from defendants by virtue of this court's July 25, 2002 order, but the billing records indicate that this deduction was not, in fact, made. To reiterate, then, aside from these minor deductions, the court has reviewed the billing records and, in the absence of any specific objection to plaintiffs' request and in the absence of any evidence that the hours claimed by plaintiffs are unreasonable, concludes that plaintiffs' fee request is a reasonable one. See Robinson v. City of Edmond, 160 F.3d 1275, 1279, 1285-86 (10th Cir.1998) (plaintiffs requested $186,000 in fees and defendants generally objected to this request as unreasonable but specifically articulated objections to only $43,000 of the request, leaving $142,000 in requested attorney's fees "not separately contested;" district court abused its discretion in reducing fee award in part because the end result was a fee award that was below the "unrebutted," "unchallenged," and "uncontested" amount of the fee request); Sheets v. Salt Lake County, 45 F.3d 1383 (10th Cir.1995) (affirming trial court's fee award in part because defendants failed to proffer any evidence that the hours claimed were unreasonable and, instead, simply made unsubstantiated allegations that the fees were duplicative and exorbitant in nature). Defendants' general objection to plaintiffs' request is that the request is simply unreasonable in light of plaintiffs' "limited success"-plaintiffs prevailed only on their "relatively simple" contract claim. 
In the context of this litigation, however, a verdict *1280 of $2.5 million is a substantial victory for plaintiffs and there was nothing "simple" about the contract claim. Rather, the case presented complex commercial issues and plaintiffs' counsel successfully developed those issues at trial. Indeed, Mr. Pepper and Horizon's breach of contract claim-the claim on which plaintiffs ultimately succeeded-encompassed a claim that defendants had breached the implied covenant of good faith and fair dealing, a claim that is often difficult for judges and lawyers to comprehend let alone lay persons on a jury. To prove plaintiffs' claim at trial, plaintiffs' counsel could not rely on an express term of the contract and could not point to one specific act that constituted defendants' breach. Instead, counsel was required to convey to the jury that defendants' entire course of conduct (conduct that spanned over 18 months) breached an "implied" duty to act in "good faith." Despite the sheer volume of evidence needed to describe and place in context defendants' course of conduct, coupled with the need to fit that evidence into amorphous concepts like "good faith" and "implied duty," plaintiffs' counsel achieved a multimillion dollar verdict for his clients. For these reasons, the court readily concludes (and defendants cannot seriously dispute) that plaintiffs obtained excellent results at trial. See Hampton v. Dillard Dep't Stores, Inc., 247 F.3d 1091, 1120 (10th Cir.2001) (proper focus is on the overall relief obtained). No blanket reduction is warranted and plaintiffs' counsel is deserving of a fully compensatory fee. See Hensley v. Eckerhart, 461 U.S. 424, 433-35, 103 S.Ct. 1933, 76 L.Ed.2d 40 (1983). In a related vein, defendants contend that plaintiffs are only permitted to recover those reasonable fees and expenses incurred in connection with the pursuit of their contract claim. Defendants contend that plaintiffs are improperly attempting to recover fees and expenses associated with the numerous claims on which plaintiffs did not prevail at trial and that the time and labor required to present evidence to the jury that defendants breached the purchase agreement was "only a small part of that actually expended by plaintiffs' counsel." The court rejects this argument, too. As an initial matter, plaintiffs' papers demonstrate that plaintiffs' counsel have already excluded from their request those hours associated with discrete research and other work related to plaintiffs' statutory discrimination claims, including hours spent working with plaintiffs' expert witness concerning plaintiffs' potential damages under Title VII. See Robinson, 160 F.3d at 1281 (prevailing party must make a good faith effort to exclude from request those hours that are excessive, redundant or otherwise unnecessary). In any event, in light of the fact that most, if not all, of the unsuccessful claims were intertwined with the successful breach of contract claim through a common core of fact or related legal theories, any reduction of fees would be inappropriate. See id. at 1283 (reversing district court's reduction of fee award on the grounds that plaintiffs achieved only partial success where all unsuccessful claims were intertwined with the successful claims). The law is clear that when a lawsuit consists of related claims, a plaintiff who has won substantial relief should not have his attorney's fee reduced simply because the court or jury did not adopt each contention raised. 
See Hampton, 247 F.3d at 1120 (citing Jane L. v. Bangerter, 61 F.3d 1505, 1512 (10th Cir.1995)) (affirming district court's refusal to reduce fee award based on alleged limited success; all of the claims were similar and stemmed from the same set of facts). Indeed, the Supreme Court has cautioned that a court should exclude an unsuccessful claim from a fee award only if that claim is "distinct in *1281 all respects" from the successful claim. See Hensley, 461 U.S. at 440, 103 S.Ct. 1933. Utilizing this standard (a standard that defendants do not even reference in their papers), the court simply cannot conclude that any of plaintiffs' unsuccessful claims are unrelated to the pursuit of the ultimate result achieved. Indeed, any attempt to divide the hours expended in this case on a claim-by-claim basis would be difficult and unjust. Nearly all of the claims pursued by plaintiffs-particularly plaintiffs' fraud and breach of contract claims-centered on the same core of facts. Any investigation or development of the fraud claim would necessarily have encompassed plaintiffs' breach of contract claim (and vice versa) as both claims required careful scrutiny of the parties' pre-contractual negotiations and the parties' conduct throughout the course of the contractual relationship. Thus, it is not surprising to this court that the billing records of plaintiffs' counsel, in large part, do not distinguish between claims. See id. at 435, 103 S.Ct. 1933 ("Much of counsel's time will be devoted generally to the litigation as a whole, making it difficult to divide the hours expended on a claim-by-claim basis."). Moreover, the Tenth Circuit has emphasized the importance of allowing litigants the "breathing room" necessary to raise alternative legal grounds that seek the same result and, thus, focusing on the actual result of the trial rather than dividing attorneys' fees by the number of successful claims. See Robinson, 160 F.3d at 1283. For the foregoing reasons, the court rejects defendants' contention that a blanket reduction of fees is warranted and, with the exception of the minor adjustments noted above, grants plaintiffs' motion for fees and costs and expenses. IT IS THEREFORE ORDERED BY THE COURT THAT plaintiffs' motion to alter or amend the judgment (doc. # 197) is granted in part and denied in part. Specifically, the motion is granted to the extent that a typographical error in the judgment shall be corrected and is otherwise denied; plaintiffs' motion for attorneys' fees, costs and expenses (doc. # 198) is granted in part and denied in part and the court awards plaintiffs fees, costs and expenses in the amount of $842,892.85; and defendants' renewed motion for judgment as a matter of law pursuant to Rule 50(b) or, in the alternative, motion for remittitur and/or new trial pursuant to Rule 59 (doc. # 199) is denied. IT IS FURTHER ORDERED BY THE COURT THAT the clerk of the court shall amend the judgment to reflect this court's award of $842,892.85 in attorneys' fees, costs and expenses. The amended judgment should also be corrected to reflect that the jury returned a verdict on November 21, 2002 as opposed to November 12, 2002. IT IS SO ORDERED. NOTES [1] The parties do not dispute that Delaware law governs plaintiffs' claim that defendants breached the terms of the purchase agreement, as that agreement contains an express choice-of-law provision. 
[2] The undisputed evidence at trial was that "standard cost" was the amount that it actually cost Genmar Kansas to build the boat in terms of labor, material and overhead. In other words, Genmar Kansas was not making any profit on Ranger or Crestliner boats and, in most instances, was actually losing money on these boats because Genmar Kansas was not operating at maximum efficiency. Profits on these boats that were built on the production line in the Genmar Kansas facility were earned by Ranger and Crestliner when they in turn sold the boats to their dealer network. [3] Lowe is another aluminum boat manufacturing company. Mr. Pepper worked for Lowe for nearly ten years; ultimately Lowe was purchased by defendants. [4] In their papers, defendants also assert that section 4 of the employment agreements supports their argument that the O'Tools were not guaranteed a specific term of employment. Defendants, however, have not mentioned section 4 at any time prior to filing their renewed motion and certainly did not highlight this section for the jury. [5] The parties do not dispute that Kansas law governs the O'Tools' breach of contract claims as the O'Tools' employment contracts contained a provision identifying Kansas law as the parties' choice of law. [6] In their papers, defendants assert that 20 percent of Ms. O'Tool's salary is only $8320. That figure, however, is based on Ms. O'Tool's annual salary of $41,600 instead of the total salary that Ms. O'Tool would have earned over the relevant 15-month period. [7] When Mr. O'Tool's annual salary is translated into a monthly salary, and that monthly salary is multiplied by 15 months (measured from the time of Mr. O'Tool's discharge through the time when Mr. O'Tool's employment contract would have expired), his total lost salary is $81,249.90 (65,000/12 = $5,416.66 per month x 15). Twenty-five percent of $81,249.90 is $20,312.47. [8] This argument presupposes that the jury considered such evidence in connection with plaintiffs' breach of contract claim. Defendants, of course, have no way of knowing that the jury did, in fact, consider such evidence in its assessment of the breach of contract claim. [9] While the court in True North referenced § 205(a) of the Restatement (Second), that Restatement does not contain a § 205(a); the court intended to reference comment a of § 205. [10] For this reason, many states, including Kansas, have held that there is simply no implied covenant of good faith and fair dealing in the employment-at-will context. See, e.g., St. Catherine Hosp. of Garden City v. Rodriguez, 25 Kan.App.2d 763, 765, 971 P.2d 754 (1998) (Kansas does not recognize any good faith obligation in the employment-at-will context) (citing cases). [11] It may be that the court in Corporate Property was simply using the fraud language as a short-hand for the concept of bad faith. The point, however, is that the court fails to explain why it is utilizing that language and fails to provide any insight into the significance, if any, of that language, such as whether a party bringing a good faith and fair dealing claim would be held to proving the elements of fraud (e.g., false representation, scienter and reliance) in order to prevail. [12] In their motion to alter or amend, plaintiffs also point out that the judgment entered on November 21, 2002 contains a typographical error in that the judgment states that the verdict was returned by the jury on November 12, 2002. The jury, however, returned its verdict on November 21, 2002. 
The judgment will be corrected, and plaintiffs' motion will be granted, in this respect. [13] No one, of course, is suggesting that plaintiffs did so; the court is simply posing a hypothetical for illustrative purposes to demonstrate that there might have been fact-based defenses available to defendants had the issue been raised by plaintiffs. Thus, because plaintiffs were not necessarily automatically entitled to the higher rate, the court rejects plaintiffs' contention that Federal Rule of Civil Procedure 54(c) requires an award of post-judgment interest at the higher rate irrespective of the contents of the pretrial order. [14] Plaintiffs' fee request covers the time period ending December 31, 2002. To the extent plaintiffs intend to recover fees, costs and expenses incurred in January 2003 in connection with responding to defendants' motion for judgment as a matter of law and filing their initial fee application, plaintiffs must file a motion for a supplemental award of fees, as those figures are not presently before the court. To the extent plaintiffs intend to seek fees in connection with defending an appeal filed by defendants, plaintiffs must direct such a request to the Tenth Circuit. See, e.g., San Juan Prods., Inc. v. San Juan Pools of Kansas, Inc., 849 F.2d 468, 477 (10th Cir.1988). [15] Because plaintiffs' fee request stems from a contractual fee provision, plaintiffs' request is subject to far less scrutiny than a request made pursuant to a fee-shifting statute and the court does not possess the same degree of equitable discretion to deny such fees as it has when applying a statute providing for a discretionary award. See United States ex rel. C.J.C., Inc. v. Western States Mechanical Contractors, Inc., 834 F.2d 1533, 1547-50 (10th Cir.1987) (remanding claim for attorneys' fees made pursuant to contractual fee provision where district court reduced the fee and, in doing so, applied the wrong standard and scrutinized the fee request too closely). In such cases, fees are "routinely awarded" unless the trial court determines that an award consistent with the request would be inequitable or unreasonable. Id. at 1548.
{ "pile_set_name": "FreeLaw" }
Q: Can polyalloy (plastic) pex fittings be used with both styles of attachment rings? Some pex fittings are made of a type of durable plastic known as polyalloy. Examples: [product photos omitted] These fittings appear to be plastic equivalents of their brass counterparts. In general is it permissible to use either the copper crimp rings OR the stainless steel cinch / pinch clamps with this type of fitting? Notes: This wasn't addressed in What is the advantage of PEX pinch clamp vs. crimp rings? A: TLDR: Yes, either type of attachment ring can be used. At least in the USA, these products have to conform to standards which make this so. Additionally, some manufacturers specifically state this is the case. Details: Primarily, this seems to come down to manufacturing standards. In the USA, "PolyAlloy" fittings are governed by standard ASTM F2159 Standard Specification for Plastic Insert Fittings Utilizing a Copper Crimp Ring... which states: This specification establishes requirements for sulfone plastic insert fittings utilizing a copper crimp ring for [PEX] tubing... Based on that alone, it would seem that these plastic fittings can only accept the copper crimp rings, not the Oetiker-style stainless steel cinch rings. However, cinch rings are governed by ASTM F2098 Standard Specification for Stainless Steel Clamps for Securing [PEX] Tubing to Metal Insert and Plastic Insert Fittings which states This specification covers stainless steel clamps ... that comply with F1807 or F2159, and cross-linked polyethylene (PEX) plastic tubing ... Therefore by reference to F2159 it seems that the cinch clamps are effectively retconned into acceptability for use with polyalloy fittings. For example, Everflow fittings are documented to be "Certified to ASTM F2159". And here's a marking on a Vanguard Apollo package indicating the same: [photo of package marking omitted] Update: I contacted Apollo about this and they wrote: You may use pinch rings, copper crimp rings, pro crimp rings, and stainless steel sleeves with any of our Poly Alloy fittings. So at least as far as their products go, there should be no problem. And I think therefore that any of these fittings made to the same standard ought to be fine also.
{ "pile_set_name": "StackExchange" }
Nine things to know about elicitins. SUMMARY: Elicitins are structurally conserved extracellular proteins in Phytophthora and Pythium oomycete pathogen species. They were first described in the late 1980s as abundant proteins in Phytophthora culture filtrates that have the capacity to elicit hypersensitive (HR) cell death and disease resistance in tobacco. Later, they became well-established as having features of microbe-associated molecular patterns (MAMPs) and to elicit defences in a variety of plant species. Research on elicitins culminated in the recent cloning of the elicitin response (ELR) cell surface receptor-like protein, from the wild potato Solanum microdontum, which mediates response to a broad range of elicitins. In this review, we provide an overview on elicitins and the plant responses they elicit. We summarize the state of the art by describing what we consider to be the nine most important features of elicitin biology.
{ "pile_set_name": "PubMed Abstracts" }
Q: XML parsers used in iPhone SDK I am quite new to iPhone development. I was going through tutorials on XML parsing for which NSXMLParser is used. Are there other parsers we can use for parsing XML? How do we decide which parser to use? Regards, Stone A: Standard parsers are NSXMLParser or the C-based libxml. But there are plenty of 3rd-party parsers available. Check this blog post where some of the most popular parsers are reviewed and compared.
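Neither the question nor the answer shows any code, so here is a minimal sketch (not from the original post) of the event-driven, delegate-based style that NSXMLParser uses; in Swift the same Foundation class is exposed as XMLParser. The sample XML string and the "title" element name are made-up illustrations.

import Foundation

// Minimal delegate: collects the text content of every <title> element.
final class TitleCollector: NSObject, XMLParserDelegate {
    private(set) var titles: [String] = []
    private var insideTitle = false
    private var buffer = ""

    func parser(_ parser: XMLParser, didStartElement elementName: String,
                namespaceURI: String?, qualifiedName qName: String?,
                attributes attributeDict: [String: String]) {
        if elementName == "title" { insideTitle = true; buffer = "" }
    }

    func parser(_ parser: XMLParser, foundCharacters string: String) {
        // Character data can arrive in several chunks, so accumulate it.
        if insideTitle { buffer += string }
    }

    func parser(_ parser: XMLParser, didEndElement elementName: String,
                namespaceURI: String?, qualifiedName qName: String?) {
        if elementName == "title" {
            insideTitle = false
            titles.append(buffer.trimmingCharacters(in: .whitespacesAndNewlines))
        }
    }
}

let xml = "<books><book><title>First</title></book><book><title>Second</title></book></books>"
let parser = XMLParser(data: Data(xml.utf8))
let collector = TitleCollector()
parser.delegate = collector
if parser.parse() {
    print(collector.titles) // ["First", "Second"]
} else {
    print("Parse error:", parser.parserError?.localizedDescription ?? "unknown")
}

As for choosing: event-driven (SAX-style) parsers like this keep memory use low on large documents, while tree/DOM-style parsers (libxml2's tree API and most third-party wrappers) are easier to query after parsing; that trade-off, plus parsing speed, is usually how you decide between them.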
{ "pile_set_name": "StackExchange" }
[Nutritional status of native and non-native population of Russia's Extreme North and Far East]. Daily diets studied in various groups of the native and non-native population living in the Far North and East of Russia (Kamchatka, Chukotka, Sakhalin) are shown to be inadequate for the working-age adult population. The chemical composition of the food provides an insufficient supply of vegetable oil, calcium, vitamins, and magnesium.
{ "pile_set_name": "PubMed Abstracts" }
Arachidonic acid (all-cis-5,8,11,14-eicosatetraenoic acid) is a polyunsaturated fatty acid (PUFA) containing 20 carbon atoms with four double bonds. The double bonds are arranged with the last one located six carbon atoms from the methyl end of the chain. Therefore, arachidonic acid is referred to as an omega-6 fatty acid. Arachidonic acid is one of the most abundant C.sub.20 PUFA's in the human body. It is particularly prevalent in organ, muscle and blood tissues. Arachidonic acid is a direct precursor for a number of circulating eicosanoids, such as prostaglandins, thromboxanes, leukotrienes and prostacyclins, which are important biological regulators. These eicosanoids exhibit regulatory effects on lipoprotein metabolism, blood rheology, vascular tone, leukocyte function, platelet activation and cell growth. The application of arachidonic acid to an infant's diet is particularly important due to the rapid body growth of an infant. Arachidonic acid is an important precursor to many of the eicosanoids which regulate cellular metabolism and growth in infants. It is found naturally in human breast milk but not in most infant formula. In an effort to have infant formula match the long chain fatty acid profile found in breast milk, scientific and food regulatory bodies have recommended that arachidonic acid be added to infant formula, especially in formula utilized for premature infants. In particular, it is preferable that arachidonic acid containing oil produced for use with infant formula contain little or no other long chain highly unsaturated fatty acids (e.g., eicosapentaenoic acid). Such other long chain highly unsaturated fatty acids are not preferred because some of these fatty acids can interfere with the utilization of arachidonic acid by the infant, and/or can inhibit blending of the arachidonic acid-containing oil with other oils to achieve the appropriate ratio of fatty acids matching breast milk or other desired applications. Highly unsaturated fatty acids are defined as fatty acids containing 4 or more double bonds. Traditional sources of arachidonic acid include poultry eggs, bovine brain tissue, pig adrenal gland, pig liver and sardines. The yield of arachidonic acid, however, is usually less than 0.2% on a dry weight basis. The use of microorganisms capable of producing arachidonic acid de novo has been suggested by various investigators, including Kyle, PCT Publication No. WO 92/13086, published Aug. 6, 1992; Shinmen et al., U.S. Pat. No. 5,204,250, issued Apr. 20, 1993; Shinmen et al., pp. 11-16, 1989, Appl. Microbiol. Biotechnol., vol. 31; Totani et al., pp. 1060-1062, 1987, LIPIDS, vol. 22; Shimizu et al., pp. 509-512, 1992, LIPIDS, vol. 27; Shimizu et al., pp. 342-347, 1989, JAOCS, vol. 66; Shimizu et al., pp. 1455-1459, 1988, JAOCS, vol. 65; Shimizu et al., pp. 254-258, 1991, JAOCS, vol. 68; Sajbidor et al., pp. 455-456, 1990, Biotechnology Letters, vol. 12; Bajpai et al., pp. 1255-1258, 1991, Appl. Environ. Microbiol., vol. 57; Bajpai, pp. 775-780, 1991, JAOCS, vol. 68; and Gandhi et al., pp. 1825-1830, 1991, J. Gen. Microbiol., vol. 137. The arachidonic acid productivity by the microorganisms disclosed by prior investigators, however, is less than 0.67 grams per liter per day. Such amounts are significantly less than the amounts of arachidonic acid produced by the microorganisms of the present invention. 
These lower productivity values are the result of employing strains: (1) with slow growth or lipid production rates leading to long fermentation times (i.e., greater than 2-3 days) ( Kyle, 1992, ibid.; Shinmen et al., 1993, ibid.; Shinmen et al., 1989, ibid.; Bajpai et al., 1991, ibid.; Bajpai, ibid.; and Gandhi et al., ibid.); and/or (2) that contain low arachidonic acid contents (expressed as % fatty acids) in the final oil produced (Shinmen et al., 1993, ibid.; Shimizu et al., 1989, ibid.; and Kendrick and Ratledge, 1992, pp. 15-20, Lipids, vol. 27); and/or (3) which require long periods of stress (i.e., aging a biomass for 6-28 days) to achieve high levels of arachidonic acid in a biomass (Bajpai et al., 1991, ibid. and Shinmen et al., 1989, ibid.); and/or (4) that only exhibit high arachidonic acid content in non-commercial growth conditions (e.g., malt agar plates) (Totani and Oba, 1987, pp. 1060-1062, Lipids, vol. 22). In addition, non-Mortierella schmuckeri microorganisms that have been proposed for producing arachidonic acid, in particular Pythium insidiosum microorganisms, disclosed by prior investigators (Kyle, 1992, ibid.), have been reported to be pathogenic to humans and/or animals. Thus, there remains a need for an economical, commercially feasible method for producing arachidonic acid. The present invention satisfies that need. There also remains a need for the an economical, commercially feasible food product for the introduction of arachidonic acid produced according to the present invention into the diet of human infants.
{ "pile_set_name": "USPTO Backgrounds" }
Q: hook-length formula: "Fibonaccized": Part II This is a natural follow-up to my previous MO question, which I share with Brian Hopkins. Consider the Young diagram of a partition $\lambda = (\lambda_1,\ldots,\lambda_k)$. For a square $(i,j) \in \lambda$, define the hook numbers $h_{(i,j)} = \lambda_i + \lambda_j' -i - j +1$ where $\lambda'$ is the conjugate of $\lambda$. The hook-length formula shows that if $\lambda\vdash n$ then $$n!\prod_{\square\,\in\,\lambda}\frac1{h_{\square}}$$ counts standard Young tableaux whose shape is the Young diagram of $\lambda$. Recall the Fibonacci numbers $F(0)=0, \, F(1)=1$ with $F(n)=F(n-1)+F(n-2)$. Define $[0]!_F=1$ and $[n]!_F=F(1)\cdot F(2)\cdots F(n)$ for $n\geq1$. QUESTION. What do these integers count? $$[n]!_F\prod_{\square\,\in\,\lambda}\frac1{F(h_{\square})}.$$ A: This is my answer to the original question (https://mathoverflow.net/a/327022/50244) whether these numbers are integers to begin with, it gives some combinatorial meaning as well: Use the formulas $F(n) = \frac{\varphi^n -\psi^n}{\sqrt{5}}$, $\varphi =\frac{1+\sqrt{5}}{2}, \psi = \frac{1-\sqrt{5}}{2}$. Let $q=\frac{\psi}{\varphi} = \frac{\sqrt{5}-3}{2}$, so that $F(n) = \frac{\varphi^n}{\sqrt{5}} (1-q^n)$ Then the Fibonacci hook-length formula becomes: \begin{align*} f^{\lambda}_F:= \frac{[n]!_F}{\prod_{u\in \lambda}F(h(u))} = \frac{ \varphi^{ \binom{n+1}{2} } [n]!_q }{ \varphi^{\sum_{u \in \lambda} h(u)} \prod_{u \in \lambda} (1-q^{h(u)})} \end{align*} So we have an ordinary $q$-analogue of the hook-length formula. Note that $$\sum_{u \in \lambda} h(u) = \sum_{i} \binom{\lambda_i}{2} + \binom{\lambda'_j}{2} + |\lambda| = b(\lambda) +b(\lambda') +n$$ Using the $q-$analogue hook-length formula via major index (EC2, Chapter 21) we have \begin{align*} f^\lambda_F = \varphi^{ \binom{n}{2} -b(\lambda)-b(\lambda')} q^{-b(\lambda)} \sum_{T\in SYT(\lambda)} q^{maj(T)} = (-q)^{\frac12( -\binom{n}{2} +b(\lambda') -b(\lambda))}\sum_T q^{maj(T)} \end{align*} Now, it is clear from the q-HLF formula that $q^{maj(T)}$ is a symmetric polynomial, with lowest degree term $b(\lambda)$ and maximal degree $b(\lambda) + \binom{n+1}{2} - n -b(\lambda) -b(\lambda') =\binom{n}{2} - b(\lambda')$ so the median degree term is $$M=\frac12 \left(b(\lambda) +\binom{n}{2} - b(\lambda')\right)$$ which cancels with the factor of $q$ in $f^{\lambda}_F$, so the resulting polynomial is of the form \begin{align*} f^{\lambda}_F = (-1)^{M} \sum_{T: maj(T) \leq M } (q^{M-maj(T)} + q^{maj(T)-M}) \\ = (-1)^{M} \sum_{T} (-1)^{M-maj(T)}( \varphi^{2(M-maj(T))} + \psi^{2(M-maj(T)}) = \sum_T (-1)^{maj(T)} L(2(M-maj(T))) \end{align*} where $L$ are the Lucas numbers. Remark. This is a byproduct of collaboration with A. Morales and I. Pak.
{ "pile_set_name": "StackExchange" }
Oct 2, 2004 London Calling And I...live by the river! etc. Anyhoo, we're back in London, and on our last frantic day of shopping and meeting people before we fly out tomorrow night. Our sojourn is almost over! The good news first is I checked my bank accounts and I got a nice whack of interest come through on one account which means I have cash! I can now buy my brother all the Von Dutch T-Shirts he's been barracking for! ;) I will talk a bit about Budapest, considering I got cut off so quickly when I was in a net cafe over there. We got up a 4am to leave Glasgow on a 6am flight. Greg, Debbie (his sister) and Julie (her Scottish flatmate) and I were all a bit bleary-eyed, andI was still a bit worried about my labyrinthitis. Luckily I was OK, just the normal sore head I get with airports and planes. We flew down to London Luton, from where we caught our flight to Budapest. We had to wait a few hours at the airport, and Greg and I actually left a bag in storage so we didn't have to lug it around Budapest! At every castle we visited, Greg bought the guidebook, so consequently we have one very heavy bag! Budapest was lovely. Quite a big city, with a population just under 2 million, about one-fifth of Hungary's overall people-count. It's very very smoggy though, with many older buildings black with pollution. Budapest was formed in 1873 when the towns of Buda and Pest, on either side of the Danube, merged to form one city. Most of the sight-seeing stuff is still on the Pest side. The first day we climbed Mt Gillert on the Buda side. Gillert was priest made a saint for trying to convert the pagan Hungarians around 1000. They thanked him by putting him in a barrel and rolling him down the hill! Hence the hill of Gillert. It was a big climb up, but we were rewarded with good views of the city and of the Statue of Liberty on top. It's a giant bronze woman holding a palm frond. It was put up by the Soviets to celebrate their liberation of Budapest from the Nazis. After the communists left Budapest in 1991, they thought about tearing it down. But they changed their minds and left it there to now symbolise Budapest's freedom from the Soviets! On Tuesday we headed out first to the Synagogue. Over 600 000 Hungarian Jews died during WWII, most in concentration camps. They have a memorial statue out the back of a steel willow tree, with the names of victims on the leaves. It was actually paid for by the American actor Tony Curtis, whose dad was a Hungarian Jew who died in the Holocaust. We needed some cheering up after that, and boy did we get it. We walked along the Danube on the Pest side, past their impressive Parliament building, currently being cleaned, and across to Margaret island. It's a long island in the middle of the river. The first thing we came across were converted golf buggy-type mini cars, available to rent. It only cost about £3 each so bugger it, let's do it! We rocketed around (well, as much as you can rocket at about 7 miles an hour) that island for an hour. It was the best fun. We've got some great bits on video, including when Debbie, Greg and I clambered into the buggy and took off, leaving a hapless Julie chasing us! It was great. I don't remember much else about the island but the buggy sure as hell will stay in my memory! My parents are actually going to Budapest in about a fortnight and I strongly suggest a buggy ride! Tuesday we also decided to visit the baths. Budapest is famous for its hot springs, and there are numerous baths around town. 
It was a bit of a debacle because no one spoke English at the baths, and we were trying to rent towels and lockers with hand gestures! We also discovered we needed bathing caps to swim, so we had to buy them in the end, as it was cheaper than renting them? Bizarre. The baths were lovely, and we felt refreshed. So we took off and grabbed the furnicular up to the castle district on the Buda side. We went into the labyrinths underneath - there are miles of caverns naturally carved through the hill by hot water thousands of years ago, and then used and maintained by locals. It was fun, because we had to take a gas lamp with us to see by - as in the evening they turn the lights off down there! They'd also put in for some reason, a fountain that flowed with wine! I think it was to signify the rich history of the early Hungarians. Or something. Wednesday we took ourselves of on a 'Hammer and Sickle' T0ur. It was about 4 hours of communist talk. Our guide, Czaba (pronounced 'Chubba'), was really interesting. He was about 30, and had been lucky enough to visit Australia when he was 15, while Communism was still in place. It made him a bit of a rebel at school, following his father's footsteps. His father had been a sportsman so had been able to travel the world and see the cool capitalist stuff the Hungarians weren't getting. He was very against it, and it caused clashes with Czaba's grandfather, who was a very committed communist party member and local leader. Anyhoo, we visited a communist era flat - just a sitting room, bedroom, small kitchen and bathroom, and narrow balcony. There are loads of big ugly concrete blocks still all over Budapest. We also visited statue park, where famous soviet statues are now kept. It was funny seeing the statues with no artistic value - just propaganda! Crap! My net time's out again...will write soon! Hi to everyone, and sorry Clare and Briony for missing your birthdays! I'll try and pick you up something festive! 3 comments: Hi Natz - glad to hear about your visit to Budapest and looking forward to seeing the video. Got your email re arrival - we will check it out and be there to meet you. Regards to all. See you soon - mum. Thanks for post on our website. Sorry to hear you have been sick. If it makes you feel any better, Mark and I have both been sick with head colds. Mark was kind enough to share with me. I am on the end of it though luckily before we hit Rome.Happy Birthday Greg!!Talk soonAlisha & Mark
{ "pile_set_name": "Pile-CC" }
Integration host factor (IHF) modulates the expression of the pyrimidine-specific promoter of the carAB operons of Escherichia coli K12 and Salmonella typhimurium LT2. We report the identification of Integration Host Factor (IHF) as a new element involved in modulation of P1, the upstream pyrimidine-specific promoter of the Escherichia coli K12 and Salmonella typhimurium carAB operons. Band-shift assays, performed with S-30 extracts of the wild type and a himA, hip double mutant or with purified IHF demonstrate that, in vitro, this factor binds to a region 300 bp upstream of the transcription initiation site of P1 in both organisms. This was confirmed by deletion analysis of the target site. DNase I, hydroxyl radical and dimethylsulphate footprinting experiments allowed us to allocate the IHF binding site to a 38 bp, highly A+T-rich stretch, centred around nucleotide -305 upstream of the transcription initiation site. Protein-DNA contacts are apparently spread over a large number of bases and are mainly located in the minor groove of the helix. Measurements of carbamoyl-phosphate synthetase (CPSase) and beta-galactosidase specific activities from car-lacZ fusion constructs of wild type or IHF target site mutants introduced into several genetic backgrounds affected in the himA gene or in the pyrimidine-mediated control of P1 (carP6 or pyrH+/-), or in both, indicate that, in vivo, IHF influences P1 activity as well as its control by pyrimidines. IHF stimulates P1 promoter activity in minimal medium, but increases the repressibility of this promoter by pyrimidines. These antagonistic effects result in a two- to threefold reduction in the repressibility of promoter P1 by pyrimidines in the absence of IHF binding. IHF thus appears to be required for maximal expression as well as for establishment of full repression. IHF could exert this function by modulating the binding of a pyrimidine-specific regulatory molecule.
{ "pile_set_name": "PubMed Abstracts" }
Tallmadge Township, Michigan Tallmadge Charter Township is a charter township of Ottawa County in the U.S. state of Michigan. The population was 7,575 at the 2010 census. Communities Finnasey was a rural post office in Tallmadge Township from 1882 until 1883. Lamont is an village on the north side of the Grand River at . It was founded in 1833 by Harry and Zine Steele, and was known for many years as Steele's Landing. The Steele's Landing post office was established January 9, 1851. In the same year, the Steeles had the village platted as "Middleville", due to being located midway between Grand Rapids and Grand Haven, although the post office remained Steele's Landing. In 1855 Lamont Chubb, of Grand Rapids, offered a road scraper to the village in exchange for the community taking on his name. The post office was duly renamed as Lamont on July 2, 1856. The Lamont ZIP code 49430 provide P.O. Box-only service. Grand Valley is an unincorporated community on M-45 just east of the Grand River. Tallmadge is an unincorporated community near the center of the township at . The city of Coopersville is to the northwest, and the Coopersville ZIP code 49404 serves areas in the northwest part of Tallmadge Township. Marne is an village along the northern boundary with Wright Township. The Marne ZIP code 49435 also serves areas in the central part of Tallmadge Township. The city of Walker is to the east, and the Walker/Grand Rapids ZIP code 49544 serves the eastern parts of Tallmadge Township. Geography According to the United States Census Bureau, the township has a total area of , of which is land and , or 1.76%, is water. Demographics As of the census of 2000, there were 6,881 people, 2,283 households, and 1,869 families residing in the township. The population density was 212.3 per square mile (81.9/km²). There were 2,369 housing units at an average density of 73.1 per square mile (28.2/km²). The racial makeup of the township was 97.83% White, 0.31% African American, 0.32% Native American, 0.31% Asian, 0.44% from other races, and 0.80% from two or more races. Hispanic or Latino of any race were 0.92% of the population. There were 2,283 households out of which 40.4% had children under the age of 18 living with them, 74.0% were married couples living together, 5.3% had a female householder with no husband present, and 18.1% were non-families. 14.3% of all households were made up of individuals and 4.4% had someone living alone who was 65 years of age or older. The average household size was 2.97 and the average family size was 3.32. In the township the population was spread out with 29.6% under the age of 18, 8.4% from 18 to 24, 28.5% from 25 to 44, 24.3% from 45 to 64, and 9.2% who were 65 years of age or older. The median age was 36 years. For every 100 females, there were 103.8 males. For every 100 females age 18 and over, there were 104.3 males. The median income for a household in the township was $59,205, and the median income for a family was $65,086. Males had a median income of $45,847 versus $29,434 for females. The per capita income for the township was $23,957. About 3.4% of families and 5.0% of the population were below the poverty line, including 6.2% of those under age 18 and 4.8% of those age 65 or over. References External links Tallmadge Charter Township Category:Townships in Ottawa County, Michigan Category:Charter townships in Michigan
{ "pile_set_name": "Wikipedia (en)" }
Q: SQL get unique month year combos SELECT MONTH(sessionStart) AS Expr1, YEAR(sessionStart) AS Expr2 FROM tblStatSessions WHERE (projectID = 187) GROUP BY sessionStart This returns: 11 | 2010 11 | 2010 11 | 2010 12 | 2010 12 | 2010 But I need it to only return each instance once, IE: 11 | 2010 12 | 2010 If that makes sense! A: The following should be what you want: SELECT MONTH(sessionStart) AS Expr1, YEAR(sessionStart) AS Expr2 FROM tblStatSessions WHERE (projectID = 187) GROUP BY MONTH(sessionStart), YEAR(sessionStart) in general you need to group by every non-aggregate column that you are selecting. Some DBMSs, such as Oracle, enforce this, i.e. not doing so results in an error rather than 'strange' query execution.
{ "pile_set_name": "StackExchange" }
noah’s ark Tag archive for noah’s ark Doubling up on employees may seem like a luxury, but when the pressure’s on, you’ll be glad you did. Just as Noah filled the Ark with animals two by two before the flood, you have to hire staff in twos. The alternative is that you risk being stuck at the most awkward and stressful time without someone to whom you can delegate a critical function of your gym’s operation. If you own or manage a small fitness center you always need contingency plans, especially when it comes to staffing. Large companies have all kinds of luxuries that come with the [...]
{ "pile_set_name": "Pile-CC" }
Tony Christian starts upp Joneskogs 738 camaro First start upp in Bradenton FL 2009 12 04. Peter Rosenqvist is starting his Pro Mod Camaro Peter Rosenqvist in Sweden are starting his Pro Mod Camaro for the first time after he bought it from Adam Flamholc. Rickard Asp is helping him from team Top Mod Viper. Filmed with Canon 5D Mark II with EF 17-40 lens.
{ "pile_set_name": "Pile-CC" }
Q: Where can I get a proper hot chocolate in Firenze-Venezia-Trieste? I am right now in Firenze but will spend two days in Venice and two days in Trieste and I'd like to drink a proper, thick, tasty hot chocolate but everyone says they don't make it in the summer. Any ideas? In Firenze I have a one week bus pass so I'm not limited to any area. A: Hot Chocolate and the Italian Summer As many, many waiters must have told you, hot chocolate is not exactly a summer drink. I do understand that those same establishments probably serve hot coffee and tea in the summer, however tea is somewhat of a more multi-season drink whereas coffee is a daily drink for most Italians. In my opinion, if you wish to maximise the likelihood of finding hot chocolate, you should target specialised establishments, or Cioccolaterie (literally chocolate-places in Italian). Your search keywords should be something like cioccolata calda XXX or cioccolateria XXX, where XXX is the city you wish to search in. Hot Chocolate in Firenze Searching around on the internet for cioccolata calda Firenze yields many results (see here and here for two sample reviews in Italian). The consensus however seems to point towards Rivoire which is known for making their own chocolate, as well as serving thick hot chocolate beverages. Another option could be Cioccolateria Hemingway. None of these specify if they serve hot chocolate in the summer. Nevertheless it might be worth trying them since, being Cioccolaterie, they are definitely more likely to have hot chocolate on their menus. A: There are quite some places to try in Trieste. I suggest to try Chocolat first. From there you can walk towards Piazza Unità and check a local Torrefazione (they serve tea and coffee also). Behind Piazza Unità there's Gelato Marco, a gelateria (ice cream place) where they serve ice-cream covered in hot chocolate! After visiting the old city centre, walk in viale XX settembre to try Madison. This is a rather long pedestrian area filled with restaurants, bars, gelaterie. At late afternoon it gets crowded for aperitivo. Everything I linked is in walking distance. A: I just learned of the existence of VizioVIrtù in Venice. Look at this: And the text suggests it's served in every season: Each season has its chocolate drink. True, as this drink is exquisite also if served cold. Is your mouth watering? Try the milkless and sugarless one.
{ "pile_set_name": "StackExchange" }
BS Beaver Creek Plush Queen 12\ Building the family area so that it seems rather important to pay attention and relaxed. The warm Beaver Creek Furniture could make buddies the attendees, or relatives who arrive at visit to feel at home. Along with the nice feeling that you might, would not be pleasant if you could spend some time speaking within this room together? Organizing interior planning living by choosing a right seat, room you can begin patterns. Selection of loving you and a suitable couch, will support the looks of a room that is living. Chair design would you choose must correspond with all the topic carried by the property itself. Beaver Creek Furniture might seem odd if a contemporary livingroom filled up with chairs contemporary and minimalist. Contemporary effect could be stronger radiated should you select a chair that's carvings along with facts that are basic that are other. Besides getting used for interesting guests, a living room often you utilize to read guides or just relax on Sunday. A seat that has a layout may assist the overall appearance of the room. Nonetheless, the style has to be in accordance with the ease furnished. We advocate that you simply avoid overly reducing convenience to be able to get the design you enjoy. There are numerous choices advanced layout that offers comfort as possible choose pills. So, don't be satisfied with one option only. Again, don't need to buy a couch for good design alone. In addition to the look, you need to chair Beaver Creek Furniture should be satisfied first. In case your home is tiny, requiring the room doubles like a family-room, you should look at whether the item is tough if occupied all the time. Once your preferences are fulfilled, you can view to the design and the model. Is recommended to select a design that's not fixated by era. Therefore, although the trend modified, guest chairs won't create uninterested or looks outdated. There are many alternatives of products that you could pick. Beginning one piece of wood to timber or material body covered with textile and foam multi-faceted. The impact wills bolster if placed in the space contemporary classic-style. However, software of lumber in a minimalist modern room can add a cozy atmosphere that is natural.
{ "pile_set_name": "Pile-CC" }
Q: GetType returns diffent information than is operator uses Cannot explain what is going on the following program. GetType is returning the type I want to return and not the original one. Does that mean we cannot rely on GetType? is operator is right though. Can anybody please explain it in detail? using System; namespace ConsoleApplication2 { public class MyClass { public Type GetType() { return typeof(Program); } } class Program { static void Main(string[] args) { MyClass mc = new MyClass(); if (mc.GetType() == typeof(Program)) { Console.WriteLine("Confused."); } if(mc is Program) { Console.WriteLine(mc.GetType()); // Don't get inside the if. Why? } } } } Update: I am reading the book CLR via C# 3rd edition. In chapter 4 (2nd page) when it explains different methods in System.Object it says "The GetType method is nonvirtual, which prevents a class overriding this method and lying about its type" While I agree about the first statement, I am lying about MyClass type. ain't I? A: Please take care of the warning, as they do exists for reasons. Your code compiled with the following warning: Warning 1 'ConsoleApplication2.MyClass.GetType()' hides inherited member 'object.GetType()'. Use the new keyword if hiding was intended. which means GetType() is non-virtual and you are writing new unrelated method of GetType() that CLR will never call it. A: is operator implemented in terms of as operator and finally use isinst IL instruction. And of course this instruction don't know about your not virtual GetType method that you define in some class in your inheritance hierarchy. To understand this "confusing" behavior lets "implement" our own version of the "is operator": public class MyClass { public Type GetType() { return typeof(Program); } } class Program { //this is oversimplified implementation, //but I want to show the main differences public static bool IsInstOf(object o, Type t) { //calling GetType on System.Object return o.GetType().IsAssignableFrom(t); } static void Main(string[] args) { MyClass mc = new MyClass(); //calling MyClass non-virtual version for GetType method if (mc.GetType() == typeof(Program)) { //Yep, this condition is true Console.WriteLine("Not surprised!"); } //Calling System.Object non-virtual version for GetType method if (IsInstOf(mc, typeof(Program))) { //Nope, this condition isn't met! //because mc.GetType() != ((object)mc).GetType()! } Console.ReadLine(); } } A: Object.GetType is not a virtual method. So mc is MyClass and effectively calls Object.GetType and not your method.
{ "pile_set_name": "StackExchange" }
Goddess worship Goddess worship may be the worship of any goddess in polytheistic religions worship of a Great Goddess on a henotheistic or monotheistic or duotheistic basis Hindu Shaktism the neopagan Goddess movement Wicca Dianic Wicca
{ "pile_set_name": "Wikipedia (en)" }
Member I had a series 1 Watch in 38mm and went yesterday and got the series 3 in 42mm and I’m unsure if it’s too big on me. They didn’t have a 42mm on display at AT&T so I didn’t realize how much bigger it was. What do you all think? Should I go down to the 38mm and pay a $45 restocking fee or does the 42mm look ok? Thanks in advance for any help! Attachments Genius I had a series 1 Watch in 38mm and went yesterday and got the series 3 in 42mm and I’m unsure if it’s too big on me. They didn’t have a 42mm on display at AT&T so I didn’t realize how much bigger it was. What do you all think? Should I go down to the 38mm and pay a $45 restocking fee or does the 42mm look ok? Thanks in advance for any help!
{ "pile_set_name": "Pile-CC" }
YOUR CART Are you close to banging your head on the wall because your printer is not working like it's alleged to? If you can be like me, as an alternative to a technical as everybody else is. I realize how it can be pretty frustrating when all of sudden your reliable printer is bust like it did prior to the. This error code is generally associated the brand new Dell 922 printer. Usually you might see the cartridge moving front and back and slamming itself against the printer. It is advisable to tighten the strings on the back of your printer after which it should work fine. After completed all the physical features of the printer, check out if its connected within your PC or. To make sure the printer is connected properly, adhere to the USB cable from the spine of the printer towards the back belonging to the computer. Paper jams. This is by far the most common problem encountered when printing. Reasons for this include using crumpled papers, and printer roller problems. When your paper is jammed, stop the printing operation, whenever your printer and pull the paper in the direction of the printing path; pulling it backwards might damage your printer good deal. Make sure there aren't any pieces of paper left inside the printer, and turn it back towards. This should work properly by however. If you have a driver that previously works properly fortunately has a nice problem, maybe you need to update your printer driver. And you can now also fix most driver problems by this method. To update the driver, place go for the Windows Update website or printer manufacturer's website appear whether work involved . any updated driver, if there is, download this tool. If that corrupt or outdated, your components will wrestle in detecting a as well as there won't be any any icon shows on the taskbar. Is actually one in the common reasons for "Printer Not Responding" crisis. In this case, to fix the problem, you only have to update your USB trucker. This will be error message asking one to restart the printer. You need to switch trip printer and then connect it via the USB cord. Press the hold http://www.sharpdriversdownload.com/ and cancel buttons at that time. Then power on the printer again. On your LCD display you should typically the message saying 'ready for download mode'. You are able to need to initialize the firmware update process. We've seen that when you are in possessing a version of Word there are free of charge tools which can help you convert the document proper into a PDF ebook. On all other platforms and perhaps even on Windows you can use LibreOffice or LibreOffice which will do the actual perfectly beautifully.
{ "pile_set_name": "Pile-CC" }
Cytotoxic analog of somatostatin containing methotrexate inhibits growth of MIA PaCa-2 human pancreatic cancer xenografts in nude mice. Nude mice bearing xenografts of MIA PaCa-2 human pancreatic cancer cell line were treated for 4 weeks with AN-51, a somatostatin octapeptide analog D-Phe-Cys-Tyr-D-Trp-Lys-Val-Cys-Thr-NH2 (RC-121) containing methotrexate attached to the alpha-amino group of D-Phe in position 1. Control groups of mice received saline, RC-121 or methotrexate. Drugs were given in equimolar doses by daily s.c. injections. After 7 days of treatment with 25 micrograms/day of AN-51, tumor growth was completely inhibited although the treatment had to be suspended because of toxic side effects, especially on the gastrointestinal tract, accompanied by major weight loss of the animals. Mice were allowed to recover for 1 week and treatment was continued with 12.5 micrograms/day AN-51. After 2 weeks of additional therapy, tumor volume, percentage change in tumor volume, and tumor weights were significantly decreased, compared with controls, only in the group treated with AN-51. Methotrexate and RC-121 also inhibited tumor growth, but their effects were not statistically significant. AN-51 retained its hormonal activity and decreased serum growth hormone levels in mice. Binding affinity of AN-51 for somatostatin receptors on MIA PaCa-2 cells was found to be 2.5-times lower than that of parent compound RC-121. This is the first report on inhibition of human pancreatic cancer growth in vivo by somatostatin analogs carrying cytotoxic radicals.
{ "pile_set_name": "PubMed Abstracts" }
INTRODUCTION {#s1} ============ Whilst oncogenesis is driven by a multitude of complex, non-programmed molecular events, there are a number of key features of this process, not least of which is the aberrant activation of genes that would normally be silenced in a given tissue context \[[@R1]\]. The so called cancer/testis (CT) or cancer germline (CG) genes are one such group of genes that are frequently activated in a range of different human cancer types \[[@R2]-[@R4]\]. These genes have expression normally restricted to the human germline, many being testis-specific \[[@R2]-[@R4]\]. They have come under intense scrutiny since their original identification as the immunological privilege of their normal germline setting means that the proteins they encode can elicit an immunological response when aberrantly produced in cancers and so have exceptional potential in immunotherapeutics \[[@R5]\]; for example, the *NY-ESO-1* gene product has been successfully targeted in an adoptive therapeutic approach to melanoma therapy \[[@R6]\]. Despite this interest, remarkably little is known about the normal germline function of most CT genes. Moreover, it has been demonstrated that germline genes in *Drosophila melanogaster* are required for the oncogenic process and that the human orthologues of these *Drosophila* genes have up-regulated expression in a range of human cancers, although the functional implications for oncogenesis of this up-regulation remains unclear \[[@R7],[@R8]\]. Interestingly, down-regulation of a number of CT genes in human cancer cells results in perturbation of cellular proliferative potential \[for example, see [@R9],[@R10]\]. These findings open up the exciting possibility that CT genes might encode functions that are required for tumour homeostasis and it has recently been proposed that tumours become 'addicted' to these germline factors \[[@R11],[@R12]\], and recently, meiotic factors have been shown to contribute to telomere maintenance in cancer cells via the ALT pathway \[[@R13], [@R14]\]. The full extent of germline gene requirement is unclear, but these findings expose a new therapeutic opportunity by directly targeting the tumour-associated function of the CT gene products. Additionally, a number of studies have revealed another clinically important feature of CT genes; their expression appears to drive drug resistance as depletion of the gene products results in enhanced sensitization to anti-cancer drugs \[for example, see [@R15]\] expanding the therapeutic potential of this important class of cancer genes. Germline gene expression profiling has also recently been demonstrated to have applications in prognostics and patient stratification. In a seminal study, Rousseaux and co-workers demonstrated that expression of a sub-set of germ line genes in some lung cancers delineated patients with aggressive, metastasis prone tumours with poor prognosis \[[@R16]\]; they extended this by indicating that this cohort of patients might benefit from a drug therapeutic regime that had previously been dismissed for more general use in lung cancer patients, indicating that profiling patients for expression of a small sub-set of germline genes could be used in therapeutic decision making. 
Understanding germline gene expression is also critical as drug-induced augmentation of expression has also been postulated to be a potential enhancer of immunotherapeutics, the rationale being that further up-regulation of a tumour-specific antigen will result in enhanced immunological targeting of the tumour \[for example, see [@R17]\]. Taking all these factors together reveals the importance of understanding the regulatory mechanisms for somatic germline gene silencing and their aberrant activation in tumours. To date, the regulation of a number of CT genes has been studied and it has been demonstrated that DNA methylation of regulatory elements, such as promoter-associated CpG islands plays a fundamental role in the somatic silencing of these genes and the hypomethylation of these regulatory DNA regions in cancers is linked to gene activation \[for example, see [@R18]-[@R23]\], whereas gene body hypomethylation has been linked to gene down regulation in cancers \[[@R24]\]. Expression of these genes also becomes activated or further up-regulated upon enforced hypomethylation by the DNA methyltransferase inhibitor 5-aza-2′-deoxycytidine (5-aza-CdR), and to date, all CT genes studied have up-regulated expression in response to this chemotherapeutic agent, indicating a commonality in the mechanistic pathway for somatic CT gene silencing \[for example, see [@R18]-[@R23]\]. To date, most of the CT genes whose expression has been studied are located on the X chromosome (X-CT genes) and belong to large paralogous gene families \[[@R2]-[@R4]\]. Recently, a computational pipeline combining expressed sequence tag and microarray meta-analyses of the human orthologues of mouse spermatocyte-specific genes revealed a large cohort of new CT genes that were expressed in a broad spectrum of cancer types \[[@R25]-[@R29]\]. Unlike the X-CT genes, the majority of these genes are autosomally encoded and are single copy. To date, the clinical potential of these genes remains largely unexplored. In this current study, analysis of the expression of a small sub-set of these genes reveals a novel feature of CT genes, which indicates that some have a unique mechanism for somatic transcriptional silencing. This is a significant finding as these genes and their associated gene products have an increased prominence in clinical applications and hence the sub-classification of CT genes will play an important role in diagnostics, stratification and therapeutics. RESULTS {#s2} ======= All CT genes studied to date (mostly X-CT genes) require hypermethylation of regulatory DNA sequences for somatic silencing and are activated by the hypomethylating agent 5-aza-CdR. Given the clinical potential of enhanced up-regulation of immunogenic CT antigens, we set out to explore whether a similar DNA hypermethylation silencing mechanism was operating in the recently identified autosomally encoded CT genes \[[@R25],[@R27]\]. To do this, we selected a small sub-group of these genes that remained transcriptionally silenced in the colorectal cancer cell lines HCT116 and SW480 (*ARRDC5, C4orf17, C20orf201, DDX4, NT5C1B, STRA8, TDRD12*). We also selected two previously characterized CT genes (both X-CT genes) that remained transcriptionally silenced in these two cell lines to serve as exemplar controls for hypermethylation regulated CT genes, *SSX2* and *GAGE1*. 
To determine whether the novel CT genes are silenced via hypermethylation mediated mechanisms, similar to the characterized X-CT genes, we treated the two cell lines with the DNA methyltransferase inhibitor 5-aza-CdR to determine whether inhibition of DNA methyltransferase activity can activate these genes. Following 5-aza-CdR treatment of HCT116 and SW480 we made cDNA and carried out RT-PCR and agarose gel electrophoresis analysis of the products. The two X-CT genes were activated from the silent state with relatively low levels of 5-aza-CdR (0.1 μM; Figure [1](#F1){ref-type="fig"}; Figure [2](#F2){ref-type="fig"}). Some of the novel, autosomally encoded CT genes were similarly activated (*C20orf201, DDX4, STRA8, TDRD12*), although *C20orf201* and *DDX4* required a slightly higher 5-aza-CdR concentration for activation (0.5 μM; Figure [1](#F1){ref-type="fig"}; Figure [2](#F2){ref-type="fig"}). Additionally, activation of *STRA8* requires slightly higher concentrations of 5-aza-CdR in SW480 (Figure [2](#F2){ref-type="fig"}) than HCT116 (Figure [1](#F1){ref-type="fig"}), which indicates subtle regulatory differences between tumour cell types. However, surprisingly, three genes (*ARRDC5, C4orf17, NT5C1B*) remained tightly transcriptionally silenced, even at high concentrations of 5-aza-CdR in both cell lines (15.0 μM; Figure [1](#F1){ref-type="fig"}; Figure [2](#F2){ref-type="fig"}). This unexpected result reveals an important distinction in the way CT gene silencing is epigenetically regulated, revealing a hypermethylation-independent pathway. Interestingly, the X-CT genes (*GAGE1, SSX2*) remained activated for a prolonged period following removal of the hypomethylating agent, as did the autosomally encoded CT genes that were activated with the lowest concentration of 5-aza-CdR (*STRA8, TDRD12*) (Figure [3](#F3){ref-type="fig"}); however, the other two autosomally encoded CT genes, *C20orf201* and *DDX4*, which required slightly higher concentrations of 5-aza-CdR for activation, reverted to the silent state relatively soon after removal of the hypomethylating agent (Figure [2](#F2){ref-type="fig"}). This indicates a much greater transcriptional elasticity to the methylation-dependent silencing mechanisms for some CT genes. ![A sub-group of germline genes remain refractory to activation by epigenetic modulating agents\ RT-PCR was used to analyse activation of a group of germline genes that are normally silenced in the cancer cell line HCT116 (an additional colorectal cell line gives similar results \[see Supplementary Figure S1)\]. Whilst a cohort of known and newly identified germline genes become activated at low doses of the demethylating agent 5-aza-CdR (*GAGE1, SSX2, STRA8, TDRD12*) and others become activated with slightly higher levels of 5-aza-CdR (*C20orf201, DDX4*), some remain tightly silenced, even at high concentrations of 5-aza-CdR (*ARRDC5, C4orf17, NT5C1B*) (left column). The histone deacetylase inhibitor trichostatin A (TSA) has little activating potential (other than for *GAGE1* and *STRA8*, indicating the primary epigenetic regulation is mediated by DNA methylation (right column). Untreated and DMSO treated cells exhibit no activation of any of the genes analysed for expression activation. The chromosomal location of each gene is provided in parentheses to the right of the gene name. 
RT-PCR of β*ACT* shows uniform sample quality and loading.](oncoscience-01-0745-g001){#F1} ![Analysis of germline gene expression in response to epigenetic de-regulation in SW480 human colorectal cancer cells\ The analysis of expression of a number of germline genes is shown. The agarose gels show RT-PCR products for nine germline genes, including two control germline genes (***GAGE1*** and ***SSX2***) and the β***ACT*** gene as a quality control marker. All germline genes are normally expressed in the testis tissue, but silenced in untreated SW480. The SW480 cells were treated with ranges of concentrations of the epigenetic activators 5-aza-CdR (left hand column) and TSA (right hand column). dH~**2**~O replaced cDNA as a negative control. Gene names are provided to the left of the agarose gel images and the chromosomal location of each germline gene is given in parentheses. The specific concentrations of 5-aza-CdR and TSA are given above the appropriate lane.](oncoscience-01-0745-g002){#F2} ![Somatically silenced germline genes that are activated by 5-aza-CdR exhibit differential re-silencing profiles after 5-aza-CdR withdrawal\ RT-PCR was used to analyse the re-silencing of activated germline genes following removal of the activating agent 5-aza-CdR. Analyses shown are for HCT116 cells. ***GAGE1*** remained highly active following 9 days post 5-aza-CdR removal. RT-PCR indicates ***SSX2, STRA8*** and ***TDRD12*** expression was gradually diminished following the removal of the demethylating agent. Expression of ***C20orf201*** and ***DDX4*** was rapidly lost following removal of 5-aza-CdR. Untreated and DMSO treated cells exhibit no activation of any of the genes analysed for expression activation. The chromosomal location of each gene is provided in parentheses to the right of the gene name. RT-PCR of β***ACT*** shows uniform sample quality and loading.](oncoscience-01-0745-g003){#F3} To determine whether the silencing of hypermethylation-independent genes (*ARRDC5, C4orf17, NT5C1B*) was mediated via histone deacetylation we also treated the HCT116 and SW480 cells with the histone deacetylase (HDAC) inhibitor trichostatin A (TSA) (Figure [1](#F1){ref-type="fig"}; Figure [2](#F2){ref-type="fig"}) or a combination of 5-aza-CdR and TSA (data not shown). Remarkably, all three genes (*ARRDC5, C4orf17, NT5C1B*) remained tightly silenced under these highly transcriptionally permissive conditions. DISCUSSION {#s3} ========== CT antigens are potentially powerful targets for therapeutics, including immunotherapeutics. However, intratumour CT antigen gene expression is often heterogeneous and so there will be a lack of uniformity for any targeting strategy. To overcome this, it has been demonstrated that treatment of tumours with agents that deregulate epigenetic silencing, such as agents that result in DNA hypomethylation can generate a uniform expression of CT antigen genes within a tumour \[for example, see [@R18]-[@R23]\]. However, to date, the epigenetic regulation mechanisms for CT gene silencing has been restricted to a limited number of X-CT genes, all of which are activated by hypomethylating agents. Here we extended the analysis of epigenetic regulation of clinically important biomarkers and reveal that there is a cohort of CT genes that is not activated in response to hypomethylating agents (or HDAC inhibitors). 
This regulation is not simply due to a lack of methylation target CpG islands within the promoter regions as at least two of the genes (*ARRDC5* and *NT5C1B*) have reported CpG islands in their transcriptional promoter regions \[[https://genome.ucsc.edu/](http://genome.ucsc.edu/)\]. These observations indicate that there is a very broad range of mechanisms controlling CT gene regulation. This has implications for CT gene selection for clinical targeting strategies. Moreover, the mechanistic regulatory pathways might indicate sub-groups of CT genes that are co-regulated, which has implications for the study of these genes both as biomarkers, potential oncogenes and/or encoders of drug targets. Additionally, it has been demonstrated that some CT genes are required for tumour cell proliferation. Turning off these genes could reduce the proliferation-mediated burden of tumours, restricting their disease effect and/or enhancing other therapeutic approaches. MATERIALS AND METHODS {#s4} ===================== Maintenance and culturing of human colorectal cell lines HCT116 and SW480 {#s4_1} ------------------------------------------------------------------------- HCT116 and SW480 cell lines were obtained from the European Collection of Cell cultures. Both lines are tested for authenticity once per annum by LGC StandardsTM (authentication tracking number 710236782). HCT116 cells were grown in McCoy\'s 5A medium with GLUTAMAX^TM^ (Invitrogen, GIBCO 36600) and SW480 cells were grown in Dulbecco\'s modified Eagle\'s medium with GLUTAMAX^TM^ (Invitrogen, GIBCO 61965). Both media types were supplemented with 10% foetal bovine serum (Invitrogen; GIBCO 10270). Cells were incubated in humidified incubators at 37°C in a 5% CO~2~ atmosphere. Cells cultures were tested for mycoplasma infection using the LookOut^TM^ Mcycoplasma PCR Detection kit (Sigma Aldrich, MP0035). Epigentics modulating agents were added to the concentrations required as indicated in the main text. Treatment with 5-aza-CrD and TSA was for 48 hours (72 hour treatment yielded identical results). RNA extraction, cDNA synthesis and polymerase chain reaction {#s4_2} ------------------------------------------------------------ Total RNA was isolated using Trizol reagent (Invitrogen; 15596-026). Confluent cells were homogenised in Trizol (1 ml Trizol / 5×10^6^ cells) and held at room temperature (RT) for 5 minutes. Chloroform (200 μl per 1 ml of Trizol) was added to each sample and the homogenate was vigorously shaken for 15 seconds, followed by incubation for 5 minutes at RT. Samples were then centrifuged at 12,000 *g* for 15 minutes at 4°C. The aqueous layer was then removed to a new Eppendorf tube and 500 μl of isopropanol was added. After incubation at RT for 10 minutes, the samples were centrifuged again at 12,000 *g* for 20 minutes. The supernatant was removed and the pellet was washed with 70% ethanol and re-centrifuged at 7,500 *g* for 5 minutes at 4°C. The supernatant was discarded again and the cell pellet was left to dry at RT for 5-10 minutes, and then 100 μl RNase free water containing 2 μl DNase I (Sigma; D5319) was added to each RNA preparation sample. The samples were incubated at 37°C for 10 minutes and then at 75°C for 10 minutes. RNA quality and concentration was measured with a NanoDrop (ND 1000) spectrophotometer. 
Total RNA from normal human testis tissues was supplied by Clontech (Catalogue number; 636643) {#s4_3} ---------------------------------------------------------------------------------------------- Total RNA was used to synthesise cDNA using a SuperScript III First Strand Synthesis Kit (Invitrogen; 18080-051). Samples (1-2 μg) of total RNA were used according to the manufacturer\'s protocol. PCR using β*ACT* primers was used to check the cDNA quality. Gene sequences were obtained from the National Center for Biotechnology Information (<http://www.ncbi.nlm.nih.gov>). Primers were designed to span more than one intron where possible. Primers were designed using Primer 3 software (<http://primer3.ut.ee/>). Primer sequences are provided in the Supplementary Materials. For PCR amplification, 2 μl of diluted cDNA was supplemented with 25 μl of BioMixTM Red (Bioline; BIO-25006) and 1 μl each of the forward and reverse primer, and the final volume was adjusted with ddH~2~O to 50 μl. PCR for samples was initiated with a pre-cycling melting step at 96°C for 5 minutes, followed by 40 cycles of denaturing at 96°C for 30 seconds, an annealing step was carried out between 58-62°C for 30 seconds (specific annealing temperatures are provided in the [table](#T1){ref-type="table"} below), extension at 72°C for 30 seconds and the final extension temperature was 72°C for 5 minutes. All PCR products were evaluated on 1% agarose gels stained with ethidium bromide ###### PCR Primer sequence Gene Forward primer sequence (5′-3′) Reverse primer sequence (5′-3′) PCR annealing Temp. (°C) ------------- --------------------------------- --------------------------------- -------------------------- *βACT* TGCTATCCCTGTACGCCTCT CGTCATACTCCTGCTTGCTG 58.0 *GAGE1* TAGACCAAGGCGCTATGTAC CATCAGGACCATCTTCACAC 58.4 *SSX2* CAGAGATCCAAAAGGCC CTCGTGAATCTTCTCAGAGG 58.4 *ARRDC5* CAACAAGGCAGACTACGTGC GCGAGTGTGCATGATCTCAC 60.5 *C4orf17* CCTCATCCCAGAAGAGTCTG CTGCTGCTGGTTCCATTGAG 60.5 *C20orf201* ATCTGCTCTTCGGCGACCTG ACACTCTCAGTCGCCGTCAC 60.0 *DDX4* GTGCTACTCCTGGAAGACTG CCAACCATGCAGGAACATCC 60.5 *NT5C1B* CGGCAGGAAAATCTACGAGC CTGTAACCAGGTAGGTCCTG 60.5 *STRA8* TGGCAGGTTCTGAATAAGGC GAAGCTTGCCACATCAAAGG 58.4 *TDRD12* GAGCTAAAGTGCTGGTGCAG CTGAGGTCACCGACAATACC 60.5 AA was funded by the Government of the Kingdom of Saudi Arabia. JF was supported by the National Institute of Social Care and Health Research (grant HS/09/008). RJM, EGV and JAW were funded by Cancer Research Wales. RJM was funded by North West Cancer Research (project grants CR888 and CR950).
{ "pile_set_name": "PubMed Central" }
Every 98 seconds someone is sexually assaulted in the United States. Parenting Time Center Umbrella Tree, a safe place for supervised visitation and exchanges. Shelter Temporary Emergency Shelter Child Advocacy Support for Children Violence Intervention Project serves victims of domestic and sexual violence through crisis call response, emergency assistance, advocacy support, shelter, supervised visitation services and other housing support. A trained advocate is available 24 hours a day to listen, help assess needs and safety, and help locate needed resources or help in deciding if reporting a rape or an assault is the right option.We know how hard it is to take the first step or to be scared for someone you know who is in an abusive relationship. We’re here to listen and provide options in moving forward. All VIP services are free and confidential and open to people of all genders in Pennington, Kittson, Marshall, Red Lake and Roseau Counties. Services Include: A safe and confidential place for victims to share their story in a private and non-judgmental environment Safety planning Assistance with protection orders Accompaniment to court proceedings Support and accompaniment during a sexual assault exam at the emergency room Support Groups Supervised Visitation and Safe exchanges (located in Thief River Falls) Technology Safety Alert Violence Intervention Project serves victims of domestic and sexual violence through crisis call response, emergency assistance, advocacy support, shelter, supervised visitation services and other housing support. A trained advocate is available 24 hours a day to listen, help assess needs and safety, and help locate needed resources or help in deciding if reporting a rape or an assault is the right option.We know how hard it is to take the first step or to be scared for someone you know who is in an abusive relationship. We’re here to listen and provide options in moving forward. All VIP services are free and confidential and open to people of all genders in Pennington, Kittson, Marshall, Red Lake and Roseau Counties. Services Include: A safe and confidential place for victims to share their story in a private and non-judgmental environment Safety planning Assistance with protection orders Accompaniment to court proceedings Support and accompaniment during a sexual assault exam at the emergency room Support Groups Supervised Visitation and Safe exchanges (located in Thief River Falls)
{ "pile_set_name": "Pile-CC" }
AVAILABLE Last checked: 51 Minutes ago! Wisdom of the ages 60 days to enlightenment wayne w dyer on amazoncom free shipping on qualifying offers national bestseller this . Wisdom of the ages 60 days to enlightenment wayne w dyer on amazoncom free shipping on qualifying offers national bestseller this . Wisdom of the ages 60 days to enlightenment thu 17 jan 2019 151200 gmt wisdom of the ages 60 pdf 2 preface by acharya buddharakkhita the dhammapada is the. Wisdom of the ages 60 days to enlightenment english edition ebook wayne w dyer amazonnl kindle store. Wisdom of the ages 60 days to ebook bestselling author and personal development guru wayne w dyer shows us how to apply the insight of 60 of the worlds greatest
{ "pile_set_name": "Pile-CC" }
Surveyor Nuclease: a new strategy for a rapid identification of heteroplasmic mitochondrial DNA mutations in patients with respiratory chain defects. Molecular analysis of mitochondrial DNA (mtDNA) is a critical step in diagnosis and genetic counseling of respiratory chain defects. No fast method is currently available for the identification of unknown mtDNA point mutations. We have developed a new strategy based on complete mtDNA PCR amplification followed by digestion with a mismatch-specific DNA endonuclease, Surveyor Nuclease. This enzyme, a member of the CEL nuclease family of plant DNA endonucleases, cleaves double-strand DNA at any mismatch site including base substitutions and small insertions/deletions. After digestion, cleavage products are separated and analyzed by agarose gel electrophoresis. The size of the digestion products indicates the location of the mutation, which is then confirmed and characterized by sequencing. Although this method allows the analysis of 2 kb mtDNA amplicons and the detection of multiple mutations within the same fragment, it does not lead to the identification of homoplasmic base substitutions. Homoplasmic pathogenic mutations have been described. Nevertheless, most homoplasmic base substitutions are neutral polymorphisms while deleterious mutations are typically heteroplasmic. Here, we report that this method can be used to detect mtDNA mutations such as m.3243A>G tRNA(Leu) and m.14709T>C tRNA(Glu) even when they are present at levels as low as 3% in DNA samples derived from patients with respiratory chain defects. Then, we tested five patients suffering from a mitochondrial respiratory chain defect and we identified a variant (m.16189T>C) in two of them, which was previously associated with susceptibility to diabetes and cardiomyopathy. In conclusion, this method can be effectively used to rapidly and completely screen the entire human mitochondrial genome for heteroplasmic mutations and in this context represents an important advance for the diagnosis of mitochondrial diseases.
{ "pile_set_name": "PubMed Abstracts" }
Self Storage Units & Facilities in Dawson Springs, KY Find Movers and Helpers in Your Area If you are looking for Dawson Springs storage facilities, then you have found the right place. Finding outdoor and indoor storage units in Dawson Springs has never been so easy. Moverscorp.com allows you to compare different Dawson Springs, Kentucky self storage units in minutes. Computers are valuable and also very delicate items. Whether you hire professionals or move yourself, it is important to be extremely careful when packing a computer. Here are few tips on how to prepare a computer. How to Pack a Computer Self storage units are a cost-effective and safe way to store household and business items. Consider the following things before storing your items.Prepare Your Items For Storage Moving artwork and antiques requires professionals who have necessary knowledge and equipment to handle delicate pieces. We do not recommend to cut corners or hire a company that is not specialized in delicate items. However, if the items are not of the high price you can consider packing and moving them yourself.Moving Antiques and Artwork When hiring a professional residential moving company or just helpers, it's recommended to create an inventory list. Inventory list represents all of the household items that you plan on taking with you, this is done for your own protection.Moving Inventory Moving process is always hard on someone, but when you try to cut the corners and do it cheaply it can cost you even more. Like everybody else, you don't want to spend a lot, but on the other hand, you don’t want to hire someone who is not experienced. Can cheap movers actually be trustworthy?Cheap Movers Moverscorp.com has provided these listings which are public information drawn from the local internet directories and from our partner companies. To inquire about storage prices, availability, and units sizes please call the phone numbers listed above. Remeber, the cheapest storage unit is not always the best value. Secure and clean storage facility prices are usually higher than other places, but it's totally worth it.
{ "pile_set_name": "Pile-CC" }
Q: Solving system of linear equations (to determine a boundary) I'm puzzled about how to programmatically (in R) solve the following problem: given $\mathbf{R} \in \mathbb{R}^{n \times n}$, $\mathbf{R}^{-1}$, and a constant $c$, what is the solution $\mathbf{u} \in \mathbb{R}^n$, with $\mathbf{u}^T = (u_1, \ldots, u_n)$, to $\mathbf{u}^T\mathbf{R}^{-1}\mathbf{u} = c$?

Let's take the simple case $n = 2$. Fixing a component, say $u_1$, the solution for $u_2$ can be found by explicitly writing down $u_1(u_1r_{11} + u_2r_{21}) + u_2(u_1r_{12}+u_2r_{22}) - c = 0$ and solving for $u_2$ with the quadratic formula. But for more dimensions there must be a better way. I guess I need to bring it into the form $\mathbf{Ax = b}$ in order to use solve, but I haven't figured out yet how exactly. Right now I'm stuck at the following: let $\mathbf{U} = \left(\begin{matrix} u_1 & \ldots & 0\\ \ldots & \ldots & \ldots\\ 0 & \ldots & 1 \end{matrix}\right)$ and $\mathbf{v}^T = (1, 1, \ldots, u_n)$ with $\mathbf{Uv=u}$; then I would have the fixed terms separated from the variable one ($u_n$) whose value I need to determine. I can put it into the equation above, but how do I proceed? Is this the right way?

The background is the answer I posted in How to draw confidence areas. I would like to explicitly compute the "exact" threshold boundary. I understand that I need to solve this linear system but I cannot get it quite right yet. I'm unsatisfied with the two possible solutions: 1. using the quadratic formula to hard-code the solution, and 2. using an optimize routine. The first one would only work for 2 dimensions and the second one would be unreliable (because up to two different solutions are possible for every x). Furthermore, I think there should be a concise solution.

edit (12.03) Thank you for the response. I played with the solution but still have some questions. So as far as I understood, compute_scale would compute my decision boundary. Since I have two possibilities for $\gamma$, i.e. positive and negative, I can compute the critical values. However, if I plot them, I only get half the truth. I tinkered, but haven't figured out how to compute the complete boundary. Any advice?

compute_stat <- function(v, rmat) {
  transv <- qnorm(v)
  return(as.numeric(transv %*% rmat %*% transv))
}

compute_scale <- function(v, rmat) {
  gammavar <- sqrt(threshold / (v %*% rmat %*% v))
  return(c(pos = pnorm(v * gammavar), neg = pnorm(v * (-gammavar))))
}

Rg <- matrix(c(1, .1, .2, 1), ncol = 2) #matrix(c(1,.01,.99,1), ncol = 2)
Rginv <- MASS::ginv(Rg)
gridval <- seq(10^-2, 1 - 10^-2, length.out = 100)
thedata <- expand.grid(x = gridval, y = gridval)
thestat <- apply(thedata, 1, compute_stat, rmat = Rginv)
threshold <- qchisq(1 - 0.8, df = 2)
colors <- ifelse(thestat < threshold, "#FF000077", "#00FF0013")

#png("boundry2.png", 640, 480)
plot(y ~ x, data = thedata, bg = colors, pch = 21, col = "#00000000")
theboundry <- t(apply(thedata, 1, compute_scale, rmat = Rginv))
points(pos1 ~ pos2, data = theboundry, col = "blue")
points(neg1 ~ neg2, data = theboundry, col = "purple")
#dev.off()

A: I understand your problem to be: given an $n$ by $n$ matrix $R$ and a scalar $c$, find a vector $\mathbf{u}$ such that $\mathbf{u}'R^{-1}\mathbf{u}=c$.

First observe:
You have $n$ unknowns (since $\mathbf{u}$ is an $n$ by 1 vector).
$\mathbf{u}'R^{-1}\mathbf{u}=c$ is a single equation. (It isn't a system of equations.)
In general, there won't be a unique solution $\mathbf{u}$. Almost any vector will work if it is properly scaled.
Solution: Pick some arbitrary vector $\mathbf{a}$. Let $\mathbf{u} = \lambda \mathbf{a}$. Then $\mathbf{u}'R^{-1}\mathbf{u}=c $ becomes $\lambda^2 \mathbf{a}'R^{-1}\mathbf{a} = c$. Solving for the scalar $\lambda$ we have $\lambda = \sqrt{\frac{c}{\mathbf{a}'R^{-1}\mathbf{a}}}$. For any vector $\mathbf{a}$ such that $\mathbf{a}'R^{-1}\mathbf{a} \neq 0$, we'll have the solution: $$\mathbf{u} = \lambda \mathbf{a}\quad \text{where} \quad \lambda = \sqrt{\frac{c}{\mathbf{a}'R^{-1}\mathbf{a}}}$$
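A minimal sketch of how the complete boundary could be traced, building on the scaling idea above (this is an addition, not part of the original answer). It assumes the Rg, Rginv, threshold and the qnorm/pnorm mapping from the question's code. One reason the plotted points cover only part of the curve is that every grid point v has two positive components, so u = gammavar * v and u = -gammavar * v only reach directions in the first and third quadrants of u-space; sweeping an arbitrary direction a(theta) around the whole unit circle and scaling it as in the answer closes the curve:

library(MASS)

Rg        <- matrix(c(1, .1, .2, 1), ncol = 2)
Rginv     <- ginv(Rg)
threshold <- qchisq(1 - 0.8, df = 2)

theta <- seq(0, 2 * pi, length.out = 400)
boundary <- t(sapply(theta, function(th) {
  a      <- c(cos(th), sin(th))                        # arbitrary direction, as in the answer
  lambda <- sqrt(threshold / drop(a %*% Rginv %*% a))  # scale so that u' R^{-1} u = threshold
  pnorm(lambda * a)                                    # map back to the (x, y) scale used by compute_stat
}))

lines(boundary, col = "blue", lwd = 2)  # overlay the closed boundary on the existing plot

If no plot is open, plot(boundary, type = "l") shows the closed curve on its own; the same sweep generalizes to n > 2 by scanning directions on the unit sphere instead of the unit circle.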
{ "pile_set_name": "StackExchange" }
# Copyright (c) 2003-2020, CKSource - Frederico Knabben. All rights reserved. # # !!! IMPORTANT !!! # # Before you edit this file, please keep in mind that contributing to the project # translations is possible ONLY via the Transifex online service. # # To submit your translations, visit https://www.transifex.com/ckeditor/ckeditor5. # # To learn more, check out the official contributor's guide: # https://ckeditor.com/docs/ckeditor5/latest/framework/guides/contributing/contributing.html # msgid "" msgstr "" "Language-Team: Lithuanian (https://www.transifex.com/ckeditor/teams/11143/lt/)\n" "Language: lt\n" "Plural-Forms: nplurals=4; plural=(n % 10 == 1 && (n % 100 > 19 || n % 100 < 11) ? 0 : (n % 10 >= 2 && n % 10 <=9) && (n % 100 > 19 || n % 100 < 11) ? 1 : n % 1 != 0 ? 2: 3);\n" msgctxt "Toolbar button tooltip for inserting an image or file via a CKFinder file browser." msgid "Insert image or file" msgstr "Įterpti vaizdą ar failą" msgctxt "Error message displayed when inserting a resized version of an image failed." msgid "Could not obtain resized image URL." msgstr "Nepavyko gauti pakeisto dydžio paveiksliuko URL." msgctxt "Title of a notification displayed when inserting a resized version of an image failed." msgid "Selecting resized image failed" msgstr "Nepavyko pasirinkti pakeisto vaizdo" msgctxt "Error message displayed when an image cannot be inserted at the current position." msgid "Could not insert image at the current position." msgstr "Nepavyko įterpti vaizdo į dabartinę vietą." msgctxt "Title of a notification displayed when an image cannot be inserted at the current position." msgid "Inserting image failed" msgstr "Nepavyko įterpti vaizdo"
{ "pile_set_name": "Github" }
Finnish Fanconi anemia mutations and hereditary predisposition to breast and prostate cancer. Mutations in downstream Fanconi anemia (FA) pathway genes, BRCA2, PALB2, BRIP1 and RAD51C, explain part of the hereditary breast cancer susceptibility, but the contribution of other FA genes has remained questionable. Due to FA's rarity, the finding of recurrent deleterious FA mutations among breast cancer families is challenging. The use of founder populations, such as the Finns, could provide some advantage in this. Here, we have resolved complementation groups and causative mutations of five FA patients, representing the first mutation confirmed FA cases in Finland. These patients belonged to complementation groups FA-A (n = 3), FA-G (n = 1) and FA-I (n = 1). The prevalence of the six FA causing mutations was then studied in breast (n = 1840) and prostate (n = 565) cancer cohorts, and in matched controls (n = 1176 females, n = 469 males). All mutations were recurrent, but no significant association with cancer susceptibility was observed for any: the prevalence of FANCI c.2957_2969del and c.3041G>A mutations was even highest in healthy males (1.7%). This strengthens the exclusive role of downstream genes in cancer predisposition. From a clinical point of view, current results provide fundamental information of the mutations to be tested first in all suspected FA cases in Finland.
{ "pile_set_name": "PubMed Abstracts" }
Dispatches from the 10th Crusade Entries from What's Wrong with the World tagged with 'Statues' Apparently, now that the left and the right have joined forces to start removing Confederate flags from public display, some commenters have started suggesting we need to go after street signs and statues next! This is basically insane. As bad...
{ "pile_set_name": "Pile-CC" }
Where's My Size? Customer Reviews for Freya Core Underwire Sports Bra Filter By: Star Rating Size Height Size: 34H Excellent. It is so hard to find an actually secure bra of any kind, much less a sports bra, at this size. But this one works! I feel secure, and while it's a touch constricting, it's not nearly as bad as your average sports bra/torture device. Annie from Height: Petite (5'3" and under) Age: 40s Posted: June 26,2017 Size: 38E Some reviews indicated that this bra produces pointy result, but I took a chance and bought it. The cups and band are just right for me, lots of uplift for my age, and comfortable, the fabric is less stretchy if you prefer stretchier fabric, this bra might be too stiff for you. As to the shape of the cups, I guess I have much to fill them, so they aren't as pointy as some others said. audrey from Manhattan/KS/USA Height: Petite (5'3" and under) Age: 60s Posted: June 5,2017 Size: 36F I discovered Freya several years ago. These "sport" bras are so comfortable that I wear them everyday. They give awesome support and difinition to your figure. I have the best service from ordering from HerRoom. The are always dependable CJ from Holden/Mo/USA Height: Average Height (5'4"-5'8") Age: 70s Posted: June 2,2017 Size: 34K Was really looking forward to a sport bra in my size. But, unfortunately, this bra didn't work for me. My normal size is a 34HH, but knowing sports bras run small I ordered up a size to 34J. Was too small so I returned & ordered 34K. That was too small too. Liked the 5 hooks, & was comfortable, wish the cups had fit. HerRoom Response: We suggest trying a 36J in this bra because the bra is high impact the band might be running a little tighter than a usual bra. Kim from California Height: Average Height (5'4"-5'8") Age: 50s Posted: May 16,2017 Size: 34DD This bra provides the absolute best support for my girls. Bounce is kept to a minimum as I dance in Zumba class. I love the fit and the durability. It is my favorite sports bra. Michelle K. from North Jersey Height: Petite (5'3" and under) Age: 50s Posted: May 10,2017 Size: 36DDD Great support and comfort TheUndies.com from Height: Petite (5'3" and under) Age: Teen Posted: April 14,2017 Size: 36 Holds perfect and hood support. No excesses under arm TheUndies.com from The underwire popped out in less than a month. I hand wash and line dry. No support when I exercise Freya Response: We’re sorry this bra didn’t work for you. If you felt like it wasn’t supportive enough we recommend going down a band and up in the cup. It is also very important to rotate your bras, even sports bras. Your bras need at least a day to rest to help them last longer. Jacqueline from 11412 Freya AA4002 Tomima's Tip "This bra is designed for high-impact. However, some women find it too tight. You may want to consider going up a band size and down a cup size, but you will gain some breast bounce. " This amazing high impact sports bra shapes the bust while offering the ultimate support and comfort. Multi-part cups shape the breasts. Made with CoolMax fabric, which dries 5 times faster, wicks moisture away from the body and feels soft against the skin. Welcome to HerRoom, the world’s premier online lingerie authority. Founded by Tomima Edmark in 1998, HerRoom has grown from bras and panties to include swimwear, sleepwear, and beyond. We offer over 250 brands, from classic brands you love like Wacoal, Chantelle, and Vanity Fair, to some you may not have heard of—Fantasie, Prima Donna, Elomi—but are sure to love. 
We work hard to provide you with as much information as possible, including extensive fit information, measured drawings of each bra, and honest customer reviews. Whether you’re a 28A or a 58J, we’re confident you’ll find something you love at HerRoom!
{ "pile_set_name": "Pile-CC" }
The physiological effects of vasopressin when used to control intra-abdominal bleeding. Vasopressin was used in ten critically ill patients with massive intra-abdominal bleeding unresponsive to conventional therapy. Vasopressin controlled bleeding in four patients, three of whom had continued to bleed following laparotomy for haemostasis; in two other patients, bleeding was reduced. All the patients were intensively monitored throughout the period of the vasopressin treatment; this enabled other physiological effects of vasopressin to be documented and reported. Mean arterial pressure and central venous pressure increased following the administration of vasopressin and there was a decrease in heart rate. Core body temperature rose significantly. Although all the patients had impaired renal function before receiving vasopressin, five had a prompt diuresis following its administration. Eight patients died but only three of intra-abdominal bleeding; two patients survived to leave hospital. Four patients had post-mortem evidence of ischaemia in the heart, liver and gastrointestinal tract; vasopressin may have contributed to the development of this. Vasopressin may have a place in the management of patients with life-threatening intra-abdominal haemorrhage but its use should be confined to those patients in whom conventional therapy has failed.
{ "pile_set_name": "PubMed Abstracts" }
KATY Skull Rose Print Dress 8-14
Skull rose prints are back with a boom. The Skull Rose Print Dress 8-14 is a bandeau dress that is a must-have for any girl. Combine it with flats for the day, or dress it up in the evening with high heels and a statement handbag to complete the look!
{ "pile_set_name": "Pile-CC" }
Q: Is the complex form of the Fourier series of a real function supposed to be real? The question said to plot the $2\pi$ periodic extension of $f(x)=e^{-x/3}$, and find the complex form of the Fourier series for $f$. My work: $$a_n=\frac{1}{2\pi}\int_{-\pi}^{\pi}e^{-x/3}e^{-inx}dx=\frac{1}{2\pi}\int_{-\pi}^{\pi}e^{-x(1/3+in)}dx$$ $$=\frac{e^{\pi(\frac{1}{3}+in)} - e^{-\pi(\frac{1}{3}+in)}}{2\pi(\frac{1}{3}+in)}=\frac{1}{\pi(\frac{1}{3}+in)}\sinh(\pi(\frac{1}{3}+in))$$ $$\therefore F(x)=\frac{3\sinh(\pi/3)}{\pi}+\sum_{n=-\infty}^{\infty}\frac{3\sinh(\pi/3+in\pi)}{\pi+3in\pi}\cos(nx)$$ But, this is not always real-valued. Is it possible for the complex Fourier series of a real-valued function to have imaginary coefficients, or is my algebra just wrong? A: You are using the formula for the complex fourier coefficients which are usually denoted by $c_n$. These are usually complex, and they lead to the representation: $f_f(x) = \sum_{n=-\infty}^\infty c_n e^{inx}$ This is still (more or less) the original function and is therefore real. There is also a transformation into the sinus-cosinus representation: $f_f(x) = a_0 + \sum_{n=1}^\infty a_n \cos(nx) + b_n \sin(nx)$ Where the $a_n$ and $b_n$ are real if the original function was real. You can even go back and forth between the 'real' and the 'complex' coefficients. This comes from the fact that you can express the sinus as well as the cosinus as $\sin(x) = \frac{1}{2i}(e^{ix}-e^{-ix})$ and $\cos(x) = \frac{1}{2}(e^{ix}+e^{-ix})$. Or the other way around which might be more familiar: $e^{ix} = \cos(x)+i\sin(x)$ You can find all of this including the formulas for converting the real coefficients $a_n,b_n$ to the complex ones $c_n$ and vice versa here: http://mathworld.wolfram.com/FourierSeries.html
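A quick way to see the reality directly, using only the standard relation between the two sets of coefficients (this check is not stated explicitly in the original answer): for a real-valued $f$, the complex coefficients satisfy $c_{-n} = \overline{c_n}$, so the $n$ and $-n$ terms of the complex series combine into a real quantity,
$$c_n e^{inx} + c_{-n} e^{-inx} = 2\,\operatorname{Re}\left(c_n e^{inx}\right) = \underbrace{(c_n + c_{-n})}_{a_n}\cos(nx) + \underbrace{i\,(c_n - c_{-n})}_{b_n}\sin(nx).$$
With the coefficients computed in the question, $c_n = \dfrac{\sinh\bigl(\pi(\tfrac{1}{3}+in)\bigr)}{\pi(\tfrac{1}{3}+in)}$, each such pair is real even though the individual $c_n$ are not; the series written at the end of the question fails to be real only because it keeps the complex $c_n$ while replacing $e^{inx}$ by $\cos(nx)$.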
{ "pile_set_name": "StackExchange" }
ZTE ZMAX 2 Smartphone ZTE USA announced the release of ZMAX 2, a second generation, budget-priced Android smartphone for $150 with AT&T. The new phone features an updated design with the same long-lasting battery and 5.5-inch HD screen at an affordable price. The phone’s price-to-feature ratio is outstanding and the flexibility of a prepaid plan only sweetens the deal. The ZTE ZMAX 2 delivers one of the best viewing experiences with a large 5.5-inch HD display, Dolby Digital Plus Audio, a long-lasting 3,000 mAh removable battery and 4G LTE speeds. The 8-megapixel rear and 2-megapixel front cameras capture clear, crisp images and video. Thanks to 16 GB of internal memory you have plenty of space to store content and download the latest apps to edit or share your photos and videos. The ZTE ZMAX 2 is a premium phablet that features the latest Android 5.1 Lollipop operating system, a 1.2 GHz quad-core Snapdragon processor. Price and Availability The ZTE ZMAX 2 can be purchased online at att.com and at select AT&T retail locations for $149.99 beginning September 25, 2015.
{ "pile_set_name": "Pile-CC" }
Molecular Engineering of Phenylbenzimidazole-Based Orange Ir(III) Phosphors toward High-Performance White OLEDs. To develop B-O complementary-color white organic light-emitting diodes (WOLEDs) exhibiting high efficiency and low roll-off as well as color stability simultaneously, we have designed two orange iridium(III) complexes by simply controlling the position of the methoxyl group on the cyclometalated ligand. The obtained emitters mOMe-Ir-BQ and pOMe-Ir-BQ show good photophysical and electrochemical stabilities with a broadened full width at half-maximum close to 100 nm. The corresponding devices realize highly efficient electrophosphorescence with a maximum current efficiency (CE) and power efficiency (PE) of 24.4 cd A-1 and 15.3 lm W-1 at a high doping concentration of 15 wt %. Furthermore, the complementary-color all-phosphor WOLEDs based on these phosphors exhibit good performance with a maximum CE of 31.8 cd A-1, PE of 25.0 lm W-1, and external quantum efficiency of 15.5%. Particularly, the efficiency of this device is still as high as 29.3 cd A-1 and 14.2% at the practical brightness level of 1000 cd m-2, giving a small roll-off. Meanwhile, extremely high color stability is achieved by these devices with insignificant chromaticity variation.
{ "pile_set_name": "PubMed Abstracts" }
1 00:00:32,332 --> 00:00:37,337 ♪♪~ 2 00:00:37,337 --> 00:00:45,245 ♪♪~ 3 00:00:45,245 --> 00:00:49,349 (カメラのシャッター音) 4 00:00:49,349 --> 00:00:53,336 (小宮山志保)財布は現金のみ。 カード類 免許証 一切なし。 5 00:00:53,336 --> 00:00:56,373 (村瀬健吾)身元のわかる所持品は 一切なしか…。 6 00:00:56,373 --> 00:00:59,426 名無しの死体ってわけね。 7 00:00:59,426 --> 00:01:01,426 (村瀬)ああ。 8 00:01:03,363 --> 00:01:05,348 (浅輪直樹) あれ? どうしたんですか? 9 00:01:05,348 --> 00:01:07,417 (加納倫太郎)プルタブ開いてるのに 全然飲んでない。 10 00:01:07,417 --> 00:01:09,417 本当ですね。 11 00:01:10,503 --> 00:01:12,503 どうして飲まなかったんだろう? 12 00:01:13,306 --> 00:01:17,360 (青柳 靖) 昨日 男の声を聞いたんですね? 13 00:01:17,360 --> 00:01:20,380 ええ そうなんです。 夕方6時過ぎに→ 14 00:01:20,380 --> 00:01:24,351 犬の散歩に出たんですけど この下の道に来た時に…。 15 00:01:24,351 --> 00:01:26,353 ≪返してくれよ! 16 00:01:26,353 --> 00:01:28,338 (青柳)「返してくれ」か…。 17 00:01:28,338 --> 00:01:31,441 (早瀬川真澄)死因は 頭部を強打した事による脳挫傷。 18 00:01:31,441 --> 00:01:34,311 死亡推定時刻は 昨日の午後6時前後。 19 00:01:34,311 --> 00:01:37,330 ちょうど主婦が 男の声を聞いた時刻に一致するな。 20 00:01:37,330 --> 00:01:39,366 お疲れさまです。 あ お疲れさまです。 21 00:01:39,366 --> 00:01:43,353 現場の缶コーヒーの 分析結果が出ました。 22 00:01:43,353 --> 00:01:46,356 コーヒーの中から 致死量の スリーパーが検出されました。 23 00:01:46,356 --> 00:01:48,358 ん? スリーパー? 24 00:01:48,358 --> 00:01:51,394 スリーパーは粉末状の合成麻薬。 25 00:01:51,394 --> 00:01:54,347 多量に摂取すると 心拍や血圧が低下して→ 26 00:01:54,347 --> 00:01:56,299 眠るように死ねるって言われてる。 27 00:01:56,299 --> 00:01:59,336 最近 北欧経由で入ってきた 新しい麻薬で→ 28 00:01:59,336 --> 00:02:02,372 まだ それほど 出回ってないはずなんだけど。 29 00:02:02,372 --> 00:02:05,342 浅輪君 缶コーヒーに付着してた指紋は? 30 00:02:05,342 --> 00:02:07,444 被害者の指紋と一致しました。 31 00:02:07,444 --> 00:02:10,444 あと 気になる事があるんです。 (志保)何? 32 00:02:12,365 --> 00:02:15,368 これと同じような スリーパー入りの缶コーヒーを飲んで→ 33 00:02:15,368 --> 00:02:17,320 死亡した遺体が すでに3体発見されてます。 34 00:02:17,320 --> 00:02:21,408 えっ!? 所持品は現金入りの財布のみ。 35 00:02:21,408 --> 00:02:23,408 3体とも 身元がわからなかった。 36 00:02:24,344 --> 00:02:27,314 (青柳)けど なんで 3人とも 身元がわかんねえのかな? 37 00:02:27,314 --> 00:02:32,352 そうね。 身なりを見る限り 普通の生活者って感じなのにね。 38 00:02:32,352 --> 00:02:34,337 (矢沢英明) まず最初に発見されたのが→ 39 00:02:34,337 --> 00:02:38,341 Aさん。 発見されたのは4月2日 朝。 40 00:02:38,341 --> 00:02:41,244 死亡推定時刻は 前日の午後6時前後になります。 41 00:02:41,244 --> 00:02:44,364 場所 夕日の丘緑地。 42 00:02:44,364 --> 00:02:46,366 見た目 気持ちよく寝てるようにしか→ 43 00:02:46,366 --> 00:02:50,353 見えないですね。 次に発見されたのがBさん。 44 00:02:50,353 --> 00:02:53,323 発見された日が5月11日 朝。 45 00:02:53,323 --> 00:02:57,360 死亡推定時刻 前夜の午後10時前後。 46 00:02:57,360 --> 00:03:00,363 場所 あすか台丘陵 展望広場。 47 00:03:00,363 --> 00:03:03,450 知ってる ここ! 星 すっごいきれいに見えるのよ。 48 00:03:03,450 --> 00:03:06,450 星? 星とか見るのか? 小宮山君。 49 00:03:07,253 --> 00:03:09,272 (志保)悪い? (村瀬)いや…。 50 00:03:09,272 --> 00:03:11,341 矢沢 次。 51 00:03:11,341 --> 00:03:14,327 で 3番目に発見されたのが Cさん。 52 00:03:14,327 --> 00:03:17,347 発見されたのが5月28日 朝。 53 00:03:17,347 --> 00:03:21,368 死亡推定時刻 前夜の午後11時前後。 54 00:03:21,368 --> 00:03:24,354 場所 花園自然公園。 55 00:03:24,354 --> 00:03:28,358 あら~ 3人とも 景色のいいとこで死んだのね。 56 00:03:28,358 --> 00:03:30,443 この手だけどさ…。 (志保)手? 57 00:03:30,443 --> 00:03:33,443 うん。 なんか握ってるように見えない? 58 00:03:34,364 --> 00:03:36,366 ああ そう言われたら そうですね。 59 00:03:36,366 --> 00:03:41,354 この手も この手も この手も。 60 00:03:41,354 --> 00:03:44,357 確かに3人とも 何か握ってたように見えますね。 61 00:03:44,357 --> 00:03:47,360 (青柳)う~ん でも その握ってた何かは→ 62 00:03:47,360 --> 00:03:49,396 どこにいったのよ? いや 少なくともさ→ 63 00:03:49,396 --> 00:03:51,348 この3人が死んだ時に→ 64 00:03:51,348 --> 00:03:53,366 誰か そばにいた事だけは 確かだよね。 65 00:03:53,366 --> 00:03:55,352 えっ? なんで そんな事わかるんですか? 
66 00:03:55,352 --> 00:03:57,387 だって スリーパーって→ 67 00:03:57,387 --> 00:03:59,422 粉末状の合成麻薬って 先生 言ってたじゃない。 68 00:03:59,422 --> 00:04:04,344 だとしたら 1人で缶に入れて 飲んで 死んだんだとしたら→ 69 00:04:04,344 --> 00:04:06,363 その包み紙っていうか パッケージが→ 70 00:04:06,363 --> 00:04:08,348 残ってなきゃいけない。 けど 報告書には→ 71 00:04:08,348 --> 00:04:12,385 その記載が どこにもない。 つまり 現場には他に誰かいて→ 72 00:04:12,385 --> 00:04:14,320 そのスリーパーの パッケージなりと→ 73 00:04:14,320 --> 00:04:16,256 握られてたものを 持ち去ったって事かな。 74 00:04:16,256 --> 00:04:20,360 いやいやいや ちょっと待て。 今回の事件は 犯行時に→ 75 00:04:20,360 --> 00:04:22,362 「返してくれ」っていう 男の声が聞かれてる。 76 00:04:22,362 --> 00:04:25,331 もしかして この3人は 犯人に だまされて→ 77 00:04:25,331 --> 00:04:27,350 スリーパー入り缶コーヒーを 飲まされ→ 78 00:04:27,350 --> 00:04:30,387 手に握っていた何かを奪われた。 79 00:04:30,387 --> 00:04:33,440 ところが今回は 事態に気づいた被害者が→ 80 00:04:33,440 --> 00:04:37,343 缶コーヒーを飲まずに抵抗して 殺害された。 81 00:04:37,343 --> 00:04:40,346 この一連のスリーパー関連事件は 連続強盗殺人の可能性がある。 82 00:04:40,346 --> 00:04:43,333 浅輪と係長は その3人を含めた 被害者の身元捜査。 83 00:04:43,333 --> 00:04:45,402 青柳さんたちは 昨日の事件の地取り。 84 00:04:45,402 --> 00:04:47,387 我々は スリーパーの出どころを洗う。 85 00:04:47,387 --> 00:04:49,387 行くぞ 小宮山君。 (志保)はい! 86 00:04:50,373 --> 00:04:52,342 なんだよ! 87 00:04:52,342 --> 00:04:54,344 なんで あのヒラ刑事が 指示出してんだよ。 88 00:04:54,344 --> 00:04:56,429 あれ? 行かないんですか? 89 00:04:56,429 --> 00:04:58,429 なんすか!? 90 00:04:59,532 --> 00:05:01,532 やる事があるんだぜ。 91 00:05:02,335 --> 00:05:04,354 (青柳)ポチッと。 (矢沢)なんすか? これ。 92 00:05:04,354 --> 00:05:07,357 (青柳)昨日の現場付近の 防犯カメラの映像。 93 00:05:07,357 --> 00:05:10,410 えっ それ みんなに隠してたんですか? 94 00:05:10,410 --> 00:05:12,295 みんなにって… いや 誰にも聞かれてないもの。 95 00:05:12,295 --> 00:05:14,364 うわっ もう意味がわからない。 96 00:05:14,364 --> 00:05:18,384 (青柳)あ 被害者みっけ。 (矢沢)あ…。 97 00:05:18,384 --> 00:05:20,353 (青柳)3時50分か。 98 00:05:20,353 --> 00:05:22,338 (矢沢) 死亡推定時刻は6時前後だから→ 99 00:05:22,338 --> 00:05:24,457 被害者 2時間あまり 現場にいたって事になりますね。 100 00:05:24,457 --> 00:05:27,457 うん。 2時間 何してたんだろうな? 101 00:05:28,344 --> 00:05:31,414 (青柳)おっ 現場へ向かう女子高生 かわいい。 102 00:05:31,414 --> 00:05:33,466 もう… そんなの関係ないでしょ。 103 00:05:33,466 --> 00:05:36,466 犯行時刻前後見せてくださいよ 早く。 104 00:05:37,454 --> 00:05:40,454 (青柳)このおばちゃん 事情聴取した人。 105 00:05:41,341 --> 00:05:43,409 あっ。 106 00:05:43,409 --> 00:05:45,409 さっきの女子高生…。 107 00:05:50,366 --> 00:05:53,303 (真澄)残念ながら 特に手術の痕もないし→ 108 00:05:53,303 --> 00:05:56,389 職業に結びつくような 身体的な特徴もない。 109 00:05:56,389 --> 00:05:58,341 そうですか…。 110 00:05:58,341 --> 00:06:01,361 身元の確認 難航しそうね。 111 00:06:01,361 --> 00:06:06,349 うん…。 自殺として処理された 前の3人も当たってみますよ。 112 00:06:06,349 --> 00:06:09,252 発見された時の服なんかも 保管されてるはずですからね。 113 00:06:09,252 --> 00:06:13,339 先生。 前の3人の右手の写真って ないですかね? 114 00:06:13,339 --> 00:06:15,325 右手の写真ですか? 115 00:06:15,325 --> 00:06:18,344 ええ。 3人とも 何か握ってたようなんですが→ 116 00:06:18,344 --> 00:06:21,431 手のアップの写真があったら 何かわかるかもしれないと思って。 117 00:06:21,431 --> 00:06:23,431 わかりました 探してみます。 118 00:06:24,467 --> 00:06:27,467 あ これだ。 ありました。 119 00:06:29,422 --> 00:06:35,422 これが 最初に発見されたAさんが 着てた服ですね。 120 00:06:36,296 --> 00:06:38,448 なんか 量販店に売ってる 普通の服みたい。 121 00:06:38,448 --> 00:06:41,448 うーん 確かに そうですね。 122 00:06:42,335 --> 00:06:44,304 あ なんだ? これ。 123 00:06:44,304 --> 00:06:46,339 それ クリーニング屋のタグじゃない? 
124 00:06:46,339 --> 00:06:49,325 あ そうですよ。 ああ これ 上着脱いだ時とかに→ 125 00:06:49,325 --> 00:06:51,377 ここに まだ くっついたままになってて→ 126 00:06:51,377 --> 00:06:53,329 慌てて引きちぎって ズボンの中に入れるって→ 127 00:06:53,329 --> 00:06:55,448 これ 僕も よくやるんですよ。 128 00:06:55,448 --> 00:06:57,448 あ これがあれば 身元わかるかも。 129 00:06:59,302 --> 00:07:01,304 (志保)あれが江木圭一ね。 (村瀬)ああ。 130 00:07:01,304 --> 00:07:04,374 以前 麻薬の不法所持で 挙げられた事がある。 131 00:07:04,374 --> 00:07:08,361 戸倉組の末端だが スリーパーの売人という噂だ。 132 00:07:08,361 --> 00:07:10,513 随分 金回りよさそうね。 133 00:07:10,513 --> 00:07:14,513 フッ ひょっとしたら アガリを ピンハネしてるのかもしれんな。 134 00:07:17,353 --> 00:07:20,356 (志保)ちょっと 持ち物 見せてもらっていいですか? 135 00:07:20,356 --> 00:07:22,342 (江木圭一)ああっ! 離せ! 136 00:07:22,342 --> 00:07:24,394 うわっ! おい てめえ! 137 00:07:24,394 --> 00:07:27,347 オラッ! ああっ…。 大丈夫ですか? 138 00:07:27,347 --> 00:07:30,350 (志保)これ あなたの セカンドバッグに入っていた→ 139 00:07:30,350 --> 00:07:32,335 スリーパー。 140 00:07:32,335 --> 00:07:35,405 あなたが これを売っていた相手を 全員言ってもらうわよ。 141 00:07:35,405 --> 00:07:38,341 (江木)俺は 誰にも売ったりしてませんよ。 142 00:07:38,341 --> 00:07:40,360 嘘をつくな。 本当っすよ。 143 00:07:40,360 --> 00:07:43,313 それは 俺が使うために持ってたんすよ。 144 00:07:43,313 --> 00:07:46,399 それとも そいつを人に売ったって 証拠でもあるんすか? 145 00:07:46,399 --> 00:07:52,338 ♪♪~ 146 00:07:52,338 --> 00:07:54,440 (志保)この人たちに見覚えは? 147 00:07:54,440 --> 00:07:57,440 (江木)いやぁ 見た事ないっすね。 148 00:07:59,429 --> 00:08:01,429 じゃあ この人は? 149 00:08:02,448 --> 00:08:04,448 知らないなぁ。 150 00:08:08,354 --> 00:08:10,323 (田辺瑞恵)警視庁? 151 00:08:10,323 --> 00:08:13,443 常盤木アズサさんは ごたい… ございた… ご在宅でしょうか? 152 00:08:13,443 --> 00:08:15,443 ご在宅でしょうか? 153 00:08:16,446 --> 00:08:18,446 (青柳)ご在宅でしょうか? ああ 今 言える…。 154 00:08:21,367 --> 00:08:23,436 (瑞恵)お嬢様…。 155 00:08:23,436 --> 00:08:26,436 (青柳)ちょっと 話を聞きたいだけなんで。 156 00:08:30,259 --> 00:08:32,345 (矢沢)ご家族の方は? 157 00:08:32,345 --> 00:08:34,347 (常盤木アズサ) 父も母も 今は演奏旅行で→ 158 00:08:34,347 --> 00:08:36,316 ヨーロッパを回っています。 159 00:08:36,316 --> 00:08:39,402 (矢沢)ああ ご両親とも 音楽家ですもんね。 160 00:08:39,402 --> 00:08:41,337 (青柳)おい 写真。 (矢沢)はい。 161 00:08:41,337 --> 00:08:44,290 (青柳)こんな写真 ごめんね。 (矢沢)いつの間に…。 162 00:08:44,290 --> 00:08:46,376 (青柳)この人 会った事あるかな? 163 00:08:46,376 --> 00:08:48,411 ありません。 (青柳)じゃあ 昨日の→ 164 00:08:48,411 --> 00:08:51,447 午後4時から6時ぐらいまで どこにいたかな? 165 00:08:51,447 --> 00:08:54,447 答えなければならない理由は なんですか? 166 00:08:55,251 --> 00:08:57,437 写真。 え? 167 00:08:57,437 --> 00:08:59,437 持ってるじゃないですか。 168 00:09:00,406 --> 00:09:05,445 (青柳)昨日 この階段の先の神社で 人が殺されたんだけど。 169 00:09:05,445 --> 00:09:07,445 ニュースでも言ってたでしょ? 170 00:09:08,464 --> 00:09:11,464 制服で学校を調べたんですね。 171 00:09:15,338 --> 00:09:17,440 (アズサ)学校がわかれば→ 172 00:09:17,440 --> 00:09:20,440 生徒の名前と住所は すぐにわかりますよね。 173 00:09:23,312 --> 00:09:27,300 (矢沢)あのさ 何か 見たり聞いたりした事があれば→ 174 00:09:27,300 --> 00:09:29,369 教えて…。 (青柳)神社の方向に行ってから→ 175 00:09:29,369 --> 00:09:31,337 戻ってくるまで2時間あまり→ 176 00:09:31,337 --> 00:09:33,356 何してたの? 散歩してました。 177 00:09:33,356 --> 00:09:36,359 へえ~ わざわざ あそこまで 電車で行って 散歩したの? 178 00:09:36,359 --> 00:09:38,327 ええ。 179 00:09:38,327 --> 00:09:40,430 おう 写真。 もういいっすよ。 180 00:09:40,430 --> 00:09:43,430 帰りだけ走ってるのは なんでかな? 181 00:09:45,351 --> 00:09:48,304 気がついたら 思っていたより 時間が経っていたので→ 182 00:09:48,304 --> 00:09:50,440 急いだんです。 183 00:09:50,440 --> 00:09:55,440 質問が それだけなら もう帰ってもらえますか? 184 00:09:57,430 --> 00:09:59,430 はい。 185 00:10:06,439 --> 00:10:08,439 大人なめんなよ。 186 00:10:14,280 --> 00:10:16,349 (店主)このタグ うちのですわ。 本当ですか? 187 00:10:16,349 --> 00:10:19,352 じゃあ このタグから クリーニングに出した人物って→ 188 00:10:19,352 --> 00:10:21,387 特定出来ませんかね? 
ああ わかりました。 189 00:10:21,387 --> 00:10:23,356 ちょっと待ってくださいね。 はい お願いします。 190 00:10:23,356 --> 00:10:27,360 ええ… 3の546の…。 191 00:10:27,360 --> 00:10:30,329 ああ ありました。 192 00:10:30,329 --> 00:10:32,381 中谷実さんですね。 193 00:10:32,381 --> 00:10:36,302 (田村)この寮にいた中谷実さんに 間違いないです。 194 00:10:36,302 --> 00:10:39,322 工場の期間工だったんですが いきなり いなくなっちゃって。 195 00:10:39,322 --> 00:10:42,358 いなくなっちゃったって いつ頃ですか? 196 00:10:42,358 --> 00:10:45,344 4月の初め頃でしたかね。 197 00:10:45,344 --> 00:10:47,346 その中谷さんと 親しくしてた方って→ 198 00:10:47,346 --> 00:10:50,299 どなたか いらっしゃいます? さあ…。 199 00:10:50,299 --> 00:10:53,269 あまり人と付き合うタイプじゃ ありませんでしたからね。 200 00:10:53,269 --> 00:10:55,288 部屋 拝見していいですか? いや ちょっと… すいません! 201 00:10:55,288 --> 00:10:58,391 ちょっと待って。 あ ちょっと待ってください。 202 00:10:58,391 --> 00:11:01,444 もう新しい人が入っちゃってて…。 203 00:11:01,444 --> 00:11:05,348 え じゃあ 中谷さんの荷物って ここには ないんですか? 204 00:11:05,348 --> 00:11:09,302 中谷さんは両親を亡くしてて 兄弟もないしで…。 205 00:11:09,302 --> 00:11:13,406 仕方ないんで 管理人の私が 処分させてもらいました。 206 00:11:13,406 --> 00:11:15,341 じゃあ 荷物はないって事ですね? 207 00:11:15,341 --> 00:11:18,344 き… 決まってるじゃないですか…。 208 00:11:18,344 --> 00:11:26,319 ♪♪~ 209 00:11:26,319 --> 00:11:28,321 何 これ? 中谷さんの? 210 00:11:28,321 --> 00:11:30,456 ダメだよ 人のもの 勝手に自分のものにしちゃ。 211 00:11:30,456 --> 00:11:33,456 逮捕するよ。 すみません…。 212 00:11:34,260 --> 00:11:37,280 これが管理人がネコババしてた パソコンってわけね。 213 00:11:37,280 --> 00:11:39,365 うん。 (村瀬)その男は一体→ 214 00:11:39,365 --> 00:11:41,334 人のパソコンで何やってたんだ? 215 00:11:41,334 --> 00:11:44,337 無料のオンラインゲームを 楽しんでたそうです。 216 00:11:44,337 --> 00:11:47,340 悲しいほどケチくせえ奴だな。 217 00:11:47,340 --> 00:11:49,358 コーヒー飲む? はい。 218 00:11:49,358 --> 00:11:51,360 で ちょっと これ 見てもらっていいですか? 219 00:11:51,360 --> 00:11:54,430 これ 失踪前の中谷さんが 頻繁に訪れていたサイトです。 220 00:11:54,430 --> 00:11:56,349 (志保) 「Heart&Voice」? 221 00:11:56,349 --> 00:11:59,352 色んな事があって 生きるのが つらくなった人たちが→ 222 00:11:59,352 --> 00:12:01,337 ここに 自分の思いを書き込むんですよ。 223 00:12:01,337 --> 00:12:05,341 (村瀬)それにしても これ すごい数の書き込みだな。 224 00:12:05,341 --> 00:12:07,260 こんなに大勢 死にたがってんの? 225 00:12:07,260 --> 00:12:11,347 日本の年間自殺者数は 3万人を超えてますからね。 226 00:12:11,347 --> 00:12:13,349 先進国でも 常にトップクラスですよ。 227 00:12:13,349 --> 00:12:15,351 僕のコーヒー。 自分で淹れろ。 228 00:12:15,351 --> 00:12:17,370 えっ…。 このサイトには→ 229 00:12:17,370 --> 00:12:19,338 会員専用ルームっていうのが あるんですね。 230 00:12:19,338 --> 00:12:22,341 これは 自分が置かれている状況を お互いに話し合う事で→ 231 00:12:22,341 --> 00:12:25,294 もう一度 生きる希望を見いだすっていうか→ 232 00:12:25,294 --> 00:12:27,346 まあ いわば アメリカ型の→ 233 00:12:27,346 --> 00:12:29,348 セラピーサイトみたいな もんなんですよ。 234 00:12:29,348 --> 00:12:32,451 ちょっと 僕もコーヒー…。 なるほどねぇ。 235 00:12:32,451 --> 00:12:36,451 で その会員専用ルームが これね。 236 00:12:38,257 --> 00:12:40,359 ああ それ 僕もやってみたんですけど→ 237 00:12:40,359 --> 00:12:42,345 会員になる手続きが むちゃくちゃ面倒なんですよ。 238 00:12:42,345 --> 00:12:44,330 メールアドレスとか書いて 入力すれば→ 239 00:12:44,330 --> 00:12:47,366 IDが送られてくるんじゃない? ああ それが違うんですよ。 240 00:12:47,366 --> 00:12:49,252 ここのサイトのログインIDは→ 241 00:12:49,252 --> 00:12:52,288 本人限定の受け取り郵便が 送られてくるんですね。 242 00:12:52,288 --> 00:12:55,291 それって 本人が 身分証明書を提示しないと→ 243 00:12:55,291 --> 00:12:57,343 受け取れないってやつだよね。 244 00:12:57,343 --> 00:12:59,328 ネットバンクとかで 身分確認のために使ってる。 245 00:12:59,328 --> 00:13:01,330 そうです そうです そうです。 246 00:13:01,330 --> 00:13:03,366 しかし なんで そんな厳重に チェックする必要があるんだ? 
247 00:13:03,366 --> 00:13:06,369 それはですね 身元確認を厳しく行う事で→ 248 00:13:06,369 --> 00:13:09,322 相手が なりすましや 冷やかしでない事を保証し→ 249 00:13:09,322 --> 00:13:11,324 深い信頼…。 「深い信頼関係の中で→ 250 00:13:11,324 --> 00:13:14,343 生きる希望を回復する場を 実現する」 251 00:13:14,343 --> 00:13:17,330 これ 僕 覚えたんですよ。 まあ あの 会員専用っていっても→ 252 00:13:17,330 --> 00:13:19,398 みんなハンドルネーム 使ってるんですけどね。 253 00:13:19,398 --> 00:13:21,367 あのさぁ…。 はい? 254 00:13:21,367 --> 00:13:24,420 このハンドルネーム ジェミニっていう人だけどさ…。 255 00:13:24,420 --> 00:13:26,420 ジェミニ? 256 00:13:27,506 --> 00:13:31,506 本当 頻繁に書き込んでるよね。 257 00:13:32,361 --> 00:13:34,447 本当ですね。 258 00:13:34,447 --> 00:13:38,447 誰の質問にも 懇切丁寧に答えてる。 259 00:13:40,252 --> 00:13:43,289 よし とにかく 一応 そのサイトの管理人を→ 260 00:13:43,289 --> 00:13:45,358 サイバー犯罪対策課に 調べてもらおう。 261 00:13:45,358 --> 00:13:47,343 なんで また ヒラ刑事のお前が 仕切ってんだよ? 262 00:13:47,343 --> 00:13:49,395 ヒラ…? ああ そうそう。 もう一つ→ 263 00:13:49,395 --> 00:13:52,365 気になる事があったんですよ。 (青柳)なんだ? ヒラ刑事その4。 264 00:13:52,365 --> 00:13:54,367 えっと… その4? (青柳)うん。 265 00:13:54,367 --> 00:13:57,353 寮の管理人さんが 中谷さんの部屋を片付けてる時に→ 266 00:13:57,353 --> 00:14:00,506 なくなってるものが あったんですって。 267 00:14:00,506 --> 00:14:03,506 中谷さん 確か 熱帯魚を 飼ってたはずなんですがね。 268 00:14:04,343 --> 00:14:08,297 あ…。 え? なんですか? 269 00:14:08,297 --> 00:14:10,333 熱帯魚… 熱帯魚。 あっ! 270 00:14:10,333 --> 00:14:12,401 え 何やってんの? いや 主任たちは→ 271 00:14:12,401 --> 00:14:14,437 ちょっと用事を思い出したので 出かけてくる。 272 00:14:14,437 --> 00:14:17,437 え? ちょっと…。 273 00:14:19,442 --> 00:14:21,442 (矢沢)あああ…。 (青柳)ごめんなさい! 274 00:14:23,245 --> 00:14:26,449 私 何かした? ああ いや… ごめんね。 275 00:14:26,449 --> 00:14:29,449 気にしないで。 いつもの事だから。 あ そう。 276 00:14:31,370 --> 00:14:35,324 頼まれてた右手の写真です。 ああ ありがとう。 277 00:14:35,324 --> 00:14:38,444 最近 お疲れなんじゃないですか? うん ちょっと。 278 00:14:38,444 --> 00:14:41,444 右手 出してください。 279 00:14:47,303 --> 00:14:49,372 元気注入しておきましたんで。 280 00:14:49,372 --> 00:14:51,372 ラブ注入の方がいいんですけど。 281 00:14:55,444 --> 00:14:59,444 それは おいおい。 失礼します。 282 00:15:01,350 --> 00:15:04,303 えーっと? えーっと えっと えっと…。 283 00:15:04,303 --> 00:15:07,390 なんの話してたっけ? ああ あれだ…。 284 00:15:07,390 --> 00:15:09,341 中谷さんですよ。 それだ それ。 285 00:15:09,341 --> 00:15:12,361 そうそうそう。 それはそうとね この中谷実さんが→ 286 00:15:12,361 --> 00:15:15,331 このサイトに 頻繁に訪れていたって事は→ 287 00:15:15,331 --> 00:15:18,417 やっぱり 彼は自殺だったのかな? 288 00:15:18,417 --> 00:15:22,304 仮に強盗殺人だったとして 中谷さんみたいな人から→ 289 00:15:22,304 --> 00:15:24,373 一体 何を奪い取るって いうんですかね? 290 00:15:24,373 --> 00:15:26,442 (村瀬)「みたいな人」って 失礼だろ お前 それ。 291 00:15:26,442 --> 00:15:28,442 たとえ どん底にいるからって。 292 00:15:29,528 --> 00:15:31,528 (三沢成明)お待たせ致しました。 293 00:15:36,435 --> 00:15:38,435 (三沢)ありがとうございました。 294 00:15:43,309 --> 00:15:46,345 浅輪君 ちょっと見て。 はい? 295 00:15:46,345 --> 00:15:48,347 ん? ここ→ 296 00:15:48,347 --> 00:15:52,451 なんか 固いものを押しつけられた 痕みたいじゃない? 297 00:15:52,451 --> 00:15:56,451 ここも ここも ここも。 298 00:15:57,406 --> 00:16:01,406 本当だ。 なんでしょうね? 299 00:18:25,254 --> 00:18:28,274 4月の初めに 宅配便で届いたんですよね? 300 00:18:28,274 --> 00:18:31,327 (青柳)何便ですか? ハヤブサ。 301 00:18:31,327 --> 00:18:35,281 (矢沢)4月の初め頃 北芝3丁目の18の→ 302 00:18:35,281 --> 00:18:37,333 常盤木アズサさん宛てに→ 303 00:18:37,333 --> 00:18:39,335 熱帯魚が 送られてるはずなんですけど。 304 00:18:39,335 --> 00:18:42,338 ああ 送り主は 中谷実さんとなってますね。 305 00:18:42,338 --> 00:18:45,424 4月の2日に配達されてます。 4月2日。 306 00:18:45,424 --> 00:18:47,424 どうもありがとうございました。 どうも。 307 00:18:48,310 --> 00:18:51,263 中谷実は死亡する3日前に→ 308 00:18:51,263 --> 00:18:53,299 常盤木アズサに 熱帯魚を送っていると。 309 00:18:53,299 --> 00:18:56,352 これで2人は繋がったな。 310 00:18:56,352 --> 00:18:59,438 でも なんで 熱帯魚だったんですかね? 
311 00:18:59,438 --> 00:19:03,438 (青柳)それは… 常盤木アズサを洗えばわかるだろ。 312 00:19:05,311 --> 00:19:08,314 ねえ ちょっと聞いて。 おかしな事になってる。 313 00:19:08,314 --> 00:19:10,299 ん? どうした? 314 00:19:10,299 --> 00:19:13,485 中谷実さんの 死亡手続きをしようとしたら→ 315 00:19:13,485 --> 00:19:15,485 中谷さんの住民票が 動かされてるの。 316 00:19:16,322 --> 00:19:18,290 住民票が動かされてる? 317 00:19:18,290 --> 00:19:21,310 しかも 彼が死亡したあとによ。 318 00:19:21,310 --> 00:19:24,380 中谷さんは現在 港区北麻布に 暮らしてる事になってる。 319 00:19:24,380 --> 00:19:27,333 (村瀬)どういう事だ? それ…。 320 00:19:27,333 --> 00:19:41,313 ♪♪~ 321 00:19:41,313 --> 00:19:44,283 中谷実の部屋に 出入りしている人を見てる人は→ 322 00:19:44,283 --> 00:19:46,418 誰もいないわ。 そうか…。 323 00:19:46,418 --> 00:19:49,418 実際は 誰も住んでないんじゃないかしら。 324 00:19:50,322 --> 00:19:53,275 しかし 住所があれば 郵便物が届く。 325 00:19:53,275 --> 00:19:55,411 何? これ。 326 00:19:55,411 --> 00:19:57,411 (村瀬)城西銀行の督促状。 327 00:19:58,230 --> 00:20:00,316 (青柳)田辺さんでしたっけ? (瑞恵)はい。 328 00:20:00,316 --> 00:20:02,301 (青柳)こちらには いつ頃から? 329 00:20:02,301 --> 00:20:04,336 最初に来たのは→ 330 00:20:04,336 --> 00:20:07,406 お二人がお生まれになって すぐの頃でしたので→ 331 00:20:07,406 --> 00:20:09,325 かれこれ16年になります。 332 00:20:09,325 --> 00:20:11,360 お二人? ええ。 333 00:20:11,360 --> 00:20:17,360 アズミ様という 二卵性の双子の弟さんです。 334 00:20:22,254 --> 00:20:24,273 (矢沢)双子…。 335 00:20:24,273 --> 00:20:26,325 あ。 え? 何? 336 00:20:26,325 --> 00:20:29,294 ジェミニですよ。 例のサイトに熱心に書き込んでた。 337 00:20:29,294 --> 00:20:31,263 ジェミニって 双子座の事なんです。 338 00:20:31,263 --> 00:20:33,399 え そうなの? ええ。 339 00:20:33,399 --> 00:20:36,268 へえ~。 …え? えっ そのアズミ君は? 340 00:20:36,268 --> 00:20:39,288 今年の1月に亡くなりました。 341 00:20:39,288 --> 00:20:42,424 亡くなったって ご病気か何かですか? 342 00:20:42,424 --> 00:20:45,424 いえ… 自殺です。 343 00:20:47,396 --> 00:20:49,396 自殺? 344 00:20:50,482 --> 00:20:53,482 右側がアズミ様です。 345 00:20:56,321 --> 00:20:59,258 双子の弟が自殺ですか? (青柳)うん。 346 00:20:59,258 --> 00:21:01,360 (志保) ハンドルネーム ジェミニって→ 347 00:21:01,360 --> 00:21:03,395 ひょっとしたら そのアズサさんじゃないの? 348 00:21:03,395 --> 00:21:05,314 彼女 中谷さんとも 知り合いだったし。 349 00:21:05,314 --> 00:21:08,283 (青柳)なあ。 あなたは このような重要人物の→ 350 00:21:08,283 --> 00:21:11,353 情報の全てを 今の今まで 我々に隠してたんですか? 351 00:21:11,353 --> 00:21:14,273 いや 隠してたわけじゃない。 ちゃんと報告したじゃないか。 352 00:21:14,273 --> 00:21:17,292 そっちこそ どこ行ってたんだよ? ああ よくぞ聞いてくれました。 353 00:21:17,292 --> 00:21:20,345 城西銀行に行ってたんですよ。 (青柳)ふーん。 354 00:21:20,345 --> 00:21:25,317 この中谷実は死亡後 城西銀行から 相当額の住宅融資を受けてます。 355 00:21:25,317 --> 00:21:27,252 (青柳)死んでから家建てて どうすんだよ。 356 00:21:27,252 --> 00:21:29,321 普通 墓だろ。 なあ? 
いや お墓だって→ 357 00:21:29,321 --> 00:21:31,340 死んだ人には建てられません。 あ そりゃそうだ。 358 00:21:31,340 --> 00:21:34,326 (村瀬)頭の血の巡りの 悪い人たちのために→ 359 00:21:34,326 --> 00:21:37,329 順序立てて説明しましょうか。 ふーん。 360 00:21:37,329 --> 00:21:39,314 (2人)浅輪 ちゃんと聞いとけ。 いや 俺じゃないでしょ。 361 00:21:39,314 --> 00:21:44,253 中谷実さんの死後 何者かが彼の住民票を港区へ移動。 362 00:21:44,253 --> 00:21:47,322 そこで中谷実さんの戸籍を元に→ 363 00:21:47,322 --> 00:21:50,342 商社役員の中谷実が 作り上げられたわけですよ。 364 00:21:50,342 --> 00:21:52,327 実際の手口としては→ 365 00:21:52,327 --> 00:21:57,349 まず 商社で在籍証明と 源泉徴収票を偽造する。 366 00:21:57,349 --> 00:22:01,320 それを港区役所に提出して 課税証明書を手に入れた。 367 00:22:01,320 --> 00:22:04,323 そうして 書類を一切整えたうえで→ 368 00:22:04,323 --> 00:22:07,326 中谷実を名乗る別人が銀行に赴き→ 369 00:22:07,326 --> 00:22:09,394 多額の融資を引き出したってわけ。 370 00:22:09,394 --> 00:22:12,314 で いざ 返済する段になってみれば→ 371 00:22:12,314 --> 00:22:15,317 返済すべき 当の中谷実は どこにも存在しない。 372 00:22:15,317 --> 00:22:18,320 つまり 中谷実を利用した融資詐欺。 373 00:22:18,320 --> 00:22:21,406 (志保)そういう事。 城西銀行は なんにも知らないで→ 374 00:22:21,406 --> 00:22:25,310 せっせと死んだ人間に 督促状を送っていたわけ。 375 00:22:25,310 --> 00:22:28,280 頭取も真っ青で すぐに 二課に被害届を出すそうよ。 376 00:22:28,280 --> 00:22:30,349 (村瀬のせき払い) 377 00:22:30,349 --> 00:22:32,384 で ここからは 私の推測なんですが→ 378 00:22:32,384 --> 00:22:36,321 この4人の被害者を出した 一連のスリーパー事件→ 379 00:22:36,321 --> 00:22:38,290 これ そもそもの目的は…。 (青柳)彼らの戸籍を→ 380 00:22:38,290 --> 00:22:40,309 手に入れる事が 目的だったんだろうな。 381 00:22:40,309 --> 00:22:42,327 ええ。 それ なんで あなたが今…。 382 00:22:42,327 --> 00:22:44,379 確かに そう考えると 身元が わからなくなっていた事にも→ 383 00:22:44,379 --> 00:22:46,315 合点が…。 合点がいくね。 384 00:22:46,315 --> 00:22:48,267 いきますね 合点がいきますね。 合点がいくね。 385 00:22:48,267 --> 00:22:51,353 遺体の身元が公になれば 詐欺が即座に ばれてしまう。 386 00:22:51,353 --> 00:22:54,406 だから 名無しの遺体にしとく 必要があったってわけ。 387 00:22:54,406 --> 00:22:57,326 でも そうなってくると 常盤木アズサという少女は→ 388 00:22:57,326 --> 00:23:00,279 どう この事件に 関わってきたんですかね? 389 00:23:00,279 --> 00:23:03,365 彼女は 中谷実さんと 繋がりがあっただけでなく→ 390 00:23:03,365 --> 00:23:06,268 今朝の事件で 被害者が殺された時にも→ 391 00:23:06,268 --> 00:23:08,287 現場の近くにいたわけですよね。 392 00:23:08,287 --> 00:23:10,422 これ 偶然とは思えないでしょ。 393 00:23:10,422 --> 00:23:13,422 (電話) 394 00:23:14,343 --> 00:23:16,328 はい 9係。 「サイバー犯罪課です」 395 00:23:16,328 --> 00:23:19,331 「“ハート・アンド・ボイス”の サイト管理者がわかりました」 396 00:23:19,331 --> 00:23:21,316 「三沢成明 53歳」 397 00:23:21,316 --> 00:23:24,386 「住まいは 港区西赤坂3の5の9」 398 00:23:24,386 --> 00:23:27,256 「メグレスという宝石店の セールスチーフです」 399 00:23:27,256 --> 00:23:29,408 了解です。 ありがとうございました。 400 00:23:29,408 --> 00:23:31,408 三沢成明か…。 401 00:23:32,494 --> 00:23:34,494 メグレス…? 402 00:23:35,297 --> 00:23:37,299 (志保)メグレスって ほら あの江木の立ち寄った店よ! 403 00:23:37,299 --> 00:23:39,318 ああっ! あの江木の→ 404 00:23:39,318 --> 00:23:41,353 アドレス帳のリスト! おっ… ちょっと待て。 405 00:23:41,353 --> 00:23:44,323 あれ? どこにあるんだよ おい! 406 00:23:44,323 --> 00:23:47,326 あった。 え? そっちかよ! 407 00:23:47,326 --> 00:23:50,329 三沢 三沢…。 (村瀬)三沢…。 408 00:23:50,329 --> 00:23:52,447 (村瀬・志保)あっ! (村瀬)三沢成明…。 409 00:23:52,447 --> 00:23:55,447 (チャイム) 410 00:23:56,318 --> 00:23:58,387 こんばんは。 411 00:23:58,387 --> 00:24:02,387 とりあえず 麻薬の不法所持容疑で ガサ入れでーす。 412 00:24:05,394 --> 00:24:07,394 はい 失礼しまーす。 413 00:24:09,414 --> 00:24:11,414 おお~。 414 00:24:12,301 --> 00:24:17,339 (志保)うわっ。 えーと… 私 こっち。 415 00:24:17,339 --> 00:24:23,328 ♪♪~(オーディオの音楽) 416 00:24:23,328 --> 00:24:35,274 ♪♪~ 417 00:24:35,274 --> 00:24:37,309 発見。 (村瀬)なんだ? 418 00:24:37,309 --> 00:24:39,278 スリーパーよ。 419 00:24:39,278 --> 00:24:47,269 ♪♪~ 420 00:24:47,269 --> 00:24:49,304 青柳さん。 あ? 421 00:24:49,304 --> 00:24:51,373 青柳さん! あ!? 
422 00:24:51,373 --> 00:24:53,373 これ…。 423 00:24:58,430 --> 00:25:01,430 「前川俊夫」…。 424 00:25:02,451 --> 00:25:05,451 この写真 今朝の被害者だ。 425 00:25:08,323 --> 00:25:11,259 三沢成明 まずは…。 426 00:25:11,259 --> 00:25:16,415 まずは 麻薬… 麻薬取締法違反で逮捕だ! 427 00:25:16,415 --> 00:25:19,301 だから うるせえから 止めろっつってんだよ! 428 00:25:19,301 --> 00:25:21,286 ♪♪~(オーディオの音楽) 429 00:25:21,286 --> 00:25:23,422 ♪♪~(大音量の音楽) アーッ! 430 00:25:23,422 --> 00:25:25,422 それ ボリューム! 431 00:25:27,409 --> 00:25:29,409 何が『死と乙女』だよ。 432 00:25:31,480 --> 00:25:35,480 おい 三沢 昨日の 午後4時から6時 どこにいた? 433 00:25:36,284 --> 00:25:39,237 その時間なら 店にいましたよ。 434 00:25:39,237 --> 00:25:43,358 (村瀬)殺害された前川俊夫さんの 保険証や戸籍謄本を→ 435 00:25:43,358 --> 00:25:47,429 なぜ あなたが持っていたのか 説明してもらえますか。 436 00:25:47,429 --> 00:25:52,429 彼の生前 本人から譲り受けました。 437 00:25:53,301 --> 00:25:57,339 そういったものを他人に譲ると 譲った本人は→ 438 00:25:57,339 --> 00:26:00,375 その後 生きていくのが 大変 困難になるとは→ 439 00:26:00,375 --> 00:26:02,327 考えませんでしたか? 440 00:26:02,327 --> 00:26:05,263 うん… 場合によってはね。 441 00:26:05,263 --> 00:26:08,300 場合によっては…。 442 00:26:08,300 --> 00:26:12,337 あなたの運営するウェブサイト 「ハート・アンド・ボイス」。 443 00:26:12,337 --> 00:26:16,324 あそこには 自殺を考えてる人間が 随分 大勢集まってますね。 444 00:26:16,324 --> 00:26:18,260 しかも 入会するには→ 445 00:26:18,260 --> 00:26:21,329 厳重な本人確認が 必要とされている。 446 00:26:21,329 --> 00:26:24,282 セラピーには 信頼関係が必要なんですよ。 447 00:26:24,282 --> 00:26:26,318 あなたは あのサイトを利用して→ 448 00:26:26,318 --> 00:26:29,371 ある条件に合う人間を 探してたんじゃないんですか? 449 00:26:29,371 --> 00:26:32,257 「失踪しても 誰も捜さない人間」 450 00:26:32,257 --> 00:26:35,293 「言い換えれば 犯罪に使うには もってこいの→ 451 00:26:35,293 --> 00:26:37,329 条件のいい戸籍の持ち主」 452 00:26:37,329 --> 00:26:39,364 「その手の戸籍は ちょっと手を加えれば→ 453 00:26:39,364 --> 00:26:42,334 いくらでも 経歴を塗り替える事が出来る」 454 00:26:42,334 --> 00:26:45,320 「本人が 身元不明で死亡していれば→ 455 00:26:45,320 --> 00:26:48,306 本人の口から事が露見する 恐れもない」 456 00:26:48,306 --> 00:26:50,425 (三沢)「とんだ言いがかりですね」 457 00:26:50,425 --> 00:26:54,425 (村瀬)あなたのサイトの 会員だった 中谷実さん。 458 00:26:59,401 --> 00:27:03,401 彼の戸籍は 住宅融資詐欺に利用されてました。 459 00:27:04,339 --> 00:27:06,308 そうですか。 460 00:27:06,308 --> 00:27:08,326 あなた自身は無関係だと? 461 00:27:08,326 --> 00:27:11,363 ええ。 彼の戸籍が…。 462 00:27:11,363 --> 00:27:15,383 「欲しいという人がいたので 差し上げました」 463 00:27:15,383 --> 00:27:18,383 三沢は 戸籍ブローカーってわけですか。 464 00:27:20,338 --> 00:27:23,325 (村瀬)あなたは 条件に合う自殺志願者から→ 465 00:27:23,325 --> 00:27:26,328 戸籍を手に入れ 代わりにスリーパーを与えた。 466 00:27:26,328 --> 00:27:29,381 そして その戸籍を売って 利益を得ていたわけだ。 467 00:27:29,381 --> 00:27:32,334 まるで見てきたような おっしゃりようだ。 468 00:27:32,334 --> 00:27:35,420 ところが その うまくいっていたビジネスに→ 469 00:27:35,420 --> 00:27:37,420 問題が起こった。 470 00:27:38,423 --> 00:27:41,326 (村瀬)前川俊夫さんは 自殺してくれなかった。 471 00:27:41,326 --> 00:27:47,466 人間 誰しも自分の望む死に方で 死ねるとは限りませんねぇ。 472 00:27:47,466 --> 00:27:49,466 ≪(ノック) 473 00:27:51,419 --> 00:27:53,419 あ~ チキショー。 474 00:27:56,341 --> 00:27:59,427 三沢のアリバイ 成立しちゃってました。 475 00:27:59,427 --> 00:28:02,427 (矢沢)店の防犯カメラに 映っちゃってました。 476 00:28:04,399 --> 00:28:07,399 無論 私は 誰も殺しちゃいませんよ。 477 00:28:11,373 --> 00:28:19,373 ♪♪~ 478 00:30:36,301 --> 00:30:44,409 ♪♪~ 479 00:30:44,409 --> 00:30:48,313 (志保)常盤木アズサという女の子 ご存じですね? 
480 00:30:48,313 --> 00:30:52,334 あなたのパソコンに入っていた 会員名簿に→ 481 00:30:52,334 --> 00:30:55,320 彼女の名前がありました。 482 00:30:55,320 --> 00:30:59,424 常盤木アズサ ハンドルネーム ジェミニ。 483 00:30:59,424 --> 00:31:02,424 ええ 知ってますよ。 484 00:31:03,294 --> 00:31:06,314 (志保) 彼女 随分熱心な会員ですね。 485 00:31:06,314 --> 00:31:11,319 (三沢)彼女は 他人の全てを受け入れるんです。 486 00:31:11,319 --> 00:31:16,324 頑張れとも言わず 励ますでもなく→ 487 00:31:16,324 --> 00:31:21,379 ただ 共感を示し 相手の話を聞き続ける。 488 00:31:21,379 --> 00:31:25,379 何時間でも どんな相手の話でも。 489 00:31:27,318 --> 00:31:30,321 いささか 常軌を逸した情熱ですが…。 490 00:31:30,321 --> 00:31:32,273 (矢沢)三沢の話は事実です。 491 00:31:32,273 --> 00:31:36,344 彼女 1人終われば また次 って感じで 話を聞いてて→ 492 00:31:36,344 --> 00:31:39,297 何日も一睡もしてない事が あるようです。 493 00:31:39,297 --> 00:31:43,334 彼女は自分の事も顧みずに→ 494 00:31:43,334 --> 00:31:47,222 死を望む人の気持ちを 受け入れようとした。 495 00:31:47,222 --> 00:31:52,327 あなた その彼女の気持ちを 利用したんじゃありませんか? 496 00:31:52,327 --> 00:31:55,330 どういう事ですか? 497 00:31:55,330 --> 00:31:58,333 あなたは死を望む人間に こう持ちかけた。 498 00:31:58,333 --> 00:32:03,405 戸籍を譲る意思があれば ジェミニがスリーパーを届けると。 499 00:32:03,405 --> 00:32:06,405 彼女はバロック真珠のような子だ。 500 00:32:08,293 --> 00:32:11,396 (三沢)美しいが ゆがんでいる。 501 00:32:11,396 --> 00:32:15,396 彼女は 人の死に とりつかれているんです。 502 00:32:16,317 --> 00:32:20,321 アズサさんと 亡くなった 彼女の双子の弟 アズミ君の事→ 503 00:32:20,321 --> 00:32:22,323 教えて頂けますか? 504 00:32:22,323 --> 00:32:26,294 それが アズサさんの役に立つんですか? 505 00:32:26,294 --> 00:32:29,264 はい。 (瑞恵)わかりました。 506 00:32:29,264 --> 00:32:32,350 (瑞恵)アズサ様とアズミ様は→ 507 00:32:32,350 --> 00:32:35,420 ご両親が 演奏旅行で ご不在がちでしたので→ 508 00:32:35,420 --> 00:32:38,323 お小さい時から ほとんどの時間を→ 509 00:32:38,323 --> 00:32:42,293 お二人っきりで 過ごしていたんです。 510 00:32:42,293 --> 00:32:45,380 まるで 言葉で喋らなくても→ 511 00:32:45,380 --> 00:32:48,316 お互いに 気持ちが通じているような→ 512 00:32:48,316 --> 00:32:51,319 本当に仲のいい双子でした。 513 00:32:51,319 --> 00:32:59,327 ♪♪~ 514 00:32:59,327 --> 00:33:03,281 アズミ君って どんな少年でした? 515 00:33:03,281 --> 00:33:06,367 (瑞恵)アズミ様は とても おとなしい性格で→ 516 00:33:06,367 --> 00:33:10,321 ペーパーグライダーを飛ばすのが 好きでした。 517 00:33:10,321 --> 00:33:13,324 アズミ様にとって しっかりしたアズサ様は→ 518 00:33:13,324 --> 00:33:15,393 姉であると同時に→ 519 00:33:15,393 --> 00:33:19,393 どこか 母親のような 存在でもあったようです。 520 00:33:21,332 --> 00:33:24,319 アズミ君の自殺の原因は? 521 00:33:24,319 --> 00:33:26,354 (瑞恵)わかりません。 522 00:33:26,354 --> 00:33:33,328 ただ このまま生きていく事に 不安と疑問を持ったらしく→ 523 00:33:33,328 --> 00:33:36,331 深く悩んでいらっしゃいました。 524 00:33:36,331 --> 00:33:39,317 そんなアズミ様を アズサ様は→ 525 00:33:39,317 --> 00:33:42,220 頑張りなさいと 励まし続けていたんです。 526 00:33:42,220 --> 00:33:46,291 でも その日の昼頃→ 527 00:33:46,291 --> 00:33:48,276 ビルの屋上に向かうアズミ様を→ 528 00:33:48,276 --> 00:33:51,329 近くに住む子供が 見ていたそうです。 529 00:33:51,329 --> 00:34:01,306 ♪♪~ 530 00:34:01,306 --> 00:34:06,344 (瑞恵)純粋なアズミ様は 苦しめば苦しむほど→ 531 00:34:06,344 --> 00:34:10,331 心のバランスを 崩されていたようです。 532 00:34:10,331 --> 00:34:13,334 そして…。 533 00:34:13,334 --> 00:34:33,404 ♪♪~ 534 00:34:33,404 --> 00:34:35,404 アズミ…? 535 00:34:41,329 --> 00:34:45,300 アズミが…。 536 00:34:45,300 --> 00:34:47,352 飛んだ…。 537 00:34:47,352 --> 00:35:11,292 ♪♪~ 538 00:35:11,292 --> 00:35:14,362 (瑞恵)アズミ様を失った アズサ様は→ 539 00:35:14,362 --> 00:35:17,315 まるで 心が壊れてしまったみたいに→ 540 00:35:17,315 --> 00:35:20,318 涙を流す事すら出来なかった。 541 00:35:20,318 --> 00:35:31,279 ♪♪~ 542 00:35:31,279 --> 00:35:35,333 (瑞恵)何か 事件に巻き込まれているのなら→ 543 00:35:35,333 --> 00:35:39,454 どうか アズサ様を 助けて差し上げてください。 544 00:35:39,454 --> 00:35:43,454 あの… アズサさんの部屋 拝見出来ますか? 545 00:35:44,392 --> 00:35:46,392 あっ 青柳さん これ 読んでください! 546 00:35:48,313 --> 00:35:50,315 (青柳)「では ノーバディさん→ 547 00:35:50,315 --> 00:35:54,435 午後3時… アクアリバーテラスに行きます」 548 00:35:54,435 --> 00:35:56,435 彼女 今日も届けに…! 
549 00:35:58,323 --> 00:36:00,325 (青柳)三沢! てめえ どれだけ→ 550 00:36:00,325 --> 00:36:02,277 あの子を利用すりゃ 気が済むんだよ! 551 00:36:02,277 --> 00:36:04,345 (矢沢)青柳さん! (村瀬)何やってんだ!? あんた! 552 00:36:04,345 --> 00:36:07,215 こいつ また 彼女に スリーパー届けに行かせたんだよ。 553 00:36:07,215 --> 00:36:11,319 私は彼女に 何一つ 強制した覚えはありませんよ。 554 00:36:11,319 --> 00:36:14,405 なんだ この野郎…! (矢沢)青柳さん! 555 00:36:14,405 --> 00:36:17,405 今は彼女を捜す方が先でしょう。 556 00:39:04,292 --> 00:39:06,327 あんた ここで 女の子を待ってるの? 557 00:39:06,327 --> 00:39:09,447 え…? 558 00:39:09,447 --> 00:39:11,447 警察だ。 559 00:39:12,400 --> 00:39:15,386 ジェミニに会ったのか? 560 00:39:15,386 --> 00:39:19,386 僕が来た時 これがベンチに置かれてました。 561 00:39:21,342 --> 00:39:24,345 スリーパーは? え…? 562 00:39:24,345 --> 00:39:28,316 彼女があんたに届けるはずだった スリーパーは!? 563 00:39:28,316 --> 00:39:32,303 僕が来た時には メモだけ…。 564 00:39:32,303 --> 00:39:35,323 彼女 スリーパーを持って どこかに…。 565 00:39:35,323 --> 00:39:37,308 自殺すんなよ。 566 00:39:37,308 --> 00:39:48,419 ♪♪~ 567 00:39:48,419 --> 00:39:52,419 あれ? 係長 これ…。 568 00:39:58,346 --> 00:40:00,331 (加納の声)「警察の方へ」 569 00:40:00,331 --> 00:40:03,434 「一昨日 神社で 男の人を殺したのは私です」 570 00:40:03,434 --> 00:40:07,434 「罪を償います。 常盤木アズサ」 571 00:40:09,440 --> 00:40:11,440 浅輪です。 572 00:40:14,328 --> 00:40:16,330 わかった。 573 00:40:16,330 --> 00:40:20,384 死なせねえよ。 もしかして 彼女…。 574 00:40:20,384 --> 00:40:28,326 ♪♪~ 575 00:40:28,326 --> 00:40:31,329 (志保)どこに行ったか 心当たりはないの? 576 00:40:31,329 --> 00:40:34,332 彼女 スリーパーを持ってるのよ! 577 00:40:34,332 --> 00:40:40,304 追い詰められた人間にとって 死は一番の安らぎですよ。 578 00:40:40,304 --> 00:40:45,443 お前 彼女を自殺させるつもりで スリーパーを…! 579 00:40:45,443 --> 00:40:56,443 ♪♪~ 580 00:41:07,331 --> 00:41:30,354 ♪♪~ 581 00:41:30,354 --> 00:41:32,406 おい 矢沢… なんとかしろ。 582 00:41:32,406 --> 00:41:34,406 何 びびってるんですか。 583 00:41:37,328 --> 00:41:40,431 (青柳)どうせ死ぬなら スリーパーにしたらどうだ! 584 00:41:40,431 --> 00:41:42,431 (矢沢)えっ!? 585 00:41:43,467 --> 00:41:47,467 (青柳)少なくとも あれだ 痛くねえぞ。 586 00:41:51,359 --> 00:41:56,464 最初は そうしようと 思ったんですけど→ 587 00:41:56,464 --> 00:41:58,464 それじゃあ ダメなんです。 588 00:42:00,434 --> 00:42:04,434 眠るように死んだんじゃ 罰にならない。 589 00:42:06,290 --> 00:42:08,342 私は人殺しだから。 590 00:42:08,342 --> 00:42:11,412 (青柳)おととい スリーパーを届けに行った先で→ 591 00:42:11,412 --> 00:42:13,412 何があった? 592 00:42:16,334 --> 00:42:20,438 人殺しっつうのは 黙って死んじゃダメなんだよ! 593 00:42:20,438 --> 00:42:23,438 どうせ死ぬんだったら 喋ってから死ね。 594 00:42:31,415 --> 00:42:39,415 私… スリーパーを渡して あの人の話を聞いたんです。 595 00:42:41,342 --> 00:42:45,396 (アズサの声) つらかった事や色んな事。 596 00:42:45,396 --> 00:42:48,396 そのあと…。 597 00:42:49,350 --> 00:42:53,337 (前川俊夫) 今日は よしてもいいかな。 598 00:42:53,337 --> 00:42:56,440 なんか 延々 話 聞いてもらったあとで→ 599 00:42:56,440 --> 00:42:58,440 悪いんだけど…。 600 00:43:01,312 --> 00:43:07,385 いいえ 全然そんな事ないです。 601 00:43:07,385 --> 00:43:10,354 俺の一生って→ 602 00:43:10,354 --> 00:43:15,443 人にだまされっぱなしの つまらない人生だったけど→ 603 00:43:15,443 --> 00:43:21,443 いざ死ぬとなると やっぱ 迷うよね。 604 00:43:25,336 --> 00:43:31,342 あの… 渡したもの 返してもらえるんだよね? 605 00:43:31,342 --> 00:43:34,245 渡したものって なんの事ですか? 606 00:43:34,245 --> 00:43:37,264 えっ? 何言ってんの? 607 00:43:37,264 --> 00:43:41,352 戸籍だよ。 戸籍…? 608 00:43:41,352 --> 00:43:45,372 管理人が指定した私書箱に ちゃんと送ったじゃないか。 609 00:43:45,372 --> 00:43:47,341 届いたから あんたが来たんだろ? 610 00:43:47,341 --> 00:43:49,343 戸籍情報の類を送ったら→ 611 00:43:49,343 --> 00:43:52,346 ジェミニがスリーパーを届ける って約束じゃないか! 612 00:43:52,346 --> 00:43:55,316 約束って どういう事です? 613 00:43:55,316 --> 00:43:58,335 ふざけるなよ…。 614 00:43:58,335 --> 00:44:02,339 何も知らずに スリーパーだけ 運んできたっていうのか!? 615 00:44:02,339 --> 00:44:06,360 私 本当に 戸籍の事なんて何も…! 616 00:44:06,360 --> 00:44:09,280 いい加減にしろよ。 617 00:44:09,280 --> 00:44:12,333 あんたまで 俺をだますのか!? 618 00:44:12,333 --> 00:44:15,436 返せよ! 返してくれよ! 
619 00:44:15,436 --> 00:44:17,436 返せよ…! 620 00:44:21,342 --> 00:44:24,395 うっ… あぁー! 621 00:44:24,395 --> 00:44:28,395 ぐわっ! あぁ…。 622 00:44:32,319 --> 00:44:35,289 あの人 死んでしまった…。 623 00:44:35,289 --> 00:44:38,476 君のせいじゃない! 君は身を守ろうとしただけだ。 624 00:44:38,476 --> 00:44:40,476 なんで こんな事 始めたんだ? 625 00:44:44,348 --> 00:44:47,351 1人で死ぬのは寂しすぎる。 626 00:44:47,351 --> 00:44:50,321 三沢に そう言われたのか? (アズサ)違う。 627 00:44:50,321 --> 00:44:52,323 私が本当に そう思ったから。 628 00:44:52,323 --> 00:44:55,392 だから こんなふうに 自分を罰してきたのか? 629 00:44:55,392 --> 00:44:58,392 アズミ君が死んでから ずっと。 630 00:45:01,432 --> 00:45:03,432 そうなのか? 631 00:45:10,307 --> 00:45:18,399 私… アズミの気持ちを わかろうとしなかった。 632 00:45:18,399 --> 00:45:23,437 アズミに 強くなってもらいたくて…。 633 00:45:23,437 --> 00:45:29,437 あの子の苦しみを 理解してあげられなかった。 634 00:45:32,363 --> 00:45:37,434 本当は 頑張れなんて 言っちゃいけなかったんです。 635 00:45:37,434 --> 00:45:43,434 アズミは つらいとも苦しいとも 言わなくなって…。 636 00:45:55,452 --> 00:45:58,452 青柳さんでしたっけ? 637 00:46:00,391 --> 00:46:06,391 あの日 アズミは お昼頃に ここに来たんです。 638 00:46:08,315 --> 00:46:11,285 それから 日が傾いて→ 639 00:46:11,285 --> 00:46:16,407 夕焼けになって すっかり暗くなるまで→ 640 00:46:16,407 --> 00:46:19,410 あの子は ひとりぼっちで→ 641 00:46:19,410 --> 00:46:22,410 何時間も ここで空を見てた。 642 00:46:25,416 --> 00:46:32,416 私 あの子を 寂しい気持ちのまま…。 643 00:46:36,443 --> 00:46:39,443 ひとりぼっちで 逝かせてしまったんです。 644 00:46:41,432 --> 00:46:44,432 私がいたのに…。 645 00:46:46,437 --> 00:46:49,437 私がいたのに…! 646 00:46:52,426 --> 00:46:55,426 あの子を 1人で逝かせてしまった…。 647 00:46:56,497 --> 00:46:58,497 ダメだ。 648 00:47:01,452 --> 00:47:04,452 アズミ…。 649 00:47:06,423 --> 00:47:11,423 私も飛んで そばに行くね。 650 00:47:14,431 --> 00:47:18,431 飛ぶ前に教えて! 係長…。 651 00:47:20,337 --> 00:47:23,290 スリーパー飲んで 亡くなった人→ 652 00:47:23,290 --> 00:47:27,328 死ぬ前に なんか 持ってたようなんだけど。 653 00:47:27,328 --> 00:47:29,346 (矢沢)今 そこ? 654 00:47:29,346 --> 00:47:33,283 3人とも 右手の甲の この辺に→ 655 00:47:33,283 --> 00:47:37,471 硬いものを押しつけたような 痕があった。 656 00:47:37,471 --> 00:47:40,471 君 時計 内側にしてるよね? 657 00:47:45,295 --> 00:47:50,334 彼らが握ってたのは 君の手なんじゃないのかな。 658 00:47:50,334 --> 00:48:07,351 ♪♪~ 659 00:48:07,351 --> 00:48:09,303 君は 死んでいく人の手を→ 660 00:48:09,303 --> 00:48:12,339 握っていてあげたんじゃ ないのかな? 661 00:48:12,339 --> 00:48:15,409 ひとりぼっちで 寂しい気持ちのまま→ 662 00:48:15,409 --> 00:48:18,409 逝かせたくない。 そう思って。 663 00:48:19,329 --> 00:48:24,334 そばに座って ずっと手を握ってあげてた。 664 00:48:24,334 --> 00:48:28,305 自死を願う人に 君 ずっと 一生懸命 寄り添おうとした。 665 00:48:28,305 --> 00:48:32,326 熱帯魚をかわいがってた 中谷さんからは→ 666 00:48:32,326 --> 00:48:34,344 熱帯魚を預かり→ 667 00:48:34,344 --> 00:48:37,331 きれいな夕日の中で死にたい っていう人がいたら→ 668 00:48:37,331 --> 00:48:40,434 そういう場所を探してあげて→ 669 00:48:40,434 --> 00:48:45,434 最後の最後まで 手を握ってあげてた。 670 00:48:49,326 --> 00:48:54,348 他に 何もしてあげられないから。 671 00:48:54,348 --> 00:48:59,436 違う。 君 たくさんの事をした。 672 00:48:59,436 --> 00:49:03,436 それに 君が 気がついてないだけだ。 673 00:49:04,341 --> 00:49:08,362 いつの間にか あのサイトから いなくなってた人たち いたよね。 674 00:49:08,362 --> 00:49:10,397 覚えてるかな? 675 00:49:10,397 --> 00:49:15,352 上司から嫌がらせ受けて 死にたいって言ってた男の人。 676 00:49:15,352 --> 00:49:18,338 あの人 ふるさとに帰って→ 677 00:49:18,338 --> 00:49:21,358 今 立派に 運送会社で働いてるんだよ。 678 00:49:21,358 --> 00:49:23,410 君にありがとうって言ってた。 679 00:49:23,410 --> 00:49:27,331 嘘!? 
嘘じゃない。 これ見て。 680 00:49:27,331 --> 00:49:30,384 三沢のパソコンにあった あのサイトの会員名簿。 681 00:49:30,384 --> 00:49:33,384 これで連絡とって 確認したんだよ。 682 00:49:34,421 --> 00:49:36,340 長年連れ添った奥さん 亡くして→ 683 00:49:36,340 --> 00:49:40,310 1人で生きがいがないって 言ってた人 いたよね。 684 00:49:40,310 --> 00:49:44,448 あの人は 地域のボランティアに参加して→ 685 00:49:44,448 --> 00:49:48,448 児童館で 子供たちに遊びを教えてる。 686 00:49:49,286 --> 00:49:53,340 君が ずっと 話を聞いてあげたおかげで→ 687 00:49:53,340 --> 00:49:57,494 生き直そうと 歩き始めた人たちがいる。 688 00:49:57,494 --> 00:50:13,494 ♪♪~ 689 00:50:16,296 --> 00:50:18,432 あっ…。 690 00:50:18,432 --> 00:50:22,432 (泣き声) 691 00:50:24,338 --> 00:50:27,341 大人 怖がらせんなよ。 692 00:50:27,341 --> 00:50:31,345 (泣き声) (青柳)よしよし…。 693 00:50:31,345 --> 00:50:38,352 (泣き声) 694 00:50:38,352 --> 00:50:43,340 常盤木アズサ 確保。 矢沢が確保。 695 00:50:43,340 --> 00:50:46,426 保護 保護です。 696 00:50:46,426 --> 00:50:55,426 ♪♪~ 697 00:50:58,438 --> 00:51:03,438 常盤木アズサさん 無事 保護されたわよ。 698 00:51:07,297 --> 00:51:10,417 人を死に追いやるのも 人なら→ 699 00:51:10,417 --> 00:51:13,417 人を救えるのも 人なのよ。 700 00:51:16,340 --> 00:51:20,310 彼女が色んな事 証言してくれるだろうな。 701 00:51:20,310 --> 00:51:23,330 さあ あんたにも そろそろ→ 702 00:51:23,330 --> 00:51:27,434 洗いざらい 全部 喋ってもらおうか。 703 00:51:27,434 --> 00:51:32,434 まず手始めに 戸籍を誰に いくらで売ったとか。 704 00:51:38,295 --> 00:51:42,349 (アナウンサー)「大手銀行が 相次いで 多額の融資金をだまし取られた→ 705 00:51:42,349 --> 00:51:46,403 住宅融資詐欺事件で 警視庁捜査二課は…」 706 00:51:46,403 --> 00:51:49,339 二課に 特大の恩を売ってやったぞ。 707 00:51:49,339 --> 00:51:53,327 ハハハ… まあ 調べてきたのは 私と小宮山君ですがね。 708 00:51:53,327 --> 00:51:57,397 ちっちぇえな~! 誰が調べたとかって…。 709 00:51:57,397 --> 00:51:59,233 なんすか? それ。 自分の手柄って…。 710 00:51:59,233 --> 00:52:02,336 いつも 自分が…! まあまあ… 2人とも! 711 00:52:02,336 --> 00:52:05,339 より小さく見えますよ。 (村瀬)より小さ…!? 712 00:52:05,339 --> 00:52:08,342 小宮山君 そうめん食ってないで 言ってやれよ 君も。 713 00:52:08,342 --> 00:52:13,347 いや~ 夏のそうめんは最高だね。 ねえ この のど越しが最高! 714 00:52:13,347 --> 00:52:17,284 そうめんってさ びっくり水がポイントなんだよね。 715 00:52:17,284 --> 00:52:19,353 なんですか? びっくり水って。 716 00:52:19,353 --> 00:52:22,422 あれ? 差し水の事 びっくり水って言わない? 717 00:52:22,422 --> 00:52:25,422 (浅輪・志保)言わないよね~。
{ "pile_set_name": "Github" }
Amino acid substitutions in mitochondrial ATPase subunit 9 of Saccharomyces cerevisiae leading to oligomycin or venturicidin resistance. A series of isonuclear oligomycin-resistant mutants of Saccharomyces cerevisiae carrying mutations in the mitochondrial oli1 gene has been studied. DNA sequence analysis of this gene has been used to define the amino acid substitutions in subunit 9 of the mitochondrial ATPase complex. A domain of amino acids involved in oligomycin resistance can be recognized which encompasses residues in each of the two hydrophobic portions of the subunit 9 polypeptide that are thought to span the inner mitochondrial membrane. Certain amino acid substitutions also confer cross-resistance to venturicidin: these residues define an inner domain for venturicidin resistance. The expression of venturicidin resistance resulting from one particular substitution is modulated by nuclear genetic factors.
{ "pile_set_name": "PubMed Abstracts" }
/* * CDDL HEADER START * * The contents of this file are subject to the terms of the * Common Development and Distribution License, Version 1.0 only * (the "License"). You may not use this file except in compliance * with the License. * * You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE * or http://www.opensolaris.org/os/licensing. * See the License for the specific language governing permissions * and limitations under the License. * * When distributing Covered Code, include this CDDL HEADER in each * file and include the License file at usr/src/OPENSOLARIS.LICENSE. * If applicable, add the following below this CDDL HEADER, with the * fields enclosed by brackets "[]" replaced with your own identifying * information: Portions Copyright [yyyy] [name of copyright owner] * * CDDL HEADER END */ /* * Copyright 2008 Sun Microsystems, Inc. All rights reserved. * Use is subject to license terms. */ #ifndef _LIBSPL_LIBDEVINFO_H #define _LIBSPL_LIBDEVINFO_H #endif /* _LIBSPL_LIBDEVINFO_H */
{ "pile_set_name": "Github" }
1. Field of the Present Invention The present invention relates to a semiconductor memory device, and more particularly to an array of flash memory cells, in which a unit cell includes a single transistor and to methods for programming and erasing the same. 2. Discussion of the Related Art An ideal memory element allows for easy programming (writing), easy erasing, and retains a memory state even if power is removed, i.e., is nonvolatile. Nonvolatile semiconductor memories (NVSM) are classified into two types: a floating gate type and a metal insulator semiconductor (MIS) type. The MIS type also may have two or more kinds of stacked dielectric films. The floating gate type memory uses a potential well to implement memory functions. ETOX (EPROM Tunnel Oxide) structure, which has recently been the most applicable technology for the flash EEPROM (Electrically Erasable Read Only Memory), is typical of the floating gate type. The floating gate type structure can be used to implement a memory cell using a single transistor. On the other hand, the MIS type memory function uses traps formed in a dielectric film bulk, a boundary layer between two dielectric films, or a boundary layer between a dielectric film and a semiconductor. The MONOS/SONOS (Metal/Poly Silicon Oxide Nitride Oxide Semiconductor) structure, which is used as a full-featured EEPROM, is typical. To execute the program and erase operations in these memory cells, it is essential that selection transistors be included in addition to the transistors of MONOS/SONOS structure. In other words, each memory cell must include at least two transistors. The array of conventional flash memory cells and the methods for programming and erasing the same are explained in detail by referring to the accompanying drawings FIGS. 1-2B. As seen in FIG. 1, a unit cell of the conventional flash memory cells includes two transistors. FIG. 2A shows an array of conventional flash memory cells using the cell in FIG. 1 as the unit cell, and bias conditions for programming the cells. FIG. 2B shows the array of conventional flash memory cells using the cell in FIG. 1 as a unit cell and the bias conditions for erasing the cells. As seen, the array of conventional flash memory cells is constructed by arranging unit cells in a form of matrix. Each cell includes two transistors: a memory transistor having a MONOS/SONOS structure and a selection transistor for determining whether the cell is selected or not. A plurality of word lines are constructed in a direction so that the gates of memory transistors arranged in a row are commonly connected. A plurality of word selection lines are constructed in a direction parallel to the word lines so that the gates of selection transistors arranged in a row are commonly connected. A plurality of bit lines are constructed in a direction perpendicular to the word lines so that the drains of memory transistors arranged in a column are commonly connected. A plurality of bit selection lines are constructed in a direction parallel to the bit lines so that the drains of selection transistors arranged in a column are commonly connected. As mentioned above, the conventional unit cell includes a memory transistor having the MONOS/SONOS structure and a selection transistor. A cell is selected by selecting the selection transistor, and program and erase operations are performed on the associated memory transistor. As shown in FIG. 
1, the memory transistor has an ONO (Oxide Nitride Oxide) structure including a first oxide film 11, a nitride film 12, and a second oxide film 13 sequentially stacked on a portion of a semiconductor substrate 10. A first gate electrode 15a is formed on the oxide film 13. The selection transistor includes a gate oxide film 14 and a second gate electrode 15b formed on the gate oxide film 14. The gate oxide film 14 of the selection transistor is thicker than the first and second oxide films 11 and 13 so that a portion of the selection transistor is isolated from the first gate electrode 15a. A common source region 16a is formed in a portion of the semiconductor substrate 10 between the memory transistor and the selection transistor. Drain regions 16b are formed in portions of the semiconductor substrate 10 at the outside of the memory and selection transistors. In the conventional flash memory cell, programming is accomplished by applying a high positive voltage to the first gate electrode 15a. When the high voltage is so applied, electrons from the semiconductor substrate 10 tunnel through the first oxide film 11 and are injected into the nitride film 12. Thus, the first oxide film 11 is called a tunneling oxide. The second oxide film 13 prevents electrons injected into the nitride film 12 from leaking into the first gate electrode 15a. The second oxide film 13 also prevents electrons from being injected from the first gate electrode 15a into the nitride film 12. Thus, the second oxide film 13 is called a blocking oxide. Since the program operation uses traps in the boundary layer between the nitride film 12 and the second oxide film 13, electrons should be injected into or emitted from the entire region of a substrate channel to perform the program and erase operations. When performing a programming operation, the array of cells is biased in a certain manner. The programming bias condition for the array of conventional flash memory cells is explained as follows. Referring to FIG. 2A, a unit cell, among the plurality of flash memory cells, is selected for programming. Thereafter, a voltage Vp is applied to the word line connected to the gate of the selected memory transistor. Vp is also applied to the word selection line connected to the gate of the selection transistor of the selected cell. Due to the arrangement, the gates of memory transistors and selection transistors of other cells in the same row are also applied with the same Vp voltage. However, a ground voltage is applied to the word lines of the non-selected rows. Also, the word selection lines of the non-selected rows have their voltages left floating. For the bit line connected to a drain of the selected memory transistor, the ground voltage is applied. However, for the non-selected bit lines, a voltage Vi is applied. Similarly, for the bit selection line connected to the drain of the selected selection transistor, voltage is left floating, while the non-selected bit selection lines have ground voltages applied. Finally, ground voltage is also applied to the well (semiconductor substrate) at the lower portion of all the cells regardless of whether that cell is selected or not. The aforementioned bias conditions are simultaneously applied. Table 1 describes the bias conditions for the programming operation in a table form. Note that multiple cells may be selected at a time for programming, such as a byte at a time. When performing an erasing operation, the array of cells is differently biased from the programming operation. 
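The programming bias just described (the prose form of Table 1) is easier to see when written out per line type. The following C sketch is illustrative only: the identifiers (bias_t, the variable names, the printout) are invented for this example and do not come from the patent; it simply restates the voltages given in the text for the selected and non-selected rows, columns, and the well.

/*
 * Illustrative sketch of the conventional programming bias ("Table 1").
 * All names here are invented for the example, not taken from the patent.
 */
#include <stdio.h>

typedef enum { V_GND, V_VP, V_VI, V_FLOATING } bias_t;

static const char *name(bias_t b)
{
    switch (b) {
    case V_GND: return "GND";
    case V_VP:  return "Vp";
    case V_VI:  return "Vi";
    default:    return "floating";
    }
}

int main(void)
{
    /* Row lines: word line (memory-transistor gates) and word selection
     * line (selection-transistor gates), shared along a row. */
    bias_t wl_selected_row = V_VP,  wsl_selected_row = V_VP;
    bias_t wl_other_rows   = V_GND, wsl_other_rows   = V_FLOATING;

    /* Column lines: bit line (memory-transistor drains) and bit selection
     * line (selection-transistor drains), shared along a column. */
    bias_t bl_selected_col = V_GND, bsl_selected_col = V_FLOATING;
    bias_t bl_other_cols   = V_VI,  bsl_other_cols   = V_GND;

    /* The well under every cell stays grounded during programming. */
    bias_t well = V_GND;

    printf("selected row:     WL=%s WSL=%s\n", name(wl_selected_row), name(wsl_selected_row));
    printf("other rows:       WL=%s WSL=%s\n", name(wl_other_rows),   name(wsl_other_rows));
    printf("selected column:  BL=%s BSL=%s\n", name(bl_selected_col), name(bsl_selected_col));
    printf("other columns:    BL=%s BSL=%s\n", name(bl_other_cols),   name(bsl_other_cols));
    printf("well (all cells): %s\n", name(well));
    return 0;
}

The erase bias of Table 2, described next, can be encoded the same way, with the selected word line grounded and the non-selected word lines held at Vp.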
The erasing bias condition for the array of conventional flash memory cells is explained as follows. Referring to FIG. 2B, a unit cell is selected for erasing. Thereafter, the ground voltage is applied to the word line connected to the gate of the selected memory transistor. Also, Vp is applied to the word selection line connected to the gate of the selection transistor of the selected cell. However, for the non-selected word lines, voltage Vp is applied, while the word selection lines are left floating. For the bit line connected to a drain of the selected memory transistor, the ground voltage is applied. However, for the non-selected bit lines, a voltage Vi is applied. Similarly, for the bit selection line connected to the drain of the selected selection transistor, voltage is floating, while the non-selected bit selection lines have ground voltages applied. Finally, as in the programming operation described above, ground voltage is applied to the well (semiconductor substrate) at the lower portion of all the cells regardless of whether that cell is selected or not. The aforementioned bias conditions are simultaneously applied. The table 2 describes the bias conditions for the erasing operation in a table form. Again, multiple cells may be selected for erasing, such as a byte at a time. The array of conventional flash memory cells and the program and erase methods using the same have the following problems. First, because two transistors are used for a single cell, the area for a chip becomes large and it is difficult to isolate cells from each other. Second, programming the chip is complex. It is therefore an object of the present invention to improve the integrity of a chip by using a single transistor for a single cell. It is another object of the present invention to easily implement the program operation by the byte and the erase operation in bulk by providing an array whose single cell comprises a single transistor. Additional features and advantages of the invention will be set forth in the description that follows, and in part will be apparent from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings. To achieve these and other advantages in accordance with the purpose of the present invention, as embodied and broadly described, an array of flash memory cells according to the present invention comprises a plurality of flash memory cells, each of the cells having a MONOS/SONOS structure and being arranged in the form of a matrix, a plurality of word lines arranged in one line direction so that the gates of the flash memory cells arranged in one and the same row are commonly connected, a plurality of selection lines arranged in a direction perpendicular to the word lines so that the sources of the flash memory cells arranged in one and the same column are commonly connected, and a plurality of bit lines arranged in a direction parallel to the selection lines so that the drains of the flash memory cells arranged in one and the same column are commonly connected. 
A data program method using the array of the flash memory cells according to the present invention, in a plurality of word lines, selection lines and bit lines respectively connected to the gates, sources and drains of a plurality of flash memory cells arranged in the form of a matrix and wells formed in the lower portion of each of the flash memory cells, comprises a first step for selecting one cell among a plurality of the flash memory cells; a second step for applying a power supply voltage Vcc to the word line connected to the gate of the selected cell and a voltage -Vpp to the well in the lower portion of the selected cell and to the selection and bit lines connected to the source and drain of the selected cell; a third step for performing the second step and at the same time applying a ground voltage to the selection and bit lines of the cells connected to the same word line as the selected cell and a voltage -Vpp to the wells; and a fourth step for performing the first and second steps and at the same time applying a ground voltage to the word lines of the cells not connected to the same word line as the selected cell and a voltage -Vpp to the wells and to the selection and bit lines of the cells not connected to the same word line as the selected cell. A data erase method using the array of the flash memory cells according to the present invention, in a plurality of word lines, selection lines and bit lines which are connected, respectively, to the gates, the sources and the drains of a plurality of flash memory cells arranged in the form of a matrix and wells formed in the lower portion of each of the flash memory cell, comprises a first step for applying a voltage -Vpp to the word lines of the cells and a second step for performing the first step and at the same time applying a power supply voltage Vcc to the selection and bit lines of the cells and to the wells in the lower portion of the cells. Other objects, features, and advantages of the present invention will be apparent from the accompanying drawings and from the detailed description that follows. It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory and are intended to provide further explanation of the invention as claimed.
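Since the claimed program method drives every cell through just four lines (word line, selection line, bit line, well), the step-by-step voltages can be collected into a small selection function. The following C sketch is illustrative only: the names (enum level, line_bias, program_bias, NEG_VPP standing for the -Vpp supply) are invented for this example and are not taken from the patent text or its claims.

/*
 * Illustrative sketch of the claimed single-transistor program/erase bias.
 * All identifiers are invented for the example, not taken from the patent.
 */
#include <stdio.h>

enum level { GND, VCC, NEG_VPP };   /* NEG_VPP stands for the -Vpp level */

struct line_bias {
    enum level word_line;       /* gate   */
    enum level selection_line;  /* source */
    enum level bit_line;        /* drain  */
    enum level well;
};

/* Program bias for one cell, by its position relative to the selected cell. */
static struct line_bias program_bias(int is_selected, int same_word_line)
{
    struct line_bias b;
    if (is_selected) {                 /* second step */
        b.word_line = VCC;
        b.selection_line = b.bit_line = b.well = NEG_VPP;
    } else if (same_word_line) {       /* third step: same row as the selected cell */
        b.word_line = VCC;             /* shared word line */
        b.selection_line = b.bit_line = GND;
        b.well = NEG_VPP;
    } else {                           /* fourth step: all remaining cells */
        b.word_line = GND;
        b.selection_line = b.bit_line = b.well = NEG_VPP;
    }
    return b;
}

/* Erase bias is applied to every cell at once (bulk erase). */
static const struct line_bias erase_bias = { NEG_VPP, VCC, VCC, VCC };

static const char *name(enum level v)
{
    return v == GND ? "GND" : v == VCC ? "Vcc" : "-Vpp";
}

int main(void)
{
    struct line_bias sel  = program_bias(1, 1);
    struct line_bias row  = program_bias(0, 1);
    struct line_bias rest = program_bias(0, 0);

    printf("program, selected cell:  WL=%s SL=%s BL=%s well=%s\n",
           name(sel.word_line), name(sel.selection_line), name(sel.bit_line), name(sel.well));
    printf("program, same word line: WL=%s SL=%s BL=%s well=%s\n",
           name(row.word_line), name(row.selection_line), name(row.bit_line), name(row.well));
    printf("program, other cells:    WL=%s SL=%s BL=%s well=%s\n",
           name(rest.word_line), name(rest.selection_line), name(rest.bit_line), name(rest.well));
    printf("erase, all cells:        WL=%s SL=%s BL=%s well=%s\n",
           name(erase_bias.word_line), name(erase_bias.selection_line),
           name(erase_bias.bit_line), name(erase_bias.well));
    return 0;
}

Under these assumptions, programming distinguishes only three cases (the selected cell, other cells on the same word line, and everything else), while erasing applies a single bias to the whole array, which is what allows byte-wise programming and bulk erase with one transistor per cell.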
{ "pile_set_name": "USPTO Backgrounds" }
Pierce the squash several times with a sharp knife to let the steam escape during cooking. Microwave on high until tender, about 5 minutes. Turn the squash after 3 minutes to ensure even cooking. Place the squash on a cutting board and cut in half. Scrape the seeds out of the center of each half and discard the seeds. Fill the hollowed squash with the apple mixture. Return the squash to the microwave and cook until the apples are softened, about 2 minutes. Transfer the squash to a serving dish. Top each half with 1 teaspoon margarine and serve immediately. Notes Acorn squash is a good source of vitamins A and C, potassium, and fiber. Here it’s paired with apples and brown sugar to make a hearty dish. Serve along with whole-grain crackers and a small wedge of your favorite cheese to round out all food groups.
{ "pile_set_name": "Pile-CC" }
Lung Cancer Causes You're Exposed to Everyday If you don’t smoke, you probably don’t spend a lot of time worrying about lung cancer risks. While smoking is the leading cause of lung cancer, it’s not the only one. Radiation oncologist Andrew Nish, MD, UnityPoint Health, explains the lung cancer causes you could be exposed to every day, plus the steps you can take to reduce your risk. Lung Cancer Causes Without Smoking Dr. Nish says about 15 percent of lung cancers in men occur in those who have never smoked and the same is true for 20 percent of women with lung cancer. Besides smoking, here are other main causes of lung cancer. Radon exposure (second most common cause of lung cancer) Secondhand smoke Workplace exposure, like asbestos, diesel fumes, etc. Air pollution, both outdoor and indoor Chest radiation (if you’ve been treated for another cancer in the chest, such as lymphoma) “These increase the risk of lung cancer due to exposure to an external chemical or radiation that causes damage to normal cells in the lung, which increases the risk of those cells to divide abnormally resulting in cancer,” Dr. Nish says. Another environmental exposure in the Midwest is arsenic in ground water. Arsenic gets into the ground water and aquifers naturally from dissolving rocks. It’s recommended all private wells be tested for arsenic. Public water systems are tested and safe from arsenic. “Almost all lung cancer is from environmental exposure with very few being from direct inheritance of a genetic mutation. Direct genetic inheritance of lung cancer risk is actually quite rare. The development of family lung cancer involves shared environmental and genetic factors among family members. Bottom line, the vast majority of lung cancers are due to environmental exposure,” Dr. Nish says. Radon and Lung Cancer Radon is a naturally occurring odorless, colorless radioactive gas that forms from the decay of uranium in the soil and rocks in the ground. Dr. Nish says all Midwestern states have relatively high levels of radon. For example, in Iowa, 7 out of ten homes exceed the United States Environmental Protection Agency’s (EPA) radon action level of 4 pCi/L. Iowa leads the nation as far as the percentage of homes that exceed 4pCi/L. “Even though you may not spend a lot of time in your basement, you should be concerned about radon, as it will spread throughout your house. Radon seeps in from the ground and through cracks in the foundation, around pipes and in sump pits. Once in the house, it will spread, resulting in exposure to you and your family,” Dr. Nish says. Testing for radon is the only way to detect radon, as there are no symptoms of radon exposure. Organizations, like the EPA and American Lung Association, recommend testing your home for radon levels. If radon levels are more than 4 pCi/L, then a mitigation system should be installed. For levels between 2 and 4 pCi/L, you should consider installing a mitigation system. Secondhand Smoke Tied to Lung Cancer There is no safe level of exposure to secondhand smoke. Even brief exposure can be harmful. Secondhand smoke will start taking a toll on your body immediately. “All secondhand smoke is equally harmful. It doesn’t make any difference whether it came from a cigarette, pipe or cigar. All sources of secondhand smoke carry the same risks,” Dr. Nish says. Research on secondhand smoke from vaping is less certain than for tobacco, but at this time, it’s thought the chemicals released from vaping may be just as harmful as those from tobacco. Dr. 
Nish also says there are greater risks associated with secondhand smoke in pregnant women and children. Exposure to secondhand smoke during pregnancy can increase the risk of miscarriage, low birth weight babies, premature birth, learning and behavioral abnormalities in the child and sudden infant death in the infant. Secondhand smoke harms older children as well. "Children exposed to secondhand smoke have more episodes of pneumonia, bronchitis, wheezing and cough and more ear infections, plus [secondhand smoke] can worsen existing asthma and can cause new cases of asthma," Dr. Nish says. Top Signs of Lung Cancer Dr. Nish says the signs and symptoms of lung cancer are the same whether you are a smoker or not. As for the steps you can take to reduce your risk: Eat an anti-inflammatory diet. Emphasize six to nine servings of fruits and vegetables. This allows the body to more readily fix damage done to lungs, as well as protects against infections. Learn to manage stress, and sleep more. As an adult, getting eight hours of good sleep each night is important. This and stress management help our immune systems better fight off infections and eliminate cancers before they become obvious.
{ "pile_set_name": "Pile-CC" }
Thursday, May 6, 2010 ICICI Bank: Back in Reckoning? ICICI BANK: (BSE: 532174 | NSE: ICICIBANK) Seems to be back in the reckoning. YoY jump is on low base, but the steady sequential growth hints at again being the heavyweight it once was. No wonder even RBI is encouraging with more time for attaining 70% PCR. But with 67% foreign ownership, and the bank existing as ‘Indian’ only on a proposed RBI exemption, it needs to tread carefully during these times when capital markets worldwide are holding their breath on Goldman Sachs.
{ "pile_set_name": "Pile-CC" }
Gastelum ran outside, trying to distract Butler, Reynolds said. Butler followed him and fired a shot but missed again, she said. Butler took about $200 from Reynolds and ran away, she said. Reynolds said she didn't know Butler at the time of the shooting, but later found out that she had bought a motor home from the suspect's sister. Reynolds testified that she was surprised the suspect called her by name in the house. Butler is charged with robbery, burglary, attempted murder, two counts of assault with a firearm, obstructing a peace officer and being a felon with a firearm. Court records show that Butler has prior convictions for possession of a controlled substance and possession of a firearm. Tomoya Shimura may be reached at [email protected] or (760) 955-5368.
{ "pile_set_name": "Pile-CC" }
[Possibility of term pregnancy in patients with chronic kidney disease]. Pregnancy is a physiological process that brings many changes to the mother's body, which undergoes a process of adaptation to avoid complications; in the context of chronic kidney disease, however, complications appear more frequently and affect maternal and fetal morbidity and mortality. A review of the subject is presented, emphasizing the aspects that must be kept in mind when caring for a patient with this condition, together with one of our treated cases, with the sole purpose of gaining a better understanding of the issue and of how to address the problems found at the initial evaluation. The case is that of a pregnant patient with advanced-stage chronic kidney disease who required the start of renal replacement therapy because of uremia and fluid retention at 20 weeks of gestation, having been referred late for care. Even so, it was possible to improve her overall condition in all respects, allowing the pregnancy to be carried to 38 weeks with an electively scheduled abdominal delivery and favorable results for the newborn that exceed the statistics described for these patients.
{ "pile_set_name": "PubMed Abstracts" }
The purpose of life, including our human life, is the evolution of consciousness. The aim of our human existence is to become as fully human as possible, to realise or make real the full potential inherent in the human condition. This applies to all levels of being human, to the biological, the emotional as well as the mental level. Instead of seeing yourself as just a personality with a mind and a body, imagine yourself as part of an individualised life-stream that flows eternally towards increasingly higher levels of consciousness. This life-stream builds itself different personalities as vehicles for its flow of consciousness. You are one of these temporary vehicles. Your personality is temporary, the present cutting edge of your life-stream as perceived in our space-time. The aim of your life-stream at this level is to develop a vehicle that allows the full expression of your soul suitable for travel beyond the human condition. When this is accomplished, no further personalities are needed. As an ascended being the life-stream can now traverse the trans-human conditions of consciousness. SPIRITUAL AWAKENING This evolution of consciousness normally is a slow process and most personalities are not aware that they are part of an evolutionary growth. However, as the personality comes closer to perfection, it does become aware of its purpose and it begins to long for spiritual unfoldment. Before this spiritual awakening, the personality may already have believed in God and followed a religious faith in the traditional way. But there is a difference. In the traditional religious life the believer may try to follow the doctrines of his faith the best he can, but he basically remains on the same slow evolutionary path of consciousness. With the spiritual awakening, spiritual growth becomes the main focus of this personality. The slow evolutionary process now becomes a spiritual revolution, not only in regard to the speed of the growth of consciousness but also in regard to the overthrow of established values. This spiritual revolution may take place within the confines of an established religion or it may happen outside a religious faith. Within a religion, these awakening spirits will sooner or later come in conflict with established doctrines. They may try to revolutionise or restore their church to its highest ideals and eventually become mystics or saints or martyrs or just start their own church or sect. Most established religions had at best an uneasy relationship with their mystics and (later) saints. In any way, whether the awakening and subsequent spiritual growth takes place within or without an established religion, the awakening personality will begin to follow the spiritual guidance that comes from within rather than the established rules and doctrines of church and society. Spirituality is no longer something that one does on Sunday mornings or other established times and according to established rules, but rather it becomes the central purpose and focus of life. With this, we may say that the personality is now on the spiritual path. VARIETIES OF SPIRITUAL PATHS While the real goal of the spiritual path is the same for all, there are many different roads that one can travel and one may even travel without a road. Also the goal may be described in different terms without always recognising that it is the same goal. This is so because we all have different starting points and different preferences and opportunities. 
Many spiritual travellers will not expect to reach their goal in their present lifetime, while others may realise that they do not really need to travel anywhere. Some of the better-defined roads are the different forms of Yoga, the Tao, the Gnostic teachings, the Rosicrucians and various ancient and modern MysterySchools. While the Mystics are generally regarded as a distinct group, they really travelled each on his or her own individual path. At present Ascension teachings through books and channelled guidance attract many followers. These teachings are largely individualised outgrowths from the writings of the Theosophical Society since the end of the nineteenth century. Looked at from a different perspective, we may say that there are two different approaches that often intermingle in each individual spiritual path. One is the path of the Doer or the Magus, while the other is the path of the Knower or the Mystic. The Doer mainly works in the world as a healer, alchemist or white magician, trying to master the non-physical energies and forces. The Mystic, on the other hand, follows mainly the inner path, seeking Union with the Divine Self and in the process becomes a Knower and a teacher. However, ultimately Knower and Doer both become one. THE PATH OF THE MYSTICS The recorded experiences of the many mystics in all religions is of particular interest because we can learn from these what the spiritual path looks like when it is not guided by specific verbal or written teachings. Most of these individuals in past centuries did not know that there was such a thing as a spiritual path and that others had travelled the same or a similar route. They just felt a longing to come closer to God and to feel at one with Him, to melt into Divinity like a drop of water becomes one with the ocean. Each of them had different experiences and many of them walked only part of the way, but by constructing a composite picture from a large number of their writings we can discern a distinct pattern of the typical mystical path. Few mystics seem to have travelled all of this 'typical' path, but this may be because we can see only one life-time of a multiple-life-time journey. This typical mystical path may best be described as a five-stage journey and we may easily see its relationship to contemporary spiritual practises. 1. The Awakening The spiritual awakening is the beginning of the mystical path and also of the spiritual path in general. It may be sudden or gradual. Often it comes as an emotionally overwhelming 'mystical experience', possibly ending a period of great emotional torment and suffering. Another frequent setting is during a spiritual initiation ceremony, such as presently during water baptism for 'Born-Again Christians'. Others may have this awakening experience during an especially intense prayer or meditation or drift gradually onto the path by attending workshops or reading spiritual books. The latter as well as others coming to the path because of philosophical considerations turn out to become like the 'mental mystics' as compared to the 'emotional mystics' who had their mystical experiences mainly at the emotional level. This initial peak experience may last for hours or days with gradually declining intensity. Commonly it is a combination of deep feelings, such as universal love, with profound spiritual intuition or insight, although depending on the personality traits the emotional or the mental component may dominate. 
Also psychic phenomena are frequently involved in the form of visions and voices. 2. Purification After the awakening the individual does not only want to repeat the mystical experience but really wants to live in a permanent state of grace close to God. However, the aspiring mystic realises that he is not yet worthy, he becomes acutely aware of his imperfections, of the impurity of his thoughts and feelings. He starts to cleanse himself of all selfish or impure desires. Commonly this is combined with periods of fasting or food denial. Medieval mystics tended to be very severe with themselves in order to 'mortify' the body and with it the unruly senses and the desires of the flesh. Worldly pursuits no longer hold any interest. Periods of prayer and meditation are greatly extended. While this process of purification requires strong willpower and causes much self-inflicted pain, it is also increasingly interspersed with beautiful and uplifting mystical experiences that spurn the seeker on to greater sacrifices for the sake of more rapturous experiences. This purification process continues for many years. Instead of turning the spiritual energies released by the awakening experience inward to purify, it may also be directed outwards. These individuals then become religious fanatics and may remain at this level for the rest of their lives. 3. Illumination After the Self has been sufficiently cleansed or purified and the mind remains fully focused on the presence and the qualities of the Divine, a state of illumination may unexpectedly arise. This will be similar to the experience of awakening but even more intense and longer lasting. Again, it may be predominantly emotional with feelings of indescribable Divine love, being at one with 'All That Is' and combined with visions and voices, or it may be predominantly mental with profound insights and states of pure knowing. Commonly the initial peak experience is combined with perceiving an intense or blinding inner white light. For many months these deep feelings and insights will keep flowing in, but gradually settle into a state of a permanently raised and more intuitive consciousness. The illuminated mystic will now always remain aware of the Divine presence in everyone and everything and radiate a presence of love and goodwill. Intuitive abilities will be greatly increased. Commonly, one will receive a spiritual gift with the illumination. This may be the ability to heal or to see or to know. By developing this ability one will become a great healer, seer or spiritual teacher. The perception of mysticism as being incomprehensible to normal, unenlightened humans is due to the fact that most mystical writings were composed by the emotional type of mystics. They focused on emotional states of being and perceived and described their visions and experiences with emotional metaphors that remained incomprehensible to the uninitiated. This is similar to the present descriptions of reality given by quantum physicists. Life in this state of illumination is easy and full of joy, a permanent state of great happiness and contentment. It is like living in a state of grace that can never be lost again. But sometimes it is being lost. 4. The Dark Night of the Soul There is a saying that he whom God loves most, will be given the severest tests. This certainly seems to be true of the few mystics who continue to travel further along the spiritual path even after illumination. 
During the long and difficult purification process the aspiring mystic gave up most of his sense attachments, the pleasure of eating, of sexual union, of bodily comfort and of pleasant company. Instead all the energies were focused only on what was perceived to be transcendental and divine. But there was still one thing left that had not been surrendered, that is the will, the central part of the Self. To give up the will, the innate instinct for personal happiness had to be surrendered. This is called the 'mystical death' or the spiritual crucifixion. During illumination the mystic seemed to walk hand in hand with God and sunned himself in eternal Divine bliss. This makes the contrast even greater to the feeling of having lost all the former grace and glory and being totally abandoned by God to a life in abject misery. However, gradually the Self begins to accept total surrender. It gives up itself, its personality and individuality, asks for nothing and desires nothing. Now there is total passivity and acceptance of whatever God has ordained or whatever comes. Now the mystic is finally ready for the last stage of his earthly journey. 5. The Unitive Life Finally, by giving up everything, the mystic has gained everything. By being completely passive, he is able to be filled with the Divine Spirit. No longer does he experience the strong mood swings between divine bliss and utter desolation, he has now settled into a feeling of constant peaceful and love-filled union with the Divine. However, this is not the goal of his long journey, just a by-product. The real achievement is that the mystic is now an ideal tool or instrument of the Divine Will to be used for manifesting the Great Plan on earth. While now the mystic still lives in the world, he is no longer of the world. He may be a great reformer, healer, inventor, statesman, educator or spiritual teacher but whatever he does is never for any small or selfish interests, instead it is always for the greater good of All That Is. He now lives a life of service. Whatever he does, he is now a spiritual master, even if that is not always apparent to the outside world. After his passing, he will be an ascended being, not needing to return to earth. Comments on the Mystical Path In this description it is easy to draw comparisons to the life of Jesus of Nazareth and see in it a symbolic enactment of the mystical path. The baptism symbolises the spiritual awakening, the 40 days in the desert correspond to the period of purification, followed by illumination (the transfiguration on the mountain) and selfless service in the form of a healing and teaching ministry. The experience on the cross and the events leading up to it are his dark night. While the unitive life is cut short, it is signified by the ascension and the following teaching period in ascended form. It may also be mentioned that the described states of the mystical path do not necessarily follow one after the other but some may occur simultaneously, such as enlightenment and the dark night of the soul or purification, enlightenment and selfless work. These may coexist or alternate, causing emotional extremes over short periods. Coexistence is possible because we do not need to be perfect to reach illumination and, therefore, continue with our purification on deeper levels. Furthermore, these extremes and the perceived sufferings are mainly the lot of the emotional type of mystics while the mental type follows a much smoother path. 
Both have the difficult task of giving up their attachments to gratifying the senses and surrendering their will, it is just that the emotional mystics are so much more sensitive and take things very much to heart. It also must be understood that it is not necessary to give up things, only the attachment to things but that was not always understood by those leading a life of extreme asceticism. ASCENSION TEACHINGS The ascension teachings are a New Age phenomenon and largely based on channelled teachings. These started with Madame Blavatsky and led to the foundation of the Theosophical Society late in the nineteenth century. The aim of these teachings is to present a greatly accelerated path to ascension. Ascension is highlighted as the main aim and achievement of the spiritual path. Ascension means that we are no longer in need of experiences at the human level. For our continuing progress towards higher levels of consciousness we develop a spiritual light body through our ascension work. Presently ascension teachings focus mainly on guided imagery to open, balance and extend the chakra system and build a light body. A key feature is a system of initiations as a measure of progress on the spiritual path. These initiations are to take place in our higher energy bodies and may or may not be remembered with our normal body consciousness. While there are some variations in the way of counting, the basic requirements for passing the five initiations before ascension may be described as follows. Spiritual Awakening. Purifying the body, learning to work with the energies and controlling the etheric body. Purifying and controlling the emotions as well as developing a higher emotional body. Purifying and controlling the mind as well as developing a higher mental body. Submitting the will to Divine Guidance. Selfless work for the greater good of humanity. Ascension. While formerly initiations in the physical body took place in the form of tests and rituals within secret societies, presently they appear to be incorporated as special situations in our normal life. As an example the third initiation requires us to have good control of the mental, emotional and etheric forces. In ancient Egypt an appropriate test might have been for the disciple to demonstrate that he can control an untamed lion with his mind and the lion obeying him. At the slightest sign of fear the lion would kill him. A modern equivalent might be a diagnosis of advanced and incurable cancer. To overcome this without much outside help, one needs to use the mind constructively to guide the emotions in a positive way, release all negativity, especially be free of the fear of death and also learn to direct the life-force energies as in guided imagery. While only a small percentage of sufferers of advanced incurable diseases may have been given their disease as an initiation test, most of those who did recover from near-terminal conditions reportedly had profound spiritual experiences. The relationship of the ascension initiations as portrayed above to the divisions of the mystical path are quite obvious. Starting with the spiritual awakening in both, the first three initiations are related to the mystical period of purification. In contrast to the yoga system of purification, which concentrates strongly on controlling the etheric energies, the mystical path does not address the etheric or life-force body directly. 
However, this level is purified indirectly by removing emotional energy blockages and by learning to control the emotional energies, which in turn direct the etheric life-force energies. Illumination is the first major initiation and is the result of purifying and learning to control the mind and developing a higher mental body as the requirement for the third initiation. The fourth initiation will be passed after waking from the dark night of the soul and the fifth initiation is the result of the unitive life. The ascension teachings portray largely a magical or doer path in contrast to the knower-based mystical path. A combination of both may in some ways be easier to follow than either one of them on its own. The severity of the mystical path can be greatly eased and progress speeded up by suitable guided imagery, while the present ascension teachings seem to neglect the purification and control of the emotional and mental bodies that feature so prominently in the mystical path.
{ "pile_set_name": "Pile-CC" }
Letter to the editor: Safety, not money, counts The decision to install a traffic camera should be based on accident report statistics that are dramatically higher than the average, not on finding revenue to replace funding cuts. Cameras should be a last resort. - Curtis Christiansen, Des Moines
{ "pile_set_name": "Pile-CC" }
Dolph Aluck Smokehouse The Dolph Aluck Smokehouse is a stone smokehouse located on the north side of Milford Rd., in Pendleton County, Kentucky near Falmouth. It faces the confluence of the North Fork Licking River and the Licking River. It was listed on the National Register of Historic Places in 1987. It was built in the mid-1800s and was deemed significant as a "Typical early Kentucky smokehouse in good condition." It is believed to have been built by Dolph Aluck, owner of brick Greek Revival house at the site. References Category:Smokehouses Category:National Register of Historic Places in Pendleton County, Kentucky Category:Greek Revival architecture in Kentucky
{ "pile_set_name": "Wikipedia (en)" }
; Joomla! Project ; Copyright (C) 2005 - 2017 Open Source Matters. All rights reserved. ; License GNU General Public License version 2 or later; see LICENSE.txt, see LICENSE.php ; Note : All ini files need to be saved as UTF-8 COM_POSTINSTALL="Post-installation Messages" COM_POSTINSTALL_BTN_HIDE="Hide this message" COM_POSTINSTALL_BTN_RESET="Reset Messages" COM_POSTINSTALL_CONFIGURATION="Post-installation Messages: Options" COM_POSTINSTALL_LBL_MESSAGES="Post-installation and Upgrade Messages" COM_POSTINSTALL_LBL_NOMESSAGES_DESC="You have read all the messages." COM_POSTINSTALL_LBL_NOMESSAGES_TITLE="No Messages" COM_POSTINSTALL_LBL_RELEASENEWS="Release news <a href="_QQ_"https://www.joomla.org/announcements/release-news.html"_QQ_">from the Joomla! Project</a>" COM_POSTINSTALL_LBL_SINCEVERSION="Since version %s" COM_POSTINSTALL_MESSAGES_FOR="Showing messages for" COM_POSTINSTALL_MESSAGES_TITLE="Post-installation Messages for %s" COM_POSTINSTALL_XML_DESCRIPTION="Displays post-installation and post-upgrade messages for Joomla and its extensions."
{ "pile_set_name": "Github" }
320 F.3d 691 UNITED STATES of America, Plaintiff-Appellee, Cross-Appellant,v.John SERPICO and Gilbert Cataldo, Defendants-Appellants, Cross-Appellees. No. 02-1702. No. 02-1726. No. 02-1925. United States Court of Appeals, Seventh Circuit. Argued October 31, 2002. Decided February 20, 2003. COPYRIGHT MATERIAL OMITTED David A. Glockner (argued), Office of U.S. Attorney, Crim. Div., Chicago, IL, for U.S. Matthias A. Lydon (argued), Winston & Strawn, Chicago, IL, for John Serpico. Jeffrey Schulman (argued), Wolin & Rosen, Chicago, IL, Donald Hubert, Hubert, Fowler & Quinn, Chicago, IL, for Gilbert Cataldo. Before RIPPLE, MANION, and EVANS, Circuit Judges. TERENCE T. EVANS, Circuit Judge. 1 For 12 years, John Serpico and Maria Busillo held and abused various influential positions with the Central States Joint Board ("CSJB"), a labor organization that provides support to its member unions. Among other responsibilities, Serpico and Busillo controlled the management of the unions' money. The pair, along with longtime friend and business associate Gilbert Cataldo, collaborated on three schemes involving the misappropriation of the unions' funds. Two of those schemes are the focus of this appeal by Serpico and Cataldo (Busillo has not appealed her conviction). 2 In their "loans-for-deposits" scheme, Serpico and Busillo deposited large sums of union money in various banks. In exchange, the two received overly generous terms and conditions on personal loans totaling more than $5 million. In the more complicated hotel loan kickback scheme, several groups entered into the 51 Associates Limited Partnership, which planned to construct a hotel. The partnership was unable to obtain financing for the construction of the building without first securing a commitment for a mortgage loan that would guarantee repayment of the construction loan after the hotel was built. Serpico used union funds to make a mortgage loan to the developers, after which Mid-City Bank agreed to make the construction loan. In exchange for Serpico's help in securing the loan, 51 Associates paid $333,850 to Cataldo's corporation, Taylor West & Company, for "consulting services" that Cataldo never actually performed. Cataldo then kicked back $25,000 to Serpico by paying Serpico's share of a $50,000 investment into an unrelated business project (the Studio Network project) in which the two were partners. 3 Serpico, Busillo, and Cataldo were tried on charges of racketeering, mail fraud, and bank fraud. At the close of the evidence, the court granted motions by Serpico and Busillo for acquittal on the racketeering and bank fraud counts. The jury convicted Serpico and Busillo on mail fraud charges relating to the loans-for-deposits scheme and Serpico and Cataldo on mail fraud charges for the hotel loan kickback scheme. 4 At sentencing, the district court determined that Serpico and Cataldo were each responsible for a loss of $333,850, the amount paid to Cataldo, for the hotel loan kickback scheme. For the loans-for-deposits scheme, the court found the damage to the unions to be equal to the additional amount of interest the union assets would have earned had Serpico purchased CDs at banks offering the highest interest rates instead of those offering him special deals on his personal loans. The court totaled loans from Capitol Bank as well as six others, estimating the loss to be between $30,000 and $70,000. 
The court thus increased Serpico's base offense level of 6 by 9 levels, plus 2 levels for more than minimal planning and 2 levels for abuse of trust (19 total). Serpico and Cataldo were sentenced to 30 and 21 months in prison, respectively. 5 Serpico and Cataldo (collectively "Serpico" as we go forward) appeal, challenging the verdicts and the application of the sentencing guidelines on a number of grounds. In its cross-appeal, the government also contests the application of the sentencing guidelines. 6 First, Serpico argues that his convictions should be overturned because his schemes did not "affect" a financial institution. The 5-year statute of limitations for mail and wire fraud offenses under 18 U.S.C. § 3282 is extended to 10 years "if the offense affects a financial institution," 18 U.S.C. § 3293(2), and Serpico could not have been prosecuted without that extension. Serpico claims that an offense only "affects a financial institution" if the offense has a direct negative impact on the institution. The district court instructed the jury that the schemes affected the banks if they "exposed the financial institution[s] to a new or increased risk of loss. A financial institution need not have actually suffered a loss in order to have been affected by the scheme." 7 Although Serpico agreed to the jury instruction, he now points to United States v. Agne, 214 F.3d 47, 53 (1st Cir.2000) and United States v. Ubakanma, 215 F.3d 421, 426 (4th Cir.2000), to support his claim that the financial institution must suffer an actual loss. In Agne, however, the court found that the bank "experienced no realistic prospect of loss," so it did not have to reach the question of whether the bank must suffer an actual loss. Agne, 214 F.3d at 53. Similarly, Ubakanma simply held that "a wire fraud offense under section 1343 `affected' a financial institution only if the institution itself were victimized by the fraud, as opposed to the scheme's mere utilization of the financial institution in the transfer of funds." Ubakanma, 215 F.3d at 426. Neither side here argues that "mere utilization" is sufficient; the question is whether an increased risk of loss is enough, even if the institution never suffers an actual loss. 8 Several courts, including this one and the Fourth Circuit, which produced Ubakanma, have concluded that an increased risk of loss is sufficient in similar contexts. See, e.g., United States v. Longfellow, 43 F.3d 318, 324 (7th Cir.1994) (quoting United States v. Hord, 6 F.3d 276, 282 (5th Cir.1993) ("risk of loss, not just loss itself, supports conviction" for bank fraud)); United States v. Colton, 231 F.3d 890, 907 (4th Cir.2000); see also Pattern Criminal Federal Jury Instructions for the Seventh Circuit (1990), p. 217 (The mail interstate carrier wire fraud statute "can be violated whether or not there is any [loss or damage to the victim of the crime] [or] [gain to the defendant]."). 9 More importantly, the whole purpose of § 3293(2) is to protect financial institutions, a goal it tries to accomplish in large part by deterring would-be criminals from including financial institutions in their schemes. Just as society punishes someone who recklessly fires a gun, whether or not he hits anyone, protection for financial institutions is much more effective if there's a cost to putting those institutions at risk, whether or not there is actual harm. Accordingly, we find no error in the district court's jury instruction. 
10 Serpico next argues that, even if the district court correctly interpreted § 3292(2), his schemes did not "affect" a financial institution because they did not create increased risks for the banks involved in the schemes. Essentially, Serpico claims that the banks in both schemes were willing participants who would not have chosen to participate unless it was in their best interests (that is, factoring in the risks, they expected to make money on the deals). But the mere fact that participation in a scheme is in a bank's best interest does not necessarily mean that it is not exposed to additional risks and is not "affected," as shown clearly by the various banks' dealings with Serpico. 11 For example, the hotel loan kickback scheme affected Mid-City even though Mid-City believed it would make money on the deal. Mid-City made a $6.5 million construction loan, one it obviously would not have made if it believed the risks associated with the loan outweighed the expected payoff. But the loan, as all loans do, did carry some risk. Since Mid-City did not want to be a long-term real estate lender, it agreed to the loan only after Serpico misappropriated Midwest Pension Plan ("MPP") funds in making the MPP's $6.5 million end-mortgage loan (which meant that, if all went well, Mid-City would quickly be repaid). Therefore, Mid-City never would have been exposed to the risks of its loan absent Serpico's scheme because it never would have made the loan. 12 Serpico responds that MPP's $6.5 million essentially guaranteed the loan, so there was no risk to Mid-City. But, under the terms of the loan, if the hotel was not completed on time and under budget, the money MPP put up would be returned to it. That would leave Mid-City with a risky long-term loan it didn't want. On top of that, the kickback scheme increased the chances that the project would run into trouble. Certainly a construction project is more likely to be delayed when those running it and putting up the money for it are doing so illegally, making them subject to the disruption of investigation and arrest at any time. 13 The loans-for-deposits scheme shows even more dramatically that a bank can take on higher risk while acting in what it believes to be its own best interests. Banks, including Capitol Bank (which no longer exists as a result of punishments it received after pleading guilty to conspiring with Serpico and Busillo to defraud the CSJB entities), decided the benefits from the deposits made it worth the risk of loss resulting from the generous terms and conditions of the loans it gave Serpico. But the fact remained that the bank made risky loans at low interest rates that it never would have made absent the scheme. 14 In fact, at trial, Serpico's counsel told the court that "if the defendants were convicted of a loans-for-deposits scheme, that conduct in and of itself would mean that they affected a financial institution" and "I don't know I could conceivably argue that the particular scheme did not affect a financial institution." He now tries two arguments. In addition to arguing that the bank was acting in its own best interest, Serpico claims that a financial institution is not "affected" if it is an active perpetrator in the offense. We find that argument unpersuasive. It is not supported by Ubakanma, as Serpico claims, and we find it hard to understand how a bank that was put out of business as a direct result of the scheme was not "affected," even if it played an active part in the scheme. 
15 Next, Serpico argues that he was prejudiced by the admission of evidence relating to various charges on which the district court acquitted him before the case went to the jury. In United States v. Holzer, 840 F.2d 1343, 1349 (7th Cir.1988), we addressed almost this very issue: 16 When, as is often the case (it was here), the jury acquits a defendant of some counts of a multi-count indictment, the defendant is not entitled to a new trial on the counts of which he was convicted, on the theory that the conviction was tainted by evidence, which the jury heard, relating to the counts on which it acquitted.... No rule of evidence is violated by the admission of evidence concerning a crime of which the defendant is acquitted, provided the crime was properly joined to the crime for which he was convicted and the crimes did not have to be severed for purposes of trial. It makes no difference, moreover, whether the jury acquits on some counts or the trial or reviewing court sets aside the conviction. 17 Serpico notes that his case is different in that the admitted evidence here concerned claims that the court dismissed before they reached the jury. Still, the Holzer reasoning applies. No rule of evidence was violated, and the district court did not abuse its discretion in failing to award a new trial to Serpico. 18 We also reject Serpico's claims that his conviction should be overturned or he is entitled to a new trial because there was not sufficient evidence to convict him and that the record does not permit a confident conclusion that Serpico is guilty beyond a reasonable doubt. Given the evidence, the jury reasonably concluded that the $333,850 payment to Cataldo was made in exchange for the loan from Serpico and that some of that $333,850 trickled down to Serpico. 19 Similarly, we reject Cataldo's claim that the district court abused its discretion in denying his severance motion. To succeed, Cataldo must show that he was "unable to obtain a fair trial, not merely that a separate trial would have offered [him] a better chance of acquittal." United States v. Bruce, 109 F.3d 323, 327 (7th Cir.1997). Cataldo claims he should have been tried separately because there was a gross disparity in the evidence presented against him and his co-defendants. But a "simple `disparity in the evidence' will not suffice to support a motion for severance," United States v. Caliendo, 910 F.2d 429, 438 (7th Cir.1990). Moreover, the trial court instructed the jury to consider the evidence against each defendant individually, and juries are presumed to be capable of following such limiting instructions. United States v. Williams, 858 F.2d 1218, 1225 (7th Cir.1988). Because there is no reason the jury here could not follow that instruction, the district court did not abuse its discretion in denying Cataldo's motion. 20 Finally, both Serpico and the government challenge the district court's application of the sentencing guidelines. Serpico challenges the calculation of the loss from both schemes. With regards to the hotel loan kickback scheme, Serpico claims that the union entities suffered no loss because the loan was repaid (the district court found the loss attributable to both defendants to be $333,850, the amount paid to Cataldo). But Serpico's theory fails to consider the fact that, although none of the $6.5 million was lost, more money could have been earned. Obviously, the 51 Associates partnership was willing to pay (and did pay) an extra $333,850 in order to secure the loan. 
That money could have gone to the union entities instead of Cataldo if Serpico had been acting in the entities' best interests instead of his own. See generally United States v. Briscoe, 65 F.3d 576, 589 (7th Cir.1995) (kickbacks "represent money that should have gone to the Union" and, as such, were properly included in the loss calculation). 21 We also reject Serpico's alternative argument that he (individually, as opposed to with Cataldo) should only be accountable for the $25,000 that Cataldo kicked back to him. Serpico and Cataldo are each accountable for "all reasonably foreseeable acts and omissions of others in furtherance of the jointly undertaken criminal activity." USSG § 1B1.3(a)(1)(B). Since Serpico knew money was going to be paid back to Cataldo for work he never did, the district court correctly held him responsible for the full $333,850 loss. 22 Serpico also argues that, in computing the losses from the loans-for-deposits scheme, the district court should not have included losses arising from CD purchases from banks other than Capitol because the court did not have sufficient evidence to support a conclusion that the scheme extended to those banks. But the 12-year pattern of union deposits into banks from which Serpico simultaneously sought personal loans, combined with documents from Gladstone-Norwood Bank and Exchange Bank suggesting that loans were approved in order to secure union deposits, presented sufficient evidence. 23 In its cross-appeal, the government first claims the district court applied the wrong offense guideline. The district court calculated the defendants' sentences under § 2F1.1 (the fraud guideline), but the government argues it should have applied § 2E5.1 (the benefit plan bribery guideline) during sentencing. We review the district court's selection of the applicable guideline section de novo. United States v. Dion, 32 F.3d 1147, 1148 (7th Cir.1994). 24 Clearly, the government wants it both ways; having chosen to prosecute Serpico under the mail fraud statute, it wants him sentenced based on bribery. As unjust as this practice might seem in a case like this (Serpico essentially would be sentenced for a crime, bribery, he could not have been charged with because the statute of limitations had run), the guidelines not only allow but even encourage this scheme, which someone could argue is a little like an old bait-and-switch. Under § 1B1.2(a), however, the district court is instructed to "[d]etermine the offense guideline section in Chapter Two (Offense Conduct) applicable to the offense of conviction (i.e., the offense conduct charged in the count of the indictment or information of which the defendant was convicted)." USSG § 1B1.2(a) (1990). The commentary to that section elaborates: "When a particular statute proscribes a variety of conduct that might constitute the subject of different offense guidelines, the court will determine which guideline section applies based upon the nature of the offense conduct charged in the count of which the defendant was convicted." § 1B1.2, comment. (n.1). In other words, as in United States v. Hauptman, 111 F.3d 48 (7th Cir. 1997), the sentencing judge should endeavor to locate the "essence" of the defendant's conduct, not merely the name attached to the statute violated. 25 Therefore, § 1B1.2(a) encourages the district court to find an appropriate guideline section to fit the conduct (not just the charge). 
Section 2F1.1 of the 1990 guidelines, under which Serpico was sentenced, notes: 26 [T]he mail or wire fraud statutes, or other relatively broad statutes, are used primarily as jurisdictional bases for the prosecution of other offenses.... Where the indictment or information setting forth the count of conviction (or a stipulation as described in § 1B1.2(a)) establishes an offense more aptly covered by another guideline, apply that guideline rather than § 2F1.1. 27 § 2F1.1, comment. (n.13). 28 The indictment charged that Serpico "sought and received a substantial personal benefit and kickback in exchange for influencing" the MPP to provide the $6.5 million loan, which is closer to bribery than mail fraud (in § 2E5.1, a "bribe" is "the offer or acceptance of an unlawful payment with the specific understanding that it will corruptly affect an official action of the recipient." § 2E5.1, comment. (n.1)). Similarly, the loans-for-deposits scheme was basic bribery, with Serpico promising union deposits to the banks in exchange for favorable personal loans. Therefore, the district court should have sentenced Serpico and Cataldo under § 2E5.1. 29 Finally, the government argues that the district court erred by failing to consider the approximately $475,000 that Serpico derived from the loans-for-deposits scheme through his investment in Studio Network. The government argues that it showed that Serpico invested $25,000 in the partnership in 1983, then sold his share 5 years later for $500,000. The government claims that the district court should have used Serpico's gain as an alternative measure of loss under § 2F1.1 (which also would be used under § 2E5.1). See § 2F1.1, comment. (n.8) ("The offender's gain from committing the fraud is an alternative estimate that ordinarily will underestimate the loss."). 30 We have interpreted § 2F1.1 of the 1990 guidelines to require that the amount of loss be based on "the net detriment to the victim." See United States v. Mount, 966 F.2d 262, 265. Therefore, the defendant's gain should only be used when the loss to the victim cannot be reasonably estimated (as the 2001 guidelines make clear: "The court shall use the gain that resulted from the offense as an alternative measure of loss only if there is a loss but it reasonably cannot be determined." § 2B1.1, comment. (n.2(B))). The district court reasonably estimated the loss from the loans-for-deposits scheme to be equal to the additional amount the union entities could have earned at banks offering more favorable rates on CDs. To use the $475,000 that Serpico earned as the measure of loss would suggest that the unions would have received an astounding 1,900 percent return on their investments, a wholly unsubstantiated claim. 31 To summarize: We affirm the convictions and the loss calculations but remand for sentencing under § 2E5.1. 32 AFFIRMED IN PART, REVERSED IN PART, AND REMANDED.
{ "pile_set_name": "FreeLaw" }
Wow - there are some truly amazing photos in this thread! If you like to shoot birds but are not familiar with their names (proper or otherwise), try using http://www.whatbird.com; it has an excellent built-in wizard. Here are a few I shot over the past 6 months - all photos taken in the wild, no zoo or otherwise, with a 7D + 400mm, hand held: Belted Kingfisher (difficult to capture... if you get anywhere near these birds they fly about 100 yards away. Go in that direction and they revert back to their original starting point. I had to creep up and hide to nab this one... oh, the fun!)

munsoned

I took the following at Great Falls National Park, Virginia side. A lot of stalking went into taking these photos. I found this really awesome spot that gave me all 3 of these pictures and more. Shot with a Canon 5D II and 100-400 L. I wish I knew why a sparrow will sit on a bird feeder for half an hour, but a cardinal won't stay more than half a minute. When I saw him, he and his female mate were both right there, perfectly positioned. While I grabbed the camera and changed lenses, Mrs. Cardinal took off. This was the only shot I got with the whole bird in the frame and in focus. Now every time it snows I stalk back and forth past this window, but no more luck since then.
{ "pile_set_name": "Pile-CC" }
This 1993 Nissan Skyline R33 GTS25-T Type M has just arrived at our showroom and looks absolutely beautiful. The Super Black (KH3) exterior has been lightly modified and has been very well maintained. A super rare Trial front bumper replaces the OEM piece and helps create a more aggressive-looking front end. The OEM Type M optional side skirts are present and flow well with the Trial front bumper and Knight Racer rear spats, which are a bit more aggressive than the stock pieces. The body is in beautiful condition with no major dings or dents, and the paint is just stunning. The GTR rear wing really improves the rear of the car and continues the slightly more aggressive feel of this beautiful R33. Finally, a set of 18" Monoblock wheels has been added and contrasts with the black exterior perfectly.

The interior has also received several upgrades and is in great condition. You will first notice that a set of gauges has been added to keep an eye on engine stats. These include Boost, Oil Pressure, Coolant Temp, and Oil Temp by Auto Gauge. The dash itself is in great condition and has no major sun damage, cracks, or bubbles. An aftermarket steering wheel has been added along with a Razo shift knob. The door panels are very clean and the fabric inserts are in great shape. There is one small tear on the lower right part of the driver's door panel, which is not noticeable when the door is closed. The seats are all in fantastic condition and have no major stains. The trunk panels have been previously removed, but the OEM strut bar and OEM floor mats are included.

Mechanically, the RB25DET starts right away and idles smoothly. Exhaust tone has been much improved thanks to a stainless Reinhard Takumi catback, and an M's air intake lets the engine breathe a bit easier. Shifts through the 5-speed transmission are smooth without any grinds. It's clear the engine is healthy with just 114k original miles. Boost comes on strong and pulls hard throughout the RPM range, and it sounds great doing it! This R33 has received the following since arriving at our shop: 4 brand new tires, new spark plugs, new fuel filters, and a fresh oil change. It's ready to drive home today!
{ "pile_set_name": "Pile-CC" }
Anybody have any thoughts on how the economy will affect us this year? People can't afford to go far from home over the summer, so I wonder if it will boost sales for local events. I know movie ticket sales are up, and our little uptown concert thingy is packed. A friend of mine does the events for Carolina Harley-Davidson; she says they have big turnouts for bike nights, BBQs, and poker runs, and this weekend they are hosting something for Speed Week and the turnout looks good. Will any of you change ticket prices or offer deals or coupons? I was thinking of offering a season ticket so the customers can come as many times as they like, and of course they will bring friends each time they come... mo $$ for me. What about more advertising to help bump up sales, or just doing the same as always and hoping for the best? I myself believe this will be a big year for local events. What are your thoughts on this?

Jim Warfield 05-23-2010, 04:28 PM
Will your business be set up in time to take Euros as payment? Or maybe Eros? Blame it on the Greeks? Eros was Greek. SO many things can influence our ticket sales. ????????? I feel our admission price is a bargain at $12.00. If some people don't feel that way... let them eat cake, from the three-day-old bakery!

bhays 05-23-2010, 08:01 PM
Last year was our biggest ever, and with a slight uptick in the economy since last season, I am hoping for some really great things. In a down economy, entertainment always prospers... I think it will be an outstanding season. To be more specific, I expect to see the same trend as last year: those major-market haunts that have already peaked in their markets should hold their own, and those of us in expanding markets, or who haven't maxed out our markets, should see some nice growth.

JamBam 05-24-2010, 11:07 AM
Last year we broke the paid attendance record from the previous year in the last hour, despite the fact that I didn't finish a ton of marketing because our twin girls were born eleven weeks early on Sept 11. They are doing great, by the way. I am already doing some marketing on Myspace and Facebook. We have gone from 400 to 2,000 friends on Myspace, and the FB fan page is just starting to build. We are halfway through our changes for the year, and our goal for 2010 is a huge increase. Indiana unemployment is at 10%, but the economy is picking up in the area. The GM plant I work at just added 900 workers for a third shift of production, and the real estate market is jumping. People are starting to spend money again!!! P.S. Aren't these the two cutest babies you have ever seen? Good thing they look like mom. LOL
{ "pile_set_name": "Pile-CC" }
Q: Device Token - Apple Push Notification Service

Does the device token change each time I open my application? Does the Apple server use the same device token every time, or a newly regenerated one?

A: You can check the developer documentation; the following is mentioned there -

The form of this phase of token trust ensures that only APNs generates the token which it will later honor, and it can assure itself that a token handed to it by a device is the same token that it previously provisioned for that particular device—and only for that device. If the user restores backup data to a new device or reinstalls the operating system, the device token changes.

So the token is unique to a given device and stays the same until the user restores backup data to a new device or reinstalls the OS.

You can check details here - http://developer.apple.com/library/ios/#documentation/NetworkingInternet/Conceptual/RemoteNotificationsPG/ApplePushService/ApplePushService.html#//apple_ref/doc/uid/TP40008194-CH100-SW12
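If you want to observe this for yourself, here is a minimal sketch using the modern Swift/UIKit APIs (iOS 10+; the delegate method names are the standard UIApplicationDelegate callbacks, while the logging and the "send to your server" comment are only illustrative). It prints the token on each launch so you can compare values across runs:

import UIKit
import UserNotifications

class AppDelegate: UIResponder, UIApplicationDelegate {

    func application(_ application: UIApplication,
                     didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
        // Ask for permission, then register with APNs; registration must run on the main thread.
        UNUserNotificationCenter.current().requestAuthorization(options: [.alert, .badge, .sound]) { granted, _ in
            guard granted else { return }
            DispatchQueue.main.async {
                application.registerForRemoteNotifications()
            }
        }
        return true
    }

    // APNs delivers the current token here after every successful registration.
    func application(_ application: UIApplication,
                     didRegisterForRemoteNotificationsWithDeviceToken deviceToken: Data) {
        let token = deviceToken.map { String(format: "%02x", $0) }.joined()
        print("APNs device token: \(token)")
        // Forward whatever token you receive to your own server each time;
        // do not assume the value is permanent.
    }

    func application(_ application: UIApplication,
                     didFailToRegisterForRemoteNotificationsWithError error: Error) {
        print("APNs registration failed: \(error)")
    }
}

On the same device the printed value should normally stay the same from launch to launch; the safe practice is still to upload whatever token the callback hands you on every run rather than caching it forever.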
{ "pile_set_name": "StackExchange" }
// Copyright 2008 The Closure Library Authors. All Rights Reserved. // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS-IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. /** * @fileoverview An alternative custom button renderer that uses even more CSS * voodoo than the default implementation to render custom buttons with fake * rounded corners and dimensionality (via a subtle flat shadow on the bottom * half of the button) without the use of images. * * Based on the Custom Buttons 3.1 visual specification, see * http://go/custombuttons * * @author [email protected] (Emil A Eklund) * @see ../demos/imagelessbutton.html */ goog.provide('goog.ui.ImagelessButtonRenderer'); goog.require('goog.dom.classes'); goog.require('goog.ui.Button'); goog.require('goog.ui.ControlContent'); goog.require('goog.ui.CustomButtonRenderer'); goog.require('goog.ui.INLINE_BLOCK_CLASSNAME'); goog.require('goog.ui.registry'); /** * Custom renderer for {@link goog.ui.Button}s. Imageless buttons can contain * almost arbitrary HTML content, will flow like inline elements, but can be * styled like block-level elements. * * @constructor * @extends {goog.ui.CustomButtonRenderer} */ goog.ui.ImagelessButtonRenderer = function() { goog.ui.CustomButtonRenderer.call(this); }; goog.inherits(goog.ui.ImagelessButtonRenderer, goog.ui.CustomButtonRenderer); /** * The singleton instance of this renderer class. * @type {goog.ui.ImagelessButtonRenderer?} * @private */ goog.ui.ImagelessButtonRenderer.instance_ = null; goog.addSingletonGetter(goog.ui.ImagelessButtonRenderer); /** * Default CSS class to be applied to the root element of components rendered * by this renderer. * @type {string} */ goog.ui.ImagelessButtonRenderer.CSS_CLASS = goog.getCssName('goog-imageless-button'); /** * Returns the button's contents wrapped in the following DOM structure: * <div class="goog-inline-block goog-imageless-button"> * <div class="goog-inline-block goog-imageless-button-outer-box"> * <div class="goog-imageless-button-inner-box"> * <div class="goog-imageless-button-pos-box"> * <div class="goog-imageless-button-top-shadow">&nbsp;</div> * <div class="goog-imageless-button-content">Contents...</div> * </div> * </div> * </div> * </div> * @override */ goog.ui.ImagelessButtonRenderer.prototype.createDom; /** @override */ goog.ui.ImagelessButtonRenderer.prototype.getContentElement = function( element) { return /** @type {Element} */ (element && element.firstChild && element.firstChild.firstChild && element.firstChild.firstChild.firstChild.lastChild); }; /** * Takes a text caption or existing DOM structure, and returns the content * wrapped in a pseudo-rounded-corner box. Creates the following DOM structure: * <div class="goog-inline-block goog-imageless-button-outer-box"> * <div class="goog-inline-block goog-imageless-button-inner-box"> * <div class="goog-imageless-button-pos"> * <div class="goog-imageless-button-top-shadow">&nbsp;</div> * <div class="goog-imageless-button-content">Contents...</div> * </div> * </div> * </div> * Used by both {@link #createDom} and {@link #decorate}. 
To be overridden * by subclasses. * @param {goog.ui.ControlContent} content Text caption or DOM structure to wrap * in a box. * @param {goog.dom.DomHelper} dom DOM helper, used for document interaction. * @return {Element} Pseudo-rounded-corner box containing the content. * @override */ goog.ui.ImagelessButtonRenderer.prototype.createButton = function(content, dom) { var baseClass = this.getCssClass(); var inlineBlock = goog.ui.INLINE_BLOCK_CLASSNAME + ' '; return dom.createDom('div', inlineBlock + goog.getCssName(baseClass, 'outer-box'), dom.createDom('div', inlineBlock + goog.getCssName(baseClass, 'inner-box'), dom.createDom('div', goog.getCssName(baseClass, 'pos'), dom.createDom('div', goog.getCssName(baseClass, 'top-shadow'), '\u00A0'), dom.createDom('div', goog.getCssName(baseClass, 'content'), content)))); }; /** * Check if the button's element has a box structure. * @param {goog.ui.Button} button Button instance whose structure is being * checked. * @param {Element} element Element of the button. * @return {boolean} Whether the element has a box structure. * @protected * @override */ goog.ui.ImagelessButtonRenderer.prototype.hasBoxStructure = function( button, element) { var outer = button.getDomHelper().getFirstElementChild(element); var outerClassName = goog.getCssName(this.getCssClass(), 'outer-box'); if (outer && goog.dom.classes.has(outer, outerClassName)) { var inner = button.getDomHelper().getFirstElementChild(outer); var innerClassName = goog.getCssName(this.getCssClass(), 'inner-box'); if (inner && goog.dom.classes.has(inner, innerClassName)) { var pos = button.getDomHelper().getFirstElementChild(inner); var posClassName = goog.getCssName(this.getCssClass(), 'pos'); if (pos && goog.dom.classes.has(pos, posClassName)) { var shadow = button.getDomHelper().getFirstElementChild(pos); var shadowClassName = goog.getCssName( this.getCssClass(), 'top-shadow'); if (shadow && goog.dom.classes.has(shadow, shadowClassName)) { var content = button.getDomHelper().getNextElementSibling(shadow); var contentClassName = goog.getCssName( this.getCssClass(), 'content'); if (content && goog.dom.classes.has(content, contentClassName)) { // We have a proper box structure. return true; } } } } } return false; }; /** * Returns the CSS class to be applied to the root element of components * rendered using this renderer. * @return {string} Renderer-specific CSS class. * @override */ goog.ui.ImagelessButtonRenderer.prototype.getCssClass = function() { return goog.ui.ImagelessButtonRenderer.CSS_CLASS; }; // Register a decorator factory function for goog.ui.ImagelessButtonRenderer. goog.ui.registry.setDecoratorByClassName( goog.ui.ImagelessButtonRenderer.CSS_CLASS, function() { return new goog.ui.Button(null, goog.ui.ImagelessButtonRenderer.getInstance()); }); // Register a decorator factory function for toggle buttons using the // goog.ui.ImagelessButtonRenderer. goog.ui.registry.setDecoratorByClassName( goog.getCssName('goog-imageless-toggle-button'), function() { var button = new goog.ui.Button(null, goog.ui.ImagelessButtonRenderer.getInstance()); button.setSupportedState(goog.ui.Component.State.CHECKED, true); return button; });
{ "pile_set_name": "Github" }
Q: Calculation of integers $b,c,d,e,f,g$ such that $\frac{5}{7} = \frac{b}{2!}+\frac{c}{3!}+\frac{d}{4!}+\frac{e}{5!}+\frac{f}{6!}+\frac{g}{7!}$

There are unique integers $b,c,d,e,f,g$ such that $\displaystyle \frac{5}{7} = \frac{b}{2!}+\frac{c}{3!}+\frac{d}{4!}+\frac{e}{5!}+\frac{f}{6!}+\frac{g}{7!}$, where $0\leq b<2$, $0\leq c<3$, $0\leq d<4$, $0\leq e<5$, $0\leq f<6$, $0\leq g<7$. Find the value of $b+c+d+e+f+g$.

$\bf{My\; Try}:$ Multiplying through by $7! = 7\times 720$ gives $\displaystyle \frac{5}{7} = \frac{2520\cdot b+840\cdot c+210\cdot d+42\cdot e+7\cdot f+g}{7\times 720}$, so $\displaystyle 2520\cdot b+840\cdot c+210\cdot d+42\cdot e+7\cdot f+g = 720\times 5 = 3600$. Now I do not understand how to solve it after that. Help required. Thanks.

A: Look at this equality that you've got: $$ 2520 \cdot b + 840 \cdot c + 210 \cdot d + 42 \cdot e + 7 \cdot f + g = 3600.$$ Note that if you consider everything modulo $7$, then most of the summands disappear, because $2520, 840, 210, 42$ and $7$ are all multiples of $7$. So, taking remainders modulo $7$, we get $g \equiv 2 \pmod 7$. Since $0 \leq g < 7$, it follows that $g = 2$. Now substitute $2$ for $g$ in your equality, subtract $2$ from both sides and divide everything by $7$. You get $$ 360 \cdot b + 120 \cdot c + 30 \cdot d + 6 \cdot e + f = 514. $$ Now consider both sides modulo $6$, and go on in a similar fashion. You will eventually find the values for all variables.
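If you want to check your final answer, carrying the same reduction through the remaining moduli works out as follows. Reducing $360b + 120c + 30d + 6e + f = 514$ modulo $6$ gives $f \equiv 4 \pmod 6$, so $f = 4$; subtracting $4$ and dividing by $6$ leaves $$ 60b + 20c + 5d + e = 85. $$ Modulo $5$ this forces $e = 0$, and dividing by $5$ gives $$ 12b + 4c + d = 17. $$ Modulo $4$ gives $d = 1$, leaving $3b + c = 4$, so modulo $3$ gives $c = 1$ and finally $b = 1$. Hence $$ b + c + d + e + f + g = 1 + 1 + 1 + 0 + 4 + 2 = 9, $$ and indeed $\frac{1}{2}+\frac{1}{6}+\frac{1}{24}+\frac{4}{720}+\frac{2}{5040} = \frac{2520+840+210+28+2}{5040} = \frac{3600}{5040} = \frac{5}{7}.$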
{ "pile_set_name": "StackExchange" }